What happens when we outsource conscience to AI?

I’ve been thinking a lot recently about a TV show that premiered more than a decade ago, Person of Interest.

It wasn’t a particularly popular show, although it had enough viewers to last five seasons. But I think it’s become somewhat prophetic in an unanticipated way.

Here is the basic premise: One of the main characters, Harold Finch (played by Michael Emerson), has created an AI (known as “The Machine”) to help the government comb through troves of surveillance data to identify potential terrorist threats.

The AI can predict all sorts of violent crimes, including murders, but the government considers anything unrelated to terrorism “irrelevant” and disregards it.

Rather than let those crimes happen, Finch creates a backdoor through which The Machine can provide alerts about these “persons of interest,” so that he and his partner John Reese (played by Jim Caviezel) can investigate and prevent them.

The human element at the core

One of the interesting wrinkles in this show is that The Machine only provides investigators with a social security number. The number might be connected to the victim of the crime or the perpetrator.

It’s up to the human investigators to find out about the person—their relationships, interests and vulnerabilities—to determine what’s about to happen.

Finch explains this limitation thus: “The Machine only gives us numbers because I would always rather a human element remain in determining something so critical as someone’s fate. We have free will, and with that comes great responsibility.”

AI replacing human judgment

It’s a convenient premise, providing a clever framework for a “case of the week” procedural show.

But as I look at how AI is being rolled out in our world, and sold as an answer to all our problems, I wonder if that show highlights an enormous risk we face—the sidelining of the “human element” in our decision-making.

AI and warfare in Gaza

One of the first times I began thinking about these issues was after stories emerged last year of the Israeli military using AI to target people in Gaza.

According to these reports, the “Lavender” system was allegedly used to identify potential targets based on apparent links to Hamas, and officials reportedly authorised strikes on those targets in which “large numbers” of civilians could be killed.

Reportedly, the “human element” in verifying the accuracy of information was minimal. One user quoted said: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”


Denials and questions

The Israeli military has denied using an AI system to identify terrorist operatives or predict whether a person is a terrorist, arguing that Lavender was used to “cross reference intelligence sources.”

While US-based AI services continue to be used by the Israeli military, details of their use are mostly unconfirmed. And of course, Israel isn’t the only nation using AI in war.

There are a lot of unanswered questions about the use of AI in conflict zones, particularly where civilians can get caught in the crossfire.

Accuracy, accountability and ethics

A system might draw on phone calls and text and audio messages gathered via mass surveillance to identify targets, but the AI’s analysis of that material is reportedly susceptible to mistranslation and hallucination (i.e. making things up), and presumably can also take words and phrases out of context.

Even if there is a human in charge of reviewing the AI-generated target list, what access do they have to the underlying data? How much time do they have to review it? How is the information contained in that data verified (if at all)? What capacity do they even have to verify the source/accuracy of the AI’s target list while sitting at a computer desk in a war room far removed from the action?

According to the reports last year, a random sampling and cross-checking of predictions indicated that Lavender achieved a “90 per cent accuracy rate” in identifying targets.

Whether you think that’s good enough depends on your tolerance for innocent casualties. Not to mention that the “dumb” bombs used to kill those targets also killed anyone else who was with them at the time.

If it’s being used with such impunity, one might wonder whether AI is less about providing better targets and more about providing a helpful excuse—“it wasn’t me that killed those innocent people, it was what the AI targeting software demanded.”

What about healthcare?

The use of AI in warfare is perhaps the most horrific example, and of course it’s muddied by broader questions around whether this kind of drone warfare can ever be considered ethical.

Setting those questions aside for the moment, let’s consider another, less controversial area where the lack of a “human element” might be concerning.

Much of the hype around AI centres on its potential in healthcare. AI has the capacity to analyse and cross-check patient data against massive troves of information. That analysis could turn up diagnoses that might not otherwise have been anticipated, greatly improving health outcomes.

Risks in diagnostic automation

This all sounds like good news so far. But it’s worth also asking what gets lost or missed if the diagnostic process is shortcut—or put another way, if human expert diligence is replaced by AI.

It starts with the separation between the medical worker making the diagnosis and the set-up of the software itself: what does the person using the AI know about the decisions that have gone into gathering and analysing the data they’re reviewing?

Are there datasets with the potential to skew the calculations? One issue is gender: female bodies respond differently from male bodies, so has that been properly factored into the results?

There are other racial or genetic factors that might also affect outcomes. And with potentially tens of thousands of datasets, how can we know they’re all trustworthy?

AI fatigue and loss of ownership

Even if the systems themselves can be calibrated correctly, there’s still the issue of incorporating AI into the healthcare environment.

Organisations are attracted to AI because it can increase capacity, both in the number of patients seen (increasing income) and in the efficiency of the diagnostic process (reducing costs), and thus makes economic sense.

However, what are the associated risks? At the moment, we might have experienced human experts reviewing results along with AI, but soon we may have a generation of medical staff who’ve only ever used AI tools.

Will they be trained to interrogate and question AI data (and if so, how, given the issues outlined earlier)? Or will they, under the time and cost pressures of the corporate workplace, rush through as many diagnoses as quickly as possible?

More broadly, if medical staff don’t own the process, how can they be expected to own the results? Will AI become an excuse: “oh well, it wasn’t me who missed that diagnosis, it was the AI”?

AI can’t replace moral agency

In this light, the premise of Person of Interest highlights something vital about how we integrate AI into human endeavours.

It’s not just about having a human working in conjunction with AI. Crucially, humans need to do the work themselves and come to their own conclusions, because there are human aspects to the work that cannot be removed without consequences.

Yet many of those who are pushing AI seem to be promising to cut out human diligence completely.

I’ve named two important areas, but the same questions might be asked about AI use in most industries, from education to law and even something as generic as marketing.

I’ve personally experimented with using AI to write articles, but quickly realised that nothing it produced was as interesting as the pieces I’ve written myself, keystroke by keystroke.

The risk of turning everything over to AI is that we give over human responsibility to an algorithm. Once we human beings lose sight of the process, we lose ownership of the results.

We give up our free will, and with it our sense of responsibility for each other.

  • Michael McVeigh is Director of Publishing and Digital Content at Jesuit Communications. He has worked across digital and print publications Australian Catholics, Eureka Street, Madonna and Pray.com.au. He is a former president of the Australasian Catholic Press Association and member of the Australian Catholic Bishops Conference Media Council.
  • First published in Eureka Street. Republished with permission.
  • Flashes of Insight is an international publication. The editorial policy is that spelling reflects the country of origin.
