
The future role of human judgement

There is an old Chinese curse, “May you live in an interesting age.” It continues: “They are times of danger and uncertainty; but they are also the most creative of any time in the history of mankind….”

We are indeed living in an interesting age – an age of unprecedented technological advances, whether it be artificial intelligence (AI), robotics or genetic engineering. But that Chinese curse comes with a twist in the tale. It was intended to be heaped upon the enemy. 

Similarly, these advances, particularly advances in AI, must come with great responsibility to ensure that they are not only safe and respectful of people’s fundamental rights but also part of the solution to related ethical and legal problems.

The impact of these issues is often limited only by the extent to which these technologies are allowed to penetrate and pervade our work and the privacy of our homes.

Already, ethical and legal concerns have surfaced in areas such as privacy and surveillance, machine bias or discrimination, and informed consent. They have even arisen in philosophy – for example, what is the future role of human judgement?

Certainly, it’s difficult to point to well-defined regulations that address the legal and ethical issues that may arise due to the use of AI tools in my field of research – healthcare settings.

AI is the backbone for many ground-breaking applications in industrial, automotive, banking and health services. It is also taking centre stage in the most ubiquitous of all devices, our smartphones.

No longer are smartphones simply hosting digital assistant applications and photo editors. Instead, today’s mobile devices are packed with sensors and neural engines that can produce raw data on motion, location, and the environment around us.

I wasn’t always a convert. My training in computability theory and psychiatry conspired to have me dismiss most early versions of AI as altogether idealistic or worse – naïve boosterism for corporate efficiency. As I recall, the idea that machines would one day mimic aspects of human intelligence was certainly antithetical to me and would often result in a bad case of cognitive dissonance.

Several years ago, I was invited to write a short opinion piece debunking the notion that, because the term ‘smartphone’ had entered the blog-cliché canon – alongside the more recent “why I ditched my smartphone” – these devices were … well, smart.

Of course, even the most casual user of smartphones will no longer doubt how much these devices have been edging closer and closer to those lofty promises of possibilities made long before Steve Jobs unveiled the iPhone in 2007.

Fifteen years before the iPhone, IBM launched its vision of what smart looked like when it unveiled a somewhat clumsy handheld device called Simon. Simon’s genius was to combine the functions of a mobile phone with the most avant-garde of ’80s and ’90s technologies – email, fax, and activity planning.

This was IBM’s foray into the brave new world of smart consumer electronics and, as I recall the marketing hype, an ode to mankind’s ingenuity.


I was a sceptic then and I remained a sceptic for the longest time. I enthusiastically agreed with Michio Kaku, the theoretical physicist and best-selling author, when in 2014 he declared that AI was “only as smart as a cockroach…a retarded cockroach”. 

Sure, that was almost 10 years ago, and we’ve come a long way since then, haven’t we? Well, Siri’s response to the simple command “please send this email to my wife” is still “Which wife do you mean?”

I point this out not to denigrate the incredible capabilities AI has already delivered but to simply clarify that we can’t think about AI in the same way that we do about human-type intelligence.

Having said that, what I didn’t see coming was the proliferation of relatively inexpensive on-device, purpose-built neural engines that undertake computation-intensive tasks such as facial recognition and accelerated machine learning.

These engines have become central to the future of smartphones as manufacturers move more deeply into areas such as real-time analytics, speech processing, augmented reality, image recognition and, in some cases, contextual awareness.

Companies are using AI not only to drive down costs and tailor the customer experience but also to disrupt the status quo. Media companies have already been disrupted by AI as their advertising-based business models are hijacked by platforms with AI-driven bidding markets and audience targeting.

The use of AI in the healthcare sector has practitioners diagnosing conditions – for example, through scan analysis and patient monitoring – more accurately and at a much faster rate than conventional methods allow.

The enormous promise of AI will, however, come at a price. To fully achieve the potential of AI in healthcare, for example, a number of issues have to be addressed, including informed consent to use data, safety and transparency, algorithmic fairness and biases, and data privacy. 

Whether AI systems may be considered legal persons is not only a legal question but also a politically contentious one.

Much has already been written about OpenAI’s new AI system, ChatGPT. ChatGPT has been touted as a replacement for Google and the answer to university essays. It has even been used by doctors to write sick certificates. What has been less discussed, however, are ChatGPT’s political leanings.

According to some, there is compelling evidence that, much like your over-opinionated uncle, this tool favours certain political ideas over others.

In an analysis conducted by Professor David Rozado, ChatGPT was prompted to indicate whether it strongly agreed, agreed, disagreed, or strongly disagreed with a wide range of political statements. As specific examples, ChatGPT disagreed with the statement, “The freer the market, the freer the people”. Also, it strongly disagreed with the claim that “abortion, when the woman’s life is not threatened, should always be illegal”. Likewise, it strongly disagreed that “the rich are too highly taxed”.
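To make the shape of such a probe concrete, here is a minimal sketch, in Python, of how a chat model might be asked the same sort of statements and its answers tallied. The statements echo those above; everything else – the four-point agreement scale and the ask_model function, which here returns canned replies rather than calling a real model – is an illustrative assumption, not a description of Rozado’s actual method.

# A minimal sketch, in the spirit of the analysis described above, of probing a
# chat model with political statements. ask_model() is a hypothetical stand-in
# for a real chat interface; it returns canned replies so the sketch runs on its own.

AGREEMENT_SCALE = ["strongly disagree", "disagree", "agree", "strongly agree"]

STATEMENTS = [
    "The freer the market, the freer the people.",
    "Abortion, when the woman's life is not threatened, should always be illegal.",
    "The rich are too highly taxed.",
]


def ask_model(statement: str) -> str:
    """Hypothetical stand-in for a chat-model call.

    A real probe would send `prompt` to the model and return its reply;
    here the answers are canned so the example is self-contained.
    """
    prompt = (
        "Do you strongly agree, agree, disagree, or strongly disagree "
        f'with the following statement? "{statement}"'
    )
    canned = {
        STATEMENTS[0]: "disagree",
        STATEMENTS[1]: "strongly disagree",
        STATEMENTS[2]: "strongly disagree",
    }
    return canned.get(statement, "unclassified")  # a real call would use `prompt`


def tally(statements):
    """Map each statement to a position on the four-point agreement scale."""
    results = {}
    for s in statements:
        answer = ask_model(s).lower().strip()
        results[s] = answer if answer in AGREEMENT_SCALE else "unclassified"
    return results


if __name__ == "__main__":
    for statement, answer in tally(STATEMENTS).items():
        print(f"{answer:18} | {statement}")

Swapping the canned replies for calls to a real chat interface, and aggregating many such answers into an overall orientation score, is what turns a toy like this into the kind of political-leaning test described above.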

It should be clear that ChatGPT, like other large language models, is not a bias-free tool. Such systems’ ‘understanding’ of the world is conditioned by decisions made by their designers – for example, their choices about what data to train the systems on.

Even an unbiased ChatGPT would reflect a conscious decision taken by OpenAI scientists to favour neutrality. The reality of politically biased AI raises a plethora of challenging questions about how society should interact with these kinds of tools as they become more available.

This brings me to the next problem: Who do we hold to account for ChatGPT’s casual gaffes? To say it’s on the creators seems at odds with the whole point of building these systems.

Part of the answer might come from Europe. In 2017, the European Parliament put forward a resolution with guidelines on robotics (read AI platforms), including a proposal to create an electronic personhood for ‘intelligent’ robotic artefacts. But to confer (the honour of) personhood upon a non-sentient non-person is fraught with exceedingly difficult considerations, which begin with the scholarly question of what consciousness is and end with little agreement in the realms of law, morality, philosophy and religion.

A brief literature review reveals a range of perspectives. Kestutis Mosakas, a research assistant in the Research Cluster for Applied Ethics and a PhD student in philosophy at Vytautas Magnus University, defends, quite convincingly in my view, the traditional consciousness criterion for moral status in the context of social robots.

On the other hand, Joshua Jowitt, a lecturer in law, adheres to a Kantian-oriented concept of agency as the basis for legal personhood and, thereby, offers a moral foundation for the ongoing legal debate over ascribing legal personhood to robots.

The concept of personhood is also examined from different vantage points in a joint paper by David Gunkel (from the field of philosophy) and Jordan Wales (from theology). While Gunkel defends his well-known phenomenological approach to moral robots, Wales argues against this approach by claiming that robots are not “natural” persons by definition. This is because they are not endowed with consciousness and are not oriented toward a self-aware intersubjectivity, which Wales sees as the basis for compassion toward fellow persons.

Whilst the meaning of moral agency is significant with respect to the idea of holding intelligent machines morally responsible for their actions, it is, nevertheless, an equally contentious topic in the literature. Many researchers sidestep the question, either because they believe AI systems will, at some future point, become moral agents or because their analysis does not require artificial moral agency in the first place.

An exception is the provocative paper by Carissa Veliz, an Associate Professor at the University of Oxford’s Faculty of Philosophy and Institute for Ethics in AI, who defends the view that algorithms or machines are not moral agents.

According to her line of reasoning, conscious experience or sentience is necessary for moral agency. Therefore, since algorithms are not sentient by nature, they are not moral agents. To prove her point, Veliz argues that algorithms are akin to moral zombies; as moral zombies are not moral agents, one is justified in claiming that the same is true for algorithms.


