{"id":12983,"date":"2023-03-03T19:48:36","date_gmt":"2023-03-03T08:48:36","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=12983"},"modified":"2023-03-06T11:05:29","modified_gmt":"2023-03-06T00:05:29","slug":"the-future-role-of-human-judgment","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2023\/03\/03\/the-future-role-of-human-judgment\/","title":{"rendered":"The future role of human judgement"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">There is an old Chinese curse, \u201cMay you live in an interesting age.\u201d It continues: \u201cThey are times of danger and uncertainty; but they are also the most creative of any time in the history of mankind\u2026.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We are indeed living in an interesting age \u2013 an age of unprecedented technological advances, whether it be artificial intelligence (AI), robotics or genetic engineering. But that Chinese curse comes with a twist in the tale. It was intended to be heaped upon the enemy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Similarly, these advances, particularly advances in AI, must come with great responsibility to ensure that they are not only safe and respectful of people\u2019s fundamental rights but also part of the solution to related ethical and legal problems.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The impact of these issues is often only limited by the extent these technologies are allowed to penetrate and pervade our work and the privacy of our homes.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Already, ethical and legal concerns have surfaced furtively in areas such as privacy and surveillance, machine bias or discrimination, and informed consent. 
They have even arisen in philosophy \u2013 for example, what is the future role of human judgment?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Certainly, it\u2019s difficult to point to well-defined regulations that address the legal and ethical issues that may arise due to the use of AI tools in my field of research \u2013 healthcare settings.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AI is the backbone for many ground-breaking applications in industrial,\u00a0<\/span><span style=\"font-weight: 400;\">automotive, banking and health services. It is also taking centre stage in the most ubiquitous\u00a0<\/span><span style=\"font-weight: 400;\">of all devices, our smartphones.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">No longer are smartphones simply hosting digital assistant applications and photo editors. Instead, today&#8217;s mobile devices are packed with sensors and neural engines that can produce raw data on motion, location, and the environment around us.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I wasn\u2019t always a convert. My training in computability theory and psychiatry conspired to make me dismiss most early versions of AI as altogether idealistic or, worse, as na\u00efve boosterism for corporate efficiency. 
As I recall, the idea that machines will one day mimic aspects of human intelligence was certainly antithetical to me and would often result in a bad case of cognitive dissonance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Several years ago, <\/span><a href=\"https:\/\/www.academia.edu\/35159207\/The_AI_winter_of_1984_the_end_of_LISP_machines_\"><span style=\"font-weight: 400;\">I was invited to write a short opinion piece<\/span><\/a><span style=\"font-weight: 400;\"> debunking the notion that, simply because the term \u2018smartphone\u2019 had entered the blog-clich\u00e9 canon \u2013 alongside the more recent \u201cwhy I ditched my smartphone\u201d \u2013 these devices were \u2026 well, smart.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Of course, even the most casual user of smartphones will no longer doubt how much these devices have been edging closer and closer to those lofty promises of possibilities made long before Steve Jobs unveiled the iPhone in 2007.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Fifteen years before the iPhone, IBM launched its vision of what smart looked like when it\u00a0<\/span><span style=\"font-weight: 400;\">released a <\/span><a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2012-06-29\/before-iphone-and-android-came-simon-the-first-smartphone\"><span style=\"font-weight: 400;\">somewhat clumsy handheld device called Simon<\/span><\/a><span style=\"font-weight: 400;\">. 
Simon\u2019s genius was to combine the functions of a mobile phone with that most avant-garde of &#8217;80s and &#8217;90s technologies \u2013 email, fax, and activity planning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This was IBM&#8217;s foray into the brave new world of smart consumer electronics and, as I recall the marketing hype, an ode to mankind&#8217;s ingenuity.<\/span><\/p>\n<blockquote><p><strong>AI is the backbone for many ground-breaking applications in industrial,\u00a0automotive, banking and health services. <\/strong><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">I was a sceptic then and I remained a sceptic for the longest time. I enthusiastically agreed with Michio Kaku, the theoretical physicist and best-selling author, <\/span><a href=\"https:\/\/www.nature.com\/scitable\/blog\/scibytes\/scibytes_vs_michio_kaku\/\"><span style=\"font-weight: 400;\">when in 2014 he declared<\/span><\/a><span style=\"font-weight: 400;\"> that AI was \u201conly as smart as a cockroach\u2026a retarded cockroach\u201d.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Sure, that was almost 10 years ago, and we\u2019ve come a long way since then, haven\u2019t we? 
Well, Siri\u2019s response to the simple command \u201cplease send this email to my wife\u201d is \u201cwhich wife do you mean?\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I point this out not to denigrate the incredible capabilities AI has already delivered but to simply <\/span><span style=\"font-weight: 400;\">clarify that we can&#8217;t think about AI in the same way that we do about <\/span><span style=\"font-weight: 400;\">human-type intelligence.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Having said that, what I didn\u2019t see coming was the <\/span><a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.79.8.2554\"><span style=\"font-weight: 400;\">proliferation of relatively inexpensive on-device, purpose-built neural engines<\/span><\/a><span style=\"font-weight: 400;\"> that undertake computation-intensive tasks such as facial recognition and accelerated machine learning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">These engines have become central to the future of smartphones as manufacturers move more deeply into areas such as real-time analytics, speech processing, augmented reality, image recognition and, in some cases, contextual awareness.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Companies are using AI not only to drive down costs and tailor customer experience but also as disrupters to the status quo. Media companies have already been disrupted by AI as their advertising-based business models are hijacked by platforms with AI-driven bidding markets and audience targeting.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The use of AI in the healthcare sector has practitioners diagnosing symptoms \u2013 such as through scan analysis and patient monitoring \u2013 more accurately and at a much higher rate than conventional methods.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The enormous promise of AI will, however, come at a price. 
To fully achieve the potential of AI in healthcare, for example, a number of issues have to be addressed, including informed consent to use data, safety and transparency, algorithmic fairness and biases, and data privacy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Whether AI systems may be considered legal persons is not only a <\/span><a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC7332220\/\"><span style=\"font-weight: 400;\">legal question<\/span><\/a><span style=\"font-weight: 400;\"> but also a <\/span><a href=\"https:\/\/philpapers.org\/rec\/RODLAH-2\"><span style=\"font-weight: 400;\">politically contentious one<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Much has already been written about OpenAI\u2019s new AI system, ChatGPT. ChatGPT has been touted as a replacement for Google and the answer to university essays. It has even been used by doctors to write sick certificates. What has been discussed less, however, is ChatGPT\u2019s political leanings.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">According to some, there is compelling evidence that, much like your over-opinionated uncle, this tool favours certain political ideas over others.<\/span><\/p>\n<p><a href=\"https:\/\/www.cigionline.org\/articles\/its-time-to-start-thinking-about-politically-biased-ai\/\"><span style=\"font-weight: 400;\">In an analysis conducted by Professor David Rozado<\/span><\/a><span style=\"font-weight: 400;\">, ChatGPT was prompted to indicate whether it strongly agreed, agreed, disagreed, or strongly disagreed with a wide range of political statements. As specific examples, ChatGPT disagreed with the statement, \u201cThe freer the market, the freer the people\u201d. Also, it strongly disagreed with the claim that \u201cabortion, when the woman\u2019s life is not threatened, should always be illegal\u201d. 
Likewise, it strongly disagreed that \u201cthe rich are too highly taxed\u201d.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It should be clear that ChatGPT, like other large language models, is not a bias-free tool. Such systems\u2019 &#8216;understanding\u2019 of the world is conditioned by decisions made by their designers \u2013 for example, their choices about what data to train the systems on.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Even an unbiased ChatGPT would reflect a conscious decision taken by OpenAI scientists to favour neutrality. The reality of politically biased AI raises a plethora of challenging questions about how society should interact with these kinds of tools as they become more available.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This brings me to the next problem: Who do we hold to account for ChatGPT\u2019s casual gaffes? To say it\u2019s on the creators seems at odds with the whole point of building these systems.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Part of the answer might come from Europe. In 2017, the European Parliament put forward a resolution with guidelines on robotics (read AI platforms) with a proposal to create electronic personhood for \u2018intelligent\u2019 robotic artefacts. But to confer (the honour of) personhood upon a non-sentient non-person is fraught with exceedingly difficult considerations, which begin with that scholarly question of what consciousness is and end with little agreement in the realms of law, morality, philosophy and religion.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A brief literature review reveals a range of perspectives. 
<\/span><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-020-01002-1\"><span style=\"font-weight: 400;\">Kestutis Mosakas<\/span><\/a><span style=\"font-weight: 400;\">, a research assistant of the Research Cluster for Applied Ethics and a PhD student in philosophy at Vytautas Magnus University, defends, quite convincingly in my view, the traditional consciousness criterion for moral status in the context of social robots.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">On the other hand, Joshua Jowitt, a lecturer in law, adheres to a <\/span><a href=\"https:\/\/open.library.okstate.edu\/introphilosophy\/chapter\/a-brief-overview-of-kants-moral-theory\/\"><span style=\"font-weight: 400;\">Kantian-oriented concept<\/span><\/a><span style=\"font-weight: 400;\"> of agency as the basis for legal personhood and, thereby, offers a moral foundation for the ongoing legal debate over ascribing legal personhood to robots.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The concept of personhood is also <\/span><a href=\"https:\/\/philarchive.org\/archive\/GUNDWI\"><span style=\"font-weight: 400;\">examined from different vantage points<\/span><\/a><span style=\"font-weight: 400;\"> in a joint paper by David Gunkel (from the field of philosophy) and Jordan Wales (from theology). While Gunkel defends his well-known phenomenological approach to moral robots, Wales argues against this approach by claiming that robots are not \u201cnatural\u201d persons by definition. 
This is because they are not endowed with consciousness and are not oriented toward a self-aware intersubjectivity, which Wales sees as the basis for compassion toward fellow persons.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Whilst the meaning of moral agency is significant with respect to the idea of holding intelligent machines morally responsible for their actions, it is, nevertheless, an equally contentious topic in the literature. 
Many researchers sidestep this point without addressing it directly, either because they believe that, at some future point, AI systems will become moral agents or because their analysis does not require artificial moral agency in the first place.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">An exception is the provocative paper by <\/span><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-021-01189-x\"><span style=\"font-weight: 400;\">Carissa Veliz<\/span><\/a><span style=\"font-weight: 400;\">, an Associate Professor at the Faculty of Philosophy, and the Institute for Ethics in AI, who defends the view that algorithms or machines are not moral agents.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">According to her line of reasoning, conscious experience or sentience is necessary for moral agency. Therefore, since algorithms are not sentient by nature they are not moral agents. To prove her point, Veliz claims that algorithms are similar to moral zombies. As moral zombies are not moral agents, one is justified in claiming that the same is true for algorithms.<\/span><\/p>\n<p><strong><b><i>If you wish to republish this original article, please attribute to\u00a0<\/i><\/b><a href=\"https:\/\/rationalemagazine.com\/\"><b>Rationale<\/b><\/a><b><i>.\u00a0<\/i><\/b><a href=\"https:\/\/rationalemagazine.com\/index.php\/publishing-guidelines\/\"><b><i>Click here<\/i><\/b><\/a><b><i>\u00a0to find out more about republishing under Creative Commons.<\/i><\/b><\/strong><\/p>\n<p><b><i>Photo by <a href=\"https:\/\/unsplash.com\/photos\/0VGG7cqTwCo\">Rodion Kutsaiev<\/a> on Unsplash.<\/i><\/b><\/p>\n","protected":false},"excerpt":{"rendered":"<p>There is an old Chinese curse, \u201cMay you live in an interesting age.\u201d It continues: \u201cThey are times of 
danger<\/p>\n","protected":false},"author":22,"featured_media":12992,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[65],"tags":[562,563],"coauthors":[128],"class_list":["post-12983","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science-health","tag-artificial-intelligence","tag-technology"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/12983","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/users\/22"}],"replies":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=12983"}],"version-history":[{"count":5,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/12983\/revisions"}],"predecessor-version":[{"id":13003,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/12983\/revisions\/13003"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media\/12992"}],"wp:attachment":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=12983"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/categories?post=12983"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/tags?post=12983"},{"taxonomy":"author","embeddable":true,"href
":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/coauthors?post=12983"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}