{"id":13248,"date":"2023-05-22T13:54:45","date_gmt":"2023-05-22T03:54:45","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=13248"},"modified":"2023-05-23T11:20:14","modified_gmt":"2023-05-23T01:20:14","slug":"we-need-to-stop-treating-ai-like-humans","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2023\/05\/22\/we-need-to-stop-treating-ai-like-humans\/","title":{"rendered":"We need to stop treating AI like humans"},"content":{"rendered":"<p>The artificial intelligence (AI) pioneer Geoffrey Hinton recently <a href=\"https:\/\/www.bbc.co.uk\/news\/world-us-canada-65452940\">resigned<\/a>\u00a0from Google, warning of the dangers of the technology \u201cbecoming more intelligent than us\u201d. His fear is that AI will one day succeed in \u201cmanipulating people to do what it wants\u201d.<\/p>\n<p>There are reasons we should be concerned about AI. But we frequently treat or talk about AIs as if they are human. Stopping this and realising what they actually are could help us maintain a fruitful relationship with the technology.<\/p>\n<p>In a recent essay, the US psychologist Gary Marcus advised us to\u00a0<a href=\"https:\/\/garymarcus.substack.com\/p\/stop-treating-ai-models-like-people\">stop treating AI models like people<\/a>. By AI models, he means large language models (LLMs) like ChatGPT and Bard, which are now being used by millions of people on a daily basis.<\/p>\n<p>He cites egregious examples of people \u201cover-attributing\u201d human-like cognitive capabilities to AI that have had a range of consequences. The most amusing was the US senator who claimed that\u00a0<a href=\"https:\/\/twitter.com\/ChrisMurphyCT\/status\/1640186536825061376?lang=en\">ChatGPT \u201ctaught itself chemistry\u201d<\/a>. 
The most harrowing was the report of a young Belgian man\u00a0<a href=\"https:\/\/nypost.com\/2023\/03\/30\/married-father-commits-suicide-after-encouragement-by-ai-chatbot-widow\/\">who was said to have taken his own life<\/a>\u00a0after prolonged conversations with an AI chatbot.<\/p>\n<p>Marcus is correct to say we should stop treating AI like people \u2013 conscious moral agents with interests, hopes and desires. However, many will find this difficult, if not near-impossible. This is because LLMs are designed \u2013 by people \u2013 to interact with us as though they were human, and we are designed \u2013 by biological evolution \u2013 to interact with them likewise.<\/p>\n<p>The reason LLMs can mimic human conversation so convincingly stems from a profound insight of the computing pioneer Alan Turing: a computer does not need to understand an algorithm in order to run it. So while ChatGPT can produce paragraphs filled with emotive language, it doesn\u2019t understand a single word of any sentence it generates.<\/p>\n<p>The designers of LLMs turned the problem of semantics \u2013 deriving meaning from arrangements of words \u2013 into a problem of statistics, matching words according to the frequency of their prior use. 
Turing&#8217;s insight echoes Darwin&#8217;s theory of evolution, which explains how species adapt to their surroundings, becoming ever more complex, without needing to understand a thing about their environment or themselves.<\/p>\n<p>The cognitive scientist and philosopher\u00a0<a href=\"https:\/\/sites.tufts.edu\/cogstud\/daniel-dennett\/\">Daniel Dennett<\/a>\u00a0coined the phrase \u201ccompetence without comprehension\u201d, which perfectly captures the insights of Darwin and Turing.<\/p>\n<p>Another important contribution of Dennett\u2019s is the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Intentional_stance#:%7E:text=The%20intentional%20stance%20is%20a,in%20terms%20of%20mental%20properties\">\u201cintentional stance\u201d<\/a>: the strategy of explaining and predicting the behaviour of an object (human or non-human) by treating it as a rational agent. It most often manifests in our tendency to anthropomorphise non-human species and non-living entities.<\/p>\n<p>But it is useful. For example, if we want to beat a computer at chess, the best strategy is to treat it as a rational agent that &#8216;wants&#8217; to beat us. We can say that the computer castled, for instance, because \u201cit wanted to protect its king from our attack\u201d, without any contradiction in terms.<\/p>\n<p>We may speak of a tree in a forest as &#8216;wanting to grow&#8217; towards the light. But neither the tree nor the chess computer represents those &#8216;wants&#8217; to itself; it is simply that the best way to explain their behaviour is to treat them as though they did.<\/p>\n<p>Our evolutionary history has furnished us with mechanisms that predispose us to find intentions and agency everywhere. In prehistory, these mechanisms helped our ancestors avoid predators and develop altruism towards their nearest kin. They are the same mechanisms that cause us to see faces in clouds and to anthropomorphise inanimate objects. 
No harm comes to us when we mistake a tree for a bear, but plenty does the other way around.<\/p>\n<blockquote><p><strong>Our evolutionary history has furnished us with mechanisms that predispose us to find intentions and agency everywhere. <\/strong><\/p><\/blockquote>\n<p>Evolutionary psychology shows that we are primed to interpret any object that might be human as human. We unconsciously adopt the intentional stance and attribute all of our own cognitive capacities and emotions to that object.<\/p>\n<p>Given the disruption LLMs could cause, we must realise they are simply probabilistic machines with no intentions and no concern for humans. We must be extra-vigilant about our use of language when describing the human-like feats of LLMs, and of AI more generally. Here are two examples.<\/p>\n<p>The first was a\u00a0<a href=\"https:\/\/jamanetwork.com\/journals\/jamainternalmedicine\/article-abstract\/2804309\">recent study<\/a> that found ChatGPT was more empathetic and gave \u201chigher quality\u201d responses to questions from patients than doctors did. Using emotive words like &#8216;empathy&#8217; for an AI predisposes us to grant it the capacity for thinking, reflecting and genuine concern for others \u2013 which it doesn\u2019t have.<\/p>\n<p>The second came when GPT-4 (the latest version of ChatGPT technology) was launched last month and greater creativity and reasoning skills were ascribed to it. However, we are simply seeing a scaling up of &#8216;competence&#8217;, with still no &#8216;comprehension&#8217; (in Dennett\u2019s sense) and certainly no intentions \u2013 just pattern matching.<\/p>\n<p>In his recent comments, Hinton raised a near-term threat of &#8216;bad actors&#8217; using AI for subversion. We could easily envisage an unscrupulous regime or multinational deploying an AI, trained on fake news and falsehoods, to flood public discourse with misinformation and deepfakes. 
Fraudsters could also use an AI to prey on vulnerable people in financial scams.<\/p>\n<p>Last month, Gary Marcus and others, including Elon Musk, signed an\u00a0<a href=\"https:\/\/futureoflife.org\/open-letter\/pause-giant-ai-experiments\/\">open letter<\/a> calling for an immediate pause on the further development of LLMs. Marcus has also called for an international agency to promote safe, secure and peaceful AI technologies, dubbing it a \u201cCern for AI\u201d.<\/p>\n<p>Furthermore, many have suggested that anything generated by an AI should\u00a0<a href=\"https:\/\/theconversation.com\/watermarking-chatgpt-dall-e-and-other-generative-ais-could-help-protect-against-fraud-and-misinformation-202293\">carry a watermark<\/a>\u00a0so that there can be no doubt about whether we are interacting with a human or a chatbot.<\/p>\n<p>Regulation of AI trails innovation, as it so often does in other fields. There are more problems than solutions, and the gap is likely to widen before it narrows. 
But, in the meantime, repeating Dennett\u2019s phrase \u201ccompetence without comprehension\u201d might be the best antidote to our innate compulsion to treat AI like humans.<\/p>\n<p><em><strong>This article was originally published in <a href=\"https:\/\/theconversation.com\/evolution-is-making-us-treat-ai-like-a-human-and-we-need-to-kick-the-habit-205010\">The Conversation<\/a>.<\/strong><\/em><\/p>\n<p><em><strong>Photo by <a href=\"https:\/\/unsplash.com\/photos\/WhAQMsdRKMI\">Steve Johnson<\/a> on Unsplash.<\/strong><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned\u00a0from Google, warning of the dangers of the technology \u201cbecoming more intelligent<\/p>\n","protected":false},"author":659,"featured_media":13251,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[65],"tags":[562,513],"coauthors":[592],"class_list":["post-13248","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science-health","tag-artificial-intelligence","tag-evolution"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/13248","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/users\/659"}],"replies":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=13248"}],"version-history":[{"count":3,"href":"https:\/\/rationalemagazine
.com\/index.php\/wp-json\/wp\/v2\/posts\/13248\/revisions"}],"predecessor-version":[{"id":13255,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/13248\/revisions\/13255"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media\/13251"}],"wp:attachment":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=13248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/categories?post=13248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/tags?post=13248"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/coauthors?post=13248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}