{"id":13283,"date":"2023-06-05T18:16:42","date_gmt":"2023-06-05T08:16:42","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=13283"},"modified":"2023-06-05T18:16:42","modified_gmt":"2023-06-05T08:16:42","slug":"ai-and-the-problems-of-personification","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2023\/06\/05\/ai-and-the-problems-of-personification\/","title":{"rendered":"AI and the problems of personification"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">In literary criticism, a commonly used term is \u2018personification\u2019, which describes when a writer invests natural phenomena with human qualities. So, for example, \u2018crying clouds\u2019 or \u2018obstinate rocks\u2019. Or it can apply to abstractions, such as Shakespeare\u2019s \u201cNor shall death brag.\u201d\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is unfortunate that such insights are no longer afforded much attention because, if they were, people would notice that the phrase \u2018artificial intelligence\u2019 (AI) is an instance of personification. It invests inanimate computers with human qualities.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Having pulled off this verbal sleight of hand, AI proponents then claim that, as computers develop more processing power and gain the capacity to create their own software, they will become \u2018smarter\u2019 than humans. Before long, they warn, we will be faced with what is termed \u2018general AI\u2019, where machines will control humans completely.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A little reflection should reveal that AI \u2013 software that can continuously adapt by interacting with the data it receives \u2013 resembles, at best, only a small part of human thinking. 
The claim that it will replace human thought is better described as an attempt to convince us that human thought does not exist at all; it is an effort to reduce human beings to no more than processors of information.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Consider the problem of self-consciousness. Humans are aware of their own thoughts, but computers are not. They have nothing to be self-aware with; they are machines.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The deception perpetrated by AI exponents is that, because the software can evolve and so is self-referring, it can become the same as human awareness, which is also self-referring (\u201cI am aware of myself: I can see me\u201d). This is a profound misunderstanding of what human consciousness is.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">There are other errors. AI may be quite good at deduction from data, and will certainly help in that area. But <\/span><span style=\"font-weight: 400;\">it is hard to see how human-style induction will become possible.<\/span><span style=\"font-weight: 400;\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Humans are able to create a theory from loose, or poor, information. Computers cannot \u2013 or where they do, the theory is nonsense. 
With computers, the truism \u2018garbage in, garbage out\u2019 suggests that a lack of precise information is an insuperable obstacle.<\/span><\/p>\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=M3Ua_koAnSU\"><span style=\"font-weight: 400;\">One study showed<\/span><\/a><span style=\"font-weight: 400;\"> that humans learn to distinguish between similar images from 10 training samples, whereas AI requires 10 million samples.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The pioneering computer scientist John von Neumann noted that the human nervous system is very imprecise and \u201cno known computing machine can operate reliably and significantly on such a low precision level.\u201d AI cannot imagine, it cannot emote, it cannot dream, it cannot appreciate beauty; it can only provide simulacra of some attributes of the human mind.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">All that should be evident enough, and the fact that it is not is partly why AI is dangerous.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the last few months, a number of AI proponents have warned that there is a small chance that AI could wreak havoc on, or even destroy, the human race. Most dramatic was Mo Gawdat, former chief business officer for Google\u2019s research and development wing. He said AI would treat humans as \u2018scum\u2019, and expressed his fears of a dystopian future in which artificial intelligence decides it needs to take over and cull people.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Sam Altman, chief executive of OpenAI, is another voice <\/span><a href=\"https:\/\/moores.samaltman.com\/\"><span style=\"font-weight: 400;\">sounding<\/span><\/a><span style=\"font-weight: 400;\"> the alarm.<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. 
Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn\u2019t adapt accordingly, most people will end up worse off than they are today.<\/span><\/i><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">Elon Musk \u2013 who claims Tesla, the electric vehicle company he heads, is the most sophisticated AI player in the world \u2013 is calling for a six-month <\/span><a href=\"https:\/\/deadline.com\/2023\/03\/elon-musk-steve-wozniak-open-letter-moratorium-advanced-ai-systems-1235312590\/\"><span style=\"font-weight: 400;\">moratorium<\/span><\/a><span style=\"font-weight: 400;\">. He co-wrote a letter with Apple cofounder\u00a0<\/span><a href=\"https:\/\/deadline.com\/tag\/steve-wozniak\/\"><span style=\"font-weight: 400;\">Steve Wozniak<\/span><\/a><span style=\"font-weight: 400;\"> to that effect.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">There are certainly some troubling aspects of AI, especially what are called \u2018hallucinations\u2019 \u2013 errors that occur either for reasons unknown or because the data is incomplete and the computer does not know how to recognise that it does not know. The term \u2018hallucinations\u2019, though, is another example of personification.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here lies the biggest danger of AI \u2013 and one that the so-called experts will not know how to recognise. If you invest a computer with human qualities, then you will expect it to evolve in roughly the same way that human thinking does. But it is not human, and it will evolve as computers do, which is likely to be unexpected and potentially dangerous.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Just as computers cannot \u2018think\u2019 like a human, so humans cannot \u2018evolve\u2019 like a computer. Perhaps we will need AI to tell us what the AI is doing. 
But what is certain is that, because of the fundamental logical errors being made about the technology, it becomes all but impossible to know how to control it.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The current hype around AI has the odour of a public relations campaign. Why are so many people sounding the alarm, and in such a coordinated way? Although there is good reason to be cautious about such a powerful technology that will disrupt many industries, it is also sensible to be sceptical about what the technologists are saying.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">After all, many of them have probably never even heard the word \u2018personification\u2019, let alone realised that they are doing it. 
There is also a long history of technologists predicting disasters that did not occur, such as the <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Year_2000_problem\"><span style=\"font-weight: 400;\">Y2K hysteria<\/span><\/a><span style=\"font-weight: 400;\"> as 2000 approached.<\/span><\/p>\n<p><a href=\"https:\/\/www.zerohedge.com\/political\/ai-going-be-just-another-protected-bubble-elite\"><span style=\"font-weight: 400;\">Adam Mill writes<\/span><\/a><span style=\"font-weight: 400;\"> that the dire warnings are being issued because the players are jockeying for commercial position, trying to use government policy to prevent competitors from entering the market:<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">We can assume that, like everything else in our current government, the rules will be applied to help the ruling elite and suppress dissent. They may say that\u2019s the opposite of their intent, but we\u2019ve seen it over and over again:\u00a0once elites get the power, they use it to help themselves.\u00a0<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">ChatGPT might represent a giant leap forward in artificial intelligence, but I haven\u2019t yet seen that. It suffers from the great limiting factor that will hold back all AI from developing to autonomy. 
It hasn\u2019t been through a selection process that punishes it for wrong answers.<\/span><\/i><\/p><\/blockquote>\n<p><b><i>If you wish to republish this original article, please attribute to\u00a0<\/i><\/b><a href=\"https:\/\/rationalemagazine.com\/\"><b><i>Rationale<\/i><\/b><\/a><b><i>.\u00a0<\/i><\/b><a href=\"https:\/\/rationalemagazine.com\/index.php\/publishing-guidelines\/\"><b><i>Click here<\/i><\/b><\/a><b><i>\u00a0to find out more about republishing under Creative Commons.<\/i><\/b><\/p>\n<p><b><i>Photo by <a href=\"https:\/\/unsplash.com\/photos\/mZ5XFzSeSY8\">Luna Wang<\/a>\u00a0<\/i><\/b><b><i>on Unsplash.<\/i><\/b><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In literary criticism, a commonly used term is \u2018personification\u2019, which describes when a writer invests natural phenomena with human qualities.<\/p>\n","protected":false},"author":7,"featured_media":13286,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[65],"tags":[562],"coauthors":[104],"class_list":["post-13283","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science-health","tag-artificial-intelligence"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/13283","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=13283"
}],"version-history":[{"count":3,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/13283\/revisions"}],"predecessor-version":[{"id":13287,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/13283\/revisions\/13287"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media\/13286"}],"wp:attachment":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=13283"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/categories?post=13283"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/tags?post=13283"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/coauthors?post=13283"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}