
AI and the problems of personification

In literary criticism, a commonly used term is ‘personification’, which describes a writer investing natural phenomena with human qualities – ‘crying clouds’, for example, or ‘obstinate rocks’. It can also apply to abstractions, such as Shakespeare’s “Nor shall death brag.”

It is unfortunate that such insights are no longer afforded much attention because, if they were, people would notice that the phrase ‘artificial intelligence’ (AI) is an instance of personification. It invests inanimate computers with human qualities.

Having pulled off this verbal sleight of hand, AI’s proponents then claim that, as computers develop more processing power and acquire the capacity to create their own software, the machines will become ‘smarter’ than humans. Before long, they warn, we will be faced with what is termed ‘general AI’, whereby machines will control humans completely.

A little reflection should reveal that AI – software that can continuously adapt by interacting with the data it receives – resembles, at best, only a small part of human thinking. The claim that it will replace human thought is better described as an attempt to convince us that human thought does not exist at all; it is an effort to reduce human beings to mere processors of information.

Consider the problem of self-consciousness. Humans are aware of their own thoughts but computers are not. They have nothing to be self-aware with; they are machines.

The deception perpetrated by AI exponents is that, because the software can evolve and is therefore self-referring, it can become equivalent to human awareness, which is also self-referring (“I am aware of myself: I can see me”). This is a profound misunderstanding of what human consciousness is.

There are other errors. AI may be quite good at deduction from data, and will certainly help in that area. But it is hard to see how human-style induction will become possible. 

Humans are able to create a theory from loose, or poor, information. Computers cannot – or where they do, the theory is nonsense. With computers, the truism ‘garbage in, garbage out’ suggests that a lack of precise information is an insuperable obstacle.

One study showed that humans learn to distinguish between similar images with 10 training samples, whereas AI requires 10 million samples.

The pioneering computer scientist John von Neumann noted that the human nervous system is very imprecise and that “no known computing machine can operate reliably and significantly on such a low precision level.” AI cannot imagine, it cannot emote, it cannot dream, it cannot appreciate beauty; it can only provide simulacra of some attributes of the human mind.

All that should be evident enough, and the fact that it is not is partly why AI is dangerous.

In the last few months, a succession of AI proponents have warned that there is a small chance that AI could wreak havoc on, or even destroy, the human race. Most dramatic was Mo Gawdat, former chief business officer for Google’s research and development wing. He said AI would treat humans as ‘scum’, and expressed his fears of a dystopian future in which artificial intelligence decides it needs to take over and cull people.

Sam Altman, chief executive of OpenAI, is another sounding the alarm:

My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.

Elon Musk – who claims Tesla, the electric vehicle company he heads, is the most sophisticated AI player in the world – is calling for a six-month moratorium. He co-signed an open letter to that effect with Apple co-founder Steve Wozniak.

There are certainly some troubling aspects of AI, especially what are called ‘hallucinations’ – errors that occur either for reasons unknown or because the data is incomplete and the computer does not know how to recognise that it does not know. The term ‘hallucinations’, though, is another example of personification.

Here lies the biggest danger of AI – and one that the so-called experts will not know how to recognise. If you invest a computer with human qualities, then you will expect it to evolve in roughly the same way that human thinking does. But it is not human, and it will evolve as computers do, which is likely to be unexpected and potentially dangerous.

Just as computers cannot ‘think’ like a human, so humans cannot ‘evolve’ like a computer. Perhaps we will need AI to tell us what the AI is doing. But what is certain is that, because of the fundamental logical errors being made about the technology, it becomes all but impossible to know how to control it.

The current hype around AI has the odour of a public relations campaign. Why are so many people sounding the alarm, and in such a coordinated way? Although there is good reason to be cautious about such a powerful technology that will disrupt many industries, it is also sensible to be sceptical about what the technologists are saying.

After all, many of them have probably never even heard the word ‘personification’, let alone realised that they are doing it. There is also a long history of technologists predicting disasters that did not occur, such as the Y2K hysteria as 2000 approached.

Adam Mill writes that the dire warnings are being issued because the players are jockeying for commercial position, trying to use government policy to prevent competitors from entering the market:

We can assume that, like everything else in our current government, the rules will be applied to help the ruling elite and suppress dissent. They may say that’s the opposite of their intent, but we’ve seen it over and over again: once elites get the power, they use it to help themselves. 

ChatGPT might represent a giant leap forward in artificial intelligence, but I haven’t yet seen that. It suffers from the great limiting factor that will hold back all AI from developing to autonomy. It hasn’t been through a selection process that punishes it for wrong answers.




About David James

David James has been a financial journalist for 28 years. He was a senior writer and columnist at BRW for 25 years, a senior journalist at AAA Banking magazine, an editor and writer for stockbroker JB Were & Sons and a journalist at The Melbourne Herald. He has a PhD in English Literature from Monash University and now works as a freelance journalist and editor.
