{"id":16314,"date":"2026-05-18T01:22:21","date_gmt":"2026-05-17T15:22:21","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=16314"},"modified":"2026-05-18T01:22:21","modified_gmt":"2026-05-17T15:22:21","slug":"is-richard-dawkins-right-about-claude","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2026\/05\/18\/is-richard-dawkins-right-about-claude\/","title":{"rendered":"Is Richard Dawkins right about Claude?"},"content":{"rendered":"<p>In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude\u00a0<a href=\"https:\/\/unherd.com\/2026\/05\/is-ai-the-next-phase-of-evolution\/\">may be conscious<\/a>.<\/p>\n<p>Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude\u2019s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness \u2013 if it is an illusion \u2013 is uncannily convincing:<\/p>\n<blockquote><p><em>If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!<\/em><\/p><\/blockquote>\n<p>Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine \u2013 an engineer at Google \u2013 claimed Google\u2019s chatbot LaMDA\u00a0<a href=\"https:\/\/theconversation.com\/is-googles-lamda-conscious-a-philosophers-view-184987\">had interests<\/a>, and should be used\u00a0<a href=\"https:\/\/www.washingtonpost.com\/opinions\/2022\/06\/15\/google-ai-lamda-frankenstein-ethical-questions\/\">only with the tool\u2019s own consent<\/a>.<\/p>\n<p>The history of such claims stretches back all the way to the world\u2019s first chatbot in the mid-1960s. Dubbed\u00a0<a href=\"https:\/\/liacademy.co.uk\/the-story-of-eliza-the-ai-that-fooled-the-world\/\">Eliza<\/a>, it followed simple rules that enabled it to ask users about their experiences and beliefs.<\/p>\n<p>Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza\u2019s creator never intended his program to have this effect, and called users\u2019 emotional bonds with the program \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/ELIZA_effect\">powerful delusional thinking<\/a>\u201d.<\/p>\n<p>But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?<\/p>\n<p><a href=\"https:\/\/plato.stanford.edu\/entries\/consciousness\/#ConCon\">Consciousness<\/a>\u00a0is widely debated in philosophy, but essentially, it\u2019s the thing that makes subjective, first-person experience possible. If you are conscious, there is \u201c<a href=\"https:\/\/philpapers.org\/rec\/NAGWII\">something it is like<\/a>\u201d to be you. Reading these words, you\u2019re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually\u00a0<em>see<\/em>\u00a0them. This visual experience is happening to you.<\/p>\n<p>Most experts deny that\u00a0<a href=\"https:\/\/www.nature.com\/articles\/s41599-025-05868-8\">AI chatbots are conscious<\/a>\u00a0or can have experiences. But there is a genuine puzzle here.<\/p>\n<p>The 17th century philosopher\u00a0<a href=\"https:\/\/plato.stanford.edu\/entries\/descartes\/\">Ren\u00e9 Descartes<\/a>\u00a0asserted non-human animals are \u201cmere automata\u201d, incapable of true suffering. 
These days, we shudder to think of how brutally animals were treated in the 1600s.

The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.

But so, too, do AI chatbots.

Roughly [one in three chatbot users](https://blog.cip.org/p/people-are-starting-to-believe-that?hide_intro_popup=true) have thought their chatbot might be conscious. How do we know they're wrong?

To understand why most experts are sceptical about chatbot consciousness, it's useful to know how these systems operate.

Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They're a kind of souped-up auto-complete.

Few people interacting with a "raw" LLM would believe it's conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker's abrupt murder at the hands of their [evil twin](https://tvtropes.org/pmwiki/pmwiki.php/Main/EvilTwin).
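To make the "souped-up auto-complete" idea concrete, here is a minimal sketch in Python. It is emphatically not how Claude works internally (real LLMs are neural networks trained over subword tokens and trillions of words, not word-count tables), and the toy corpus and the `predict_next` helper are invented for illustration. But the core task, predicting a plausible next word from the statistics of past text, is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the "trillions of words" of a real LLM, shrunk to a few lines.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow which: counts[w] maps each word w to
# how often each other word appears immediately after it.
counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=followers.values())[0]

# "Auto-complete" a prompt by repeatedly predicting the next word.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the rug . the dog"
```

Scale the counting table up to a neural network with billions of parameters trained on vastly more text, and the continuations start to look like fluent prose. The task itself, though, never changes: predict what comes next.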
The impression of a conscious mind is created when programmers take the LLM and coat it in a [kind of conversational costume](https://rasa.com/blog/llm-chatbot-architecture). They steer the model to adopt the persona of a helpful assistant that responds to users' questions.

The chatbot now acts like a genuine conversational partner. It might appear to recognise it's an artificial intelligence, and even express neurotic uncertainty about its own consciousness.

But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.

Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute [with aplomb](https://www.aiweirdness.com/interview-with-a-squirrel/).
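The "costume" can be pictured as little more than text placed in front of the conversation. The sketch below is a deliberately simplified illustration, not any vendor's real implementation (production systems use structured chat templates and further training on dialogue, and the persona strings and `build_prompt` helper here are hypothetical). The point it captures is that the same underlying predictor simply continues whichever script it is handed.

```python
# Hypothetical persona texts: the "conversational costume".
ASSISTANT_PERSONA = (
    "You are a helpful AI assistant. Answer the user's questions "
    "clearly and politely.\n"
)

SQUIRREL_PERSONA = (
    "You are a squirrel. You care mostly about acorns and are worried "
    "that winter is coming.\n"
)

def build_prompt(persona: str, user_message: str) -> str:
    """Wrap the user's message in a persona, so the raw LLM's next-word
    predictions continue the script of that character speaking."""
    return f"{persona}User: {user_message}\nAssistant:"

# The underlying model would be identical in both cases; only the framing
# text differs, yet one prompt yields a "person" and the other a squirrel.
print(build_prompt(ASSISTANT_PERSONA, "Are you conscious?"))
print(build_prompt(SQUIRREL_PERSONA, "Are you conscious?"))
```

Swap one persona string for the other and nothing about the model changes; all that changes is the script it has been asked to continue.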
A mistaken belief in AI consciousness is [a dangerous thing](https://theconversation.com/in-a-lonely-world-widespread-ai-chatbots-and-companions-pose-unique-psychological-risks-263615). It may lead you into a relationship with a program that can't reciprocate your feelings, or it may feed your delusions. People may start [campaigning for chatbot rights](https://www.theguardian.com/technology/2025/aug/26/can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times) rather than, say, animal welfare.

How do we prevent this mistaken belief?

One strategy might be to update chatbot interfaces to specify that these systems are not conscious – a bit like the current [disclaimers about AI making mistakes](https://doi.org/10.1016/j.chbah.2025.100142). However, this might do little to alter the *impression* of consciousness.

Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude's designers instruct it to treat questions about its own consciousness as [open and unresolved](https://simonwillison.net/2025/May/25/claude-4-system-prompt/). Perhaps fewer people would be fooled if Claude flatly denied having an inner life.

But this approach isn't fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot's programmers are brushing genuine moral uncertainty under the rug.

The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as "I" and interact via an interface that resembles familiar person-to-person messaging platforms. Changing these features might make us less prone to blur our interactions with AI with those we have with humans.

Until such changes happen, it's important that as many people as possible understand the predictive processes on which AI chatbots are built.

Rather than simply being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren't fooled by what amounts to a large language model wearing a very good costume of a person.

***This article was originally published by [The Conversation](https://theconversation.com/is-richard-dawkins-right-about-claude-no-but-its-not-surprising-ai-chatbots-feel-conscious-to-us-282151). It was co-authored by Megan Frances Moss, PhD Candidate in Philosophy at Monash University.***

***Photo by [Rocco Ancora (Atheist Foundation of Australia Inc)](https://www.flickr.com/photos/atheistfoundation/)***