{"id":15240,"date":"2025-02-03T23:35:44","date_gmt":"2025-02-03T12:35:44","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=15240"},"modified":"2025-02-05T08:40:31","modified_gmt":"2025-02-04T21:40:31","slug":"how-psychologists-kick-started-ai-by-studying-the-human-mind","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2025\/02\/03\/how-psychologists-kick-started-ai-by-studying-the-human-mind\/","title":{"rendered":"How psychologists kick-started AI by studying the human\u00a0mind"},"content":{"rendered":"<p class=\"theconversation-article-title\">Many people think of psychology as being primarily about mental health, but its story goes far beyond that. As the science of the mind, psychology has played a pivotal role in shaping artificial intelligence, offering insights into human cognition, learning and behaviour that have profoundly influenced AI\u2019s development.<\/p>\n<div class=\"theconversation-article-body\">\n<p>These contributions not only laid the foundations for AI but also continue to guide its future development. The study of psychology has shaped our understanding of what constitutes intelligence in machines, and how we can address the complex challenges and benefits associated with this technology.<\/p>\n<p>The origins of modern AI can be traced back to psychology in the mid-20th century. In 1949, psychologist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Donald_O._Hebb\">Donald Hebb<\/a> proposed a model for how the brain learns: connections between brain cells grow stronger when they are active at the same time. This idea gave a hint of how machines might learn by mimicking nature\u2019s approach.<\/p>\n<p>In the 1950s, psychologist Frank Rosenblatt <a href=\"https:\/\/doi.org\/10.1037\/h0042519\">built on Hebb\u2019s theory<\/a> to develop a system called the <a href=\"https:\/\/news.cornell.edu\/stories\/2019\/09\/professors-perceptron-paved-way-ai-60-years-too-soon\">perceptron<\/a>. 
The perceptron was the <a href=\"https:\/\/americanhistory.si.edu\/collections\/object\/nmah_334414\">first artificial neural network<\/a> ever made. It ran on the same principle as modern AI systems, in which computers learn by adjusting connections within a network based on data rather than relying on programmed instructions.<\/p>\n<p>In the 1980s, psychologist <a href=\"https:\/\/en.wikipedia.org\/wiki\/David_Rumelhart\">David Rumelhart<\/a> improved on Rosenblatt\u2019s perceptron. He applied a method called <a href=\"https:\/\/wiki.pathmind.com\/backpropagation\">backpropagation<\/a>, which uses principles of calculus to help neural networks improve through feedback.<\/p>\n<p>Backpropagation was originally developed by Paul Werbos, who <a href=\"https:\/\/www.google.com.au\/books\/edition\/The_Roots_of_Backpropagation\/WdR3OOM2gBwC?hl=en&amp;gbpv=0\">said<\/a> the technique \u201copens up the possibility of a scientific understanding of intelligence, as important to psychology and neurophysiology as Newton\u2019s concepts were to physics\u201d.<\/p>\n<p>Rumelhart\u2019s 1986 <a href=\"https:\/\/www.nature.com\/articles\/323533a0\">paper<\/a>, coauthored with Ronald Williams and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Geoffrey_Hinton\">Geoffrey Hinton<\/a>, is often credited with sparking the modern era of artificial neural networks. This work laid the foundation for deep learning innovations such as large language models.<\/p>\n<p>In 2024, the Nobel Prize for Physics was awarded to Hinton and John Hopfield for work on artificial neural networks. 
Notably, the Nobel committee, in its <a href=\"https:\/\/www.nobelprize.org\/uploads\/2024\/11\/advanced-physicsprize2024-3.pdf\">scientific report<\/a>, highlighted the crucial role psychologists played in the development of artificial neural networks.<\/p>\n<p>Hinton, who holds a degree in psychology, <a href=\"https:\/\/www.utoronto.ca\/news\/his-words-geoffrey-hinton-reflects-his-nobel-prize-win\">acknowledged<\/a> standing on the shoulders of giants such as Rumelhart when receiving his prize.<\/p>\n<p>Psychology continues to play an important role in shaping the future of AI. It offers theoretical insights to address some of the field\u2019s biggest challenges, including reflective reasoning, intelligence and decision-making.<\/p>\n<p>Microsoft co-founder Bill Gates recently <a href=\"https:\/\/www.fastcompany.com\/91150606\/bill-gates-ai-superintelligence\">pointed out<\/a> a key limitation of today\u2019s AI systems. They can\u2019t engage in reflective reasoning, or what psychologists call metacognition.<\/p>\n<p>In the 1970s, developmental psychologist <a href=\"https:\/\/doi.org\/10.1037\/0003-066X.34.10.906\">John Flavell<\/a> introduced the idea of metacognition. 
He used it to explain how children master complex skills by reflecting on and understanding their own thinking.<\/p>\n<p>Decades later, this psychological framework is <a href=\"https:\/\/arxiv.org\/abs\/2411.02478\">gaining attention<\/a> as a potential pathway to advancing AI.<\/p>\n<p>Psychological theory is increasingly being applied to improve AI systems, particularly by enhancing their capacity for solving novel problems.<\/p>\n<p>For instance, computer scientist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fran%C3%A7ois_Chollet\">Fran\u00e7ois Chollet<\/a> highlights the importance of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fluid_and_crystallized_intelligence\">fluid intelligence<\/a>, which psychologists define as the ability to solve new problems without prior experience or training.<\/p>\n<p>In a <a href=\"https:\/\/arxiv.org\/abs\/1911.01547\">2019 paper<\/a>, Chollet introduced a test inspired by principles from cognitive psychology to measure how well AI systems can handle new problems. The test \u2013 known as the <a href=\"https:\/\/arcprize.org\/arc\">Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI)<\/a> \u2013 provided a kind of guide for making AI systems think and reason in more human-like ways.<\/p>\n<p>In late 2024, OpenAI\u2019s o3 model demonstrated <a href=\"https:\/\/www.bloomberg.com\/features\/2025-sam-altman-interview\/\">notable success<\/a> on Chollet\u2019s test, showing progress in creating AI systems that can adapt and solve a wider range of problems.<\/p>\n<p>Another goal of current research is to make AI systems better able to explain their output. Here, too, psychology offers valuable insights.<\/p>\n<p>Computer scientist <a href=\"https:\/\/www.researchgate.net\/publication\/376515428_Deep_Neural_Networks_Explanations_and_Rationality\">Edward Lee<\/a> has drawn on the work of psychologist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Daniel_Kahneman\">Daniel Kahneman<\/a> to highlight why requiring AI systems to explain themselves might be risky.<\/p>\n<p>Kahneman showed how humans often justify their decisions with explanations created after the fact, which don\u2019t reflect their true reasoning. For example, <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/21482790\/\">studies<\/a> have found that judges\u2019 rulings fluctuate depending on when they last ate \u2014 <a href=\"https:\/\/doi.org\/10.1111\/1468-2230.12424\">despite their firm belief in their own impartiality<\/a>.<\/p>\n<p>Lee cautions that AI systems could produce similarly misleading explanations. Because rationalisations can be deceptive, Lee argues AI research should focus on reliable outcomes instead.<\/p>\n<p>The science of psychology remains widely misunderstood. 
In 2020, for example, the Australian government proposed <a href=\"https:\/\/www.theguardian.com\/australia-news\/2020\/aug\/26\/university-fee-rises-nationals-deal-psychology-social-work\">reclassifying it as part of the humanities<\/a> in universities.<\/p>\n<p>As people increasingly interact with machines, the fields of AI, psychology and neuroscience may hold key insights into our future.<\/p>\n<p>Our brains are extremely adaptable, and technology shapes how we think and learn. <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC18253\/\">Research<\/a> by <a href=\"https:\/\/www.bps.org.uk\/psychologist\/royal-society-fellowship\">psychologist<\/a> and neuroscientist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Eleanor_Maguire\">Eleanor Maguire<\/a>, for example, revealed that the brains of London taxi drivers are physically altered by navigating a complex city.<\/p>\n<p>As AI advances, future psychological research may reveal how AI systems enhance our abilities and unlock new ways of thinking. By recognising psychology\u2019s role in AI, we can foster a future in which people and technology work together for a better world.<\/p>\n<p><em><strong>The article was co-authored by\u00a0Chris Ludlow, Lecturer in Psychology, <a href=\"https:\/\/theconversation.com\/institutions\/swinburne-university-of-technology-767\">Swinburne University of Technology.<\/a><\/strong><\/em><\/p>\n<p><em><strong>This article is republished from <\/strong><\/em><strong><a href=\"https:\/\/theconversation.com\">The Conversation<\/a><\/strong><em><strong> under a Creative Commons license. Read the <a href=\"https:\/\/theconversation.com\/how-psychologists-kick-started-ai-by-studying-the-human-mind-248542\">original article<\/a>.<\/strong><\/em><\/p>\n<p><em><strong>Image by <a href=\"https:\/\/unsplash.com\/photos\/a-computer-circuit-board-with-a-brain-on-it-_0iV9LmPDn0\">Steve Johnson<\/a> on Unsplash.<\/strong><\/em><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Many people think of psychology as being primarily about mental health, but its story goes far beyond that. 
As the<\/p>\n","protected":false},"author":795,"featured_media":15248,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[63],"tags":[562],"coauthors":[751],"class_list":["post-15240","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-philosophy","tag-artificial-intelligence"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/15240","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/users\/795"}],"replies":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=15240"}],"version-history":[{"count":7,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/15240\/revisions"}],"predecessor-version":[{"id":15246,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/15240\/revisions\/15246"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media\/15248"}],"wp:attachment":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=15240"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/categories?post=15240"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/tags?post=15240"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/rational
emagazine.com\/index.php\/wp-json\/wp\/v2\/coauthors?post=15240"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}