{"id":14924,"date":"2024-10-23T19:16:20","date_gmt":"2024-10-23T08:16:20","guid":{"rendered":"https:\/\/rationalemagazine.com\/?p=14924"},"modified":"2024-10-23T20:49:07","modified_gmt":"2024-10-23T09:49:07","slug":"what-will-ai-really-mean-for-science","status":"publish","type":"post","link":"https:\/\/rationalemagazine.com\/index.php\/2024\/10\/23\/what-will-ai-really-mean-for-science\/","title":{"rendered":"What will AI really mean for science?"},"content":{"rendered":"<p>Artificial intelligence (AI) has taken centre stage in basic science. The five winners of the\u00a0<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-03310-8\">2024 Nobel Prizes in Chemistry and Physics<\/a>\u00a0shared a common thread: AI.<\/p>\n<p>Indeed, many scientists \u2013 including the Nobel committees \u2013 are celebrating AI as a force for transforming science. <a href=\"https:\/\/www.theguardian.com\/science\/2024\/oct\/09\/google-deepmind-scientists-win-nobel-chemistry-prize\">As one of the laureates<\/a>\u00a0put it, AI\u2019s potential for accelerating scientific discovery makes it \u201cone of the most transformative technologies in human history\u201d. But what will this transformation really mean for science?<\/p>\n<p>AI promises to help scientists do more, faster, with less money. But it brings a host of new concerns, too \u2013 and if scientists rush ahead with AI adoption, they risk transforming science into something that escapes public understanding and trust, and fails to meet the needs of society.<\/p>\n<p>Experts have\u00a0<a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07146-0\">already identified<\/a> at least three illusions that can ensnare researchers using AI. The first is the \u201cillusion of explanatory depth\u201d. Just because an AI model excels at predicting a phenomenon \u2014 like AlphaFold, whose developers won the Nobel Prize in chemistry for its predictions of protein structures \u2014 that doesn\u2019t mean it can accurately explain it. 
<a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/4vq8f\">Research in neuroscience<\/a>\u00a0has already shown that AI models designed for optimised prediction can lead to misleading conclusions about the underlying neurobiological mechanisms.<\/p>\n<p>The second is the \u201cillusion of exploratory breadth\u201d. Scientists might think they are investigating all testable hypotheses in their exploratory research, when in fact they are only looking at a limited set of hypotheses that can be tested using AI.<\/p>\n<p>Finally, there is the \u201cillusion of objectivity\u201d. Scientists may believe AI models are free from bias, or that they can account for all possible human biases. In reality, however, all AI models inevitably reflect the biases present in their training data and the intentions of their developers.<\/p>\n<p>One of the main reasons for AI\u2019s increasing appeal in science is its potential to produce more results, faster, and at a much lower cost.<\/p>\n<p>An extreme example of this push is the \u201c<a href=\"https:\/\/sakana.ai\/ai-scientist\/\">AI Scientist<\/a>\u201d machine recently developed by Sakana AI Labs. The company\u2019s vision is to develop a \u201cfully AI-driven system for automated scientific discovery\u201d, where each idea can be turned into a full research paper for just US$15 \u2013 though critics said the system produced \u201c<a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/08\/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime\/\">endless scientific slop<\/a>\u201d.<\/p>\n<p>Do we really want a future where research papers can be produced with just a few clicks, simply to \u201caccelerate\u201d the production of science? 
This risks inundating the scientific ecosystem with\u00a0<a href=\"https:\/\/theconversation.com\/a-new-ai-scientist-can-write-science-papers-without-any-human-input-heres-why-thats-a-problem-237029\">papers with no meaning or value<\/a>, further straining an already overburdened peer-review system.<\/p>\n<p>We might find ourselves in a world where science, as we once knew it, is buried under the noise of AI-generated content.<\/p>\n<p>The rise of AI in science comes at a time when public trust in science and scientists\u00a0<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00420-1\">is still fairly high<\/a>, but we can\u2019t take it for granted. Trust is complex and fragile.<\/p>\n<p>As we learned during the COVID pandemic, calls to \u201c<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9934084\/\">trust the science<\/a>\u201d can fall short because scientific evidence and computational models are often contested, incomplete, or open to various interpretations.<\/p>\n<p>However, the world faces any number of problems, such as climate change, biodiversity loss, and social inequality, that require public policies crafted with expert judgement. This judgement must also be sensitive to specific situations, gathering input from various disciplines and lived experiences, interpreted through the lens of local culture and values.<\/p>\n<p>As an\u00a0<a href=\"https:\/\/council.science\/wp-content\/uploads\/2024\/04\/TheContextualizationDeficit_Nov2023.pdf\">International Science Council report<\/a>\u00a0published last year argued, science must recognise nuance and context to rebuild public trust. 
Letting AI shape the future of science may undermine hard-won progress in this area.<\/p>\n<p>If we allow AI to take the lead in scientific inquiry, we risk creating a\u00a0<a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07146-0\">monoculture of knowledge<\/a>\u00a0that prioritises the kinds of questions, methods, perspectives and experts best suited for AI.<\/p>\n<p>This can move us away from the\u00a0<a href=\"https:\/\/www.nature.com\/articles\/d41586-019-01251-1\">transdisciplinary approach<\/a>\u00a0essential for responsible AI, as well as the nuanced public reasoning and dialogue needed to tackle our social and environmental challenges.<\/p>\n<blockquote><p><strong>The rise of AI in science comes at a time when public trust in science and scientists\u00a0is still fairly high, but we can\u2019t take it for granted. Trust is complex and fragile.<\/strong><\/p><\/blockquote>\n<p>As the 21st century began, some argued scientists had a\u00a0<a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.279.5350.491\">renewed social contract<\/a>\u00a0in which they focus their talents on the most pressing issues of our time in exchange for public funding. The goal is to help society move toward a more sustainable biosphere \u2013 one that is ecologically sound, economically viable and socially just.<\/p>\n<p>The rise of AI presents scientists with an opportunity not just to fulfil their responsibilities but to revitalise the contract itself. However, scientific communities will need to address some\u00a0<a href=\"https:\/\/essopenarchive.org\/doi\/full\/10.22541\/essoar.171136837.71755629\">important questions about the use of AI<\/a>\u00a0first.<\/p>\n<p>For example, is using AI in science a kind of \u201coutsourcing\u201d that could compromise the integrity of publicly funded work? How should this be handled?<\/p>\n<p>What about the\u00a0<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-00478-x\">growing environmental footprint of AI<\/a>? 
And how can researchers remain aligned with society\u2019s expectations while integrating AI into the research pipeline?<\/p>\n<p>The idea of transforming science with AI without first establishing this social contract risks putting the cart before the horse.<\/p>\n<p>Letting AI shape our research priorities without input from diverse voices and disciplines can lead to a mismatch with what society actually needs and result in poorly allocated resources.<\/p>\n<p>Science should benefit society as a whole. Scientists need to engage in real conversations about the future of AI within their community of practice and with research stakeholders. These discussions should address the dimensions of this renewed social contract, reflecting shared goals and values.<\/p>\n<p>It&#8217;s time to actively explore the various futures that AI for science enables or blocks \u2013 and establish the necessary standards and guidelines to harness its potential responsibly.<\/p>\n<p><em><strong>This article was originally published in <\/strong><\/em><a href=\"https:\/\/theconversation.com\/ai-is-set-to-transform-science-but-will-we-understand-the-results-241760\"><strong>The Conversation<\/strong><\/a><em><strong>.<\/strong><\/em><\/p>\n<p><em><strong>Photo: Shutterstock.<\/strong><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) has taken centre stage in basic science. 
The five winners of the\u00a02024 Nobel Prizes in Chemistry and<\/p>\n","protected":false},"author":777,"featured_media":14930,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[65],"tags":[562,371],"coauthors":[730],"class_list":["post-14924","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-science-health","tag-artificial-intelligence","tag-science"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/14924","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/users\/777"}],"replies":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=14924"}],"version-history":[{"count":5,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/14924\/revisions"}],"predecessor-version":[{"id":14932,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/posts\/14924\/revisions\/14932"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media\/14930"}],"wp:attachment":[{"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=14924"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/categories?post=14924"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/tags?p
ost=14924"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/rationalemagazine.com\/index.php\/wp-json\/wp\/v2\/coauthors?post=14924"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}