René Descartes’ interests extended to many diverse subjects, and one of the most striking was machine thinking. That he was considering this in the early 17th century is remarkable.
When Descartes talks of “thinking machines” in the Discourse on Method, and of machines elsewhere in the Cartesian corpus, he shows that he had, in effect, a theory of artificial intelligence.
It is a theory that he quickly refutes on the basis that reasoning is a uniquely human characteristic and what ultimately distinguishes us from godless brutes, animals and, by implication, machines. Hence, the possibility of “thinking machines” is considered and promptly discounted.
The term itself is held to be a misnomer in a classic case of a straw-man argument: Descartes takes on his own argument, distorts it by placing God at its centre, and then attacks it, as if that were really the claim he was making all along.
A century later, the viability of thinking machines was once again a subject of deep philosophical interest. A historic battle was taking shape in Europe. Enlightened, progressive, objective reason (science) continually advanced against blind ignorance, superstition, and prejudice (religion), with Galileo’s trial and condemnation as the central illustration.
In 1739, the Scottish Enlightenment philosopher David Hume published his Treatise of Human Nature, whose first book he would later recast as An Enquiry Concerning Human Understanding. In it, he proposed a model that supplies the epistemological underpinnings for machine learning – one whose application is significant in our current artificial intelligence endeavours.
Descartes, along with Spinoza and Leibniz, is usually credited with laying the groundwork for the Enlightenment movement.
One of the reasons Descartes is regarded as one of the great Western philosophers is that his works contain a rich trove of valuable insights into how one’s individuality comes into being and, more broadly, into human nature understood as a relationship between the mind and an outer objective reality. This is borne out in his famous dictum, “cogito ergo sum”, or “I think, therefore I am”, set out in the Discourse on the Method and developed in the Second Meditation.
The claim is this: regardless of how we exist, we still exist. If you think, then you are a thinking thing that exists in some way.
It is a foundational knowledge claim that cannot be refuted, even in the face of radical doubt. It is a first step in demonstrating man’s unique ability to attain certain knowledge and to ‘know’ a priori, which is learning without sense experience – something machines are said to be incapable of.
From the beginning of the Discourse on the Method, Descartes makes claims about what it means to be human. For example, in only the second paragraph he writes:
“…as regards reason or sense, inasmuch as it is the only thing that makes us men and distinguishes us from brutes, I should like to hold that it is to be found complete in each of us.”
Here, Descartes is making two claims about reason: first, that it is a uniquely human characteristic, not shared by animals or other things; second, that it is a characteristic all humans share.
He later expands this claim to say that what distinguishes us from “other brutes, animals, and machines is in our ability to reason, to develop [adaptive] skills and use language.” This claim sets animals and machines even further away from man.
Yet Descartes, too, was a product of the orthodoxy of his time – one that had by now reached an existential crossroads.
Early Christian theology adopted a somewhat equivocal attitude toward science. On the one hand, the scriptures viewed the universe as an orderly and purposive realm, originally created by God as ‘good’ and reflecting His nature, above all in man as a rational being made in the image and likeness of God.
On the other hand, the scriptures also depicted the creation as fallen and distorted by sin, with the New Testament emphasising the renunciation of worldly concerns for heavenly ones as vital to personal redemption.
This implied that curiosity about nature should be a matter of relative indifference to the believer, lest it become a snare to sin by distracting attention away from spiritual devotion to God and the pursuit of salvation.
But what is philosophy if not a polysemous endeavour, shaped by its traditions and its knowledge as it ebbs and flows between abstract speculation and the concrete problems of its time?
Descartes’ error, in my opinion – the belief that animals have no ability to reason, and that they are basically complex physical machines without experiences – is second only to the error of accepting the existence of God as being likely to turn “weak characters from the strait way of virtue”.
In 1739, we find David Hume refuting Descartes’ thinking. In fact, Hume’s entire philosophical project can be seen as a refutation of the rationalism that Descartes so systematically deploys. He confutes Descartes by name and, later, rejects all of his principal assumptions, thereby removing any warrant for his conclusions.
Hume’s Enquiry Concerning Human Understanding explicitly rejects Descartes’ foundationalism as illusory. And, for good measure, he posits that even if a foundational proposition could be identified, the Evil Genius doubt would make it impossible to infer anything from it, as it renders our cognitive functions unreliable. Descartes’ systematic doubt is entirely incurable in Hume’s estimation.
Hume’s main difference with Descartes is driven by his beliefs about knowledge itself: his empiricist epistemology.
“No truth appears to me to be more evident than that beasts are endow’d with thought and reason as well as man. When a dog avoids fire and strangers, but caresses his master” – the dog, for example, relies on its senses and memory and draws inferences from experience in the same way we do.
In the same year, he compared humans to “induction machines” – a remarkably modern concept for the time, and one that today’s machine learning programs are built upon.
As mentioned, there are two ways of ‘knowing’ – a priori, which is learning without sense experience, and a posteriori, which is learning that comes from experience. Knowledge of the former kind is considered “a self-evident truth”, a notion famously invoked by Thomas Jefferson in the Declaration of Independence.
However, as David Hume noted, such self-evident truths are usually statements of bias. Essentially, self-evident statements are statements of faith or desire.
Most religions are based on creeds, which are considered true statements derived from revelation. As such, they cannot necessarily be verified by analysis or empirical evidence; they can only be accepted or rejected.
Once the basic creed is accepted, one can apply logic to see if the laws that are derived are logically consistent with the creed. In other words, the laws are deduced from the initial creed.
A posteriori learning proceeds by gathering individual instances and generalising rules or laws from these observations. Most science is done by induction: scientists observe events, postulate a cause and try to repeat the process through experiments. If the process is repeatable, the postulate thought to explain the event can become a theory.
As our epistemic ambitions grow, common and scientific endeavours are becoming increasingly dependent on machine learning.
The field rests on a single experimental paradigm: the available data are split into a training set and a testing set, and the latter is used to measure how well the trained machine-learning model generalises to unseen samples. If the model reaches acceptable accuracy, then an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments.
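To make the paradigm concrete, here is a minimal sketch in Python. The dataset, the model and the accuracy threshold are illustrative assumptions chosen for the sketch (it leans on the scikit-learn library), not anything prescribed by the argument above.

```python
# A minimal sketch of the train/test paradigm described above.
# The dataset, model and accuracy threshold are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Gather observed instances (a posteriori data).
X, y = load_iris(return_X_y=True)

# Split the available data into a training set and a held-out testing set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Induce a model from the training observations.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Use the testing set to estimate generalisation to unseen samples.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")

# The 'a posteriori contract': deploy only if accuracy seems acceptable.
# Nothing here guarantees the model will behave the same 'elsewhere',
# i.e. on data drawn from a different environment.
if accuracy >= 0.9:  # an arbitrary, assumed threshold
    print("Contract accepted: model considered fit for deployment.")
```

Even in this small example, the held-out accuracy is only an inductive estimate of how the model will behave on data it has never seen.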
Yet, the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained machine-learning model and the targets.
A philosopher will, nevertheless, ask whether and how we can justify the contract between human and machine learning. It can be argued that the justification becomes a pressing issue when we use machine learning to reach ‘elsewhere’ in space and time, or deploy machine learning models in non-benign environments.
Some claim machine learning will enable computers to learn from experience without being specifically programmed. That I’d like to see.