Demystifying Artificial Intelligence for Everyone: How did we get here?


An artist’s impression of Talos, the robot from Greek mythology.


“I think it is not true that machines would spontaneously become conscious; either evil or intrinsically against human beings… except sort of by accident.”
— Stuart Russell (b. 1962), professor and author of “Artificial Intelligence: A Modern Approach”.

For anyone who doesn’t live in a hole in the ground, the term Artificial Intelligence (or AI) conjures an image more like the skinless, metallic face of The Terminator than C-3PO. Doomsday predictions have grown especially loud since 2015, when Stephen Hawking, Steve Wozniak, Elon Musk, and scores of other technologists and computer scientists signed an open letter warning against “losing control” over powerful machines of the future.

Wozniak has reportedly resigned himself to being a robot’s pet. Musk has recently invested a billion dollars in a non-profit company dedicated to the research and development of “ethical AI”. He half-jokes about making Mars habitable while there is still time.


An ancient question

Opinions differ on whether these alarms are hyperbolic. What is certain is that if scientists do succeed in creating an artificial intelligence akin to The Terminator or Skynet, it would be the denouement of a collective human effort that began millennia ago.

Many a groundbreaking achievement of science is the culmination of our quest to understand, harness, or conquer Nature. One such enduring ambition has been to create a machine that could rival humans intellectually or, as the Greeks fancied, even embody the wisdom of gods. Ancient Greek myths speak of Hephaestus, the god of blacksmiths and artisans, who could place mind inside matter and create sentient robots.

Through the mediaeval ages, with gradual mastery over mechanical springs, hydraulics, and steam power, these automatons stepped out of myth and into the real world. Artisans and engineers began to produce mechanical robots in the shapes of animals and humans. Metallic lions could roar and wag their tails, ornate birds could chirp and swing on branches, and humanoids could beat drums and shoot arrows.
A drawing created by Henri Maillardet’s automaton, ca. 1810. Image source: Wikimedia Commons

By the 18th century, these robots could be programmed mechanically, much as one can program the tracks of a toy train set, except with far more sophistication. Humanoid automatons could play musical instruments, write predetermined text on paper, and even fill in as extras in stage plays. They became the “boys’ toys” of the post-Renaissance period. Automatons were marvels of their age, but nowhere near as creative or thoughtful as humans.


The study of thought, meanwhile, has long been the realm of philosophers. Alongside the developments in engineering, philosophy flourished almost as if in a parallel universe. Philosophers devoted themselves to abstract investigations into the nature of thought, consciousness, language, and physical truths. In those early days, with limited scientific implements at hand, philosophical arguments were the standard mechanism for advancing “science”.

The Nyaya and Buddhist schools of Indian philosophy trace their origins to the 5th century BCE. That was when Panini laid down the rules of Sanskrit grammar to codify language, the very medium of knowledge and reason. Europe, meanwhile, was heavily influenced by Greek philosophy as championed by Aristotle in the 4th century BCE. These schools of philosophy developed rules that allowed philosophers to reason unambiguously about the nature of things.

Since natural languages like Greek and Latin were too vague to be contained within these rules, the philosophers developed their own languages of philosophy. These were precise and unambiguous languages. They let scholars begin at a mutually agreeable set of truths and derive new claims (or truths) in a structured manner. Learned men, and rarely women, would conduct experiments and observations in private and, whenever they were ready to propound a new theory, would deploy such structured arguments to defeat their opponents and convince the authorities. This is how philosophers introduced rigour into abstract studies.

Languages for machines

There were, of course, mathematicians who preceded and succeeded both Panini and Aristotle. Mathematical theorems and proofs were well known, as they are now, for their rigour and permanence. But mathematics was not considered rich and mature enough to capture all the truth there was to be known.


Mathematics, although more rigorous than philosophy, was considered incapable of representing the natural world and reasoning about it. Philosophy, on the other hand, with its own set of deductive rules, was not rigorous enough to establish perpetual, incontrovertible truths. This situation was lost on neither mathematicians nor philosophers. Gottfried Leibniz, the 17th-century German polymath, envisioned a Calculus of Reason.

This calculus would essentially be a language so expressive and so precise that human thought could be generated through algorithms. Leibniz hoped that such a calculus would allow disagreeing parties to come together and simply say, “Gentlemen, let’s compute!” Clearly, if one could develop such a calculus, then all human knowledge and reasoning could be codified. In other words, we would be able to reproduce human intellect from a bunch of self-evident, natural truths and a set of rules written on pieces of paper. We would create Artificial Intelligence.
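Leibniz’s calculus was never realised, but its spirit survives in modern rule-based reasoning. As a purely illustrative sketch (the facts and rules below are hypothetical examples, not Leibniz’s own notation), a few lines of Python can derive every consequence of a set of agreed truths by repeatedly applying modus ponens:

```python
def compute_truths(facts, rules):
    """Derive all consequences of `facts` by repeatedly applying
    modus ponens: if A is known and the rule A -> B exists, conclude B."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# "Gentlemen, let us compute!" -- a hypothetical starting truth and rules.
facts = {"socrates is a man"}
rules = [("socrates is a man", "socrates is mortal"),
         ("socrates is mortal", "socrates will die")]
print(compute_truths(facts, rules))
```

The loop halts once no rule adds anything new, so disagreeing parties starting from the same facts and rules must arrive at the same set of conclusions.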


Turn Crank to Compute: the Difference Engine, the earliest computing machine, was inspired by Leibniz’s Calculus of Reason. Designed ca. 1850, built 2002. Image source: Wikipedia

The two worlds of philosophy and mathematics came together in late-19th-century Europe. The outcome was what we recognise today as mathematical logic. For the first time, there was hope of a framework to describe and compute facts of the natural as well as the mathematical world. Mathematical logic enabled a mathematician to fix a starting point, i.e. an initial state of the world of one’s choosing, and describe the claim he or she was interested in proving or disproving. It then provided all the tools necessary to systematically compute an indisputable conclusion.

The framework of logic was broad enough to allow physical objects to be represented as mathematical entities, and it seemed generic enough to codify nearly all of mathematics under one umbrella. More importantly, scholars hoped that codifying intellect would be just a step away. Researchers love to study logic because it isn’t a single monolithic framework; rather, it is a collection of numerous logical systems with different expressive powers to serve different needs.

For example, basic operations like addition and subtraction on the natural numbers (i.e. 0, 1, 2, …) can be described by choosing a relatively simple logical system. But that system would fail when trying to subtract 2 from 1. No worries! Should that be important, one could choose another logical system containing both negative and positive numbers, and voilà! Leibniz’s dream appeared within grasp and, at the turn of the 20th century, serious efforts were beginning to gain steam.
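A toy illustration of how the choice of system matters, sketched in Python rather than formal logic: within the natural numbers, “subtract 2 from 1” asks for a natural n with n + 2 = 1, and no such n exists. Enlarging the system to the integers makes the very same question answerable.

```python
def natural_subtract(a, b, limit=1000):
    """Subtraction as it exists inside the natural numbers:
    find a natural n with n + b == a, or report that none exists."""
    for n in range(limit):
        if n + b == a:
            return n
    return None  # the question has no answer in this system

print(natural_subtract(5, 2))   # a natural answer exists: 3
print(natural_subtract(1, 2))   # None: the question outgrows the system

# In the richer system of integers, the same question is answerable:
print(1 - 2)                    # -1
```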

Logic crumbles…

The German mathematician — and it is so often a German! — David Hilbert threw down the gauntlet in 1900 by posing 23 problems that mathematics must answer before it could call itself the purest of sciences. Among them was the goal of building the foundations of mathematics on logic.

In mathematics there is no ignorabimus. If a question can be posed rigorously, then it must be possible to compute its answer using a sequence of logical steps.

The Englishman Bertrand Russell took up this challenge. He strove to prove everything that could be expressed in a rigorous manner. In his view, there ought to be no axioms or ‘natural truths’ accepted at face value; each and every statement was open to questioning. For example, he spent 362 pages formally proving the statement “1 + 1 = 2”. But he soon discovered a way of describing paradoxes in logic. Russell’s Paradox is the logical equivalent of the barber’s paradox: “In a town, every man must be clean-shaven. The barber shaves all those, and only those, who do not shave themselves. Does the barber shave himself?”
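The contradiction can even be checked mechanically. A small sketch in Python enumerates both possible answers to “does the barber shave himself?” and tests each against the rule; neither is consistent, so no such barber can exist:

```python
def consistent(barber_shaves_himself):
    """Test an assumed answer against the rule: the barber shaves
    exactly those who do not shave themselves (applied to the barber)."""
    rule_says = not barber_shaves_himself
    return barber_shaves_himself == rule_says

print(consistent(True))   # False: "he shaves himself" violates the rule
print(consistent(False))  # False: "he doesn't" violates it too
```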

For all the twenty years that he devoted to them, Russell could not come up with a logical system that was free from paradoxes. Although he did not see it, at the root of Russell’s repeated failures was one axiom, one natural truth, which everyone had accepted at face value: “If a question can be posed rigorously, then it must be possible to compute its answer.”

The Austrian prodigy Kurt Gödel demonstrated, in 1931, that mathematical logic is powerful enough to express questions that are impossible to answer. He did not mean that there are questions whose answers we do not yet know, nor that there are questions we have yet to discover. He meant, to utter disbelief, that there are questions which cannot be answered: we could throw at them all the deductions and operations allowed within the system and yet never compute a solution.

It seemed at the time that the edifice of mathematics had come crashing down. As far as our story of AI is concerned, the import of Gödel’s discovery can be roughly stated in modern terms as follows.

Machines require a logical system in order to “think”. But given a sufficiently expressive logical system (and it doesn’t take much), one can ask questions whose answers cannot be computed by any algorithm or any machine. Ever.
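To give a flavour of how such impossibility is proved, here is a toy abstraction of the diagonal argument (a sketch, not a real decision procedure): suppose an oracle claimed to predict whether any program halts. A “diagonal” program that does the opposite of whatever the oracle predicts about it defeats every candidate oracle.

```python
def oracle_fails_on_diagonal(oracle):
    """`oracle` claims to predict, for any program, whether it halts.
    The diagonal program consults the oracle about itself and then does
    the opposite, so the oracle's verdict on it is always wrong."""
    prediction = oracle("diagonal")   # oracle's verdict on the diagonal program
    actual = not prediction           # the diagonal program defies the verdict
    return prediction != actual       # the oracle is mistaken here, always

# No matter which way a candidate oracle answers, it fails:
for oracle in (lambda program: True, lambda program: False):
    print(oracle_fails_on_diagonal(oracle))  # True both times
```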

It turned out that Hilbert’s goal was unattainable, and Leibniz’s Calculus of Reason untenable, at least in the grand form in which he had originally described it. For if there are unanswerable questions, then no such system can settle every dispute.

… and pieces itself back

What makes the British mathematician Alan Turing immortal are the questions he asked next. He wondered whether one could at least find a logical system that is as powerful as possible without the pitfall that Gödel described, and whether such a system would be any good.

In 1936, he conceptualised one such system. More importantly, he designed a machine that could compute the answer to any question describable in this system by “executing” an algorithm; the execution is guaranteed to terminate with nothing but the correct answer. With his Turing Machine, he gave us the precursor to the present-day computer.


A model of the Turing Machine with memory tape (on the spools) and program executor (in the middle). Every modern computer is essentially a “fancier” form of this machine. Image source: Wikipedia
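Although Turing’s 1936 machine was a pen-and-paper construction, its essentials fit in a few lines of Python. The sketch below (with a made-up example machine, not one from Turing’s paper) reads a symbol from the tape, consults a transition table, writes, moves, and repeats until it halts:

```python
def run_turing_machine(transitions, tape, state="run", head=0, blank="_"):
    """Minimal Turing machine. `transitions` maps (state, symbol) to
    (new_state, symbol_to_write, move), where move is -1, 0, or +1."""
    tape = dict(enumerate(tape))  # a sparse, unbounded tape
    while state != "halt":
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A toy machine that inverts every bit, then halts at the first blank.
INVERT = {
    ("run", "0"): ("run", "1", +1),
    ("run", "1"): ("run", "0", +1),
    ("run", "_"): ("halt", "_", 0),
}
print(run_turing_machine(INVERT, "1011"))  # prints 0100
```

Everything a modern computer does reduces, in principle, to loops of exactly this read-consult-write-move shape.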

Finally, in 1948, mathematicians had a computing machine that could impart “reason” to dumb automatons: an incarnation of the Calculus of Reason, albeit not as powerful as Leibniz and others had hoped. We can all attest today that the computer is an incredibly powerful and useful machine. Still, is the human intellect “computable” within the logical system of this machine?

Turing was optimistic, and proposed experiments which might one day demonstrate that machines can compete with humans in all intellectual tasks. In his landmark 1950 paper, he asked, “Can machines think?” More than two millennia after the germinal ideas, humans were at last in the best possible position to find out.
Source: This article was originally published on Medium by Namit Chaturvedi.
Namit Chaturvedi holds a PhD in theoretical computer science. He is a practitioner and student of AI and ML, and is also interested in science, the history of science, education, and startups.
