The Semantic Tree: Artificial Intelligence and Machine Learning



One bit of advice: it is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, ie the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to. 

— Elon Musk, Reddit AMA

Machine learning is one of many subfields of artificial intelligence, concerning the ways that computers learn from experience to improve their ability to think, plan, decide, and act.
 

Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience, and decision theory. Many fields fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing.

Machine learning is a subfield of artificial intelligence. Its goal is to enable computers to learn on their own. A machine's learning algorithm lets it identify patterns in observed data, build models that explain the world, and make predictions, all without relying on explicit, pre-programmed rules and models.
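
To make this concrete, here is a minimal sketch in Python of learning a pattern from data rather than hand-coding it. The housing numbers are invented purely for illustration, and ordinary least squares (via np.polyfit) stands in for the learning algorithm:

```python
import numpy as np

# Invented training data: living area (sq ft) -> sale price ($).
# Note that no pricing rule is ever written down by the programmer.
X = np.array([1000, 1500, 2000, 2500, 3000], dtype=float)
y = np.array([200_000, 280_000, 370_000, 450_000, 540_000], dtype=float)

# Fit a simple linear model y ~ w*x + b by least squares:
# the algorithm infers the pattern (w, b) from the observed data.
w, b = np.polyfit(X, y, deg=1)

# Use the learned model to predict the price of an unseen house.
print(f"learned parameters: w={w:.1f}, b={b:,.0f}")
print(f"predicted price for 1,800 sq ft: ${w * 1800 + b:,.0f}")
```

The essential point is that w and b come from the examples rather than from the programmer; everything from spam filters to neural networks scales this same idea up to far richer models.
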
The AI effect: what actually qualifies as “artificial intelligence”?
The exact standard for technology that qualifies as “AI” is a bit fuzzy, and interpretations change over time. The AI label tends to describe machines doing tasks traditionally in the domain of humans. Interestingly, once computers figure out how to do one of these tasks, humans have a tendency to say it wasn’t really intelligence. This is known as the AI effect.
For example, when IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, people complained that it was using “brute force” methods and wasn’t “real” intelligence at all. As Pamela McCorduck wrote, “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something — play good checkers, solve simple but relatively informal problems — there was a chorus of critics to say, ‘that’s not thinking’” (McCorduck, 2004).
Perhaps there is a certain je ne sais quoi inherent to what people will reliably accept as “artificial intelligence”:
"AI is whatever hasn't been done yet." - Douglas Hofstadter
So does a calculator count as AI? Maybe by some interpretation. What about a self-driving car? Today, yes. In the future, perhaps not. Your cool new chatbot startup that automates a flow chart? Sure… why not.
 

Strong AI will change our world forever; to understand how, studying machine learning is a good place to start

The technologies discussed above are examples of artificial narrow intelligence (ANI), which can effectively perform a narrowly defined task.

Meanwhile, we’re continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. An AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or… reprogramming itself.

And this last one is a big deal. Once we create an AI that can improve itself, it will unlock a cycle of recursive self-improvement that could lead to an intelligence explosion over some unknown time period, ranging from many decades to a single day. 
 
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. — I.J. Good, 1965 
You may have heard this point referred to as the singularity. The term is borrowed from the gravitational singularity that occurs at the center of a black hole, a point of theoretically infinite density where the laws of physics as we understand them break down.

A recent report by the Future of Humanity Institute surveyed a panel of AI researchers on timelines for AGI, and found that “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years” (Grace et al., 2017). We’ve personally spoken with a number of sane and reasonable AI practitioners who predict much longer timelines (the upper limit being “never”), and others whose timelines are alarmingly short — as little as a few years.

The advent of greater-than-human-level artificial superintelligence (ASI) could be one of the best or worst things to happen to our species. It carries with it the immense challenge of specifying what AIs will want in a way that is friendly to humans.
