The Technologies Behind Artificial Intelligence
Learn about the powerful algorithms that enable artificial intelligence (AI) to interpret a wide range of data and perform complicated tasks. Explore AI approaches such as machine learning and unsupervised learning that allow systems to teach themselves, predict the unknown and defeat champion gamers.
- Computer vision
- Speech recognition
- Reinforcement learning
- Supervised learning
Like a carpenter choosing the right tool for the job, the technologists who build artificial intelligence (AI) systems face many options for how to develop these versatile, sophisticated creations.
Useful data, robust software and capable hardware are certainly important, but those elements are secondary to knowing the problem you want to solve. A clearly defined goal is key to determining which AI approach is most appropriate for your task.
Sometimes your solution will only require the output of a recommendation based on a fairly fixed dataset and a set of logical rules written by human programmers. This kind of AI—formally known as symbolic AI and nicknamed “Good Old-Fashioned AI” (GOFAI)—has been the main type of AI in use over the past 50 years. GOFAI systems are not designed to learn or otherwise adjust their programming; all they do is rapidly deliver an answer to a question.
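A GOFAI system of this kind can be sketched in a few lines. The rules and recommendation below are hypothetical, chosen only to show the fixed if-then logic: the program never learns, it simply applies the rules its programmers wrote and delivers an answer.

```python
# A minimal sketch of a GOFAI-style rule-based recommender.
# Every rule is hand-written; the system's behavior never changes
# unless a programmer edits the rules.

def recommend_outerwear(temperature_c, is_raining):
    """Return clothing advice derived from fixed, human-authored rules."""
    if is_raining:
        return "raincoat"
    if temperature_c < 5:
        return "heavy coat"
    if temperature_c < 15:
        return "light jacket"
    return "t-shirt"

print(recommend_outerwear(20, False))  # → t-shirt
```

However many times it runs, the output for a given input is always the same; there is no training step and no data beyond the inputs.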
For other problems, however, you may want a system that can generate predictions or quickly adapt its behavior based on shifting or poorly organized data flows. A subset of AI known as machine learning uses mathematical models, probabilities and statistics to infer outcomes for this class of problem. Machine learning is used in autonomous vehicles, computer vision, fastest-route mapping, ridesharing apps, prevention of banking fraud and email spam filters.
Once you know your problem and have a goal in mind, then it’s time to choose the proper approach. It’s all about the algorithm.
What Is an Algorithm, Anyway?
The word algorithm gets thrown around a lot when it comes to AI. An algorithm is a step-by-step procedure that a computer follows to perform a calculation or solve a problem. But in the AI world, algorithm is often shorthand for “brains of the AI.”
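A classic illustration of an algorithm in this basic sense is Euclid's method for finding the greatest common divisor of two numbers: a short, fixed sequence of steps that a computer repeats until it reaches an answer.

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
# until the remainder is zero; the surviving value is the greatest
# common divisor.

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```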
Although most of today’s algorithms focus on machine learning, GOFAI approaches from “classical” symbolic AI are still in wide use. These algorithms are based on logic and rules. If you’re familiar with basic computer programming principles, many of these approaches are variations on classic if-then relationships and decision-making. Examples include:
- Heuristics: Narrowing down possible solutions by eliminating incorrect options
- Planning: Deriving a sequence of actions for achieving a goal
- Knowledge representation: Characterizing information about the world in a form that a computer system can use, e.g., analogies and hierarchies
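The heuristic strategy of eliminating incorrect options can be shown with a toy puzzle. The candidate range and the constraints below are invented purely for illustration; the point is that each rule prunes the search space rather than examining every possibility in depth.

```python
# A toy illustration of heuristic elimination: candidate solutions
# are pruned by hand-written constraints until few remain.

candidates = range(1, 101)  # hypothetical puzzle: the answer is in 1..100

constraints = [
    lambda n: n % 2 == 0,  # the answer must be even
    lambda n: n % 7 == 0,  # ...and divisible by 7
    lambda n: n > 50,      # ...and greater than 50
]

solutions = [n for n in candidates if all(c(n) for c in constraints)]
print(solutions)  # → [56, 70, 84, 98]
```

Each constraint discards a swath of wrong answers, so the program narrows 100 candidates to 4 without any learning at all.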
These and other tried-and-true strategies are still useful in modern forms of AI. In fact, classical AI approaches may become increasingly relevant again as researchers seek to make newer, often opaque statistics-reliant AI systems more transparent. For example, GOFAI could help enable a driverless car to explain to passengers why it made a particular evasive maneuver.
Modern AI algorithms, by contrast, don’t rely on symbolic representations of the external world to perform their operations. Instead, these nonsymbolic approaches rely heavily on statistical analysis in an attempt to mimic the human brain’s ability to learn. They are widely used in machine learning and its subfield deep learning.
Machine learning focuses on teaching an algorithm to learn and improve over time by feeding it streams of labeled data, such as photos tagged to indicate the content they depict. Deep learning, a subset of machine learning, employs brain-like artificial neural network algorithms to tease out complex patterns from within large amounts of unstructured, unlabeled data. This process requires multiple passes at the data to find connections and meaning within it. The applications of deep learning include image and speech recognition, for example.
Breaking it down even further, nonsymbolic AI approaches are typically sorted into three broad categories: supervised learning, unsupervised learning and reinforcement learning.
Supervised learning involves feeding labeled inputs and outputs through an algorithm until it learns the relationship between label and data well enough to be able to recognize the same relationship again in a new, unknown dataset. The goal: to predict the unknown as accurately as possible.
Say you want to train your supervised learning algorithm to recognize cats. If you supply the algorithm with a sufficient number of labeled images—typically in the millions—it will eventually learn the difference between photos that are labeled as having cats and those that are labeled as not having cats. By being exposed to enough photos that contain cats, the algorithm eventually learns the characteristic pixel brightnesses, colors and distributions for photos that have cats, enabling it to analyze new, unlabeled images in the future to make a determination about whether they have cats.
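The cat-versus-not-cat idea can be sketched with a deliberately tiny supervised learner. The two-number "feature vectors" and labels below are made up stand-ins for real image features; the classifier is a simple nearest-centroid model, not the large neural networks used in practice.

```python
# A minimal supervised-learning sketch: learn one centroid per label
# from labeled examples, then assign a new point to the nearest centroid.

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, y in examples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(x, centroids[label]))
    return min(centroids, key=dist)

# Hypothetical labeled training data: [brightness, warmth] per image.
labeled = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
           ([0.1, 0.2], "not_cat"), ([0.2, 0.1], "not_cat")]

model = train(labeled)
print(predict(model, [0.85, 0.75]))  # → cat
```

The key supervised-learning ingredients are all present in miniature: labeled inputs, a training phase that summarizes the label-to-data relationship, and a prediction phase applied to unseen data.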
But if you throw a dog photo into the mix and ask the algorithm to find it, your AI might be in trouble. More advanced supervised learning systems can recognize the unknown as a new class of object, but unless the system is expressly given a label for the unknown object, it would not be able to deduce the object’s salient characteristics on its own.
Implementation of supervised learning runs into a number of deep problems, including long training times and human-induced bias in construction of datasets. Still, these algorithms are widely used in applications including chatbots, facial recognition and information retrieval.
This category of machine learning focuses on discovery and classification.
Unlike our cat example, where users teach the algorithm by exposing it to known information, unsupervised learning systems go in completely blind. By comparing values within its training dataset, the algorithm builds models to explain relationships among those data points. Unsupervised learning is useful for revealing relationships and structures within data that may not be immediately obvious, especially in huge, disorganized datasets.
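One common unsupervised technique is clustering: the algorithm receives no labels at all, only raw values, and discovers groupings on its own. The sketch below runs a bare-bones k-means on invented one-dimensional data; real systems cluster high-dimensional data with far more care taken over initialization.

```python
# A minimal unsupervised-learning sketch: k-means clustering finds
# k group centers in unlabeled 1-D data with no human-provided labels.

def kmeans_1d(points, k, iters=10):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]  # illustrative unlabeled values
print(kmeans_1d(data, 2))  # two centers emerge, near 1.0 and 10.0
```

Nobody told the algorithm there were "small" and "large" values; the structure emerged from comparing the data points to one another.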
For example, an algorithm might be shown many sets of images of people in the moments before they committed a crime. By identifying and sorting subtle patterns it finds within those images, such an AI could predict the probability of a similar crime occurring in similar circumstances.
Just as a lab rat may learn to prefer one lever because it delivers a snack and to avoid another because it delivers a shock, reinforcement learning for artificially intelligent agents relies on rewards.
Unlike supervised and unsupervised learning, reinforcement learning algorithms receive no guidance for what the rules of a system are or what data it might contain. A reinforcement learning algorithm continually tests and evaluates the results of its actions to learn what constitutes success and failure; then it works to maximize successful pathways.
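The lab-rat scenario maps neatly onto a simple reinforcement-learning sketch: an agent faced with two levers, one rewarding and one punishing, learns by trial and error which to prefer. The reward values and the epsilon-greedy strategy below are illustrative assumptions, not a description of any particular production system.

```python
import random

# A minimal reinforcement-learning sketch: an epsilon-greedy agent
# estimates the value of two "levers" purely from the rewards it
# receives, with no prior knowledge of the rules.

random.seed(0)

def pull(lever):
    """Hypothetical environment: lever 0 shocks (-1), lever 1 snacks (+1)."""
    return -1.0 if lever == 0 else 1.0

values = [0.0, 0.0]  # the agent's estimated value of each lever
counts = [0, 0]
epsilon = 0.1        # fraction of the time spent exploring at random

for step in range(200):
    if random.random() < epsilon:
        lever = random.randrange(2)                 # explore
    else:
        lever = 0 if values[0] >= values[1] else 1  # exploit best estimate
    reward = pull(lever)
    counts[lever] += 1
    values[lever] += (reward - values[lever]) / counts[lever]  # running mean

print(values)  # the estimate for lever 1 ends up far above lever 0's
```

After a handful of pulls the agent has discovered, from rewards alone, that lever 1 is the successful pathway and concentrates its actions there.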
AIs that have learned to play a variety of video games are some of the most dazzling demonstrations of this method, such as a “genetic” algorithm that evolved a strategy to successfully play Super Mario World.
AlphaGo’s win over South Korean Go champion Lee Sedol in 2016 resulted from training through reinforcement learning, which it accomplished by playing itself hundreds of thousands of times. The technique also shows great promise in helping self-driving vehicles become even better at navigating and avoiding collisions and other errors.