The History of Artificial Intelligence
Explore a timeline of the history of artificial intelligence (AI). Learn about the significant milestones of AI development, from cracking the Enigma code in World War II to fully autonomous vehicles driving the streets of major cities.
- Machine learning
- Natural language processing
- Artificial neural network
- Turing Test
Depending on how you define them, computers have been around since at least 1946.
That year, ENIAC was launched at the University of Pennsylvania as the first fully digital, electronic, general-purpose computer. [1]
AI is a more recent outgrowth of the information technology revolution that has transformed society. Dive into this timeline to learn more about how AI made the leap from exciting new concept to omnipresent current reality.
AI Through the Years
British mathematician and cryptanalyst Alan Turing leads the team that cracks Germany’s Enigma machine encryption during World War II. In 1950, Turing proposes that computer programs could be taught to think like humans, and he devises a hypothetical test to determine whether a machine can imitate a human well enough to fool another human. [2]
John McCarthy, an American computer and cognitive scientist, coins the term “artificial intelligence” during a conference at Dartmouth College. [3]
Frank Rosenblatt designs the perceptron, the first neural network for computers, which loosely models the way neurons in the brain process information. [4]
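Rosenblatt’s perceptron learns a linear decision rule by nudging its weights whenever it misclassifies an example. A minimal sketch of that idea in modern Python (the data, learning rate and epoch count here are illustrative, not taken from Rosenblatt’s work):

```python
# Minimal perceptron: learn to separate two classes with a linear rule.
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    # samples: list of feature tuples; labels: +1 or -1
    n = len(samples[0])
    w = [0.0] * n          # weights, one per input feature
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # misclassified: nudge weights toward the example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn logical OR, a linearly separable toy problem
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, 1]
w, b = train_perceptron(X, y)
```

For linearly separable data like this, the update rule is guaranteed to converge; it was the perceptron’s inability to handle non-separable problems that later helped motivate multi-layer networks.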
Arthur Samuel, an IBM computer scientist and a pioneer in computer gaming and AI, coins the term “machine learning.” He also creates the first self-teaching program, which becomes so good at playing the game of checkers that it defeats the fourth-ranked checkers player in the United States. [5]
MIT’s Joseph Weizenbaum creates ELIZA, an early natural language processing system. To demonstrate the superficiality of communication between humans and machines, Weizenbaum programs ELIZA to chat with partners in a realistic manner through pattern matching and substitution. Considered one of the first chatbots, the program has no ability to contextualize events. [6]
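ELIZA’s illusion of conversation rests on exactly the pattern matching and substitution described above: match a keyword template in the user’s input, then echo the captured fragment back inside a canned reply. A toy sketch of the mechanism (these rules are illustrative, not Weizenbaum’s original DOCTOR script):

```python
import re

# Toy ELIZA-style rules: regex pattern -> response template.
# \1 in the template substitutes the captured fragment back into the reply.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), r"Why do you say you are \1?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), r"Do you often feel \1?"),
    (re.compile(r"my (.*)", re.IGNORECASE), r"Tell me more about your \1."),
]
FALLBACK = "Please go on."  # no rule matched: deflect, ELIZA-style

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return match.expand(template)
    return FALLBACK
```

Here `respond("I am sad")` yields “Why do you say you are sad?” The program understands nothing; it only reflects the input back, which is precisely the superficiality Weizenbaum set out to demonstrate.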
Stanford Research Institute develops Shakey, one of the earliest AI robots. Connected to a mainframe computer via a tether cord, Shakey could perceive its surroundings, navigate, plan a route, adjust for errors, improve its planning abilities through learning and communicate in simple English. [7]
Waseda University in Japan develops WABOT-1, the first “android”—a computer-controlled humanoid robot that could walk, communicate in Japanese and grip objects with its hands. [8]
AI research languishes as processing power proves unable to keep up with the promising theoretical groundwork being laid by computer scientists. Roboticist Hans Moravec says computers are “still millions of times too weak to exhibit intelligence.” [9]
Computer scientist Edward Feigenbaum helps reignite AI research by leading the development of “expert systems”—programs that learn by asking experts in a given field how to respond in certain situations. [10] Once the system compiles expert responses for all situations likely to occur in that field, it can provide field-specific expert guidance to nonexperts.
The Japanese government spends $400 million to support AI research as part of its Fifth Generation Computer Systems project, which aims to create a more sophisticated computer system that would surpass existing microprocessor-based systems. [11]
Work on machine learning shifts from knowledge-driven approaches to data-driven approaches. Scientists begin creating computer programs to analyze vast amounts of data and draw conclusions, or “learn,” from the results. [12]
IBM’s Deep Blue supercomputer defeats world chess champion Garry Kasparov in a six-game match, ushering in a new era of AI. Kasparov had won the pair’s first match just a year earlier. [13]
Machine learning research that began in the 1980s achieves widespread practical use in major software services and mobile devices. One example: Intuitive Surgical’s da Vinci robotic-assisted surgical system becomes the first such device to gain U.S. Food and Drug Administration approval for general laparoscopic surgery. [14] Since then, da Vinci has been used for more than 5 million minimally invasive procedures in specialties including urology, gynecology, thoracic surgery and cardiac surgery. [15]
Computer scientist Sebastian Thrun and a team from the Stanford Artificial Intelligence Laboratory build Stanley, the first autonomous vehicle to win the Defense Advanced Research Projects Agency Grand Challenge by successfully navigating a 132-mile course in the Mojave Desert. [16]
Geoffrey Hinton coins the term “deep learning” to explain new algorithms that can be trained to recognize objects and text in images and videos. [17]
Google builds a self-driving car that drives the streets of Mountain View, California. The car uses sensors, GPS, cameras, radar and lasers, detecting objects as far as two football fields away. [18]
Apple introduces its virtual assistant, Siri, for iPhones. Originally developed at SRI International’s Artificial Intelligence Center, [19] Siri uses voice-activated queries and a natural-language interface to answer questions, make recommendations and perform administrative tasks using the phone’s on-board apps and access to the internet.
Microsoft demonstrates its Kinect system, able to track 20 human features at a rate of 30 times per second. [20] The development enables people to interact with a computer via movements and gestures.
IBM’s Watson supercomputing system beats the two best human players of the TV game show Jeopardy!, demonstrating an ability to understand and answer nuanced questions that stumped earlier computer programs. [21]
Boston Dynamics unveils Atlas, an advanced humanoid robot designed for various search-and-rescue tasks.
Physicist Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak join 3,000 researchers in AI and robotics to write an open letter calling for a ban on the development of autonomous weapons. [22]
Google’s AlphaGo program beats world champion Lee Sedol at the ancient Chinese game Go. [25]
Major advancements in AI have huge implications for health care; some systems prove more effective than human doctors at detecting and diagnosing certain cancers.
Swarm AI, a real-time online tool, predicts the winning horse of the Kentucky Derby. [26]
Physicists use AI to search data for evidence of previously undetected particles and other phenomena.
Google’s DeepMind AI teaches itself to walk. [27]
Machine-learning applications begin to replace text-based passwords. Biometric protections, such as using your fingerprint or face to unlock your smartphone, become more common. Behavior-based security monitors how and where a consumer uses a device.
Astronomers use AI to spot 6,000 new craters on the moon’s surface. [28]
Paul Rad, assistant director of the University of Texas at San Antonio Open Cloud Institute, and Nicole Beebe, director of the university’s Cyber Center for Security and Analytics, describe a new cloud-based learning platform for AI that teaches machines to learn like humans. [29]
Google demonstrates its Duplex AI, a digital assistant that can make appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning and text-to-speech capabilities to understand conversational context and nuance in ways no other digital assistant has yet matched.
Sources
1. “The Brief History of the ENIAC Computer.” Smithsonian Magazine. November 2013.
2. “Computing Machinery and Intelligence.” A. M. Turing in Mind. 1950.
3. “John McCarthy: Computer scientist known as the father of AI.” Independent. November 2011.
4. “Rosenblatt’s perceptron, the first modern neural network.” Towards Data Science. March 2019.
5. “Some studies in machine learning using the game of checkers.” A. L. Samuel in IBM Journal of Research and Development. 1959.
6. “ELIZA—a computer program for the study of natural language communication between man and machine.” Joseph Weizenbaum in Communications of the ACM. 1966.
7. “Shakey the Robot.” SRI International.
8. “The evolution of humanoid robots.” CNET. July 2013.
9. “The History of Artificial Intelligence.” Harvard University. August 2017.
10. “Expert system.” Britannica.
11. “Fifth generation computer.” Wikipedia.
12. “The history of machine learning.” BBC Academy.
13. “Deep Blue beat G. Kasparov in 1997.” Eustake via YouTube.
14. “Da Vinci Intuitive Surgical HD.” H+ Magazine via YouTube.
15. “Intuitive for Patients.” Intuitive.
16. “DARPA Grand Challenge - Stanley Wins.” Rod Fritz via YouTube.
17. “The history of machine learning.” BBC Academy.
18. “How Google's self-driving car project rose from a crazy idea to a top contender in the race toward a driverless future.” Business Insider. October 2016.
19. “Meet the Engineers Who Created and Sold Siri to Steve Jobs.” BigSpeak. May 2017.
20. “Microsoft E3 2010 Kinect Sports Demo.” GameSpot via YouTube.
21. “Watson and the Jeopardy! Challenge.” IBM Research via YouTube.
22. “Elon Musk leads 116 experts calling for outright ban of killer robots.” The Guardian. August 2017.
23. “All Tesla Cars Being Produced Now Have Full Self-Driving Hardware.” Tesla. October 2016.
24. “Ford Targets Fully Autonomous Vehicle For Ride Sharing In 2021.” Ford. August 2016.
25. “Google AI defeats human Go champion.” BBC. May 2017.
26. “A 'human swarm' predicted the winners of the Kentucky Derby at 540 to 1 odds.” Business Insider. May 2016.
27. “Google's DeepMind AI Just Taught Itself To Walk.” Tech Insider via YouTube.
28. “New technique uses AI to locate and count craters on the moon.” Phys.org. March 2018.
29. “UTSA researchers want to teach computers to learn like humans.” UTSA. March 2018.