Unit 5 | Artificial Intelligence

The History of Artificial Intelligence

Chapter 05 of 07


  1. AI Through the Years

Snapshot

Explore a timeline of the history of artificial intelligence (AI). Learn about the significant milestones of AI development, from cracking the Enigma code in World War II to fully autonomous vehicles driving the streets of major cities.

Key Terms:

  • Iteration
  • Machine learning
  • Natural language processing
  • Artificial neural network
  • Robotics
  • Supercomputer
  • Turing Test

Depending on how you define them, computers have been around since at least 1946.

That year, ENIAC was launched at the University of Pennsylvania as the first fully digital, electronic, general-purpose computer. [1]

AI is a more recent outgrowth of the information technology revolution that has transformed society. Dive into this timeline to learn more about how AI made the leap from exciting new concept to omnipresent current reality.

AI Through the Years

1940s-1950

British mathematician and cryptanalyst Alan Turing leads the team that cracks Germany’s Enigma machine encryption during World War II. In 1950, Turing proposes that computer programs could be taught to think like humans, and he develops a hypothetical test to determine whether a machine could imitate a human well enough to fool another human. [2]

Archival image of Alan Turing at age 16.

Via Wikimedia Commons

1956

John McCarthy, an American computer and cognitive scientist, coins the term “artificial intelligence” during a conference at Dartmouth College. [3]

Archival image of John McCarthy.

Via Stanford University

1957

Frank Rosenblatt designs the first neural network for computers, which simulates the thought processes of the brain. [4]

Archival image of Frank Rosenblatt.

Via Pace University
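Rosenblatt’s design computed a weighted sum of its inputs, “fired” when that sum cleared a threshold, and nudged its weights whenever it guessed wrong. The following is a minimal modern sketch of that idea in Python, using the logical AND function as illustrative training data; the names and data are ours, not Rosenblatt’s original formulation:

```python
# A minimal perceptron in the spirit of Rosenblatt's 1957 design:
# a weighted sum of inputs passed through a step function, with
# weights adjusted toward correct answers. Data and names here are
# illustrative reconstructions.

def predict(weights, bias, x):
    """Fire (1) if the weighted sum clears the threshold, else 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: move weights by the prediction error."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function, a classic linearly separable task.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions, a limitation that helped stall neural-network research until the multi-layer approaches of later decades.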

1959-1962

Arthur Samuel, an IBM computer scientist and a pioneer in computer gaming and AI, coins the term “machine learning.” He also creates the first self-teaching program, which becomes so good at playing the game of checkers that it defeats the fourth-ranked checkers player in the United States. [5]

Archival image of Arthur Samuel.

Via History-Computer.com

1964-1966

MIT’s Joseph Weizenbaum creates ELIZA, an early natural language processing system. To demonstrate the superficiality of communication between humans and machines, Weizenbaum programs ELIZA to chat with partners in a realistic manner through pattern matching and substitution. Considered one of the first chatbots, the program has no ability to contextualize events. [6]

Archival image of an ELIZA screen.

Via Steemit.com
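ELIZA’s pattern-matching-and-substitution trick can be sketched in a few lines of Python. The rules and reflections below are hypothetical stand-ins for illustration, not Weizenbaum’s actual script:

```python
import re

# A toy ELIZA-style responder: match a pattern in the user's input and
# substitute the captured text into a canned reply, as Weizenbaum's
# program did. These rules are illustrative, not from the original.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

# Simple first-person/second-person swaps applied to the captured text.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when nothing matches

print(respond("I need a vacation"))  # Why do you need a vacation?
print(respond("I am worried about my job"))
```

The illusion of understanding comes entirely from echoing the user’s own words back; the program has no model of what any of them mean.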

1966-1972

Stanford Research Institute develops Shakey, one of the earliest AI robots. Connected to a mainframe computer via a tether cord, Shakey can perceive its surroundings, navigate, plan a route, adjust for errors, improve its planning abilities through learning and communicate in simple English. [7]

Archival image of Shakey.

Via SRI.com

1972

Waseda University in Japan develops WABOT-1, the first “android”—a computer-controlled humanoid robot that could walk, communicate in Japanese and grip objects with its hands. [8]

1976

AI research languishes as processing power proves unable to keep up with the promising theoretical groundwork being laid by computer scientists. Roboticist Hans Moravec says computers are “still millions of times too weak to exhibit intelligence.” [9]

1980

Computer scientist Edward Feigenbaum helps reignite AI research by leading the charge to develop “expert systems”—programs that learn by asking experts in a given field how to respond in certain situations. [10] Once the system compiles expert responses for all known situations likely to occur in that field, it can provide field-specific expert guidance to nonexperts.

1982-1992

The Japanese government spends $400 million to support AI research as part of its Fifth-Generation Computer Systems project, which aims to create a more sophisticated computer system that would surpass the extant microprocessor-based systems. [11]

1990s

Work on machine learning shifts from knowledge-driven approaches to data-driven approaches. Scientists begin creating computer programs to analyze vast amounts of data and draw conclusions, or “learn,” from the results. [12]
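The data-driven shift can be illustrated with one of the simplest such methods, a nearest-neighbor classifier: instead of hand-coded rules, the program stores labeled examples and gives each new input the label of the closest known example. The measurements and labels below are invented purely for illustration:

```python
import math

# A 1-nearest-neighbor classifier: no hand-written rules, just labeled
# examples. New points take the label of the closest known point.
# The data below is made up for illustration.
training = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((4.0, 4.2), "large"),
    ((4.5, 3.8), "large"),
]

def classify(point):
    """Return the label of the nearest training example."""
    nearest = min(training, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((1.1, 0.9)))  # small
print(classify((4.2, 4.0)))  # large
```

Everything the program “knows” lives in the data; adding more labeled examples improves it without changing a line of code, which is the essence of the data-driven approach.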

1997

IBM’s Deep Blue supercomputer defeats world chess champion Garry Kasparov in a six-game match, ushering in a new era of AI. Kasparov had won their first match just a year earlier. [13]

2000

Machine learning research that began in the 1980s achieves widespread practical use in mainstream software services and mobile devices. One example: Intuitive Surgical’s da Vinci robotics-assisted surgical system becomes the first such device to gain U.S. Food and Drug Administration approval for general laparoscopic surgery. [14] Since then, da Vinci has been used for more than 5 million minimally invasive procedures in multiple specialties, including urology, gynecology, thoracic surgery and cardiac surgery. [15]

The da Vinci robotics-assisted surgical system.

By Cmglee, via Wikimedia Commons

2005

Computer scientist Sebastian Thrun and a team from the Stanford Artificial Intelligence Laboratory build Stanley, the first autonomous vehicle to win the Defense Advanced Research Projects Agency Grand Challenge by successfully navigating a 132-mile course in the Mojave Desert. [16]

2006

Geoffrey Hinton coins the term “deep learning” to explain new algorithms that can be trained to recognize objects and text in images and videos. [17]

2009

Google builds a self-driving car that drives the streets of Mountain View, California. The car uses sensors, GPS, cameras, radar and lasers, detecting objects as far as two football fields away. [18]

2010

Apple acquires Siri, a virtual assistant for iPhones. Originally developed at SRI International’s Artificial Intelligence Center, [19] Siri uses voice-activated queries and a natural-language interface to answer questions, make recommendations and perform administrative tasks using the phone’s onboard apps and access to the internet.

Microsoft demonstrates its Kinect system, able to track 20 human features at a rate of 30 times per second. [20] The development enables people to interact with a computer via movements and gestures.

An iPhone on a desk.

2011

IBM’s Watson supercomputing system beats the two best human players of the TV game show Jeopardy!, demonstrating an ability to understand and answer nuanced questions that stumped earlier computer programs. [21]

2013

Boston Dynamics unveils Atlas, an advanced humanoid robot designed for various search-and-rescue tasks.

2015

Physicist Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak join 3,000 researchers in AI and robotics to write an open letter calling for a ban on the development of autonomous weapons. [22]

2016

Tesla [23] and Ford [24] announce timelines for the development of fully autonomous vehicles.

Google’s AlphaGo program beats world champion Lee Sedol at the ancient Chinese game of Go. [25]

Major advancements in AI have huge implications for health care; some systems prove more effective than human doctors at detecting and diagnosing certain cancers.

Swarm AI, a real-time online prediction tool, predicts the winning horse of the Kentucky Derby. [26]

2017

Physicists use AI to search data for evidence of previously undetected particles and other phenomena.

Google’s DeepMind AI teaches itself to walk. [27]

Machine-learning applications begin to replace text-based passwords. Biometric protections, such as using your fingerprint or face to unlock your smartphone, become more common. Behavior-based security monitors how and where a consumer uses a device.

2018

Astronomers use AI to spot 6,000 new craters on the moon’s surface. [28]

Paul Rad, assistant director of the University of Texas at San Antonio Open Cloud Institute, and Nicole Beebe, director of the university’s Cyber Center for Security and Analytics, describe a new cloud-based learning platform for AI that teaches machines to learn like humans. [29]

Google demonstrates its Duplex AI, a digital assistant that can make appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning and text-to-speech capabilities to understand conversational context and nuance in ways no other digital assistant has yet matched.

References

  1. “The Brief History of the ENIAC Computer.” Smithsonian Magazine. November 2013. View Source

  2. “Computing Machinery and Intelligence.” A. M. Turing in Mind. 1950. View Source

  3. “John McCarthy: Computer scientist known as the father of AI.” Independent. November 2011. View Source

  4. “Rosenblatt’s perceptron, the first modern neural network.” Towards Data Science. March 2019. View Source

  5. “Some studies in machine learning using the game of checkers.” A. L. Samuel in IBM Journal of Research and Development. 1959. View Source

  6. “ELIZA—a computer program for the study of natural language communication between man and machine.” Joseph Weizenbaum in Communications of the ACM. 1966. View Source

  7. “Shakey the Robot.” SRI International. View Source

  8. “The evolution of humanoid robots.” CNET. July 2013. View Source

  9. “The History of Artificial Intelligence.” Harvard University. August 2017. View Source

  10. “Expert system.” Britannica. View Source

  11. “Fifth generation computer.” Wikipedia. View Source

  12. “The history of machine learning.” BBC Academy. View Source

  13. “Deep Blue beat G. Kasparov in 1997.” Eustake via YouTube. View Source

  14. “Da Vinci Intuitive Surgical HD.” H+ Magazine via YouTube. View Source

  15. “Intuitive for Patients.” Intuitive. View Source

  16. “DARPA Grand Challenge - Stanley Wins.” Rod Fritz via YouTube. View Source

  17. “The history of machine learning.” BBC Academy. View Source

  18. “How Google's self-driving car project rose from a crazy idea to a top contender in the race toward a driverless future.” Business Insider. October 2016. View Source

  19. “Meet the Engineers Who Created and Sold Siri to Steve Jobs.” BigSpeak. May 2017. View Source

  20. “Microsoft E3 2010 Kinect Sports Demo.” GameSpot via YouTube. View Source

  21. “Watson and the Jeopardy! Challenge.” IBM Research via YouTube. View Source

  22. “Elon Musk leads 116 experts calling for outright ban of killer robots.” The Guardian. August 2017. View Source

  23. “All Tesla Cars Being Produced Now Have Full Self-Driving Hardware.” Tesla. October 2016. View Source

  24. “Ford Targets Fully Autonomous Vehicle For Ride Sharing In 2021.” Ford. August 2016. View Source

  25. “Google AI defeats human Go champion.” BBC. May 2017. View Source

  26. “A 'human swarm' predicted the winners of the Kentucky Derby at 540 to 1 odds.” Business Insider. May 2016. View Source

  27. “Google's DeepMind AI Just Taught Itself To Walk.” Tech Insider via YouTube. View Source

  28. “New technique uses AI to locate and count craters on the moon.” Phys.org. March 2018. View Source

  29. “UTSA researchers want to teach computers to learn like humans.” UTSA. March 2018. View Source

Next Section

Societal and Ethical Concerns of AI

Chapter 06 of 07

Learn how the capabilities of artificial intelligence (AI) are raising troubling concerns about its unintended consequences.