Topic 5 | Artificial Intelligence

Societal and Ethical Concerns of AI

Chapter 06 of 07

Snapshot

Learn how the near-miraculous capabilities of artificial intelligence (AI) raise troubling concerns about its unintended consequences. Discover efforts to build a set of ethical values into current AI systems.

Key Terms:

  • Speech recognition
  • Bias
  • Defense Advanced Research Projects Agency (DARPA)
  • Technological singularity
  • Friendly AI

As artificial intelligence (AI) becomes increasingly prevalent in everyday life, our collective need to understand it grows more urgent.

We wonder how AIs make their decisions and whether their programming contains biases. What might happen if AI gets powerful enough to cause harm?

As with every other kind of technology, AI can deliver great benefits; we just have to make sure we’re keeping the potential negatives in check.

Transparency and Bias

Let’s start by considering how AIs arrive at their decisions. The average person doesn’t really understand that process—and that’s problematic. This opacity means that any problems hard-baked into an AI’s algorithms will be tough, if not impossible, to detect. view citation[1]

If we don’t know how an AI arrives at its decisions, we also don’t necessarily know whether it’s basing those decisions on biases, which can be built into the AI’s design, incorporated into its training or woven invisibly through the datasets on which AI algorithms rely.
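
To make the idea concrete, here is a minimal sketch of one way to probe an opaque system for bias from the outside: compare its decision rates across demographic groups. The data and group labels below are invented purely for illustration; real fairness audits use richer metrics such as false-positive-rate parity.

```python
# A minimal sketch of a fairness audit: given a model's decisions and the
# demographic group of each case, compare approval rates across groups.
# The data below is invented purely for illustration.
from collections import defaultdict

def approval_rate_by_group(decisions):
    """decisions: list of (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

rates = approval_rate_by_group(decisions)
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# A large gap between groups (here 0.75 vs. 0.25) is a red flag worth
# investigating, even when the model's internals are opaque.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")
```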

The concern is that the widespread use of AI will make bias—a prejudiced attitude toward one or more groups of people—more prevalent unless the AI field as a whole works to become more inclusive. One remedy would be for developers and users of the technology to include a broader, more diverse range of voices and viewpoints.

For example, in the early days of speech recognition, the tech didn’t work very well for women because the models had all been built around samples from white male speakers. view citation[2]
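
A simple first line of defense against this kind of failure is to audit a training corpus for demographic balance before building models on it. The sketch below assumes each sample record carries speaker metadata; the field names and records are hypothetical.

```python
# A minimal sketch of a training-set composition check, assuming each sample
# record carries speaker metadata (the field names here are hypothetical).
from collections import Counter

samples = [
    {"speaker_gender": "male", "clip": "s001.wav"},
    {"speaker_gender": "male", "clip": "s002.wav"},
    {"speaker_gender": "male", "clip": "s003.wav"},
    {"speaker_gender": "female", "clip": "s004.wav"},
]

counts = Counter(s["speaker_gender"] for s in samples)
total = sum(counts.values())
for gender, n in counts.items():
    print(f"{gender}: {n} samples ({n / total:.0%})")

# If one group dominates the corpus (here 75% male), the trained model will
# likely perform worse for everyone else; that is the speech-recognition
# failure mode described above.
```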

In addition to poorly designed algorithms and imbalanced datasets, bias can occur when systems only present users with options they already agree with (confirmation bias), as well as when systems are fed biased information. Remember Microsoft’s disastrous Tay chatbot? It took Twitter users less than a day to turn Tay from a sunbeam of positivity (“humans are super cool”) into a literal Nazi by bombarding it with hate-filled tweets. view citation[3]
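
The Tay episode illustrates a general failure mode: a system that learns from user input without any moderation step will absorb whatever it is fed. The toy bot below, written for illustration only, shows how quickly unfiltered learning skews a system's behavior.

```python
# A toy sketch of the Tay failure mode: a bot that "learns" by echoing back
# phrases from user messages, with no filtering of what it absorbs.
import random

class NaiveEchoBot:
    def __init__(self, seed_phrases):
        self.memory = list(seed_phrases)  # starts out harmless

    def learn(self, user_message):
        # No moderation step: hostile input goes straight into memory.
        self.memory.append(user_message)

    def reply(self):
        return random.choice(self.memory)

bot = NaiveEchoBot(["humans are super cool"])
for msg in ["hateful message 1", "hateful message 2", "hateful message 3"]:
    bot.learn(msg)

# After enough hostile input, most replies are drawn from hostile memory,
# the same dynamic that flipped Tay in under a day.
print(bot.reply())
```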

Intelligent Weapons

[Image: An army officer operating a robotic tank.]

Perhaps no area of AI applications spurs greater concern than the use of AI in weaponry. What happens when we take humans out of such a potentially deadly loop?

Although Google recently announced a company ban on the development of any AI that could be used in autonomous weapons or other applications inherently harmful to humans, view citation[4] other organizations are moving right ahead. Russian small-arms manufacturer Kalashnikov has said it is pursuing machine-learning applications for weapons, view citation[5] and Russia’s Ministry of Defense has been testing a front-lines robotic tank for several years. view citation[6]

The U.S. military currently uses fully autonomous vehicles for reconnaissance and sensing. view citation[7] Semiautonomous weapon systems—drones like the Predator and Global Hawk—still have a human behind the stick. view citation[8] Department of Defense policy bans fully autonomous weapon systems from directing lethal or kinetic force against humans, view citation[9] but that hasn’t stopped the U.S. Army from starting work on an Advanced Targeting and Lethality Automated System, which will use AI and machine learning to implement autonomous targeting in tanks and other ground-based combat vehicles. view citation[10]

Building Ethics Into AI

The potential problems arising from AI can sound scary, but there’s also reason to be excited about the future of this promising technology. AI is empowering humankind to be more efficient, safer and more knowledgeable than ever before.

Numerous nonprofits and research organizations are studying and discussing the risks and benefits of AI; they include the Future of Life Institute, OpenAI, the Machine Intelligence Research Institute, the Centre for the Study of Existential Risk and the Future of Humanity Institute.

These groups encourage the development of “friendly AI” systems that are designed from the ground up to have a positive impact on humans. Projects such as MIT’s Moral Machine—which features a website that solicits public input to teach AIs about ethics and morality—aim to help developers ensure that AI systems operate in accordance with our highest values.
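
As a rough sketch of the underlying idea, crowd-sourced ethical judgments can be aggregated into preferences a system could consult. The scenarios and votes below are invented, and a simple majority vote is a deliberate oversimplification of how projects like the Moral Machine analyze their survey data.

```python
# A minimal sketch of aggregating crowd-sourced ethical judgments.
# The scenario names and vote counts here are invented for illustration.
from collections import Counter

# Each vote: (scenario_id, chosen_option)
votes = [
    ("swerve_vs_stay", "swerve"), ("swerve_vs_stay", "swerve"),
    ("swerve_vs_stay", "stay"),
    ("one_vs_many", "protect_many"), ("one_vs_many", "protect_many"),
]

def majority_preferences(votes):
    """Tally votes per scenario and return the most popular option for each."""
    tallies = {}
    for scenario, option in votes:
        tallies.setdefault(scenario, Counter())[option] += 1
    return {s: c.most_common(1)[0][0] for s, c in tallies.items()}

print(majority_preferences(votes))
# {'swerve_vs_stay': 'swerve', 'one_vs_many': 'protect_many'}
# Real systems need far more nuance than a majority vote, but the principle
# of grounding machine behavior in broad human input is the same.
```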

AI: Reflecting Humanity

It’s important to remember that AI systems only reflect ourselves.

Whether in chatbots, weaponry or other forms we haven’t even imagined yet, it’s clear that AI will only become more prevalent in society as the algorithms grow ever more sophisticated. That’s why it’s important to remember that AI systems only reflect ourselves—our biases, ideals, aspirations and limitations get passed right along to them. It’s up to us to make sure that AI avoids the pitfalls and lives up to its promise to enrich our lives.

References

  1. “The Dark Secret at the Heart of AI.” MIT Technology Review. April 2017.

  2. “The Ethical Implications of AI.” ReWork. 2018.

  3. “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” The Verge. March 2016.

  4. “Google CEO bans autonomous weapons in new AI guidelines.” VentureBeat. June 2018.

  5. “Russia prepares for a future of making autonomous weapons.” C4ISRNET. June 2018.

  6. “Russia Says It Will Field a Robot Tank that Outperforms Humans.” Defense One. November 2017.

  7. “In Army of None, a field guide to the coming world of autonomous warfare.” TechCrunch. June 2018.

  8. “The future of war will be fought by machines, but will humans still be in charge?” The Verge. April 2018.

  9. “DoD Directive on Autonomy in Weapon Systems.” International Committee for Robot Arms Control. November 2012.

  10. “The military wants to build lethal tanks with AI.” ZDNet. February 2019.

Next Section

Working in an AI Future

Chapter 07 of 07

As the effect of AI on the workforce continues to grow, learn about which occupations are at risk and which will become more in demand.