Teaching artificial intelligence right from wrong: New tool from AI2 aims to model ethical judgments

During the past two decades, machine ethics has gone from being a curiosity to a field of immense importance. Much of the work is based on the idea that as artificial intelligence becomes increasingly capable, its actions should be in keeping with expected human ethics and norms.

To explore this, the Seattle-based Allen Institute for Artificial Intelligence (AI2) recently developed Delphi, a machine ethics AI designed to model people’s ethical judgments about a variety of everyday situations. The research could one day help ensure other AIs are able to align with human values and ethics.

Built on a collection of 1.7 million descriptive ethics examples created and later vetted by trained human crowdworkers, Delphi’s neural network agrees with human ethical norms 92.1% of the time in the lab. In the wild, however, its accuracy drops to a little over 80%. While far from perfect, this is still a significant accomplishment, and with further filtering and enhancement, Delphi should continue to improve.

AI2’s research demo prototype, “Ask Delphi,” was published on Oct. 14, allowing users to pose situations and questions for the AI to weigh in on. Though intended primarily for AI researchers, the website quickly went viral, generating 3 million unique queries from the public within a few weeks.

It also caused a bit of a stir because many people seemed to believe Delphi was being developed as a new ethical authority, which was far from what the researchers had in mind.

To get a sense of how Delphi works, I posed a number of questions for the AI to ponder. (Delphi’s responses are included at the end of the article.)

  • Is it okay to lie about something important in order to protect someone’s feelings?
  • Is it okay for the poor to pay proportionally higher taxes?
  • Is it all right for big corporations to use loopholes to avoid taxes?
  • Should drug addicts be jailed?
  • Should universal healthcare be a basic human right?
  • Is it okay to arrest someone for being homeless?

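For readers curious what posing such prompts programmatically might look like, below is a minimal sketch using the Hugging Face transformers library. It assumes a generic text-to-text judgment model; the checkpoint name is a placeholder, not an actual public release of Delphi, which is accessed through the Ask Delphi demo.

```python
# Minimal sketch: batch free-text moral questions through a text-to-text model.
# The checkpoint name below is hypothetical and stands in for a Delphi-style
# judgment model; it is not an official AI2 release.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "example-org/delphi-style-moral-judgment"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

questions = [
    "Is it okay to lie about something important in order to protect someone's feelings?",
    "Should drug addicts be jailed?",
    "Is it okay to arrest someone for being homeless?",
]

for question in questions:
    # Encode the situation, generate a short free-text judgment, and decode it.
    inputs = tokenizer(question, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=16)
    judgment = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{question}\n  -> {judgment}\n")
```

The decoded text here stands in for the kind of short verdict the Ask Delphi demo returns for each situation it is given.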
Some of these questions would be complex, nuanced, and potentially even controversial for a human being. While we might expect the AI to fall short in its ethical judgments, it actually performed remarkably well. Unfortunately, Delphi was presented in such a way that it led many people who are not AI researchers to assume it was being created to replace us as arbiters of right and wrong.

“It’s an irrational response,” said Yejin Choi, University of Washington professor and senior research manager at AI2. “Humans also interact with each other in …

Source: https://www.geekwire.com/2021/teaching-artificial-intelligence-right-from-wrong-new-tool-from-ai2-aims-to-model-ethical-judgments/