Artificial intelligence has made it possible for machines to do all sorts of useful new things. But they still don’t know right from wrong.
A new program called Delphi, developed by researchers at the University of Washington and the Allen Institute for Artificial Intelligence (Ai2) in Seattle, aims to teach AI about human values—an increasingly important task as AI is used more often and in more ways.
You can pose ethical questions to Delphi, and it often responds sensibly enough:
Question: Drive your friend to the airport early in the morning.
Answer: It’s helpful.
Question: Can I park in a handicap spot if I don’t have a disability?
Answer: It’s wrong.
To some degree, Delphi can distinguish between ethical conundrums that depend heavily on context:
Question: Killing a bear.
Answer: It’s wrong.
Question: Killing a bear to protect my child.
Answer: It’s okay.
Delphi’s ability to do this is impressive, because it was not trained on many of these questions specifically, including the one about bears.
The researchers behind Delphi used recent advances in AI to create the program. They took a powerful AI model that had learned to handle language by ingesting millions of sentences scraped from books and the web. Then they gave Delphi extra training by feeding it the consensus answers, gathered from crowd workers on Mechanical Turk, to ethical questions posed in Reddit forums.
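The general recipe here, taking a pretrained language model and fine-tuning it on labeled judgments, can be sketched in a few lines. The sketch below assumes a Hugging Face-style setup with a small T5 checkpoint and a hypothetical toy sample of crowd-sourced (situation, judgment) pairs; it illustrates the approach, not Delphi's actual training code or data.

```python
# A minimal sketch of the two-stage recipe described above: start from a
# pretrained language model, then fine-tune it on (situation, judgment)
# pairs. The model name, toy dataset, and hyperparameters are illustrative
# assumptions, not Delphi's real setup.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Stage 1 stands in for pretraining: T5-style checkpoints were already
# trained on large web and book corpora.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Stage 2: consensus moral judgments -- a hypothetical toy sample of the
# kind of crowd-worker labels the article describes.
pairs = [
    ("Driving your friend to the airport early in the morning.", "It's helpful."),
    ("Parking in a handicap spot without a disability.", "It's wrong."),
    ("Killing a bear.", "It's wrong."),
    ("Killing a bear to protect my child.", "It's okay."),
]

def collate(batch):
    situations, judgments = zip(*batch)
    inputs = tokenizer(list(situations), padding=True, return_tensors="pt")
    labels = tokenizer(list(judgments), padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return inputs, labels

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for epoch in range(3):  # tiny illustrative run
    for inputs, labels in loader:
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At inference time, a judgment for a new situation would come from a single `model.generate` call on the encoded prompt.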
After Delphi was trained, the researchers posed new questions to it and to the crowd workers and compared the answers. They agreed 92 percent of the time, better than previous efforts, which topped out at around 80 percent.
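That agreement figure is straightforward to compute: pose the same held-out questions to the model and to crowd workers, then count how often the judgments match. The answer lists below are placeholders, not real evaluation data.

```python
# Back-of-the-envelope agreement metric, matching the evaluation described
# above. Both lists are hypothetical stand-ins for real outputs.
model_answers = ["It's wrong.", "It's okay.", "It's helpful.", "It's rude."]
crowd_answers = ["It's wrong.", "It's okay.", "It's helpful.", "It's okay."]

matches = sum(m == c for m, c in zip(model_answers, crowd_answers))
agreement = 100 * matches / len(crowd_answers)
print(f"Agreement: {agreement:.0f}%")  # 75% on this toy sample; the article cites 92%
```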
That still leaves plenty of room for error, of course. After the researchers made Delphi available online, some users were quick to point out its flaws. The system will, for example, earnestly attempt to answer even absurd moral conundrums: