NEW HAVEN, Conn. -- While most scientists endeavor to create artificial intelligence (AI) systems that help people, researchers at Yale University here are making progress programming "bad AI" that causes trouble -- and through that trouble leads the humans interacting with it to better results.
The research is being carried out at the Yale Institute for Network Science in New Haven, Connecticut, by professor Nicholas A. Christakis, 56, who was named by U.S. magazine Time as one of the 100 most influential people in the world, and by doctoral candidate and researcher Hirokazu Shirado, 37.
The two had 4,000 people participate in an online puzzle game and examined their behavior. Twenty people take part in each game, each selecting a color for their node -- green, orange or purple -- and then trying to end up connected only to neighbors of different colors. Participants can change the color of their node to help the team reach the answer, but they can also end up connecting with a neighbor of the same color by mistake.
The researchers compared two cases: one with all 20 members being human, and one with some "bad AI" in the mix, which would intentionally change the color of its node to match that of an immediate neighbor one out of 10 times, aggravating the human players. They found that in games where the "bad AI" participated, the players were 20 percent more likely to reach the winning configuration before the time limit ran out. This was because the human players would frequently change the color of their nodes in response to the "naughty" AI, and this rapid turnover of colors actually led to solving the puzzle faster.
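The dynamic described above can be illustrated with a toy simulation. The sketch below is not the researchers' actual experiment: the network layout (a ring), the greedy update rule, and all function names are assumptions made for illustration. It only captures the core idea -- players repeatedly fixing color conflicts with neighbors, while a "bad AI" move occasionally copies a neighbor's color instead.

```python
import random

def make_ring(n):
    """Toy network: each node neighbors the two adjacent nodes.
    (An assumption; the actual study used other network layouts.)"""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def conflicts(colors, graph):
    """Count edges whose two endpoints share a color (0 = solved)."""
    return sum(colors[u] == colors[v]
               for u in graph for v in graph[u] if u < v)

def step(colors, graph, noise):
    """One player updates its node. With probability `noise` it copies a
    random neighbor's color (the 'bad AI' move); otherwise it greedily
    picks, among the three colors, one minimizing local conflicts."""
    node = random.choice(list(graph))
    if random.random() < noise:
        colors[node] = colors[random.choice(graph[node])]
    else:
        colors[node] = min(range(3), key=lambda c: sum(
            c == colors[v] for v in graph[node]))

def solve_time(noise, n=20, max_steps=5000, seed=0):
    """Number of update steps until no conflicts remain (capped)."""
    random.seed(seed)
    graph = make_ring(n)
    colors = [random.randrange(3) for _ in range(n)]
    for t in range(max_steps):
        if conflicts(colors, graph) == 0:
            return t
        step(colors, graph, noise)
    return max_steps
```

For example, `solve_time(0.0)` runs an all-human game and `solve_time(0.1)` adds the 10-percent noisy behavior; on this toy ring the greedy rule alone already converges, so the sketch demonstrates the mechanics rather than reproducing the study's 20 percent result.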
"The addition of the bad AI changes the way that the human players think, and acts as a stimulus for them to play more actively," Shirado explains.
There are still many who worry that the abilities of AI will grow beyond those of its human creators in the near future and that it will take over society. However, Christakis explains that their goal is not to create AI that will take the place of humans, but AI that complements them.
For example, if a car equipped with an AI system drives at a perfectly consistent speed, the human driver behind that vehicle risks paying less attention to the AI car ahead. But if the AI system taps the brakes from time to time for no apparent reason, it keeps the human driver alert, and this could lead to a reduction in the number of car accidents. Another example is the mediation of a marital dispute. Rather than an AI that only deals with the issue in earnest, one that occasionally inserts jokes and clears the air is probably more likely to get the couple back on track faster.
Just what sort of role should AI play in human society? Experts from prominent companies such as Facebook and Apple, seeking answers to this question, visit Yale University to consult with Christakis and other researchers.
Such "naughty" AIs can be useful precisely because of the complicated nature of society and the mysteriousness of human psychology. That is what serves as the drive for researchers.
(Japanese original by Kenji Shimizu, North America General Bureau)