Technology
Unethical uses of AI need scrutiny to maintain public support: pioneers
MONTREAL — Two artificial intelligence pioneers warn that unscrupulous or unethical uses of the technology risk undermining the public image of an area of research undergoing rapid change.
Canadian deep learning “godfather” Yoshua Bengio believes the sector and governments need to address concerns including the building of so-called killer robots and development of facial recognition that can be used by authoritarian regimes to repress their citizens.
Yann LeCun, Facebook’s director of AI research, adds that large companies involved in the research need to form a partnership to discuss issues such as the technology’s potential use to manipulate democracy, and to develop guidelines on the appropriate ways to construct, train, test and deploy discoveries.
“One danger is that the image of artificial intelligence in the public will degrade because of bad uses of AI,” LeCun told a ReWork Deep Learning Summit panel discussion in Montreal last week.
Bengio, the head of the Montreal Institute for Learning Algorithms, said his greatest concern is the misuse of lethal autonomous weapons.
He said governments around the world should sign treaties to ban arms that can kill without human intervention.
“I’m hoping that Canada will be among the countries that will push that forward,” Bengio said in an interview.
He said some countries have opposed such efforts and that many AI researchers refuse to work on military projects.
Bengio has previously said that the risk of job losses due to artificial intelligence is real, and that politicians should plan accordingly.
“I believe that governments should start thinking right now about how to adapt to this in the next decade, how to change our social safety net to deal with that,” he added.
Also of concern to some is the concentration of power and access to data by large companies.
LeCun warned that malice isn’t the only worry: Removing bias from data is also important.
Technology can harm people if, for example, an AI system used to determine who gets out on bail is trained on biased data and produces biased results.
Hundreds of AI experts will gather Nov. 2 and 3 in Montreal for a forum on the socially responsible development of artificial intelligence.
“The sense is that there are important social, political and ethical questions raised by recent developments in AI and so it’s putting together people to discuss these issues,” said Christine Tappolet, a University of Montreal philosophy professor and director of the Centre for Ethics Research.
“(AI) is going to change the game completely so what you can do now and what probably will be possible in two, five years is massive,” she said.
The two-day session plans to develop a Montreal declaration for the ethical development of artificial intelligence that focuses on values, principles and guidelines for promising fields of research.
Tappolet said the event is taking place in Montreal because the city has become one of the foremost places for AI research and has a long tradition of being interested in the social implications of technological developments.