
Canada News

Experts say Canada lacks laws to tackle life-and-death problems posed by AI



TORONTO — The role of artificial intelligence in Netflix’s movie suggestions and Alexa’s voice commands is commonly understood, but less known is the shadowy role AI now plays in law enforcement, immigration assessment, military programs and other areas.

Despite its status as a machine-learning innovation hub, Canada has yet to develop a regulatory regime to deal with issues of discrimination and accountability to which AI systems are prone, prompting calls for regulation — including from business leaders.

“We need the government, we need the regulation in Canada,” said Mahdi Amri, who heads AI services at Deloitte Canada.

The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored.

“Basically there’s this idea that the machines will make all the decisions and the humans will have nothing to say, and we’ll be ruled by some obscure black box somewhere,” Amri said.

Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals.

Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing — automated decision-making based on data that predicts where a crime will occur or who will commit it.

The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or whether someone should be designated a “risk,” according to a Citizen Lab study, which found the practice threatens to violate human rights law.

AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment.

“When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute.

She pointed to “horror cases,” including a predictive policing strategy in Chicago where the majority of people on a list of potential perpetrators were black men who had no arrests or shooting incidents to their name, “the same demographic that was targeted by over-policing and discriminatory police practices.”

Richardson says it’s time to move from lofty guidelines to legal reform. A recent AI Now Institute report states federal governments should “oversee, audit, and monitor” the use of AI in fields like criminal justice, health care and education, as “internal governance structures at most technology companies are failing to ensure accountability for AI systems.”

Oversight should be divided up among agencies or groups of experts instead of hoisting it all onto a single AI regulatory body, given the unique challenges and regulations specific to each industry, the report says.

In health care, AI is poised to upend the way doctors practice medicine as machine-learning systems can now analyze vast sets of anonymized patient data and images to identify health problems ranging from osteoporosis to lesions and signs of blindness.

Carolina Bessega, co-founder and chief scientific officer of Montreal-based Stradigi AI, says the regulatory void discourages businesses from using AI, holding back innovation and efficiency — particularly in hospitals and clinics, where the implications can be life or death.

“Right now it’s like a grey area, and everybody’s afraid of making the decision of, ‘Okay, let’s use artificial intelligence to improve diagnosis, or let’s use artificial intelligence to help recommend a treatment for a patient,’” Bessega said.

She is calling for “very strong” regulations around treatment and diagnosis and for a professional to bear responsibility for any final decisions, not a software program.

Critics say Canada lags behind the U.S. and the EU on exploring AI regulation. None has implemented a comprehensive legal framework, but Congress and the EU Commission have produced extensive reports on the issue.

“Critically, there is no legal framework in Canada to guide the use of these technologies or their intersection with foundational rights related to due process, administrative fairness, human rights, and justice system transparency,” states a March briefing by Citizen Lab, the Law Commission of Ontario and other bodies.

Divergent international standards, trade secrecy and algorithms’ constant “fluidity” pose obstacles to smooth regulation, says Miriam Buiten, junior professor of law and economics at the University of Mannheim.

Canada was among the first states to develop an official AI research plan, unveiling a $125-million strategy in 2017. But its focus was largely scientific and commercial.

In December, Prime Minister Trudeau and French President Emmanuel Macron announced a joint task force to guide AI policy development with an eye to human rights.

Minister of Innovation, Science and Economic Development Navdeep Bains told The Canadian Press in April a report was forthcoming “in the coming months.” Asked whether the government is open to legislation around AI transparency and accountability, he said: “I think we need to take a step back to determine what are the core guiding principles.

“We’ll be coming forward with those principles to establish our ability to move forward with regards to programming, with regards to legislative changes — and it’s not only going to be simply my department, it’s a whole government approach.”

The Treasury Board of Canada has already laid out a 119-word set of principles on responsible AI use that stress transparency and proper training. The Department of Innovation, Science and Economic Development highlighted the Personal Information Protection and Electronic Documents Act, privacy legislation that applies broadly to commercial activities and allows a privacy commissioner to probe complaints.

“While AI may present some novel elements, it and other disruptive technologies are subject to existing laws and regulations that cover competition, intellectual property, privacy and security,” a department spokesperson said in an email.

As of April 1, 2020, government departments seeking to deploy an automated decision system must first conduct an “algorithmic impact assessment” and post the results online.
