Experts say Canada lacks laws to tackle life-and-death problems posed by AI

TORONTO — The role of artificial intelligence in Netflix’s movie suggestions and Alexa’s voice commands is commonly understood, but less known is the shadowy role AI now plays in law enforcement, immigration assessment, military programs and other areas.

Despite its status as a machine-learning innovation hub, Canada has yet to develop a regulatory regime to deal with issues of discrimination and accountability to which AI systems are prone, prompting calls for regulation — including from business leaders.

“We need the government, we need the regulation in Canada,” said Mahdi Amri, who heads AI services at Deloitte Canada.

The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored.

“Basically there’s this idea that the machines will make all the decisions and the humans will have nothing to say, and we’ll be ruled by some obscure black box somewhere,” Amri said.

Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals.

Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing — automated decision-making based on data that predicts where a crime will occur or who will commit it.

The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or whether someone should be designated a "risk," according to a Citizen Lab study, which found the practice threatens to violate human rights law.

AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment.

“When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute.

She pointed to “horror cases,” including a predictive policing strategy in Chicago where the majority of people on a list of potential perpetrators were black men who had no arrests or shooting incidents to their name, “the same demographic that was targeted by over-policing and discriminatory police practices.”

Richardson says it’s time to move from lofty guidelines to legal reform. A recent AI Now Institute report states federal governments should “oversee, audit, and monitor” the use of AI in fields like criminal justice, health care and education, as “internal governance structures at most technology companies are failing to ensure accountability for AI systems.”

Oversight should be divided up among agencies or groups of experts instead of hoisting it all onto a single AI regulatory body, given the unique challenges and regulations specific to each industry, the report says.

In health care, AI is poised to upend the way doctors practice medicine as machine-learning systems can now analyze vast sets of anonymized patient data and images to identify health problems ranging from osteoporosis to lesions and signs of blindness.

Carolina Bessega, co-founder and chief scientific officer of Montreal-based Stradigi AI, says the regulatory void discourages businesses from using AI, holding back innovation and efficiency — particularly in hospitals and clinics, where the implications can be life or death.

“Right now it’s like a grey area, and everybody’s afraid making the decision of, ‘Okay, let’s use artificial intelligence to improve diagnosis, or let’s use artificial intelligence to help recommend a treatment for a patient,’” Bessega said.

She is calling for “very strong” regulations around treatment and diagnosis and for a professional to bear responsibility for any final decisions, not a software program.

Critics say Canada lags behind the U.S. and the EU on exploring AI regulation. None has implemented a comprehensive legal framework, but Congress and the EU Commission have produced extensive reports on the issue.

“Critically, there is no legal framework in Canada to guide the use of these technologies or their intersection with foundational rights related to due process, administrative fairness, human rights, and justice system transparency,” states a March briefing by Citizen Lab, the Law Commission of Ontario and other bodies.

Divergent international standards, trade secrecy and algorithms’ constant “fluidity” pose obstacles to smooth regulation, says Miriam Buiten, junior professor of law and economics at the University of Mannheim.

Canada was among the first states to develop an official AI research plan, unveiling a $125-million strategy in 2017. But its focus was largely scientific and commercial.

In December, Prime Minister Trudeau and French President Emmanuel Macron announced a joint task force to guide AI policy development with an eye to human rights.

Minister of Innovation, Science and Economic Development Navdeep Bains told The Canadian Press in April a report was forthcoming “in the coming months.” Asked whether the government is open to legislation around AI transparency and accountability, he said: “I think we need to take a step back to determine what are the core guiding principles.

“We’ll be coming forward with those principles to establish our ability to move forward with regards to programming, with regards to legislative changes — and it’s not only going to be simply my department, it’s a whole government approach.”

The Treasury Board of Canada has already laid out a 119-word set of principles on responsible AI use that stress transparency and proper training. The Department of Innovation, Science and Economic Development highlighted the Personal Information Protection and Electronic Documents Act, privacy legislation that applies broadly to commercial activities and allows a privacy commissioner to probe complaints.

“While AI may present some novel elements, it and other disruptive technologies are subject to existing laws and regulations that cover competition, intellectual property, privacy and security,” a department spokesperson said in an email.

As of April 1, 2020, government departments seeking to deploy an automated decision system must first conduct an “algorithmic impact assessment” and post the results online.
