Race-based COVID-19 data may be used to discriminate against racialized communities
For some, the current demands for race-based data reflect a desire to ensure the experiences of anti-Black discrimination in Canada during the pandemic are not denied or erased. But there are other more powerful forces clamouring for Canada’s race-based data, and the well-being of Black communities is not at the top of their minds.
Read more: Collecting race-based data during coronavirus pandemic may fuel dangerous prejudices
In April, the Ontario government posted the Digital Health Information Exchange Policy, which comes into effect on Oct. 1. The policy makes it easier for a person's data to move among companies, organizations and institutions, without their knowledge or consent.
Health data is a hot commodity. Global earnings related to health data management systems, also known as electronic health records and electronic medical records (EHRs/EMRs), are forecast to exceed US$36 billion by 2021.
In Canada, five companies dominate the EHR/EMR market, with Google about to join. As the world's largest data-mining company, Google has an unparalleled ability to gobble up the competition.
Personal health data is already being rapidly repurposed without consent, in ways not previously imagined. The potential for profit amplifies this trend.
Amazon’s deal-making during the pandemic now includes a new contract with Canada’s federal government for personal protective equipment (PPE). That puts Amazon right in the middle of our publicly funded universal health care logistics, with access to a robust cache of data.
The false promise of anonymity
Tech, privacy and health data experts warn that we must remain vigilant and cautious with tech companies. Investigative journalists have already uncovered secret deals between the government and data-driven tech corporations, unethical conduct and failed track records when it comes to health and data privacy.
Claims that our health data are protected due to de-identification or anonymization ring hollow. Data can be re-identified or de-anonymized by linking health-care data to other information. As privacy lawyer David Holtzman indicated, “the widespread availability of new tools and technologies makes the current de-identification standards meaningless.”
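To make the linking concrete, here is a minimal sketch in Python using the pandas library. The column names and records are made up for illustration: a "de-identified" health table is joined to a hypothetical public dataset (a voter list, a marketing database) on shared quasi-identifiers, and names re-attach to diagnoses.

```python
import pandas as pd

# Hypothetical de-identified health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "postal_prefix": ["M5V", "M5V", "K1A"],
    "birth_year":    [1958, 1984, 1972],
    "sex":           ["F", "M", "F"],
    "diagnosis":     ["diabetes", "asthma", "hypertension"],
})

# Hypothetical public dataset that still carries names.
public = pd.DataFrame({
    "name":          ["A. Smith", "B. Jones", "C. Lee"],
    "postal_prefix": ["M5V", "M5V", "K1A"],
    "birth_year":    [1958, 1984, 1972],
    "sex":           ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-identifies the "anonymous" records.
reidentified = health.merge(public, on=["postal_prefix", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

No special tools are needed; an ordinary database join is enough once a linkable dataset exists.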
So why are we being lulled into a false sense of security?
The European Union and the United Kingdom are protecting their citizens’ data, halting the predatory behaviour of tech companies within their jurisdictions. Canada is wide open, comparatively speaking, and as such, Canadian data has become an attractive target for companies seeking to profit from health data.
These companies use the data to build predictive algorithms for health systems planners. This is of particular concern because it has been repeatedly demonstrated that algorithms reinforce bias. Algorithms are increasingly dictating our choices, interests, insurance rates, access to loans, housing, job opportunities and more.
Data harms and benefits
Research by scholars like sociologist Ruha Benjamin and mathematician Cathy O'Neil reveals how data collection and discriminatory algorithms pose the greatest threat to minoritized people and democratic processes.
Benjamin’s scholarship reveals that Black communities are the primary targets and recipients of algorithmic racism. Without laws that protect data from data brokers, we have no way of knowing where or how our data is being used, and by whom.
Adding more race-based markers to small populations — like the Black population in Canada — increases the risk of re-identification by corporations, surveillance agencies and tech companies that hold massive global, military and security contracts.
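A rough sketch of why this happens, again with hypothetical data: each additional demographic marker splits a small population into smaller and smaller groups, until some records become unique and therefore easy to single out.

```python
import pandas as pd

# Toy "de-identified" records for a small community (illustrative values only).
df = pd.DataFrame({
    "postal_prefix": ["B2Z"] * 7,
    "age_band":      ["40-49", "40-49", "40-49", "40-49", "50-59", "50-59", "50-59"],
    "sex":           ["F", "F", "M", "M", "F", "F", "F"],
    "race":          ["Black", "White", "Black", "Black", "Black", "Black", "Black"],
})

def smallest_group(cols):
    """Size of the smallest group sharing the same values on `cols` (k-anonymity)."""
    return df.groupby(cols).size().min()

print(smallest_group(["postal_prefix", "age_band"]))                  # 3
print(smallest_group(["postal_prefix", "age_band", "sex"]))           # 2
print(smallest_group(["postal_prefix", "age_band", "sex", "race"]))   # 1: someone is unique
```

The smaller the population, the faster the smallest group shrinks to one, which is why detailed markers carry disproportionate re-identification risk for small communities.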
Impact in Ontario
If the Ontario government continues on the austerity path and delists additional health services, what are the implications — especially for marginalized populations — of adding detailed socio-demographic data to health records?
For example, how will data labelled as Black, poor, disabled or all three impact a person's insurance rates? Current legislation will not protect patients from this type of algorithmic discrimination. Only updated data laws can protect us from the perils of monetized data and the discriminatory algorithms it generates.
Right now, the data pouring in about how COVID-19 is affecting Black communities in the United States has done nothing to slow the rising death toll. Predictably, in the U.S., race-based data has already been used to undermine Black people, their health and dignity. And in Canada, it's more of the same: in Nova Scotia, two African Canadian communities were singled out by the province's chief medical officer of health. The political will to act and protect Black people in the U.S. and in Canada is still missing.
Protecting rights
At minimum, Canadians must demand new data laws, enforceable penalties and the resources to be proactive.
If the purpose of collecting race-based data is to address anti-Black racism, equity or accountability, then addressing anti-Black racism must remain the priority.
Do the benefits of collecting race-based data outweigh the harms? The stakes are much higher, and more insidious and dangerous, than we were led to believe.
Personal information, including health data, must be protected whether it is identifiable, de-identified or anonymized. Laws, regulation, policies and substantive enforceable penalties are the minimum pre-conditions that must be in place before more race-based data is collected and circulated.
LLana James, PhD Candidate, Faculty of Medicine, University of Toronto
This article is republished from The Conversation under a Creative Commons license. Read the original article.