Social media researchers are under attack. The online harms bill can help them fight back
The federal government recently introduced its long-promised Online Harms Act, Bill C-63.
Reaction so far has largely focused on the provisions to keep children safe online, on whether the bill overreaches in its definition of hate speech, as Conservative Leader Pierre Poilievre argues, and on whether it encroaches on freedom of expression, as the Canadian Civil Liberties Association has warned.
These are worthwhile discussions as the bill moves through the Commons, but it is critical to focus public attention on a central aspect of this legislation – the proposed duty for digital platforms to keep and share data. Retaining this section is of paramount importance because it offers the greatest potential to reshape our digital public sphere for the better.
A key metric of this bill's success will be how well it protects and enables research into online harms. Such research can empower both the public and the government to adapt and respond to rapidly changing digital technologies in ways that support a resilient and strong democratic culture.
Disappointingly, this essential aspect of the bill is being overlooked in most media coverage and in public discussions.
This part of the bill is crucial because researchers have long been kneecapped in their efforts to analyze how social media platform policies affect the well-being of the public and the health of our democracy.
Platform owners collect an abundance of data that is readily shared with marketers and advertisers for profit, but they heavily restrict and curtail access to researchers who work in the public interest.
Recent restrictions in API (application programming interface) access on X (formerly Twitter), Reddit and TikTok have increased the challenge for researchers.
A survey conducted by the Coalition for Independent Technology Research found that more than 100 studies of X Corp. have been cancelled, suspended or significantly altered since the new restrictions were implemented.
The big online platforms are also increasingly taking legal action against small non-profit organizations that have compellingly demonstrated harm perpetrated by these companies.
X recently filed a lawsuit against the Center for Countering Digital Hate, a non-profit committed to stopping the spread of online hate and disinformation via research, education and advocacy.
It has also taken legal action against Media Matters, a non-profit research organization that published a report exposing how X places advertisements for major brands next to pro-Nazi content.
Yet litigation is just one part of an arsenal of concerning new tactics being used to restrict research and suppress efforts to hold Big Tech corporations such as X, Meta (which owns Facebook and Instagram), and Alphabet (Google) accountable when they amplify harmful content or distort our social norms.
Digital violence is becoming more pervasive around the world. Social media platforms facilitate the spread of abusive content that has offline consequences, including widespread polarization, alienation and physical violence.
Algorithms that recommend content to users on social media accelerate the distribution of this material, allowing it to reach new audiences and normalize harmful discourse.
There is evidence that under X Corp.’s new ownership, hateful content is not only being under-moderated but has increased, including targeted hate such as antisemitism, Islamophobia and anti-LGBTQ+ rhetoric.
This is a Canadian problem as well as a global one. High levels of online abuse have been quantified in Canadian federal, Ontario and municipal elections through the Samara Centre for Democracy’s SAMbot project.
Pre-pandemic, more than 40 per cent of Canadians didn’t feel safe sharing their political views online. Since then, online (and offline) hate has increased dramatically, according to the B.C. Office of the Human Rights Commissioner.
A 2021 Canadian Race Relations Foundation poll found that 93 per cent of Canadians believe online hate speech and racism are a problem. Seventy-nine per cent want online hate speech and racism treated by lawmakers with the same seriousness as in-person hate crimes.
Increasing transparency is one of the most recommended, evidence-based strategies to address digital violence, and there are encouraging efforts underway internationally to increase these requirements.
Under its Digital Services Act, the European Commission is drafting regulations that would require the large tech platforms to provide data access for research purposes in the EU.
In the U.S., the Platform Accountability and Transparency Act, which would require platforms to make some data publicly available, among other research supports, has been reintroduced in the Senate.
With Bill C-63, Canada has the opportunity to position itself as a global leader in digital democracy research. The proposed bill creates a new Digital Safety Commission of Canada with the power to accredit certain people or groups and provide them access to relevant data from digital platforms if their work is intended for educational or advocacy purposes.
We await clearer direction on how accessible this data will be, how inclusive the data-access system will be for all types of researchers, how transparent the accreditation process will be, and how much resistance researchers will face from federal organizations in accessing this data.
The full benefit of this crucial aspect of the bill will be realized only if research projects, large or small, led by civil society researchers, have equitable access to data.
If we can achieve broad and diverse research accreditation, Canada will have the opportunity to drive research that informs digital-policy legislation internationally – to transform our online spaces for the better, with democratic values in mind.
We see much of the content of this proposed legislation as a positive first step in effectively regulating digital platforms to act in the interests of democratic expression. But no single piece of legislation can address every social harm facilitated by digital spaces.
That is why ensuring Canadian researchers can quickly and equitably access comprehensive data from major digital platforms is so vital. Quality Canadian research should directly inform future legislative efforts.
Canada’s digital-rights strategy needs to continue to progress regardless of the status of C-63.
We need empirical evidence on how digital technologies are affecting our social fabric so policymakers can draft effective digital policy. That all starts with requiring tech companies to be more transparent as well as permitting broad data access for civil-society researchers.
To begin fostering a healthier digital media landscape in Canada, our best defence is transparency, research and public accountability.
This article first appeared on Policy Options and is republished here under a Creative Commons license.