
The Online Harms Act should target social media’s greatest harm


People get hooked on slot machines because random winnings train them to believe the next win may be just around the corner. (Pexels Photo)

A bill currently before Parliament aims to make social media safer by targeting the sharing of non-consensual sexual images and content fostering self-harm, bullying and hatred. 

But social media does vastly greater harm through the design and function of its algorithms. Ample evidence shows that algorithmic feeds make products such as TikTok, Snapchat and Instagram highly addictive and harmful, causing enormous damage to health and welfare, especially for teens. 

The federal government should amend its Online Harms Act to follow the example of Europe, California and other governments that have passed laws requiring Big Tech platforms to let users opt out of algorithmic feeds, or choose what personal data those feeds can exploit.

As drafted, the legislation targets only “harmful content,” imposing a duty on platforms to prevent sharing that content or to remove it promptly when flagged. The bill does not address the design or function of algorithmic feeds. 

Algorithms are not the sole cause of social media harm. Notifications, metrics and viral posts also drive addiction. But algorithms are at the heart of it. 

People get hooked on slot machines because random winnings train them to believe the next win may be just around the corner. Algorithmic feeds train us in a similar way, using a “variable reinforcement schedule” to keep us holding on for yet more stimulating content. 
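To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of a variable reinforcement schedule (every name and parameter is invented for this example, not drawn from any real platform): a simulated user keeps scrolling because each random "hit" resets their willingness to quit.

```python
import random

def scroll_session(reward_probability=0.1, patience=20, max_items=500, seed=42):
    """Simulate a user scrolling a feed that rewards on a variable schedule.

    Each item is 'stimulating' with a fixed probability, but the user never
    knows which one comes next -- so every reward resets their patience,
    much like an intermittent slot-machine payout.
    """
    rng = random.Random(seed)
    items_seen = 0
    dry_streak = 0  # consecutive unrewarding items
    while items_seen < max_items and dry_streak < patience:
        items_seen += 1
        if rng.random() < reward_probability:
            dry_streak = 0   # a hit: the urge to keep scrolling is renewed
        else:
            dry_streak += 1  # no hit, but the next item might deliver one
    return items_seen

# Even a 10 per cent hit rate keeps the session going far longer
# than the user's nominal patience of 20 items.
print(scroll_session())
```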

They draw on user data for “preference amplification,” showing ever more extreme content on topics such as anorexia or suicide to boost engagement. “Trending challenges” provoke young people to post videos of themselves trashing a school washroom or choking to the point of passing out, in the hope of going viral.
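Again purely as an illustration (the topics, weights and update rule below are hypothetical, not any platform's actual code), preference amplification can be sketched as a feedback loop: engagement with a topic raises its weight, so the feed serves more of it, at rising intensity.

```python
import random

def preference_amplified_feed(steps=100, seed=0):
    """Toy feedback loop: each click on a topic raises its weight, so the
    feed shows more of that topic -- and, in this sketch, content of
    rising 'intensity' within it."""
    rng = random.Random(seed)
    weights = {"sports": 1.0, "music": 1.0, "dieting": 1.0}
    intensity = {topic: 1 for topic in weights}  # proxy for how extreme content gets
    for _ in range(steps):
        # pick the next item in proportion to the learned weights
        topics, w = zip(*weights.items())
        topic = rng.choices(topics, weights=w)[0]
        if rng.random() < 0.5:         # the user engages half the time
            weights[topic] *= 1.5      # engagement is amplified...
            intensity[topic] += 1      # ...and the served content escalates
    return weights, intensity

print(preference_amplified_feed())
```

Nothing in the loop asks whether more intense content is good for the user; it asks only whether the content is engaging, which is why whichever topic draws early clicks comes to dominate the feed.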

It can no longer be doubted that algorithmic social media have led to an epidemic of addiction. 

An Ontario lawsuit alleges that 91 per cent of high-school students in the province use social media daily, with 31 per cent of these students using it for five hours a day and a stunning 14 per cent for seven hours a day. 

Evidence points to a strong correlation between time spent on social media and high rates of suicidal ideation, bullying and poor concentration. 

These harms have led to numerous lawsuits. Some 42 state attorneys general in the U.S. are suing Meta over the harmful effects of its addictive algorithms.

Hundreds of school boards across the U.S., along with four in Ontario, are suing TikTok and other platforms for the costs incurred to deal with the fallout of social-media addiction, such as poor concentration, violence and depression. 

Lawsuits are an important avenue of reform in this area, raising the prospect of compelling a provider to change its algorithms. However, legislation is a better option because it would enable a single set of rules to be established for all social media. 

The European Union and a handful of U.S. states have led the way. Europe’s Digital Services Act forces platforms to be clear about how their recommendation and content-moderation systems work.


They have to give users a choice to opt out of an algorithmic feed based on profiling. And platforms cannot “design, organise or operate their online interfaces in a way that deceives or manipulates” users or impairs their ability “to make free and informed decisions.”
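What such an opt-out means in engineering terms can be sketched in a few lines. The class and function names below are hypothetical, assuming a feed that normally ranks posts by a profiling-based engagement score and falls back to reverse-chronological order when the user opts out.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # score from a profiling-based model

def rank_feed(posts: list[Post], profiling_opt_out: bool) -> list[Post]:
    """Hypothetical feed ranking with a profiling opt-out.

    Opted out: plain reverse-chronological order, no personal data used.
    Opted in: ranked by the model's predicted engagement for this user.
    """
    if profiling_opt_out:
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```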

Several U.S. states have passed laws targeting addictive algorithms in social media. California led the way with its Age-Appropriate Design Code Act in 2022, which prohibits profiling or nudging young people, or tracking their precise location.

A recently tabled California bill, the Protecting Our Kids from Social Media Addiction Act, would also prohibit a platform from providing an “addictive feed” unless it takes reasonable steps to ascertain that a user is not a minor or has “verifiable parental consent.”

Members of both parties in Congress hope to take these ideas national in a bill currently before the Senate. 

That bill, the Kids Online Safety Act, would force TikTok and Instagram to “limit features that increase, sustain, or extend use” of the app by a minor, such as “automatic playing of media, rewards for time spent on the platform, notifications, and other features that result in compulsive usage.”

These bills have raised valid concerns about imposing undue limits on free speech. If social media platforms have a duty of care to young persons to avoid harm, or to the public generally, who is to say when that duty is breached? What sounds like harm to one person could be censorship to another.  

But the harm that algorithmic feeds pose to young people isn’t about content. It’s about manipulation through function and design. We don’t allow companies to sell products known to harm consumers, and we know that these algorithmic feeds harm young people.

The Online Harms Act needs to address this. 

This article first appeared on Policy Options and is republished here under a Creative Commons license.
