Australia passes law to hold social media companies responsible for “abhorrent violent material”

On Thursday, Australian lawmakers ushered through what are perhaps the toughest legal measures yet to hold social media companies accountable for the content they share.

Only a few weeks after the massacre of 50 people at mosques in Christchurch, New Zealand, Australia’s House of Representatives passed a law requiring social media platforms to “expeditiously” remove content that shows kidnapping, murder, rape, or terrorist attacks. If the platforms fail to get rid of the content in a timely fashion, employees could face prison time in Australia and companies could be subject to fines of up to 10% of their annual profit.

“These platforms should not be weaponized for these purposes,” Christian Porter, Australia’s attorney general, was quoted as saying in a New York Times report. “Internet platforms must take the spread of abhorrent violent material online seriously,” he added.

The law puts Australia at the center of a contentious debate around free speech, censorship, and content moderation that’s now raging across the globe.

India has also proposed measures to limit the spread of misinformation on social media platforms, which has given rise to questions about whether the laws are tantamount to censorship of speech that the ruling power might find offensive. And the European Union has said that social media platforms are struggling to comply with the regulations it enacted in 2016 and 2017 to combat hate speech.

Already, the advocacy group that represents Facebook, Google, and other companies is speaking out against the regulations in Australia.

“This law, which was conceived and passed in five days without any meaningful consultation, does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” Sunita Bose, the managing director of the Digital Industry Group, which represents social media companies, told The New York Times.

Social media companies are having enough problems adhering to the standards they set for themselves. Despite pledging to remove posts that advocate for white supremacy or white nationalism, Facebook has not removed content posted just this week that flies in the face of that decision.

Notorious Canadian white supremacist Faith Goldy posted content earlier this week that would seem to qualify as promoting white supremacism — demanding that Jews and people of color pay back white European countries that they “invaded,” according to a report in HuffPo (a sister publication owned by Verizon Media Group).

In a video titled “RACE AGAINST TIME,” which Goldy posted after Facebook’s commitment to ban white supremacist content, Goldy said:

“The Great White North is destined to become a majority minority country in less than a generation… Stateside, even with President Trump at the helm, the United States is not being spared from the ongoing relentless process of population replacement. … Whites will be a minority in America in less than a generation.”

YouTube, which has come under fire for its own reluctance and recalcitrance when it comes to removing hate speech, demonetizes such content but allows it to remain in circulation on its video platform.

As Bloomberg reported earlier this week, several employees inside the company have raised concerns about the platform’s role in spreading lies, propaganda, and hate speech. Many employees, according to the Bloomberg report, even tried to take steps to stop the spread of malicious videos that contained misinformation, hate speech, or questionable content — and at every turn these employees were stymied by executives.

Even the U.S. government is paying more attention to the problem of white supremacy and the role that social media platforms play in spreading hate speech. In testimony today before the House Appropriations Committee, FBI Director Christopher Wray was questioned about the rise of white supremacist content.

“The danger — I think — of white supremacist violent extremism or any other kind of violent extremism is of course significant. We assess that it is a persistent pervasive threat,” said Wray. “In general domestic terrorism in this country has changed in the sense that it’s less structured, less organized, more uncoordinated one-off individuals as opposed to some structured hierarchy….There’s a lot of social media exploitation that comes with it.”
