Sermons urging social media fasts or sabbaticals from screen time are woefully inadequate responses to the racial and religious hatred individuals and faith groups increasingly experience online, Interfaith Alliance President Paul Brandeis Raushenbush said.
Religious leaders and communities have a duty to sound the alarm about the rise of white supremacy, Christian nationalism and anti-democracy messaging online, Raushenbush said during the livestreamed release of the Alliance’s new report, “Big Tech, Hate and Religious Freedom Online.”
“Right now, religious communities have no understanding of their mandate” to push back against the cyber hate aimed at them and others, he said. They can do that by “asking the question, ‘How can we help shape this? How can we be trained in disrupting hate? How can we figure out how to be better citizens online?’”
During the Jan. 25 roll-out of the report, the Baptist minister moderated a panel discussion with three leading experts on the proliferation of hate on social media, gaming and other online platforms. Each agreed that faith groups must have a role in holding tech firms accountable for the mushrooming level of race-based and religious bigotry on their sites.
Civic and religious groups “have to figure out ways to bring pressure to bear on these companies, to bring light to what they’re doing and, frankly, to put pressure on them and to embarrass them for the things they’re not doing and hope that, through that process, there will be incremental change,” said Paul Barrett, deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business.
The challenge is exacerbated by ongoing economic realities that have led some tech firms to downsize or eliminate content moderation staffs, he said.
That means many companies are not hearing those protesting hate-filled content, said Zaki Barzinji, program director of Aspen Digital, where he manages projects involving the convergence of policy, tech, justice and equity for marginalized communities.
But the challenge is much deeper than that, he added. “The fundamental problem? It’s the lack of meaningful inclusion of marginalized communities at every step of the design and implementation of emerging technologies. … If you’re not doing a good enough job of including them in the development of these tools and platforms, even unintentionally, you’re going to give rise to the spread of hateful and uniquely targeted content against those communities.”
The Alliance report highlights the dangerous connection between online hate and real-world violence.
It cites the May 2022 shooting massacre in a Buffalo, N.Y., grocery store as an example. “The perpetrator of the devastating attack … streamed the massacre on Twitch. The shooter wrote a manifesto on Google Docs filled with white supremacist ideology, stating that he was radicalized on 4chan in 2020.”
While the Twitch livestream was taken down within minutes, a video of the assault remained on Facebook for more than 10 hours, giving almost 50,000 people time to share it.
“His actions, and the failure of platforms to identify and take down content like this immediately, created further extremist material for other users to view,” the report states.
Other social media platforms offer even less content monitoring, resulting in virtual gatherings of like-minded users sharing false narratives and ideologies of hate “however outlandish they may be,” the report says.
“Social media platforms and their parent companies must be held accountable for their role in the spread of hateful content,” the report advises. “When one industry wields such an enormous amount of power over how we connect, we must address critical failures in content moderation to protect the safety and well-being of our communities.”
The Interfaith Alliance recommends numerous solutions, from providing social media literacy training for youth to enacting laws and regulations that hold tech firms liable for speech and violence inspired on their platforms.
“Real-world violence, inspired by online hate and harassment, is growing. It’s impossible to fulfill our inclusive vision of religious freedom without addressing the role of social media in disseminating hateful ideologies and the acts of violence they inspire,” the report says.
The challenges cited in the report should be of special interest to religious groups because people and communities of faith are “harassed online at alarming rates,” said Lauren Krapf, counsel for technology policy and advocacy for the Anti-Defamation League.
A 2022 ADL study found 78% of Muslim and 68% of Jewish respondents were attacked online due to their religious identity. Of those victims, 44% said they feared experiencing more intimidation in the future.
“These high numbers really illustrate the unique and problematic characteristics of harassment and hate online when it comes to religion,” she said. “This is not only about the individual, but it’s about the community, generally.”
Communities of faith can push back against the online harassment aimed at them and others by engaging in “authentic and curious” conversations about how they use social media, Krapf suggested. They also can connect with advocacy groups that campaign to hold tech firms accountable.
“We’re seeing this conversation at all levels and not just with folks that are deeply entrenched in the technology policy space, and not just with industry folks,” she said. “I think that is hugely important. It’s showing that we’re getting smarter as a community and that we’re not going to put up with the status quo.”