Empathy-based counter speech can reduce hate speech



Online hate speech can be curbed by inducing empathy for those affected. In contrast, the use of humour or warnings of possible consequences has little effect. A team of social scientists and 13 ETH Zurich students has demonstrated this in a new scientific publication.



The research team tested three different counterspeech strategies on Twitter to reduce hate speech. Most effective were comments that encouraged empathy with those targeted by hate speech. (Graphic: alliance F)


Online hate speech has become a pressing issue worldwide. On social networks, sexual minorities are vilified, members of particular religions are intimidated and ethnic groups are discriminated against. In addition, hate speech is a threat to democracy, as it can prevent those who are targeted from participating in public debate.
To moderate hateful comments, many social media platforms have developed sophisticated filters. However, these alone are not sufficient to fix the problem. For example, Facebook estimates (according to internal documents leaked in October 2021) that it is unable to delete more than 5 percent of the hate comments posted. Furthermore, automatic filters are imprecise and could harm freedom of speech.

Inducing empathy with those affected

An alternative to deleting problematic comments is the use of targeted counterspeech. Counterspeech is used by numerous organisations aiming to tackle online hate speech. Until now, however, little has been known about which counterspeech strategies are most effective in addressing online hostility. A team of researchers led by Dominik Hangartner, Professor of Public Policy at ETH, has now joined forces with colleagues at the University of Zurich to investigate what kind of messages could encourage authors of hate speech to refrain from such postings in the future.
Using machine learning methods, the researchers identified 1,350 English-speaking Twitter users who had published racist or xenophobic content. They randomly assigned these accounts to a control group or one of the following three often-used counterspeech strategies: messages that elicit empathy with the group targeted by racism, humour, or a warning of possible consequences.
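The design described above is a standard randomised field experiment: each flagged account is assigned at random to the control group or one of the three treatment arms. A minimal sketch of balanced random assignment is shown below; the function name, arm labels, and seed are illustrative assumptions, not the study's actual code.

```python
import random
from collections import Counter

def assign_arms(user_ids, arms=("control", "empathy", "humour", "consequences"), seed=42):
    """Shuffle the accounts, then deal them round-robin into the arms
    so that group sizes differ by at most one."""
    rng = random.Random(seed)
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    return {uid: arms[i % len(arms)] for i, uid in enumerate(shuffled)}

# 1,350 accounts, as in the study
users = [f"user_{i}" for i in range(1350)]
assignment = assign_arms(users)
counts = Counter(assignment.values())
```

Shuffling before the round-robin deal is what makes the assignment random; the fixed seed only serves to make the illustration reproducible.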
The results, which have just been published in the Proceedings of the National Academy of Sciences (PNAS), are clear: only counterspeech messages that elicit empathy with the people affected by hate speech are likely to persuade the senders to change their behaviour. An example of such a response could be: “Your post is very painful for Jewish people to read...” Compared to the control group, the authors of hate tweets posted around one-third fewer racist or xenophobic comments after such an empathy-inducing intervention. Additionally, the probability that a hate tweet was deleted by its author increased significantly. In contrast, the authors of hate tweets barely reacted to humorous counterspeech. Even a message that reminded the sender that their family, friends and colleagues could see their hateful comments, too, was not effective. This is striking because these two strategies are frequently used by organisations that are committed to combatting hate speech.
“We have certainly not found a panacea against hate speech on the internet, but we have uncovered important clues about which strategies might work, and which do not,” says Hangartner. What remains to be studied is whether all empathy-based responses work similarly well, or whether particular messages are more effective. For example, hate speech authors could be encouraged to put themselves in the victim’s shoes or be asked to adopt an analogous perspective (“How would you feel if people talked about you like that?”).

Blending teaching and research

Alongside Professors Karsten Donnay and Fabrizio Gilardi from the University of Zurich’s Digital Democracy Lab, 13 Master’s students from the ETH Center for Comparative and International Studies (CIS) were also heavily involved in the project. The students participated in all phases of the project, from developing an algorithm to detect hate tweets, to testing the strategies on Twitter, to statistical analysis and project management. “To me, this new type of collaborative seminar exemplifies a form of education that not only equips students with important tools for data science and social science, but also for research ethics. My hope is that this hands-on education enables them to make a positive impact in the field of digitalisation and social media,” says Hangartner.
The students involved take a similar view. “We haven’t just read about other people’s research; now we also know how a big research project works,” says Laurenz Derksen. “Although there was a lot of work involved, this experiment lit a fire in me and got me excited about ambitious and collaborative research,” Derksen continues.
Buket Buse Demirci, now a doctoral student, felt that the project went far beyond the normal scope of seminars. As an example, she cites the Pre-Analysis Plan: the public registration of every single research step before the start of the experiment, thus increasing the credibility of the statistical analyses as well as the reliability of the results. Another motivating factor, she says, is that all 13 students are listed as co-authors on the study detailing the results, which is published in one of the most prestigious interdisciplinary science journals. “I’ve contributed to a study that has not only been published in a scientific journal, but could also have an impact in the real world,” says Demirci.

Practical applications through NGO and media

Hangartner is aware that this type of research, embedded in a seminar, may sometimes also yield null results. Yet the experience is valuable for the students in any case, he says. It can help them anticipate what to expect in case they embark on PhD studies and provides hands-on research experience, which is an asset for many different careers inside and outside of academia.
The collaborative research seminar is part of a more comprehensive project to develop algorithms that detect hate speech, and to test and refine further counterspeech strategies. To this end, the research team is collaborating with the Swiss women’s umbrella organisation alliance F, which has initiated the civil society project Stop Hate Speech. Through this collaboration the scientists are able to directly translate their research insights into practice, and to provide an empirical basis for alliance F to optimise the design and content of their counterspeech messages.
“The research findings make me very optimistic. For the first time, we now have experimental evidence that shows the efficacy of counterspeech in real-life conditions,” says Sophie Achermann, Executive Director of alliance F and co-initiator of Stop Hate Speech. Also involved in the research project, which was sponsored by the Swiss innovation agency Innosuisse, are the media companies Ringier and TX Group via their newspapers Blick and 20 Minuten respectively.


Reference

Hangartner, D., Gennaro, G., Alasiri, S., Bahrich, N., Bornhoft, A., Boucher, J., Demirci, B.B., Derksen, L., Hall, A., Jochum, M., Murias Munoz, M., Richter, M., Vogel, F., Wittwer, S., Wüthrich, F., Gilardi, F., Donnay, K. Empathy-based counterspeech can reduce racist hate speech in a social media field experiment. Proceedings of the National Academy of Sciences, December 14, 2021, 118 (50) e2116310118. DOI: 10.1073/pnas.2116310118.
 
>“Your post is very painful for Jewish people to read...”
>“How would you feel if people talked about you like that?”

So the solution to "hate speech" is to be a condescending asshole?
 
I mean... Maybe... I know I'm biased by interacting with a lot of assholes but I kind of feel like using "I feel" statements on anyone who isn't already generally your friend/ally in other contexts is a futile effort.

If you've gotten to the point where you need to use those kinds of statements, then something has gone horribly wrong already, or you're butting into a conversation between total randos. In the former case, things probably devolved that far because at least one party legitimately doesn't care how the other feels, and if that respect for other people's mental states is not mutual then things are unstable and going to degenerate. In the latter case, you're an unwanted interloper. In either case because you have no rapport, you're going to get accused of lying about your emotions to manipulate people. (Well, you're trying to manipulate people regardless, but somehow it's ok if you're being honest about it).

The only cases I can see "I feel" statements genuinely changing behaviors are kind of not suitable for Twitter. Pleas for empathy may work if they are accompanied by long power leveling tales that explain why things hurt so the target can reverse engineer the logic in the future, but even if that weren't dumb to do on such a publicly visible soapbox as Twitter... How are you going to fit that in the character limit? They could also work when it's skin deep behavioral change that has no value, like soyboy feminist sex pests. "I did what you asked for, am I worthy yet?" No I know you're fucking faking it and the moment you have me "locked in" and unable to leave, you'll revert to your true face.
 
It's very interesting to hear these people talk of empathy when they would never consider the possibility that the people they disagree with may have valid points.
 
Calling yourself empathetic these days is like calling yourself a skeptic or a "realist". It's bullshit people label themselves with because they have a ridiculously high opinion of themselves.
 
Maybe spend less time on Twitter, and spend more time on trying to make IRL friends.
The kind of people who spend that much time on Twitter probably aren't allowed to see people irl anymore. Or if they can, they need to be like sextuple masked so they're faceless abominations that can't socialize properly anymore.

You don't need to feel too sorry for them though, it's the future they choose.
 
>I mean... Maybe... I know I'm biased by interacting with a lot of assholes but I kind of feel like using "I feel" statements on anyone who isn't already generally your friend/ally in other contexts is a futile effort.
The instant response is "FACTS OVER FEELS LOL!"
 
>Calling yourself empathetic these days is like calling yourself a skeptic or a "realist". It's bullshit people label themselves with because they have a ridiculously high opinion of themselves.
I've grown to despise the word "empathy". It's such a soy-filled bugman word.
 
“empathy-based”? Because I thought calling everyone I disagreed with a MAGAt Nazi inbred CHUD virgin oxy addict with a tinfoil hat was the way to go?
 
>I mean... Maybe... I know I'm biased by interacting with a lot of assholes but I kind of feel like using "I feel" statements on anyone who isn't already generally your friend/ally in other contexts is a futile effort.
>
>If you've gotten to the point where you need to use those kinds of statements, then something has gone horribly wrong already, or you're butting into a conversation between total randos. In the former case, things probably devolved that far because at least one party legitimately doesn't care how the other feels, and if that respect for other people's mental states is not mutual then things are unstable and going to degenerate. In the latter case, you're an unwanted interloper. In either case because you have no rapport, you're going to get accused of lying about your emotions to manipulate people. (Well, you're trying to manipulate people regardless, but somehow it's ok if you're being honest about it).
>
>The only cases I can see "I feel" statements genuinely changing behaviors are kind of not suitable for Twitter. Pleas for empathy may work if they are accompanied by long power leveling tales that explain why things hurt so the target can reverse engineer the logic in the future, but even if that weren't dumb to do on such a publicly visible soapbox as Twitter... How are you going to fit that in the character limit? They could also work when it's skin deep behavioral change that has no value, like soyboy feminist sex pests. "I did what you asked for, am I worthy yet?" No I know you're fucking faking it and the moment you have me "locked in" and unable to leave, you'll revert to your true face.
Or tldr, if simply saying "I don't like that" makes the other person stop using hate speech, then you weren't dealing with hate speech in the first place. If they really hated you then you getting sad would be motivation for them to keep going.
 
Realize that this article and the research isn't promoting empathy generally, which is indeed fanciful for a wokester to do, but rather specifically empathy "with those affected" rather than the "hate speech authors".
 
As if the media wasn't already saturated with messages of, "But don't you just feel so bad for the poor wittle queers/trannies/nigs/Asians/women?" being used as tools to manipulate people into submission.
 
Before that it was sympathy, then they wore it out & went with empathy instead.
At least sympathy was honest. I would totally believe they feel sorry for me because they think I'm a poor little retard who doesn't know that actually I want exactly what they want and they're oppressing me for my own good.

I don't believe they feel with me at all, or else they'd stop insisting I have internalized racism/sexism. If they felt with me, they could disagree about my conclusions, but at least they'd admit that what they want for me is not what I want for myself.
 
I don't care at all about how other people online feel, half of them are bots. I don't particularly care how people I don't personally know feel IRL either.

I have serious doubts this approach would work to reduce "hate speech" on most platforms.

>I've grown to despise the word "empathy". It's such a soy-filled bugman word.
 