🐱 Berkeley scientists developing artificial intelligence tool to combat ‘hate speech’ on social media

CatParty
https://www.thecollegefix.com/berke...e-tool-to-combat-hate-speech-on-social-media/

Scientists at the University of California, Berkeley, are developing a tool that uses artificial intelligence to identify “hate speech” on social media, a program that researchers hope will out-perform human beings in identifying bigoted comments on Twitter, Reddit and other online platforms.

Scientists at Berkeley’s D-Lab “are working in cooperation with the [Anti-Defamation League] on a ‘scalable detection’ system—the Online Hate Index (OHI)—to identify hate speech,” the Cal Alumni Association reports.

In addition to artificial intelligence, the program will use several different techniques to detect offensive speech online, including “machine learning, natural language processing, and good old human brains.” Researchers aim to have “major social media platforms” one day utilizing the technology to detect “hate speech” and eliminate it, and the users who spread it, from their networks.

Current technology mainly involves the use of “keyword searches,” one researcher states, which are “fairly imprecise and blunt.” Current algorithms can be fooled by simply spelling words differently.
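The weakness the researcher describes is easy to demonstrate. Here is a minimal sketch, in Python, of why exact keyword matching is "imprecise and blunt": a blocklist of exact tokens misses trivial character substitutions, while a filter that normalizes common swaps first catches them. The blocklist entries and substitution table are hypothetical placeholders, not anything from the D-Lab's system.

```python
# Hypothetical blocklist; real systems would use far larger lists.
BLOCKLIST = {"badword", "slur"}

# Undo common leetspeak-style character swaps used to evade filters.
SUBSTITUTIONS = str.maketrans("013@$", "oieas")

def keyword_flag(text: str) -> bool:
    """Naive filter: flags only exact blocklist tokens."""
    return any(tok in BLOCKLIST for tok in text.lower().split())

def normalized_flag(text: str) -> bool:
    """Slightly hardened filter: normalize character swaps first."""
    cleaned = text.lower().translate(SUBSTITUTIONS)
    return any(tok in BLOCKLIST for tok in cleaned.split())
```

Writing "b@dword" instead of "badword" sails past `keyword_flag` but is caught by `normalized_flag`; chaining enough such tricks defeats normalization too, which is the gap ML-based approaches aim to close.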

The OHI intends to address these deficiencies. Already, their work has attracted the attention and financial support of the platforms that are most bedeviled—and that draw the most criticism—for hate-laced content: Twitter, Google, Facebook, and Reddit…

D-Lab initially enlisted ten students of diverse backgrounds from around the country to “code” the posts, flagging those that overtly, or subtly, conveyed hate messages. Data obtained from the original group of students were fed into machine learning models, ultimately yielding algorithms that could identify text that met hate speech definitions with 85 percent accuracy, missing or mislabeling offensive words and phrases only 15 percent of the time.
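The pipeline described above, human-labeled comments fed into a supervised learner that is then scored for accuracy, can be sketched with a toy Naive Bayes classifier. Everything here is an illustrative assumption: the labels, the example texts, and the choice of Naive Bayes are not the D-Lab's actual model, just a minimal stand-in for the train-then-evaluate loop the article describes.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs from human coders.
    Returns per-label word counts and per-label document counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def predict(model, text):
    """Pick the label with the highest log prior + log likelihood,
    using add-one smoothing so unseen words don't zero out a label."""
    word_counts, label_counts = model
    total = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

def accuracy(model, examples):
    """Fraction of examples whose predicted label matches the coder's."""
    hits = sum(predict(model, text) == label for text, label in examples)
    return hits / len(examples)
```

The article's "85 percent accuracy" corresponds to `accuracy` evaluated on comments the model has not seen; a production system would hold out a test split rather than score on training data as this toy does.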

Though the initial ten coders were left to make their own evaluations, they were given survey questions (e.g. “…Is the comment directed at or about any individual or groups based on race or ethnicity?”) to help them differentiate hate speech from merely offensive language. In general, “hate comments” were associated with specific groups while “non-hate” language was linked to specific individuals without reference to religion, race, gender, etc. Under these criteria, a screed against the Jewish community would be identified as hate speech while a rant—no matter how foul—against an African-American celebrity might get a pass, as long as his or her race wasn’t cited.
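The coding heuristic in that paragraph, hostility aimed at a group identity counts as hate speech, hostility aimed at an individual without reference to a protected class does not, can be sketched as a simple rule. The word lists below are hypothetical illustrations, not the actual survey criteria.

```python
# Hypothetical lexicons standing in for the coders' survey criteria.
GROUP_TERMS = {"jews", "muslims", "immigrants", "women"}
HOSTILE_TERMS = {"hate", "destroy", "vermin"}

def code_comment(text: str) -> str:
    """Label a comment 'hate' only when hostility co-occurs with a
    reference to a group identity; hostility alone is 'non-hate'."""
    words = set(text.lower().split())
    hostile = bool(words & HOSTILE_TERMS)
    group_targeted = bool(words & GROUP_TERMS)
    if hostile and group_targeted:
        return "hate"
    return "non-hate"
```

Under this rule a group-directed screed is flagged while an equally hostile rant at an individual passes, which is exactly the "might get a pass" asymmetry the article notes.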

One researcher warned against the possibility of inadvertent censorship: “Unless real restraint is exercised, free speech could be compromised by overzealous and self-appointed censors.” The lab is thus “working to minimize bias with proper training and online protocols that prevent operators from discussing codes or comments with each other.”

Read the report here.
 
Isn't this basically what Candid was used to develop? I'm not well versed in the shitshow surrounding it beyond the fact that YouTube skeptics were paid to shill it vehemently, but I'm pretty sure the ultimate fate of its AI moderation was making Google's censors better.
 
One researcher warned against the possibility of inadvertent censorship: “Unless real restraint is exercised, free speech could be compromised by overzealous and self-appointed censors.”

I just discovered a person made entirely out of irony. Nobody tell him what's already going on, he might just break.
 
If it's okay with all you faggots, I vote that the theme tune for this cringy fail-fuckery be a crappy recorder cover of the Terminator soundtrack.
 
Just wait until it starts targeting trannies and the woke crowd unless they specifically program it not to identify anti-straight/cis/white posts as hate speech (which is most likely).
 
Just wait until it starts targeting trannies and the woke crowd unless they specifically program it not to identify anti-straight/cis/white posts as hate speech (which is most likely).
When, and if, it does catch all forms of hate speech the backpedaling will be hilarious. Triple bonus points if trying to get it to read "it's okay to be white" as hate speech causes a fatal exception error so massive it completely toasts the server it is uploaded to.
 
How is this going to protect actual free speech? Actual free speech includes uncomfortable speech.

All this will do is protect the vitriol of those who are under no legitimate threat from society.
 
How is this going to protect actual free speech? Actual free speech includes uncomfortable speech.

All this will do is protect the vitriol of those who are under no legitimate threat from society.
Exactly. The first step is to lock down having special protection under the law, the second is to police speech so they don't have to hear or read about how they all need to "shut the fuck up and actually do something useful" and "we seriously can't get any work done with all those dick sucking noises, stop it!".
 
Exactly. The first step is to lock down having special protection under the law, the second is to police speech so they don't have to hear or read about how they all need to "shut the fuck up and actually do something useful" and "we seriously can't get any work done with all those dick sucking noises, stop it!".

You can't create a law under the assumption of an injustice that is ambiguous and not demonstrable. And you certainly should not let an AI enforce such stupid ideas. The cascading effects will reach beyond the scope of the people allegedly needing protection from words.

These people do not contribute much to society other than methane.
 
 
You can't create a law under the assumption of an injustice that is ambiguous and not demonstrable. And you certainly should not let an AI enforce such stupid ideas. The cascading effects will reach beyond the scope of the people allegedly needing protection from words.

These people do not contribute much to society other than methane.
Which is why "hate speech" becomes such a hard thing to pin down. There are blatant examples like "gas the kikes, race war now" and "The new Star Wars trilogy was okay and Hayden Christensen was good in it" because you're clearly inciting anger and violence. Then you've got these Berkeley kids who (I guarantee they fucking do) define hate speech as "anything I don't like" and "white people". This creates a conundrum: where do we draw the line between blatant, legitimate hate speech and someone's hurt feelings?
 
Considering it's going to be created by diversity hires, I wouldn't be too concerned about it actually working or anything.
 
People would just find new ways to troll, inventing new words and changing the meaning of old ones. Pretty much exactly what the sjws are doing right now except they're serious about it.

Allah forbid these people work on something productive and beneficial.
 