🐱 Berkeley scientists developing artificial intelligence tool to combat ‘hate speech’ on social media

CatParty
https://www.thecollegefix.com/berke...e-tool-to-combat-hate-speech-on-social-media/

Scientists at the University of California, Berkeley, are developing a tool that uses artificial intelligence to identify “hate speech” on social media, a program that researchers hope will out-perform human beings in identifying bigoted comments on Twitter, Reddit and other online platforms.

Scientists at Berkeley’s D-Lab “are working in cooperation with the [Anti-Defamation League] on a ‘scalable detection’ system—the Online Hate Index (OHI)—to identify hate speech,” the Cal Alumni Association reports.

In addition to artificial intelligence, the program will use several different techniques to detect offensive speech online, including “machine learning, natural language processing, and good old human brains.” Researchers aim to have “major social media platforms” one day utilizing the technology to detect “hate speech” and eliminate it, and the users who spread it, from their networks.

Current technology mainly involves the use of “keyword searches,” one researcher states, which are “fairly imprecise and blunt.” Current algorithms can be fooled by simply spelling words differently, for instance:

The OHI intends to address these deficiencies. Already, their work has attracted the attention and financial support of the platforms that are most bedeviled—and that draw the most criticism—for hate-laced content: Twitter, Google, Facebook, and Reddit…
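The "spell the word differently" evasion the researcher describes is easy to demonstrate. Here is a minimal sketch (hypothetical blocklist and function names, not the OHI's actual code) of a blunt keyword filter and how a single swapped character slips past it:

```python
# Illustrative only: a naive keyword search of the "fairly imprecise
# and blunt" sort described in the article. "badword" is a stand-in
# for any blocked term.
BLOCKLIST = {"badword"}

def keyword_flag(text: str) -> bool:
    """Flag a post if any whitespace-separated token exactly matches
    a blocked keyword. Exact matching is the weakness: any respelling
    falls outside the blocklist."""
    return any(tok in BLOCKLIST for tok in text.lower().split())

print(keyword_flag("this is a badword example"))   # True: exact match
print(keyword_flag("this is a b4dword example"))   # False: one swapped letter evades it
```

A learned model that generalizes from character patterns, rather than matching literal strings, is the kind of improvement the article says the OHI is aiming for.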

D-Lab initially enlisted ten students of diverse backgrounds from around the country to “code” the posts, flagging those that overtly, or subtly, conveyed hate messages. Data obtained from the original group of students were fed into machine learning models, ultimately yielding algorithms that could identify text that met hate speech definitions with 85 percent accuracy, missing or mislabeling offensive words and phrases only 15 percent of the time.
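As a rough illustration of the pipeline the article describes (human-labeled examples in, a text classifier out), here is a toy Naive Bayes classifier in pure Python. The tiny corpus and its labels are invented stand-ins; nothing about the D-Lab's actual models or data is shown in the article:

```python
# Toy Naive Bayes text classifier: an illustration of "labeled posts
# fed into machine learning models", NOT the D-Lab's real system.
import math
from collections import Counter

def train(labeled):
    """labeled: list of (text, label) pairs, label in {0, 1} (1 = flagged)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in labeled:
        for tok in text.lower().split():
            counts[label][tok] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, vocab

def predict(model, text):
    counts, totals, vocab = model
    scores = {}
    for label in (0, 1):
        # Laplace-smoothed log-likelihood of the tokens under each class.
        s = 0.0
        for tok in text.lower().split():
            s += math.log((counts[label][tok] + 1) / (totals[label] + len(vocab)))
        scores[label] = s
    return max(scores, key=scores.get)

# Invented "coded" corpus standing in for the students' labels.
data = [("group x is subhuman", 1), ("x people should vanish", 1),
        ("nice weather today", 0), ("that movie was terrible", 0)]
model = train(data)
print(predict(model, "group x should vanish"))  # 1 on this toy data
```

The reported 85 percent accuracy is just the same idea run over a much larger labeled set and a far more capable model; the failure modes (the other 15 percent) are exactly what the posters below are skeptical about.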

Though the initial ten coders were left to make their own evaluations, they were given survey questions (e.g. "…Is the comment directed at or about any individual or groups based on race or ethnicity?") to help them differentiate hate speech from merely offensive language. In general, "hate comments" were associated with specific groups while "non-hate" language was linked to specific individuals without reference to religion, race, gender, etc. Under these criteria, a screed against the Jewish community would be identified as hate speech while a rant—no matter how foul—against an African-American celebrity might get a pass, as long as his or her race wasn't cited.
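The group-versus-individual criterion described above can be caricatured in a few lines. This is a deliberately crude sketch with made-up keyword lists; the actual coders answered survey questions and used human judgment, not string matching:

```python
# Hypothetical stand-ins for "reference to a protected group or
# attribute". The real coding relied on judgment, not keyword hits.
GROUP_OR_ATTRIBUTE_TERMS = {"jewish", "race", "religion", "gender", "ethnicity"}

def crude_hate_label(text: str) -> str:
    """Mirror the article's criterion: a comment invoking a group or
    protected attribute leans 'hate'; invective aimed at an individual
    without such a reference gets labeled 'non-hate'."""
    toks = set(text.lower().split())
    return "hate" if toks & GROUP_OR_ATTRIBUTE_TERMS else "non-hate"

print(crude_hate_label("a screed against the jewish community"))  # hate
print(crude_hate_label("a foul rant against some celebrity"))     # non-hate
```

The sketch makes the article's own caveat concrete: however foul the second comment is, nothing in it trips the group/attribute criterion, so it "gets a pass."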

One researcher warned against the possibility of inadvertent censorship: “Unless real restraint is exercised, free speech could be compromised by overzealous and self-appointed censors.” The lab is thus “working to minimize bias with proper training and online protocols that prevent operators from discussing codes or comments with each other.”

Read the report here.
 
Ah yes, writing an AI engine to censor speech will have NO negative consequences at all. This will never be turned against the people writing the code and they'll never wonder 'What happened?'
 
Reminds me of that video Tom Scott did about an AI developed to combat theft of copyrighted material. Essentially the AI goes rogue and erases a century of human culture from our brains.


I can’t see how something equally scary couldn’t happen with this SJW bot
 
Of course it's Berkeley. Also,
D-Lab initially enlisted ten students of diverse backgrounds from around the country to “code” the posts, flagging those that overtly, or subtly, conveyed hate messages.
:story: if this means what I'm 99.9% sure it means then that means the entire English lexicon is about to get flagged by TriggerTron 5000™
 
The report is gold (http://archive.is/eaG9Y)

 
:story: if this means what I'm 99.9% sure it means then that means the entire English lexicon is about to get flagged by TriggerTron 5000™

That's why it says "subtly" in there. Don't want to miss things like m.ilk, the o.k. symbol, or any number of other dog whistles that are out there!

Edit: wait, so they're admitting there's overall more non-hate than hate? Doesn't that kind of invalidate the point?
 
It's been years now. And I still can't bring myself to believe these people want to condemn racial division as "hate speech" when they are so determined at every moment to remind me that my white skin color is an inexcusable sin. Every bone in my body tells me I need to resist these totalitarians.

(Some of those links are calling out the problem, not contributing to it; in case you don't feel like reading all of it, it's not all bad at least.)
 
Imagine being so stupid and arrogant you think you can regulate the emotion of hate out of people.

What a bunch of faggots.
 
I have mentioned before how woke corporate/political authoritarianism was like a really shit pantomime adaptation of Nineteen Eighty-Four... now it looks to become a really shit pantomime adaptation of The Terminator.
 
Machines are more efficient at banning than humans, but imagine the efficiency of a system where you don't spend a cent on payroll or development... because instead of taking on the hopeless task of finding a censorship model everyone likes, you just let the end user stop reading stuff they don't like.
 
Yeah, nothing could ever backfire on a large scale, especially this.

If and when this AI comes to light, will it be able to withstand the autism that will come crashing into it, much like its predecessors?
 
machine learning models
Machine learning is quite possibly the worst invention of the decade, for allowing incompetent "developers" to implement incompetent policy.
Wasn't this the same stuff that was supposed to stop "hate speech" already (e.g. Perspective API)?
ML is the new blockchain.
 