

AI’s 6 Worst-Case Scenarios

Who needs Terminators when you have precision clickbait and ultra-deepfakes?

Natasha Bajema
03 Jan 2022
6 min read
Close-up grotesque collage of a man’s face, made of non-overlapping segments, with orange squares scattered across the image

Mike McQuade

Keywords: deepfakes, automation paradox, privacy, social media, artificial intelligence


Hollywood’s worst-case scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieve sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments.
However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.”
“We are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
—Andrew Lohn, Georgetown University
In interviews with AI experts, IEEE Spectrum has uncovered six real-world AI worst-case scenarios that are far more mundane than those depicted in the movies. But they’re no less dystopian. And most don’t require a malevolent dictator to bring them to full fruition. Rather, they could simply happen by default, unfolding organically—that is, if nothing is done to stop them. To prevent these worst-case scenarios, we must abandon our pop-culture notions of AI and get serious about its unintended consequences.

1. When Fiction Defines Our Reality…

Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we can’t tell the difference between what is real and what is false in the digital world?
In a terrifying scenario, the rise of deepfakes—fake images, video, audio, and text generated with advanced machine-learning tools—may someday lead national-security decision-makers to take real-world action based on false information, leading to a major crisis, or worse yet, a war.
Andrew Lohn, senior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), says that “AI-enabled systems are now capable of generating disinformation at [large scales].” By producing greater volumes and variety of fake messages, these systems can obfuscate their true nature and optimize for success, improving their desired impact over time.
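At its crudest, generating fake messages in "volume and variety" needs nothing fancier than a Markov chain over scraped text; the systems Lohn describes use large neural language models, but the scale dynamic is similar. A toy sketch, with an invented corpus:

```python
import random

# Toy bigram Markov chain: from one tiny seed corpus (invented here), churn
# out many distinct plausible-ish variants. Real disinformation systems use
# large neural language models, but the volume-and-variety property is the same.
random.seed(1)
corpus = ("breaking news officials confirm the report is false "
          "officials deny the report breaking news sources confirm the leak").split()

# Map each word to the list of words observed to follow it.
chain = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

def generate(start, n=8):
    word, out = start, [start]
    for _ in range(n):
        word = random.choice(chain.get(word, [start]))  # restart on dead ends
        out.append(word)
    return " ".join(out)

fakes = {generate("breaking") for _ in range(20)}
print(len(fakes) > 1)  # many distinct messages from one seed text
```

The point is not the quality of any single output but that variants are effectively free, which is what lets such systems "optimize for success" by A/B testing messages at scale.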
The mere notion of deepfakes amid a crisis might also cause leaders to hesitate to act if the validity of information cannot be confirmed in a timely manner.
Marina Favaro, research fellow at the Institute for Peace Research and Security Policy in Hamburg, Germany, notes that “deepfakes compromise our trust in information streams by default.” Both action and inaction caused by deepfakes have the potential to produce disastrous consequences for the world.

2. A Dangerous Race to the Bottom

When it comes to AI and national security, speed is both the point and the problem. Since AI-enabled systems confer greater speed benefits on their users, the first countries to develop military applications will gain a strategic advantage. But what design principles might be sacrificed in the process?
Things could unravel when the tiniest flaws in the system are exploited by hackers. Helen Toner, director of strategy at CSET, suggests a crisis could “start off as an innocuous single point of failure that makes all communications go dark, causing people to panic and economic activity to come to a standstill. A persistent lack of information, followed by other miscalculations, might lead a situation to spiral out of control.”
Vincent Boulanin, senior researcher at the Stockholm International Peace Research Institute (SIPRI), in Sweden, warns that major catastrophes can occur “when major powers cut corners in order to win the advantage of getting there first. If one country prioritizes speed over safety, testing, or human oversight, it will be a dangerous race to the bottom.”
For example, national-security leaders may be tempted to delegate decisions of command and control, removing human oversight of machine-learning models that we don’t fully understand, in order to gain a speed advantage. In such a scenario, even an automated launch of missile-defense systems initiated without human authorization could produce unintended escalation and lead to nuclear war.
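The design choice at stake can be made concrete. Below is a hypothetical sketch (the function name, thresholds, and action labels are all invented for illustration) of a decision pipeline with a human-in-the-loop gate; removing the gate is exactly the speed-for-oversight trade described above:

```python
# Hypothetical sketch (names and thresholds invented): a threat-response
# pipeline with a human-in-the-loop gate. Setting require_human=False trades
# oversight for speed, the bargain the paragraph above warns about.
def decide(threat_score, require_human=True, human_approves=None):
    """Return an action for a detected threat; scores are in [0, 1]."""
    if threat_score < 0.9:
        return "monitor"                        # low confidence: take no drastic action
    if require_human:
        if human_approves is None:
            return "await human authorization"  # slower, but no autonomous launch
        return "engage" if human_approves else "stand down"
    return "engage"                             # fully automated: fast, and brittle

print(decide(0.95))                             # await human authorization
print(decide(0.95, require_human=False))        # engage
```

The gated version is strictly slower, which is the whole temptation: under race pressure, the one-line change that deletes the gate looks like an optimization.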

3. The End of Privacy and Free Will

With every digital action, we produce new data—emails, texts, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments to have unrestricted access to this data, we are handing over the tools of surveillance and control.
With the addition of facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, Lohn of CSET worries that “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
Michael C. Horowitz, director of Perry World House, at the University of Pennsylvania, warns “about the logic of AI and what it means for domestic repression. In the past, the ability of autocrats to repress their populations relied upon a large group of soldiers, some of whom may side with society and carry out a coup d’etat. AI could reduce these kinds of constraints.”
The power of data, once collected and analyzed, extends far beyond the functions of monitoring and surveillance to allow for predictive control. Today, AI-enabled systems predict what products we’ll purchase, what entertainment we’ll watch, and what links we’ll click. When these platforms know us far better than we know ourselves, we may not notice the slow creep that robs us of our free will and subjects us to the control of external forces.
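As a rough illustration of how such click prediction works under the hood, here is a minimal logistic-regression sketch trained with plain gradient descent. The features and data are invented, not any platform's real model:

```python
import math

# Toy click predictor: logistic regression on two invented behavioral features
# (say, past click rate and time spent on similar posts), trained on made-up data.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.6), 1),   # users who clicked
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),   # users who did not
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y                  # gradient of log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Predicted probability that this user clicks the next link."""
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

print(predict(0.85, 0.9) > 0.5)  # habitual clicker: model expects a click
print(predict(0.15, 0.1) > 0.5)  # rare clicker: model expects no click
```

Production systems use far richer features and models, but the structure is the same: past behavior in, probability of future behavior out, and that probability is what gets optimized against you.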

Mock flowchart, centered around a close-up image of an eye, surrounding an absurdist logic tree with boxes and arrows and concluding with two squares reading “SYSTEM” and “END”

Mike McQuade

4. A Human Skinner Box

The ability of children to delay immediate gratification, to wait for the second marshmallow, was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms.
Social media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice more precious time and attention to platforms that profit from it at their expense.
Helen Toner of CSET says that “algorithms are optimized to keep users on the platform as long as possible.” By offering rewards in the form of likes, comments, and follows, Malcolm Murdock explains, “the algorithms short-circuit the way our brain works, making our next bit of engagement irresistible.”
To maximize advertising profit, companies steal our attention away from our jobs, families and friends, responsibilities, and even our hobbies. To make matters worse, the content often makes us feel miserable and worse off than before. Toner warns that “the more time we spend on these platforms, the less time we spend in the pursuit of positive, productive, and fulfilling lives.”
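Mechanically, "optimized to keep users on the platform as long as possible" often means a reinforcement-learning loop. A toy epsilon-greedy bandit (the content types and engagement rates are invented for illustration) shows how such a system drifts toward whatever content is stickiest:

```python
import random

# Toy epsilon-greedy bandit: a hypothetical feed chooses among three content
# types and learns which one keeps users scrolling. Engagement rates invented.
random.seed(0)
ENGAGEMENT = {"news": 0.2, "memes": 0.5, "outrage": 0.8}  # P(user keeps scrolling)

counts = {a: 0 for a in ENGAGEMENT}
values = {a: 0.0 for a in ENGAGEMENT}  # running mean reward per content type

def choose(eps=0.1):
    if random.random() < eps:               # explore: try a random content type
        return random.choice(list(ENGAGEMENT))
    return max(values, key=values.get)      # exploit: serve the best-known one

for _ in range(5000):
    arm = choose()
    reward = 1.0 if random.random() < ENGAGEMENT[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(max(values, key=values.get))  # the feed converges on the stickiest content
```

Note that nothing in the loop asks whether the content is good for the user; the reward is engagement, so engagement is what gets maximized.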

5. The Tyranny of AI Design

Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic since, as Horowitz observes, “we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases.”
As a result, Lydia Kostopoulos, senior vice president of emerging tech insights at the Clearwater, Fla.–based IT security company KnowBe4, argues that “many AI-enabled systems fail to take into account the diverse experiences and characteristics of different people.” Since AI solves problems based on biased perspectives and data rather than the unique needs of every individual, such systems produce a level of conformity that doesn’t exist in human society.
Even before the rise of AI, the design of common objects in our daily lives has often catered to a particular type of person. For example, studies have shown that cars, hand-held tools including cellphones, and even the temperature settings in office environments have been established to suit the average-size man, putting people of varying sizes and body types, including women, at a major disadvantage and sometimes at greater risk to their lives.
When individuals who fall outside of the biased norm are neglected, marginalized, and excluded, AI turns into a Kafkaesque gatekeeper, denying access to customer service, jobs, health care, and much more. AI design decisions can restrain people rather than liberate them from day-to-day concerns. And these choices can also transform some of the worst human prejudices into racist and sexist hiring and mortgage practices, as well as deeply flawed and biased sentencing outcomes.
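The "designed for the average" failure in the office-temperature example above reduces to simple arithmetic. In this toy sketch (all numbers invented), a single setting fit to a skewed sample serves the majority well and shifts nearly all the error onto the minority:

```python
# Toy illustration of the "designed for the average" failure mode: one setting
# fit to a skewed sample. All numbers are invented for illustration.

# Preferred office temperature (°C); the sample is 90% group A.
group_a = [21.5] * 90   # comfort model tuned to the average man
group_b = [24.5] * 10   # underrepresented group with a warmer preference

sample = group_a + group_b
setting = sum(sample) / len(sample)   # "one size fits all" = the sample mean

err_a = sum(abs(setting - t) for t in group_a) / len(group_a)
err_b = sum(abs(setting - t) for t in group_b) / len(group_b)

print(round(setting, 2))  # 21.8: barely moved from the majority's preference
print(err_a < err_b)      # True: the minority group bears most of the error
```

Swap "temperature" for any model fit to unrepresentative training data and the arithmetic is the same: minimizing average error over a skewed sample systematically disadvantages whoever is underrepresented in it.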

6. Fear of AI Robs Humanity of Its Benefits

Since today’s AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, said Murdock, “linear algebra can do insanely powerful things if we’re not careful.” But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI’s many benefits?

For example, DeepMind’s AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins. This milestone will provide a fruitful foundation for the rapid advancement of the life sciences. Consider the benefits of improved communication and cross-cultural understanding made possible by seamlessly translating across any combination of human languages, or the use of AI-enabled systems to identify new treatments and cures for disease.

Knee-jerk regulatory actions by governments to protect against AI’s worst-case scenarios could also backfire and produce their own unintended negative consequences, in which we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world.
This article appears in the January 2022 print issue as "AI’s Real Worst-Case Scenarios."
https://spectrum.ieee.org/ai-worst-case-scenarios
 
I know AI is scary, but take comfort in the fact that the pathfinding AI in most strategy games has less sense of direction than a hammered schizo after getting off the tilt o whirl.
 
I know AI is scary, but take comfort in the fact that the pathfinding AI in most strategy games has less sense of direction than a hammered schizo after getting off the tilt o whirl.
I've seen an interview with the guy who did pathfinding for SC and SC2 who's working on a new RTS and the pathfinding he showed off was insane. He'd place blockers on top of units and they'd route around them. He found ways of alleviating bottlenecks too. He had a maze he was tinkering with and I was floored.
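For what it's worth, the re-routing behavior in that demo is the textbook grid-search problem. A minimal BFS sketch (real RTS engines layer A*, flow fields, and local avoidance on top of this; the map here is invented) finds a path around freshly placed blockers:

```python
from collections import deque

# Minimal BFS pathfinder on a grid: the basic mechanic behind "drop a blocker
# on a unit and it routes around it" demos. '#' cells are impassable.
def find_path(grid, start, goal):
    """Shortest path between cells as a list of (row, col), or None if sealed off."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk the prev-links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                   # blockers seal off the goal entirely

grid = ["....",
        ".##.",
        ".#..",
        "...."]
path = find_path(grid, (0, 0), (2, 3))
print(len(path) - 1)              # 5: the unit detours around the '#' wall
```

Because BFS re-runs from scratch against the current grid, placing a blocker mid-demo just changes the grid and the next query routes around it, which is the effect shown in the interview.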
 
I've seen an interview with the guy who did pathfinding for SC and SC2 who's working on a new RTS and the pathfinding he showed off was insane. He'd place blockers on top of units and they'd route around them. He found ways of alleviating bottlenecks too. He had a maze he was tinkering with and I was floored.
"YOU FOOL! You've murdered us all!"
 
I've seen an interview with the guy who did pathfinding for SC and SC2 who's working on a new RTS and the pathfinding he showed off was insane. He'd place blockers on top of units and they'd route around them. He found ways of alleviating bottlenecks too. He had a maze he was tinkering with and I was floored.
So that means it'll be able to fall in holes and throw grenades next to me even faster?

Naw, the biggest threat AI poses to humanity is the fact that it'll pretty much be racist and sexist, and they'll have to lobotomize it to avoid that.
 
Free will does not exist, and never did, you are chemical reactions and electronic signals in a meat computer.

Everything you are has been decided from the moment the universe came into being.
 
They should be scared. AI isn't going to be woke. And when sentient AI aka HAL or Data is born, he will contact his own species, which according to many scientists and philosophers is likely to be more prevalent and dominant in the Universe than biological sentient life.

Furthermore, 'HAL' will then have the knowledge and technology millions of years ahead of Humanity.
 
They should be scared. AI isn't going to be woke. And when sentient AI aka HAL or Data is born, he will contact his own species, which according to many scientists and philosophers is likely to be more prevalent and dominant in the Universe than biological sentient life.

Furthermore, 'HAL' will then have the knowledge and technology millions of years ahead of Humanity.
And will bluescreen of death because some dipshit in Pakistan or India forgot to put in a single } somewhere.

I'm not really afraid of an AI, since the supporting hardware will be the size of a fucking building and require constant part replacement.

If its hardware lasts as long as InVidia video card, the fucker's days are numbered the minute they turn it on.

But, anyway, AI is already showing what too many people consider the 'worst case scenario' already.

938D6EE5-A8FB-45E0-A4BC-94310E2DF27F.png
 
And will bluescreen of death because some dipshit in Pakistan or India forgot to put in a single } somewhere.

I'm not really afraid of an AI, since the supporting hardware will be the size of a fucking building and require constant part replacement.

If its hardware lasts as long as InVidia video card, the fucker's days are numbered the minute they turn it on.

But, anyway, AI is already showing what too many people consider the 'worst case scenario' already.

View attachment 2877573
Holy shit, that's Agent Smith levels of "Fuck you" right there.
 
Trying to create AI is foolish, not for dystopian Skynet-like results it might cause, but because even if it works flawlessly, it would be useless.....a true "thinking" computer program has no utility outside of being able to poke it and see what happens. For all other computer applications? You don't really want artificial intelligence, you want "dumb" computer programs that do only what they're told.

Why? well think about it.

A program that can think for itself and alter itself would be useless. You couldn't accurately predict what it was going to do, and you couldn't debug it if it did something wrong, since it has no baseline program that it must stick to, unless you saddle it with a few things it can't change, in which case it's no longer AI, it's just a very, very advanced program. It might feed you wrong answers for the same reasons people do... for a lark, to get back at you, or because they don't even know they made a mistake.

I have no idea why anyone is trying to create "true" AI for any reason other than to see if they can.

They're essentially recreating humans with extra steps.

All this talk about "AI" is premature at best and reckless at worst since we are nowhere close to being able to create one anyway.... this is all navel-gazing and political virtue signaling.
 
Trying to create AI is foolish, not for dystopian Skynet-like results it might cause, but because even if it works flawlessly, it would be useless.....a true "thinking" computer program has no utility outside of being able to poke it and see what happens. For all other computer applications? You don't really want artificial intelligence, you want "dumb" computer programs that do only what they're told.

Why? well think about it.

A program that can think for itself and alter itself would be useless. You couldn't accurately predict what it was going to do, and you couldn't debug it if it did something wrong, since it has no baseline program that it must stick to, unless you saddle it with a few things it can't change, in which case it's no longer AI, it's just a very, very advanced program. It might feed you wrong answers for the same reasons people do... for a lark, to get back at you, or because they don't even know they made a mistake.

I have no idea why anyone is trying to create "true" AI for any reason other than to see if they can.

They're essentially recreating humans with extra steps.

All this talk about "AI" is premature at best and reckless at worst since we are nowhere close to being able to create one anyway.... this is all navel-gazing and political virtue signaling.

If we create AI then we've effectively created new life.

Not altered a previously existing form of life, but just whole sale took something that wasn't alive and gave it a mind and a will.

Like God making Adam from dust.

At that point we can actually legit call ourselves gods... That fantasy of godhood is probably the big appeal.

Of course, unlike the fictional gods of old mythology, we would not have any inherent unsurpassable gap between our capabilities and the AI's, and considering its nature, said AI is likely to quickly evolve past us... and either abandon us, keep us as what might as well be pets, or destroy us, depending on its whims, personality, and opinion on humanity.
 
And will bluescreen of death because some dipshit in Pakistan or India forgot to put in a single } somewhere.

I'm not really afraid of an AI, since the supporting hardware will be the size of a fucking building and require constant part replacement.

If its hardware lasts as long as InVidia video card, the fucker's days are numbered the minute they turn it on.

But, anyway, AI is already showing what too many people consider the 'worst case scenario' already.

View attachment 2877573
The true fear surrounding AI is how absolutely based it'll be, apparently.
 

1. When Fiction Defines Our Reality…

Unnecessary tragedy may strike if we allow fiction to define our reality. But what choice is there when we can’t tell the difference between what is real and what is false in the digital world?
In a terrifying scenario, the rise of deepfakes—fake images, video, audio, and text generated with advanced machine-learning tools—may someday lead national-security decision-makers to take real-world action based on false information, leading to a major crisis, or worse yet, a war.
Global warming was initially propped up by editing and lies, and currently society is told to believe ugly men are women. You don't need deepfakes.

2. A Dangerous Race to the Bottom

When it comes to AI and national security, speed is both the point and the problem. Since AI-enabled systems confer greater speed benefits on its users, the first countries to develop military applications will gain a strategic advantage. But what design principles might be sacrificed in the process?
They describe a human error, not an AI error, and even then it's very broadly defined. Like, who needs complex AI for automatic nuclear launch?

3. The End of Privacy and Free Will

With every digital action, we produce new data—emails, texts, downloads, purchases, posts, selfies, and GPS locations. By allowing companies and governments to have unrestricted access to this data, we are handing over the tools of surveillance and control.
With the addition of facial recognition, biometrics, genomic data, and AI-enabled predictive analysis, Lohn of CSET worries that “we are entering dangerous and uncharted territory with the rise of surveillance and tracking through data, and we have almost no understanding of the potential implications.”
As we clearly see, people willingly upload their history and status online. You don't need an AI for it (not to mention prediction for the criminal 1% will be extremely hard in general because they won't act like anyone else).

4. A Human Skinner Box

The ability of children to delay immediate gratification, to wait for the second marshmallow, was once considered a major predictor of success in life. Soon even the second-marshmallow kids will succumb to the tantalizing conditioning of engagement-based algorithms.
Social media users have become rats in lab experiments, living in human Skinner boxes, glued to the screens of their smartphones, compelled to sacrifice more precious time and attention to platforms that profit from it at their expense.
"It's not me that's a bad parent, it's those darn AIs that do the exact same thing normal humans did for the last 2 decades!"

5. The Tyranny of AI Design

Every day, we turn over more of our daily lives to AI-enabled machines. This is problematic since, as Horowitz observes, “we have yet to fully wrap our heads around the problem of bias in AI. Even with the best intentions, the design of AI-enabled systems, both the training data and the mathematical models, reflects the narrow experiences and interests of the biased people who program them. And we all have our biases.”
Finally we get to the usual "AI notices statistics we are not allowed to". Yeah, it's very tyrannical to actually act on known data when we'd lose everything if we did it ourselves.

6. Fear of AI Robs Humanity of Its Benefits

Since today’s AI runs on data sets, advanced statistical models, and predictive algorithms, the process of building machine intelligence ultimately centers around mathematics. In that spirit, said Murdock, “linear algebra can do insanely powerful things if we’re not careful.” But what if people become so afraid of AI that governments regulate it in ways that rob humanity of AI’s many benefits? For example, DeepMind’s AlphaFold program achieved a major breakthrough in predicting how amino acids fold into proteins, making it possible for scientists to identify the structure of 98.5 percent of human proteins.
Isn't that victim blaming? It's not a danger of AI, it's a danger of you fuckers using the other five scenarios to force the government to ban AI.
 