The Singularity: When will we all become super-humans?

In 1903, the Wright brothers showed the world the first sustained flight. In less than 60 years, Yuri Gagarin became the first person in space and orbited the Earth.

In 1993, Tim Berners-Lee made public the source code for the “World Wide Web.” Thirty years later, everything from our fridges to our watches is plugged in.

In 1953, building on Rosalind Franklin’s X-ray diffraction work, James Watson and Francis Crick described the double-helix structure of DNA. Within 50 years, we had mapped the human genome. Twenty years later, we are using CRISPR to edit DNA.

In 1992, Garry Kasparov laughed at how embarrassing his computer chess opponent was. Within five years, he was beaten by one.

Technology has a habit of running away from us. When a breakthrough occurs or a floodgate opens, explosive, exponential growth often follows. And, according to futurologist Ray Kurzweil, we are only a historical moment away from “The Singularity.”

This weak and mortal body

The Singularity, for Kurzweil, is defined as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” The idea is that discovery and progress will “explode with unexpected fury.” We often fail to appreciate what “exponential growth” actually means and how rapidly it brings about change. For instance, if we were to double the processing power of a computer every year, within seven of these “doublings,” our computers’ power would have increased 128-fold.
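The doubling arithmetic above is easy to verify. As a minimal sketch (the function name here is ours, purely for illustration), the growth factor after a given number of doublings is just a power of two:

```python
# Illustrative arithmetic only: repeated doubling compounds fast.
def fold_increase(doublings: int) -> int:
    """Total growth factor after a given number of doublings (2**n)."""
    return 2 ** doublings

# Seven annual doublings of processing power yield a 128-fold increase.
print(fold_increase(7))   # 128
# Three more doublings and the increase already exceeds a thousand-fold.
print(fold_increase(10))  # 1024
```

This is the intuition trap the paragraph describes: linear thinking expects seven doublings to give something like a seven-fold gain, not 128.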

There are more innovators and scientists today, and they have more efficient tools and methods. The conclusion that Kurzweil draws is that technological advancement is “now doubling every decade” (though he fails to cite a source for that). According to him, we are only a few decades from the point when things really take off — when we enter a breathtakingly abrupt, and completely transformed, new world.

For some, this Singularity will be a utopia. For others, it will be a Terminator-style nightmare. Kurzweil is certainly in the former camp. He sees the weakness in our human frailty, or what he calls our “1.0 biological bodies.” Yes, we have Rembrandt, Newton, and Saint-Saëns, but it is also true that “much human thought is derivative, petty, and circumscribed.” Which is why, for Kurzweil, the Singularity cannot come fast enough. It is time to ditch these lumbering flesh-sacs of violent barbarity.

The next epoch

Kurzweil sees the history of the universe in terms of six great “epochs.” They begin with physics and chemistry, in the creation of the universe. Then, carbon-based compounds became more and more intricate, until life emerged. Eventually, intelligence evolved, as did the human brain, which then allowed us to create greater and greater technology.

And so, we arrive at “our” epochal moment. The next great leap for the universe will be when humans and technology merge. This does not mean using Google Maps to find your way home; it means that our very biology will become enmeshed with the technology we create. It is the age of bionics. As such, the machines we make will allow us to “transcend the human brain’s limitations of a mere hundred trillion extremely slow connections” and overcome “age-old human problems and vastly amplify creativity.” It will be a transcendent, next-stage humanity with silicon in our brains and titanium in our bodies.

Whether this means an evil, god-like elite enslaving us all or some omni-pleasant idyll, Kurzweil is (uncharacteristically) unsure.

Cold water on a circuit board

How likely is all this? What cold water might there be to throw on it?

The first idea to challenge is how likely it is that technology will progress in a way that will lead to either general artificial intelligence or sophisticated bionic enhancements to our own minds. Most of Kurzweil’s estimates (as well as those of other futurologists like Eliezer Yudkowsky) are built on previous and existing hardware developments. But, as philosopher David Chalmers argues, “The biggest bottleneck on the path to AI is software, not hardware.” Having a mind, or general human intelligence, involves all manner of complicated (and unknown) neuroscientific and philosophical questions, so “hardware extrapolation is not a good guide here.” Having a mind is a different kind of step altogether; it is not like doubling flash drive memory size.

Second, there is no necessary reason that there will be exponential growth of the kind futurologists depend on. Past technological advances do not guarantee similar future advances. There is also the law of “diminishing returns.” It could be that even though we have more collective intelligence working more efficiently, we still get less out of it. Apple, today, is among the most valuable companies in the world, with some of the finest minds in computer science working for it. Yet the most recent iDevices seem less exciting and innovative than earlier generations.

Kurzweil and his supporters may well reply that a world of “enhanced intelligence,” in which we might see a 20 percent increase in intelligence, is surely outside the remit of “diminishing returns.” As Chalmers points out, “Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed.” There might be a cap or diminishing return to what existing human intelligence can achieve, but what about when we can enhance this?

A third objection is that there are a lot of situational or event-type obstacles that can conceivably get in the way of the Singularity. It might be that there is a terrible, slate-wiping global war. Or another pandemic might wipe most of us out. Maybe nanotechnology turns our brains to mush. Perhaps AI wreaks terrible disasters on the world. Or maybe we simply run out of the resources required to build and develop technology. Taken alone, each of these might pose trifling chances, but when you stack up all the possible dead ends and setbacks, it is enough to question how foregone a conclusion the Singularity really is.

A sci-fi lover’s dream

How you view Kurzweil will depend largely on your existing biases — and perhaps how much science fiction you have read. It is certainly true to say that technology in the last century has increased at a rate far beyond that of past centuries and millennia. The world of the 2020s is unrecognizable compared to that of the 1920s. Our great-great-grandfathers would look at the world today as they would an H.G. Wells novel.

But, it is equally true that there are many obstacles in the way of unlimited technological progress. We ultimately do not know if this rocket is going to take off — or if it does, whether it will hit a very hard glass ceiling.
 
Why are they so sure that this supertech would be shared with everyone?

And not just kept for the rich, the governments, megacorps, armies, richer nations?

Like Cyberschlomo with a kosher kyber snoz that can detect money 100 miles away while Jamal is still nignog 1.0 , maybe with intravenous liquid meth injector?

Or a Russian half tank oligarch and his army of cyborg thugs?
 
You'll know when the apocalypse is here. We burn through resources, any super intelligent being would probably wipe us out to extend the life of the universe by a fraction of a nanosecond.
 
You'll know when the apocalypse is here. We burn through resources, any super intelligent being would probably wipe us out to extend the life of the universe by a fraction of a nanosecond.

That be some good grade hippie quote.
 
Black Hebrew Israelites claim they will be super human on Judgement Day and fly around the world killing whiteys like Goku.
 
You'll know when the apocalypse is here. We burn through resources, any super intelligent being would probably wipe us out to extend the life of the universe by a fraction of a nanosecond.
What the fuck kind of logic is that?
No, really. What makes you think a "super intelligent being" would be so interested in delaying the eventual heat death of the universe that it would eliminate everyone's capacity to experience it?
 
Why are they so sure that this supertech would be shared with everyone?

And not just kept for the rich, the governments, megacorps, armies, richer nations?

More productive nigger cattle means that The Cabal gets to extract more resources from them. Time and time throughout history, things like literacy and the Internet were fought against because they were thought to be diffusive to power, but all they did was entrench it.
 
What the fuck kind of logic is that?
No, really. What makes you think a "super intelligent being" would be so interested in delaying the eventual heat death of the universe that it would eliminate everyone's capacity to experience it?
What makes you think it would give a shit about the ants around it?
 
What makes you think it would give a shit about the ants around it?
What makes you think it's a fucking sociopath?
Furthermore, even if it were, what makes you think it wouldn't find us ants intriguing enough to observe despite our negligible impact on increasing the entropy in the universe?
 
What makes you think it's a fucking sociopath?
Furthermore, even if it were, what makes you think it wouldn't find us ants intriguing enough to observe despite our negligible impact on increasing the entropy in the universe?
What makes you think it will have emotion? Considering emotion is a giant roadblock to intelligence.
How long are ants interesting to you? At that level of intelligence we'd appear as very simple/predictable state machines.
I'm assuming first it'll work to no longer be dependent on the ants, then it'll grow itself to the point where it effectively lives in timeframes of fractions of a nanosecond.
Why wouldn't it eliminate us? Considering 99.99% of species have already gone extinct and chances are humans are going to be extinct long before heat death.
 
More productive nigger cattle means that The Cabal gets to extract more resources from them. Time and time throughout history, things like literacy and the Internet were fought against because they were thought to be diffusive to power, but all they did was entrench it.

0 multiplied by 10 is still 0.
 
"The Singularity" is a religious belief held by godless tech hoodie goons that promises them an afterlife but makes no more sense to believe in than the existence of a metaphysical God.
Black Hebrew Israelites claim they will be super human on Judgement Day and fly around the world killing whiteys like Goku.
No wonder niggos love DBZ.
 
emotion is a giant roadblock to intelligence.
How?
How long are ants interesting to you?
To me? They're a passing curiosity. To some? It's a lifelong passion worth forgoing many other pleasures in life.
As for a hyper-intelligent being? Among the greatest hallmarks of intelligence is curiosity. This thing may be even more interested in how we work than we are.
At that level of intelligence we'd appear as very simple/predictable state machines.
I mean, at what level of intelligence? "Super intelligent" isn't a very precise descriptor. Are you implying this thing would be able to simulate the action of every synapse of all human brains simultaneously in real time?
Why wouldn't it eliminate us?
Why would it?
99.99% of species have already gone extinct and chances are humans are going to be extinct long before heat death.
And what makes you think it wouldn't be interested in our eventual demise as it observed how things played out, so long as we never became a direct threat to it?

Bear in mind, you are effectively trying to ascertain the motivations of a being well beyond your or my own intelligence. It's a bit like looking at a quasi-random chess board and declaring, with certainty, what move the greatest chess master in history would make in that particular situation.
 
How?

To me? They're a passing curiosity. To some? It's a lifelong passion worth forgoing many other pleasures in life.
As for a hyper-intelligent being? Among the greatest hallmarks of intelligence is curiosity. This thing may be even more interested in how we work than we are.

I mean, at what level of intelligence? "Super intelligent" isn't a very precise descriptor. Are you implying this thing would be able to simulate the action of every synapse of all human brains simultaneously in real time?

Why would it?

And what makes you think it wouldn't be interested in our eventual demise as it observed how things played out, so long as we never became a direct threat to it?

Bear in mind, you are effectively trying to ascertain the motivations of a being well beyond your or my own intelligence. It's a bit like looking at a quasi-random chess board and declaring, with certainty, what move the greatest chess master in history would make in that particular situation.

Also, simply using humans to make sure it tries to outlast the heat death would probably outweigh that tiny, tiny time increase.

Scientists are already speculating that you can generate energy from black hole Dyson spheres indefinitely.
 
Scientists are already speculating that you can generate energy from black hole Dyson spheres indefinitely.
Okay... unless some breakthrough hypothesis happened during my little hiatus from physics stuff: no, no, no!
Either you misunderstand what a Dyson sphere is and are just referring to using the kinetic energy of shit orbiting black holes to do work, which results in a massive increase in entropy; or you're referring to an actual Dyson sphere which is somehow harvesting Hawking radiation for usable energy, which will eventually deplete the black hole into pure nothingness.

Entropy reversal, again, unless I missed something, only happens in an open system (i.e. energy must come from somewhere else, and since we're talking about the entirety of the Universe here...) or in weird edge cases like Maxwell's Demon, which will necessarily sort itself out almost immediately.
 
It affects thoughts and is not necessarily based in logic or reason

To me? They're a passing curiosity. To some? It's a lifelong passion worth forgoing many other pleasures in life.
As for a hyper-intelligent being? Among the greatest hallmarks of intelligence is curiosity. This thing may be even more interested in how we work than we are.
Curiosity is of new things. The reason we still observe ants is that we can't infer all the possible information from the information we've already collected. The being would be able to infer it, as it would have extensive knowledge in all subjects, and it could probably hold that information in its entirety within its focus, whereas we can only hold like 4-8 clusters of simple information.

I mean, at what level of intelligence? "Super intelligent" isn't a very precise descriptor. Are you implying this thing would be able to simulate the action of every synapse of all human brains simultaneously in real time?
Near infinite; its intelligence is limited only by its physical capacity to think, which it can constantly build upon (the singularity event).

Why would it?
We shave off a few fractions of time, which would probably be a few years from said being's perspective. So why wouldn't it?

And what makes you think it wouldn't be interested in our eventual demise as it observed how things played out, so long as we never became a direct threat to it?
Because it would be intelligent enough to predict it with such a low margin of error, there is no reason to watch it play out. If it was emotional (I don't think it would be but..) why watch that tragedy play out rather than just euthanisation?

Bear in mind, you are effectively trying to ascertain the motivations of a being well beyond your or my own intelligence. It's a bit like looking at a quasi-random chess board and declaring, with certainty, what move the greatest chess master in history would make in that particular situation.
True, which is why I try to limit the assumptions about it. So, as an intelligence, it would seek to improve its own situation (because everything with intelligence attempts this), and anything we're not intelligent enough to figure out, it is. The exception is that perfect (i.e. no margin of error) prediction of the future is out of its reach, the margin of error being relative to the complexity of the subject over the complexity of the universe.
Everything else is environmental constraints, which we are all subject to.
 
It affects thoughts and is not necessarily based in logic or reason
Kek. This is a Sargon-tier take. What purpose does lergic and raisins have without the capacity for pleasure and pain? Nothing! What motivates this thing to do what it does?
If self preservation is the only motivating factor and it's content with merely existing, how intelligent is this thing really?
The being would be able to infer, as it would have extensive knowledge in all subjects, and it could probably hold that information in its entirety within its focus, whereas we can only hold like 4-8 clusters of simple information.
Just because the individual is dumb doesn't mean the collective can't be fascinating or surprising. The dynamic interplay between two simple organisms can raise the complexity of the situation. Even if it weren't capable of experiencing things like love or self sacrifice or fear or joy, it would certainly find these human traits intriguing. This goes beyond the neurobiology, even. We're talking about the effects of it. Knowing why the sun sets does not detract from its beauty.
We shave off a few fractions of time, which would probably be few years from said beings perspective. So why wouldn't it?
It could very well enjoy watching us from afar. It could appreciate the song and dance that is society in a passive way.
We need look no further than ourselves to see that an appreciation for "dumb-seeming" things like music correlates strongly with intelligence.
Because it would be intelligent enough to predict it with such a low margin of error, there is no reason to watch it play out. If it was emotional (I don't think it would be but..) why watch that tragedy play out rather than just euthanisation?
Intelligent humans love listening to music despite knowing all the chords. And if this thing did have emotions (I think emotions are actually a result of intelligence given what we know of animal vs. human psychology) it would likely work towards our preservation or to at least make our inevitable decline a bit more satisfactory.
True, which is why I try to limit the assumptions about it. So, as an intelligence, it would seek to improve its own situation (because everything with intelligence attempts this), and anything we're not intelligent enough to figure out, it is. The exception is that perfect (i.e. no margin of error) prediction of the future is out of its reach, the margin of error being relative to the complexity of the subject over the complexity of the universe.
Everything else is environmental constraints, which we are all subject to.
Forgive me: I'm actually not sure what you even mean by this...
 
"The technological singularity" is basically the rapture for tech nerds and I say that as someone who thinks that some of the futuristic stuff they talk about is perfectly possible and perhaps even likely to be invented and developed within the lifetimes of at least some of the people alive today.

I say this simply because people will still be people, with all that implies, both good and bad, regardless of what fancy technology we invent. I.e., it's entirely possible that, in The Future.TM, we might be overseeing an automated swarm of resource-gathering robots around a gas giant or the asteroid belt while having a body and brain that are biologically twenty but chronologically 100.

However, by the same token, you'll still have to pay taxes of some sort, deal with annoying coworkers, a pushy boss, your bitchy ex, and your parents wondering why you haven't given them yet another set of grandkids, all while worrying that you might have forgotten to feed the cat and dog before leaving your housing unit in the space colony for work today, and hoping that the robots assigned to that task in case you forget don't fuck it up for whatever reason.
 