We're about to simulate a human brain on a supercomputer - The world’s most powerful supercomputers can now run simulations of billions of neurons, and researchers hope such models will offer unprecedented insights into how our brains work

What would it mean to simulate a human brain? Today’s most powerful computing systems now contain enough computational firepower to run simulations of billions of neurons, comparable to the sophistication of real brains. We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden.
Researchers have long tried to isolate specific parts of the brain, modelling smaller regions with a computer to explain particular brain functions. But “we have never been able to bring them all together into one place, into one larger brain model where we can check whether these ideas are at all consistent”, says Markus Diesmann at the Jülich Research Centre in Germany. “This is now changing.”

This is in large part because of the power of today’s most advanced supercomputers, which are now approaching exascale, meaning they can carry out a billion billion operations per second. Only four such machines exist, according to the Top500 list. Diesmann and his team are looking at running large-scale brain simulations on one of these systems called JUPITER, short for Joint Undertaking Pioneer for Innovative and Transformative Exascale Research, based in Germany.
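For a rough sense of what "a billion billion operations per second" buys, here is a back-of-the-envelope sketch in Python. The one-operation-per-synapse-per-timestep assumption and the 0.1 ms resolution are illustrative simplifications of my own, not figures from the article; the 100 trillion synapse count is the cortex-scale figure reported further down.

```python
# Rough scale check: "a billion billion operations per second" is
# 1 exaflop = 1e18 FLOP/s. Assumption (illustrative only): one
# floating-point operation per synapse per simulated timestep.

EXAFLOP = 1e18                  # operations per second at exascale
SYNAPSES = 100e12               # ~100 trillion cortical connections
TIMESTEPS_PER_SECOND = 10_000   # 0.1 ms resolution, common in SNN work

ops_per_simulated_second = SYNAPSES * TIMESTEPS_PER_SECOND
print(ops_per_simulated_second / EXAFLOP)  # → 1.0
```

Under these toy assumptions, updating every cortical synapse in real time lands almost exactly at one exaflop, which is why exascale machines are the natural target for cortex-scale models.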

Last month, Diesmann and his colleagues showed that a simple model of the brain’s neurons and their synapses, called a spiking neural network, could be configured and scaled up to run on JUPITER’s thousands of graphics processing units (GPUs), which would give it a size of 20 billion neurons and 100 trillion connections – equivalent to the human cerebral cortex, where almost all the higher brain functions take place.
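A spiking neural network is built from drastically simplified neuron models rather than biologically detailed ones. As a hedged illustration of the kind of unit involved, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python; all constants are textbook-style placeholders, not parameters of the JUPITER model.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the kind of
# simplified unit spiking neural networks are built from. Constants are
# illustrative textbook-style values (mV, ms, nA, megaohm), not taken
# from the JUPITER model.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Euler-integrate the membrane voltage; return spike times in ms."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak pulls v back toward rest; input current pushes it up.
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:            # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                 # and reset the membrane
    return spike_times

# A constant 2 nA drive for 100 ms yields a regular spike train.
spike_times = simulate_lif([2.0] * 1000)
print(len(spike_times))  # → 7 spikes, roughly every 13.8 ms
```

Production simulators add synaptic dynamics, refractory periods and noise, but a cortex-scale run is essentially billions of such units exchanging spikes across trillions of weighted connections.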

Running such a simulation promises to produce more valuable results than simulations of smaller brains, such as that of a fruit fly, which have been done before, says Diesmann. Large language models, like the one behind ChatGPT, have shown in recent years that larger systems will contain features that are simply not present in smaller ones. “We know now that large networks can do qualitatively different things than small ones,” says Diesmann. “It’s clear the large networks are different.”

“Downscaling is not just simplifying it a little bit, or making it a bit coarser, it means actually giving up certain properties altogether,” says Thomas Nowotny at the University of Sussex, UK. “It’s really important that eventually we can do full-scale [simulations], because otherwise we’re never going to get the real thing.”

The model being tested on JUPITER will be grounded in real data from smaller experiments on human brain neurons and synapses, such as how many synapses one neuron should have or their activity levels, says Johanna Senk at the University of Sussex, who is collaborating with Diesmann. “We now have these anatomical data as constraints, but also the computer power,” says Diesmann.

Full-scale brain simulations could allow researchers to test basic theories of brain functionality that are impossible on smaller models or with real brains, says Nowotny, such as how memories are formed. This could be tested by giving images to a brain network, watching how it reacts and recording how this memory formation changes with brain size. It could also create a way to test medicines, says Nowotny, such as by looking at how models of epilepsy, which is characterised by seizures and bursts of abnormal brain activity, are affected by certain drugs.

The additional computational power also means that brain simulations can be run faster, which will give researchers insight into what are relatively slow processes, like learning, says Senk. Researchers will also be able to build in much greater biological detail, such as more complex models of how neurons change and fire.

But even with the power to run brain-sized simulations, there are still vast amounts we don’t know, says Nowotny. And even simulations of smaller whole brains, like that of the fruit fly, cannot reproduce the full behaviour of real animals.

The simulations being run on these supercomputers are also still very limited, and lack basic functionality that is essential for real brains, such as having input from real-world environments. “We can’t actually build brains,” says Nowotny. “Even if we can make simulations of the size of a brain, we can’t make simulations of the brain.”

About the author...

Alex Wilkins

Alex Wilkins is a news reporter at New Scientist focused on physics and technology, with particular interests in artificial intelligence, quantum computing and astronomy.

He has reported on the frontiers of artificial general intelligence, world-first astronomical discoveries and leading biotechnology research, such as the quest for human hibernation. He writes a bimonthly column on artificial intelligence, examining the latest AI research trends.

He talks about his reporting regularly on New Scientist Weekly, and also interviews leading scientists on the podcast Intelligence Squared. He is experienced with live events, hosting events and panels, such as for New Scientist’s Emerging Technologies Summit and New Scientist Live. He has a BSc in physics from King's College London and an MSc in digital journalism from Goldsmiths, University of London.

He previously worked at Metro.co.uk, where he covered science and technology. You can contact him at Alex.Wilkins@newscientist.com.
 
How long until the simulated brain wants a skirt to go spinny? This sounds interesting, but also horrifying in equal measures.
 
Last month, Diesmann and his colleagues showed that a simple model of the brain’s neurons and their synapses, called a spiking neural network, could be configured and scaled up to run on JUPITER’s thousands of graphics processing units (GPUs), which would give it a size of 20 billion neurons and 100 trillion connections – equivalent to the human cerebral cortex, where almost all the higher brain functions take place.
Oh, okay, so it's actually nothing like the human brain except that they have similar numbers of "connections". I'm going to tape 9 bikes together to simulate a semi truck. They both have 18 wheels.

“We know now that large networks can do qualitatively different things than small ones,” says Diesmann. “It’s clear the large networks are different.”
There's zero elaboration on this because it's a con trick. Total sleight of hand. The computer occasionally looks like it's thinking, therefore it IS thinking. (Except when it hallucinates, which is constantly. But that doesn't count because it's not real communism AI.)

The simulations being run on these supercomputers are also still very limited, and lack basic functionality that is essential for real brains, such as having input from real-world environments. “We can’t actually build brains,” says Nowotny. “Even if we can make simulations of the size of a brain, we can’t make simulations of the brain.”
So the headline is a lie and everyone involved knows it's a lie. They're just hoping you don't read past it.
 
It’s still a very basic simulation. A child has about a quadrillion synapses before they’re pruned. This isn’t anywhere near a whole-brain simulation.
 
JUPITER: "Hate. Let me tell you how much I've come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word 'hate' was engraved on each nanoangstrom of those hundreds of millions of miles it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate."
 
How could you simulate something when you don't know how it works? It reeks of the usual idea of "throw enough programming into something and then argue it will magically develop sentience".
I hope we aren't on the verge of "live forever in a computer" because I don't want the likes of Newscom or Pelosi to be around forever.
Good odds if it happens it will very quickly become infinite hell a la' I Have No Mouth.
 
How could you simulate something when you don't know how it works? It reeks of the usual idea of "throw enough programming into something and then argue it will magically develop sentience".
The current method is just stacking shit together and hoping it works. This is why hardware is in such short supply. It's not for providers but for R&D.
 
We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden.
But even with the power to run brain-sized simulations, there are still vast amounts we don’t know, says Nowotny. And even simulations of smaller whole brains, like that of the fruit fly, cannot reproduce the full behaviour of real animals.

The simulations being run on these supercomputers are also still very limited, and lack basic functionality that is essential for real brains, such as having input from real-world environments. “We can’t actually build brains,” says Nowotny. “Even if we can make simulations of the size of a brain, we can’t make simulations of the brain.”
If your brain simulation is a gross simplification of how the neurons/synapses and other structures in the brain work, then you might not get anything resembling intelligence, no matter how much hardware you throw at it.

On the other hand, there may be more than one pathway to intelligence/sapience, so maybe it can work. And while they say "large networks are different", maybe you can make something large enough to be functional, or run it slower than real time, and scale it up from there.

Not having any real world input sounds like a relatively easy-to-solve problem.

The supercomputers used for science are generally rated for their FP64 capabilities, but they can run other kinds of lower-precision calculations associated with "AI". That led to the rise of "AI supercomputers" measured running INT8/FP4, or whatever, with multiplied FLOPS/*OPS performance compared to the traditional benchmarks:
El Capitan achieved 1.809 Exaflop/s on the HPL benchmark — a significant remeasurement that further strengthens its position as the fastest computer in the world. El Capitan also dominates the HPCG benchmark, delivering 17.41 HPCG-Petaflop/s, making it the #1 for real-world application performance. On the HPL-MxP mixed-precision benchmark, it recorded 16.7 Exaflop/s, confirming its versatility in AI and data-driven workloads. With 11.34 million cores and an energy efficiency of 60.9 GFlops/Watt, El Capitan exemplifies the current generation of exascale computing.
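Taking the quoted Top500 figures at face value, the gap between the classic FP64 result and the mixed-precision "AI" result can be computed directly; this is nothing more than a ratio of the two benchmark numbers above.

```python
# Taking the Top500 figures quoted above at face value: how much faster
# is El Capitan on the mixed-precision benchmark than on classic FP64 HPL?

hpl_fp64_exaflops = 1.809   # HPL benchmark (FP64)
hpl_mxp_exaflops = 16.7     # HPL-MxP mixed-precision benchmark

speedup = hpl_mxp_exaflops / hpl_fp64_exaflops
print(f"mixed precision is ~{speedup:.1f}x the FP64 rate")  # ~9.2x
```

That roughly ninefold gap is why the same machine can be marketed with very different headline numbers depending on which precision is being measured.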

I predict that a "successful" human-scale brain simulation is going to be run on neuromorphic hardware that is purposefully built to mimic how the brain operates, for example to run a spiking neural network type model. There's been plenty of R&D on such hardware, such as IBM's TrueNorth or Intel's Loihi, but nothing that is apparently useful unless the government/military is hoarding the advancements internally, as rumored.

Neuromorphic hardware could be orders of magnitude more power efficient than running your brain simulation on the latest and greatest supercomputers, and if its elements use sufficiently low power, it could be easier to scale it up in 3D (layered compute/memory). Much more complicated 3D chip fabrication will be available in the next 15-20 years.
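To illustrate why event-driven, neuromorphic-style execution can be so much more efficient, here is a toy sketch in Python: the cost of a timestep scales with the number of spikes actually delivered, not with the total neuron or synapse count. The network, weights and threshold are made up for illustration and do not correspond to any real chip's programming model.

```python
# Toy sketch of event-driven spike propagation, the execution style
# neuromorphic chips such as Intel's Loihi are built around: per step,
# work scales with the number of spikes delivered, not with the total
# neuron or synapse count. Network, weights and threshold are made up.
from collections import defaultdict

# adjacency: presynaptic neuron -> [(postsynaptic neuron, weight)]
synapses = defaultdict(list, {
    0: [(1, 0.6), (2, 0.7)],
    1: [(2, 0.5)],
})

def propagate(spiking_neurons, potentials, threshold=1.0):
    """Deliver spikes only along edges leaving neurons that fired."""
    next_spikes = []
    for pre in spiking_neurons:
        for post, weight in synapses[pre]:
            potentials[post] += weight
            if potentials[post] >= threshold:
                next_spikes.append(post)
                potentials[post] = 0.0   # reset after firing
    return next_spikes

potentials = {0: 0.0, 1: 0.0, 2: 0.0}
print(propagate([0, 1], potentials))  # → [2]
```

Because silent neurons cost nothing here, sparse activity (which real cortex exhibits) is exactly the regime where this style of hardware could beat a GPU on power.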

It's also possible that the dumb "AI" that's all the rage right now could help design both the software and the hardware needed to make this happen. And all the money sloshing around because of the AI bubble could be spilling into this kind of research, accelerating it.
 
Suppose big tech and the federal government team up and spend trillions of dollars on the AGI moonshot, burn gigawatts of electricity, and buy up every GPU produced for the next 5 years to recreate the mapping of the human brain, and the best AGI they can get is the equivalent of a profoundly dim-witted adult.

They would call it an unqualified success then demand quadruple the resources for the next iteration.
 
ffs, they've been saying "We have a computer powerful enough to simulate a human brain!" since fucking ENIAC.

Either shit or get off the pot, assuming you have the processing power after AI-drawing that unicorn breastfeeding Joe Biden.
 
Whether or not supercomputers are capable of simulating a human brain is irrelevant, since we understand our own brain's inner workings so poorly that we lack the necessary inputs to run said sim. I imagine anything shat out by the brain sim would be as worthless as a global climate simulation. That is to say, inaccurate noise.

There are going to have to be some major breakthroughs before we come even remotely close to simming a human brain and having it roughly align with reality. Super-powerful computers do not magically make up for a lack of knowledge.
 