Science: Step towards light-based, brain-like computing chip - It was roughly 10x dumber than Tommy Tooter. No need to worry yet.

https://www.sciencedaily.com/releases/2019/05/190508134459.htm
tw: European spelling

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched -- for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units -- the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place -- in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to "learn" information and use this as a basis for computing and recognizing patterns -- just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. "This integrated photonic system is an experimental milestone," says Prof. Wolfram Pernice from Münster University and lead partner in the study. "The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses." The study is published in the latest issue of the journal Nature.

The story in detail -- background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems -- in which photons, i.e. light particles, are used -- are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials -- which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline -- when their atoms arrange themselves in a regular fashion -- or amorphous -- when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. "Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons," says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at Münster University.
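(Not from the article -- just to make the synapse idea concrete. A toy sketch in plain Python of how a phase-change cell could act as a tunable "weight" on a light pulse; the transmission values and the linear partial-crystallization model are my own assumptions for illustration, not figures from the paper:)

```python
# Toy model of a phase-change "synapse": the cell's crystallization fraction
# sets how much of an incoming light pulse is transmitted to the next neuron.
# All constants are illustrative assumptions, not values from the study.

T_CRYSTALLINE = 0.2   # assumed transmission when fully crystalline (more absorbing)
T_AMORPHOUS = 0.9     # assumed transmission when fully amorphous (more transparent)

class PhaseChangeSynapse:
    def __init__(self, crystallization=0.5):
        self.crystallization = crystallization  # 0.0 = amorphous, 1.0 = crystalline

    def transmission(self):
        # Linear interpolation between the two optical states (an assumption).
        c = self.crystallization
        return c * T_CRYSTALLINE + (1 - c) * T_AMORPHOUS

    def apply(self, pulse_power):
        # The "weight" is just the fraction of optical power that gets through.
        return pulse_power * self.transmission()

    def write(self, delta):
        # A laser pulse nudges the material towards crystalline (+) or amorphous (-).
        self.crystallization = min(1.0, max(0.0, self.crystallization + delta))

syn = PhaseChangeSynapse(crystallization=0.8)
print(syn.apply(1.0))   # mostly crystalline cell -> weak transmitted pulse
syn.write(-0.6)         # partly switch it back towards amorphous
print(syn.apply(1.0))   # less absorption -> stronger transmitted pulse
```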

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip -- consisting of different layers -- was based on so-called wavelength division multiplexing (WDM) technology, a process in which light is transmitted on different channels within the optical nanocircuit.
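(Again an aside, not from the article: one crude way to picture the WDM idea, with made-up numbers. Each input rides its own wavelength channel, each channel is weighted by a phase-change cell like the one sketched above, and the "neuron" fires when the summed optical power crosses a threshold. The real chip does all of this optically; this is only a software analogy:)

```python
# Sketch of one "photonic neuron" fed over wavelength-division multiplexing:
# each input arrives on its own wavelength, is attenuated by a phase-change weight,
# and the neuron spikes when the combined optical power exceeds a threshold.
# Channel count, weights and threshold are all invented, illustrative numbers.

def photonic_neuron(input_powers, weights, threshold=1.0):
    # input_powers[i]: optical power arriving on wavelength channel i
    # weights[i]:      transmission (0..1) of the phase-change cell on that channel
    total = sum(p * w for p, w in zip(input_powers, weights))
    spike = total > threshold          # thresholding plays the role of the neuron "firing"
    return spike, total

inputs = [0.9, 0.1, 0.8, 0.7]          # four wavelength channels, echoing the 4-neuron demo chip
weights = [0.6, 0.9, 0.2, 0.8]         # assumed synaptic transmissions
print(photonic_neuron(inputs, weights))  # -> (True, 1.35): weighted optical sum crosses threshold
```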

In order to test the extent to which the system is able to recognise patterns, the researchers "fed" it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system "learns" from examples and can, ultimately, generalise them. In the case of the two algorithms used -- both in so-called supervised and in unsupervised learning -- the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought -- one of which was four consecutive letters.
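(One more aside to pin down what "learning" could mean here, hedged heavily: the paper's actual optical training procedure is more involved, but in software terms a supervised version could look like the textbook perceptron-style rule applied to the toy weights above -- nudge each weight depending on whether the neuron's output matched the target for a given light pattern:)

```python
# Toy supervised training loop: adjust each phase-change weight up or down
# depending on whether the neuron's output matched the target for a pattern.
# This is the classic perceptron update, NOT the exact procedure from the paper.

def train(patterns, targets, weights, lr=0.05, threshold=1.0, epochs=50):
    for _ in range(epochs):
        for pattern, target in zip(patterns, targets):
            total = sum(p * w for p, w in zip(pattern, weights))
            output = 1 if total > threshold else 0
            error = target - output
            # error > 0: transmit more light on active channels; error < 0: less.
            weights = [min(1.0, max(0.0, w + lr * error * p))
                       for p, w in zip(pattern, weights)]
    return weights

patterns = [[1, 0, 1, 0], [0, 1, 0, 1]]   # two made-up optical input patterns
targets = [1, 0]                          # the neuron should fire only for the first one
print(train(patterns, targets, weights=[0.5, 0.5, 0.5, 0.5]))
```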

"Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks," says Wolfram Pernice. "By working with photons instead of electrons we can exploit to the full the known potential of optical technologies -- not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place," adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that, with the aid of such hardware, cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. "This step is to be taken in the EU joint project 'Fun-COMP' by using foundry processing for the production of nanochips," says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.
 
This kind of article is the science equivalent of a CGI trailer for a videogame: made up eyecandy for stuff that won't exist for quite a while (if ever, in some cases)

I guess it's interesting to read, but this brings nothing substantial to the table
 

Normally I'd agree; but the article stated that the teams actually completed & function-tested their chip, and it's not a large step to optimize for mass-production after that.... especially as it's based on optical tech already well understood & implemented across the world.
 
So, about "light-based" computing. If light is used to transmit signals instead of electrical impulses... what kind of doors does that open? Electrical impulses seem to be basically "ON" and "OFF", but light uses a spectrum of wavelengths. Are 1s and 0s going to go the way of the dinosaur? Can we encode certain "colors" as nearly-infinitely variable bits? I mean, I know jack and shit about the higher workings of such a system but right offhand that sounds like it would obsolete quantum computing and qubits overnight.
 
So, about "light-based" computing. If light is used to transmit signals instead of electrical impulses... what kind of doors does that open? Electrical impulses seem to be basically "ON" and "OFF", but light uses a spectrum of wavelengths. Are 1s and 0s going to go the way of the dinosaur? Can we encode certain "colors" as nearly-infinitely variable bits? I mean, I know jack and shit about the higher workings of such a system but right offhand that sounds like it would obsolete quantum computing and qubits overnight.
Typically stuff like this is just a light sensor that gives off/on signals. You're using light instead of electrical signals because they're faster, plus they have the nice effect of isolating the electronics from the inputs.

If you make "variable bits" you'll still need to convert them to a digital signal, encoded as, you guessed it, bits.

There have been ideas of what you're talking about, fuzzy logic, but I sorta doubt light would be good for that.
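To put that conversion point in concrete terms (just a hypothetical sketch, the level count and sample values are arbitrary): even if a detector reads a "variable" analog level, a digital processor still ends up quantizing it into plain old bits.

```python
# Minimal sketch of the point above: a multi-level analog reading still gets
# quantized into a fixed number of binary bits before a digital processor can use it.

import math

LEVELS = 16                                # suppose the optics can resolve 16 distinct intensities
BITS_PER_SAMPLE = int(math.log2(LEVELS))   # -> 4 bits per sample

def to_bits(analog_value):
    # analog_value in [0.0, 1.0): quantize to the nearest of LEVELS steps,
    # then serialize as a fixed-width binary string.
    level = min(LEVELS - 1, int(analog_value * LEVELS))
    return format(level, f"0{BITS_PER_SAMPLE}b")

print(to_bits(0.03), to_bits(0.50), to_bits(0.97))   # -> 0000 1000 1111
```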
 
So, about "light-based" computing. If light is used to transmit signals instead of electrical impulses... what kind of doors does that open? Electrical impulses seem to be basically "ON" and "OFF", but light uses a spectrum of wavelengths. Are 1s and 0s going to go the way of the dinosaur? Can we encode certain "colors" as nearly-infinitely variable bits? I mean, I know jack and shit about the higher workings of such a system but right offhand that sounds like it would obsolete quantum computing and qubits overnight.
Typically stuff like this is just a light sensor that gives off/on signals. You're using light instead of electrical signals because they're faster, plus they have the nice effect of isolating the electronics from the inputs.

If you make "variable bits" you'll still need to convert them to a digital signal, encoded as, you guessed it, bits.

There have been ideas of what you're talking about, fuzzy logic, but I sorta doubt light would be good for that.

But..... if every wavelength of the spectrum can be used to transmit 1/0s, wouldn't that mean each could have its own switch, given a processor fast enough to handle that amount of nearly instant data? Just the visible spectrum is a pretty wide swath of input channels.......
.....
So many deepthunks in a row (including yesterday's) has really started to overclock my braincase.

I may need a bathdrink to slow things down before long. Goddamn you, internet
 


I was originally thinking of just adapting fiberoptics for transmitting data in terms of X nm wavelength variable bits but you'd need a processor capable of taking that input without converting it to old-fashioned bits. Then there's the issue of speed - the simplicity of 1s and 0s allows for a processor to sift through that shit quickly. When you're getting a signal comprised of a burst of 421nm, then 600nm, then 513nm and so on what kind of processor would be able to sort that shit quickly and process it? Complexity allows for a lot of neat things but it's going to be inherently a bit slower, I would think.

We would need to come up with something ENTIRELY NEW to replace semiconductors, I think.
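Just to show what that "sorting" would actually involve (a hypothetical sketch, nothing to do with the actual chip): a receiver would have to map every measured wavelength onto the nearest symbol in an agreed alphabet, instead of applying a single yes/no threshold.

```python
# Sketch of decoding a stream of wavelength "symbols" back into numbers.
# The alphabet, the measured values and the tolerance are invented for illustration.

ALPHABET = {421: 0, 513: 1, 600: 2}   # nm -> symbol value (made-up mapping)

def decode(measured_nm, tolerance=5):
    # Pick the closest known wavelength; reject readings that drift too far.
    nearest = min(ALPHABET, key=lambda ref: abs(ref - measured_nm))
    if abs(nearest - measured_nm) > tolerance:
        raise ValueError(f"unrecognized wavelength: {measured_nm} nm")
    return ALPHABET[nearest]

burst = [421.3, 599.1, 513.4, 420.8]       # noisy readings from the detector
print([decode(nm) for nm in burst])        # -> [0, 2, 1, 0]
```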
 

🤔
....
Quantum processors!

tags in @Corbin Dallas Multipass

:semperfidelis:
 
But..... if every wavelength of the spectrum can be used to transmit 1/0s, wouldn't that mean each could have it's own switch, given a processor fast enough to handle that amount of nearly instant data?
But wouldn't the gain in processing power be totally offset by the fact that each wavelength needed to be managed by its own on/off switch? You're essentially forcing the computer to translate an alternative data format into binary, which would eat up time and resources and is basically what conventional computing already does anyway.
 
Yeah, you're just talking about analog to digital conversion here. It's a wildly inefficient way to communicate between digital components.

The idea of fuzzy logic that would work with analog signals like that requires the entire circuitry to be that way. And I'm not sure they've ever gotten any meaningfully good results from that. You're talking about a completely new architecture from scratch. And without being digital it's really tough to be as flexible as computers are.

Now, I know LEDs are super efficient in terms of light production vs heat production, so I wonder if that's a consideration here. Instead of wires with resistance throwing off heat you've got photons flying through space.

Another possible advantage is light based components could maintain high speed at longer distances, though I wonder if that really could be a consideration here, usually you want to go smaller, not bigger.
 

The problem I keep running into is that the context of modern computing is... entirely digital. All of it. Even things that aren't computers, like, you know, the sort of instruments that could gauge incoming wavelengths and parse series of varied wavelength light bursts into streams of data. At least, I can't think of anything that ISN'T. This would require a SERIES of new inventions. Before we had gone too far down Semiconductor Lane this might not have felt like such a tall order, but in terms of electronics and computing infrastructure EVERYTHING USES 1s AND 0s.
 
I mean... Anything of finite precision uses 1s and 0s. And you need finite precision for perfect accuracy.
 
I kind of grasp why this is the case and kind of don't. I mean, if you set a wavelength at 421nm and you ensure that whatever's reading it can differentiate to at least a tenth of a nanometer you can say that 421nm (this is just the example I'm running with, no significance to the number honestly) is a "1". Then you can go up or down a nm in wavelength, set that as "2", the whole time calibrating it so that for the processor there is NO mistaking a 421nm beam for a 422nm beam, while still having more "bits" to work with than a binary system. But I do see how having a simple "IS 1" and "IS NOT 1" lends itself to precision AND speed. Less variation means less chance for error, as a rule.
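Back-of-envelope on that idea (my assumptions: the 0.1 nm resolution from the example above, and a visible band of roughly 400-700 nm):

```python
# How many distinguishable wavelength "levels" would the 0.1 nm example give,
# and how many binary bits is one such symbol worth? Band edges and resolution
# are assumptions taken from the discussion above, not from the paper.

import math

band_nm = 700 - 400          # rough visible band width
resolution_nm = 0.1          # assumed smallest distinguishable wavelength step
levels = round(band_nm / resolution_nm)      # 3000 distinguishable levels
bits_per_symbol = math.log2(levels)          # ~11.55 bits of information per symbol

print(levels, round(bits_per_symbol, 2))
# One perfectly read wavelength symbol would carry about as much as 11-12 plain bits --
# more per symbol than binary, but only if every single reading really is error-free.
```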
 
Normally I'd agree; but the article stated that the teams actually completed & function-tested their chip, and it's not a large step to optimize for mass-production after that.... especially as it's based on optical tech already well understood & implemeneted across the world.
They do some pretty crazy shit with the neurons, too. It's explained better in the paper, but even if you don't have a way to get at that (outside the obvious), the supplemental material is publicly accessible.
That being said, I do wish they benchmarked how long the German:English recognition process took for a given word. Having a nice bank of times and heats would make it way easier to compare their hardware's performance with that of computer-based neural networks.
 
Now, I know LEDs are super efficient in terms of light production vs heat production, so I wonder if that's a consideration here. Instead of wires with resistance throwing off heat you've got photons flying through space.

Another possible advantage is light based components could maintain high speed at longer distances, though I wonder if that really could be a consideration here, usually you want to go smaller, not bigger.

One of my takeaways from the article is that some of the tech is based on the coatings used in DVDs, and that they've gotten to the point of making that material change states, and given that DVDs are read by laser diodes....

I also had a thought that this might be applicable to faster communications in space; while not being superluminal like an ansible, the amount of information able to be speedily sent & parsed downrange would greatly help in robotic reconnaissance

I mean... Anything of finite precision uses 1s and 0s. And you need finite precision for perfect accuracy.

But what if you had..... infinite precision?

Photons are basically covered by the uncertainty principle, right?

I'm thinking of switches that work on that level.


I kind of grasp why this is the case and kind of don't. I mean, if you set a wavelength at 421nm and you ensure that whatever's reading it can differentiate to at least a tenth of a nanometer you can say that 421nm (this is just the example I'm running with, no significance to the number honestly) is a "1". Then you can go up or down a nm in wavelength, set that as "2", the whole time calibrating it so that for the processor there is NO mistaking a 421nm beam for a 422nm beam, while still having more "bits" to work with than a binary system. But I do see how having a simple "IS 1" and "IS NOT 1" lends itself to precision AND speed. Less variation means less chance for error, as a rule.

Considering this, I would think that if each nm of light has 1/0 assigned to it, then it being off is simply a pulsed absence of light.

But yeah, I think right now it'd need something like the new DoE supercomputer solely dedicated to the task.
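Rough numbers for that parallel-channel picture (everything here is an assumption pulled out of the air, just to size the idea):

```python
# Rough aggregate-throughput estimate for the "one on/off switch per wavelength" idea.
# Channel spacing, band width and pulse rate are all invented, round numbers.

band_nm = 700 - 400              # visible band, as above
channel_spacing_nm = 1.0         # one on/off channel per nanometre, as suggested above
channels = int(band_nm / channel_spacing_nm)          # 300 parallel channels
pulses_per_second = 1e9          # assume each channel can be keyed on/off at 1 GHz

aggregate_bits_per_second = channels * pulses_per_second
print(channels, f"{aggregate_bits_per_second:.1e} bit/s")   # 300 channels, ~3.0e+11 bit/s
```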
 