I’ve suspected for a few years now that optoelectronics is where this is all headed. It’s exciting to watch as important foundations are set on that path, and this was one of them.
As someone who only applies AIs and doesn't create them, I find this hard to understand.
Are we talking about changing from electronic circuits to something else? It would be great if someone could dumb this down a bit.
Yeah, me too, so I asked Bard to make it readable …
BARD: Sure, here is a précis of the technical paper in a form understandable by non-technicians:
MIT researchers have developed a new machine-learning system that uses light instead of electrons to perform computations.
The system is much more energy-efficient and powerful than current machine-learning systems, and it could be scaled up to run on small devices like cellphones.
The new system could lead to the development of much larger and more powerful machine-learning models, which could be used for a variety of applications, such as natural language processing and medical diagnosis.
Potential applications:
Natural language processing
Medical diagnosis
Financial trading
Self-driving cars
Virtual reality
The new system is a significant advance in machine learning, and it has the potential to revolutionize a wide range of industries.
Yes. It’s using light instead of electricity.
Which has advantages on several fronts, energy efficiency chief among them.
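To make the "light instead of electricity" point concrete: the usual trick in photonic neural nets (the coherent-nanophotonics line of work) is that a mesh of Mach-Zehnder interferometers can implement any unitary matrix on a set of optical modes. Factor an arbitrary weight matrix with the SVD into two unitaries plus per-channel gains, and a layer's matrix-vector product happens passively as light propagates through the meshes. A toy numpy sketch of just the math, no actual optics involved and all names here are mine:

    import numpy as np

    # A photonic MZI mesh can realize any unitary; via SVD an arbitrary
    # weight matrix W = U @ diag(S) @ Vh becomes two meshes (U, Vh) plus
    # per-channel attenuators/amplifiers (S).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))   # arbitrary real weight matrix
    x = rng.normal(size=4)        # input, encoded in optical amplitudes

    U, S, Vh = np.linalg.svd(W)

    # mesh Vh -> per-channel gains S -> mesh U, i.e. the layer's matmul
    y_optical = U @ (S * (Vh @ x))
    y_digital = W @ x
    assert np.allclose(y_optical, y_digital)

The linear algebra, which is most of the cost of inference, costs almost nothing once the light is in the mesh; only the nonlinearities and readout need electronics, which is roughly where the energy-efficiency claims come from.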
Pretty much every major technical college’s research/VC groups are making a photonics AI play right now.
One of the physicists at NIST suggested a few years ago that AGI would only emerge in photonic neural networks, because light can model the neuron better than electricity can.
ML in particular is a great fit for the quantum effects of light, too: in-process accuracy doesn't matter much, some degree of jitter is typically desired, and the network is a black box anyway, so not being able to observe intermediate processing wouldn't be a problem (there's a toy sketch of that noise tolerance below).
This work isn’t really playing with those effects yet (would need entirely new ways of training models), but it’s the fourth the road promise that’s probably the most exciting IMO.