
3D-printed Deep Learning neural network uses light instead of electrons

UCLA researchers have "created a unique all-optical platform to perform machine learning tasks at the speed of light"

It's a novel idea: computing with light diffracted through a stack of printed plates instead of electrons. To some it might seem a little like replacing a computer with an abacus, but researchers at UCLA have high hopes for their quirky, shiny, speed-of-light artificial neural network.

The term was coined by Rina Dechter in 1986, and deep learning is now one of the fastest-growing methodologies in the machine learning community. It is used in face, speech and audio recognition, language processing, social network filtering and medical image analysis, as well as for more specific tasks such as solving inverse imaging problems.

Traditionally, deep learning systems are implemented on a computer, where they learn data representation and abstraction and perform tasks on a par with, or better than, human performance. However, the team led by Dr. Aydogan Ozcan, Chancellor's Professor of electrical and computer engineering at UCLA, didn't use a traditional computer setup, instead choosing to forgo all those energy-hungry electrons in favor of light waves. The result is their all-optical Diffractive Deep Neural Network (D2NN) architecture.

The 3D-printed diffraction plates of the all-optical Diffractive Deep Neural Network (D2NN)

The setup uses a stack of 3D-printed translucent sheets, each with thousands of raised pixels that deflect the light passing through them so that the stack as a whole performs a set task. Notably, those tasks are carried out without any power beyond the input light beam.
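
To get a feel for what those panels are doing, the forward pass can be modelled numerically: each printed sheet can be treated as a fixed phase mask, and the air gaps between sheets amount to free-space diffraction. Below is a minimal, illustrative sketch of that model in JAX using the angular spectrum method; the grid size, pixel pitch, wavelength and layer spacing are assumed values for illustration, not the exact parameters of the UCLA prototype.

```python
# A minimal, illustrative model (not the UCLA team's code) of a diffractive network's
# forward pass: each 3D-printed panel is treated as a fixed phase mask, and the gaps
# between panels are free-space diffraction computed with the angular spectrum method.
import jax
import jax.numpy as jnp

N = 200              # pixels ("neurons") per side of each panel (assumed)
PITCH = 400e-6       # pixel pitch in metres (assumed)
WAVELENGTH = 750e-6  # terahertz-range illumination, used here as an assumed example
Z = 0.03             # spacing between panels in metres (assumed)

def propagate(field, z=Z):
    """Angular-spectrum propagation of a complex optical field over a distance z."""
    fx = jnp.fft.fftfreq(N, d=PITCH)
    fxx, fyy = jnp.meshgrid(fx, fx)
    kz = 2 * jnp.pi * jnp.sqrt(jnp.maximum(1.0 / WAVELENGTH**2 - fxx**2 - fyy**2, 0.0))
    return jnp.fft.ifft2(jnp.fft.fft2(field) * jnp.exp(1j * kz * z))

def diffractive_forward(phases, input_field):
    """Send an input field through a stack of passive, phase-only panels."""
    field = input_field.astype(jnp.complex64)
    for phase in phases:                     # one phase mask per printed panel
        field = propagate(field)             # free space between panels
        field = field * jnp.exp(1j * phase)  # printed thickness acts as a phase delay
    field = propagate(field)                 # final hop to the detector plane
    return jnp.abs(field) ** 2               # detectors only see intensity

# Example: five random, untrained panels acting on a uniform plane wave.
random_phases = jax.random.uniform(jax.random.PRNGKey(0), (5, N, N), maxval=2 * jnp.pi)
output_intensity = diffractive_forward(random_phases, jnp.ones((N, N)))
```

In this picture, each pixel on a panel plays the role of a neuron, and its printed thickness sets the phase delay that acts as that neuron's fixed weight.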

The UCLA team's all-optical deep neural network – which looks like the guts of a solid gold car battery – literally operates at the speed of light, and could find applications in image analysis, feature detection and object classification. The researchers also envisage D2NN architectures performing specialized tasks inside cameras: perhaps your next DSLR might identify your subjects on the fly and post the tagged image to your Facebook timeline.

The D2NN was trained to recognize handwritten numerals

"Using passive components that are fabricated layer by layer, and connecting these layers to each other via light diffraction created a unique all-optical platform to perform machine learning tasks at the speed of light," said Dr. Ozcan.

For now this is just a proof of concept, but it shines a light on some unique opportunities for the machine learning industry.

The research has been published in the journal Science.

Source: The Ozcan Research Group

3 comments
bwana4swahili
Sounds like a great approach for fast, low energy neural net processing!
f8lee
It sounds like an optical/photonic version of the soap bubble wireframe models that can solve the 'traveling salesman problem' by dipping the frame into a tank of soapy water and letting the bubble pattern that forms show the best route.
Ralf Biernacki
It seems an exciting concept, but the present implementation is not in any way programmable. The sheets are designed to solve a single task, perhaps a fairly complex one, and then physically fabricated. What results is not a computer but a single-function, passive device. It cannot be programmed, interacted with, or retrained for any other task. And to design the plates in the first place, a "deep" learning process must take place, presumably implemented on a conventional computer, because this device cannot learn; it can only implement the end result. It is essentially a printout of the final state of the actual neural network software. It is still useful: a hard copy of a trained solver for a specific task.