Artificial intelligence (AI), and in particular machine learning, is taking the computing world by storm, even though it has been in development for decades. AI tools are changing the way we use data and computers in fields from medicine to traffic control. New research shows how we can make AI even more efficient and useful.
The name “artificial intelligence” often sparks the imagination, prompting images of intelligent robots. But the reality is different. Machine learning does not emulate human intelligence. It does, however, mimic the complex neural pathways that exist in our own brains.
This mimicry is what gives AI its power. But that power comes at a great cost, both financially and in terms of the energy required to run the machines.
New research from the Massachusetts Institute of Technology (MIT), published in Science, is part of a growing subset of AI research focused on architectures that are cheaper to build, faster, and more energy efficient.
The multidisciplinary team used programmable resistors to produce “analog deep learning” machines. Just as transistors are at the core of digital processors, resistors are built into repeating arrays to create a complex, layered network of artificial “neurons” and “synapses.” The resulting machine can perform complicated tasks like image recognition and natural language processing.
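As a rough illustration of how such an array computes, here is a minimal NumPy sketch (the conductance values and dimensions are hypothetical, not taken from the MIT work): each resistor’s conductance acts as a synaptic weight, input voltages drive the rows, and the current summing on each column wire performs a matrix-vector multiply in a single physical step.

```python
import numpy as np

# Hypothetical crossbar: conductances (in siemens) act as synaptic weights.
# Rows receive input voltages; each column wire sums its currents, so one
# read of the array is one matrix-vector multiply (Ohm's law at each
# resistor, Kirchhoff's current law at each column).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(4, 3))  # 4 input rows x 3 output columns

def crossbar_forward(G, v_in):
    """Output currents of one analog layer: I_j = sum_i G[i, j] * V[i]."""
    return G.T @ v_in

v_in = np.array([0.2, 0.0, 0.5, 0.1])  # input voltages (volts)
i_out = crossbar_forward(G, v_in)      # column currents (amperes)
print(i_out)
```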
Human beings learn through the weakening and strengthening of the synapses that connect our neurons, the brain cells.
While digital deep learning weakens and strengthens the links between artificial neurons through algorithms, analog deep learning does so by increasing or decreasing the electrical conductance of resistors.
Conductance is increased by pushing more protons into a resistor, attracting more electron flow. This is done using a battery-like electrolyte that allows protons to pass through but blocks electrons.
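A toy model of that update rule might look like the sketch below (the pulse step size and conductance bounds are invented for illustration): each programming pulse nudges protons into or out of the material, raising or lowering the resistor’s conductance within its physical limits.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-3  # hypothetical conductance range (siemens)
DELTA_G = 5e-6             # hypothetical conductance change per pulse

def program_weight(g, n_pulses):
    """Apply programming pulses to one resistor.

    Positive pulses push protons in and raise conductance (strengthening
    the artificial 'synapse'); negative pulses pull protons back out and
    lower it. The device cannot leave its physical conductance range.
    """
    return float(np.clip(g + n_pulses * DELTA_G, G_MIN, G_MAX))

g = 2e-4                    # initial conductance
g = program_weight(g, +10)  # potentiate: 10 positive pulses
g = program_weight(g, -4)   # depress: 4 negative pulses
print(g)
```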
“The working mechanism of the device is the electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the movement of this ion by using a strong electric field and push these ionic devices into the nanosecond regime of operation,” says senior author Bilge Yildiz, a professor in MIT’s departments of Nuclear Science and Engineering and Materials Science and Engineering.
Using inorganic phosphosilicate glass (PSG) as the base compound for the resistors, the team found that their analog deep learning device could process information a million times faster than previous attempts. This makes the machine about a million times faster than the firing of our own synapses.
“The action potential in biological cells rises and falls on a timescale of milliseconds, since the voltage difference of about 0.1 volts is limited by the stability of water,” says senior author Ju Li, a professor of materials science and engineering. “Here we apply up to 10 volts across a special film of nanoscale-thick solid glass that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”
The resistor can work for millions of cycles without breaking because the protons do not damage the material.
“The speed was certainly amazing. Normally, we wouldn’t apply such extreme fields to devices so as not to turn them to ash. But instead, the protons ended up traveling at immense speeds through the device stack, specifically a million times faster compared to what we had before. And this motion doesn’t hurt anything, thanks to the protons’ small size and low mass,” says lead author and MIT postdoc Murat Onen.
“The nanosecond time scale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” adds Li.
PSG also makes the device extremely energy efficient and compatible with silicon fabrication techniques, which means it could be integrated into commercial computing hardware.
“With that key knowledge and powerful nanofabrication techniques, we’ve been able to put these pieces together and show that these devices are inherently very fast and operate at reasonable voltages,” says senior author Jesús A. del Álamo, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really brought these devices to a point where they now look really promising for future applications.”
“Once you have an analog processor, you’re no longer training networks that everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford, and thus vastly outperforming them all. In other words, this isn’t a faster car, it’s a spaceship,” adds Onen.
Analog deep learning has two key advantages over its digital cousin.
First, Onen says, the computation is performed inside the memory device itself, rather than data being shuttled back and forth between memory and processors.
Second, analog processors conduct operations in parallel, so a growing network does not need more time to complete new calculations.
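The sketch below contrasts the two styles in plain Python (purely conceptual; the structure is invented for illustration, not drawn from the paper): the “digital” version fetches each weight out of a memory structure before multiplying, one element at a time, while the “analog” version computes directly on the stored array, every column at once.

```python
import numpy as np

def digital_mvm(weight_memory, v_in):
    """Digital style: fetch each weight from memory, then multiply and
    accumulate one element at a time on the processor."""
    n_in, n_out = len(v_in), len(weight_memory[0])
    i_out = [0.0] * n_out
    for j in range(n_out):
        for i in range(n_in):
            w = weight_memory[i][j]  # data moves: memory -> processor
            i_out[j] += w * v_in[i]  # sequential multiply-accumulate
    return i_out

def analog_mvm(G, v_in):
    """Analog style: the resistor array *is* the memory, and reading it
    performs every multiply-accumulate simultaneously in the physics."""
    return G.T @ v_in

G = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) * 1e-4
v = np.array([0.1, 0.2, 0.3])
print(digital_mvm(G.tolist(), v.tolist()))  # same numbers, element by element
print(analog_mvm(G, v))                     # same numbers, all at once
```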
Now that the devices have been proven effective, the team aims to engineer them for high-volume manufacturing. They also plan to remove factors that limit the voltage required to efficiently transfer the protons.
“The collaboration we have is going to be essential to innovate in the future. The way forward will continue to be very challenging, but at the same time very exciting,” says del Álamo.