Nanomagnetic Computing Could Drastically Cut AI’s Energy Use

As the Internet of Things expands, engineers want to incorporate AI into everything, but the amount of energy it requires is a challenge for the smallest and most remote devices. A new “nanomagnetic” computing approach could provide a solution.

While most AI development today focuses on large, complex models running in huge data centers, there is also growing demand for ways to run simpler AI applications on smaller, less powerful devices.

For many applications, from portable devices to smart industrial sensors to drones, sending data to cloud-based AI systems makes little sense. This may be due to concerns about sharing private data, or to the unavoidable delays that come from transmitting the data and waiting for a response.

But many of these devices are too small to contain the kind of high-power processors commonly used for AI. They also tend to run on batteries or energy harvested from the environment, and therefore cannot meet the demanding power requirements of conventional deep learning approaches.

This has led to a growing body of research into new hardware and computing approaches that make it possible to run AI on these kinds of systems. Much of this work has sought to borrow from the brain, which is capable of incredible feats of computing while using about the same amount of power as a light bulb. These approaches include neuromorphic chips that mimic the brain's wiring and processors built from memristors, electronic components that behave like biological neurons.

New research led by scientists at Imperial College London suggests that computing with networks of nanoscale magnets could be a promising alternative. In a paper published last week in Nature Nanotechnology, the team showed that by applying magnetic fields to an array of tiny magnetic elements, they could train the system to process complex data and deliver predictions using a fraction of the power of a conventional computer.

At the heart of their approach is what is known as a metamaterial, a man-made material whose internal physical structure is carefully engineered to give it unusual properties not normally found in nature. In particular, the team created an "artificial spin system," an arrangement of many nanomagnets that combine to exhibit exotic magnetic behavior.

Their design consists of a grid of hundreds of 600-nanometer-long bars made of permalloy, a highly magnetic nickel-iron alloy. The bars are arranged in a repeating pattern of Xs whose upper arms are thicker than their lower arms.

Normally, artificial spin systems have a single magnetic texture, which describes the pattern of magnetization across their nanomagnets. But the Imperial team's metamaterial has two distinct textures, and different parts of it can switch between them in response to magnetic fields.

The researchers used these features to implement a form of AI known as reservoir computing. Unlike deep learning, in which a neural network rewires its links while training on a task, this approach feeds data into a network whose links are all fixed and simply trains a single output layer to interpret what comes out of this network.

It is also possible to replace this fixed network with a physical system, including things like memristors or oscillators, provided it has certain features, such as a non-linear response to inputs and some form of memory of previous inputs. The new artificial spin system meets these requirements, so the team used it as a reservoir to perform a series of data processing tasks.
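The idea behind reservoir computing can be sketched in software with an echo state network: a fixed, randomly wired recurrent network provides the non-linearity and memory, and only a single linear readout is trained. The sketch below is illustrative only; the sizes, weights, and toy task are assumptions, not the Imperial team's setup.

```python
# Minimal echo-state-network sketch of reservoir computing (illustrative;
# parameters and the toy task are assumptions, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        # tanh supplies the non-linear response; x carries memory of the past
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
# Only this linear readout is trained (ridge regression); the reservoir
# itself is never modified
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
print("mean squared error:", mse)
```

In a physical reservoir computer, the random recurrent network above is replaced by the dynamics of a real system (here, the spin system's magnetization), and the same cheap linear readout is trained on measurements of its state.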

They fed data into the system by subjecting it to sequences of magnetic fields, then allowed its own internal dynamics to process the data. They then used a technique called ferromagnetic resonance to read out the final magnetization distribution of the nanomagnets, which provided the result.

While these were not practical data processing tasks, the team showed that their device could match leading reservoir computing schemes on a series of prediction challenges involving data that varies over time. Importantly, they also showed it could learn effectively from fairly short training runs, which would matter in many real-world IoT applications.

And not only is the device very small, but because it computes with magnetic fields rather than shuttling electrical current, it consumes far less power. In a press release, the researchers estimate that at scale it could be 100,000 times more efficient than conventional computing.

There is a long way to go before this kind of device could be practical, but the results suggest that magnet-based computers could play an important role in embedding AI everywhere.

Image Credit: BarbaraJackson
