Though IBM’s newest AI processor, which draws inspiration from the human brain, may be 25 times more efficient than GPUs due to its greater integration, Nvidia and AMD need not worry just yet.

The NorthPole processor does away with external memory to boost speed.

Researchers have created a brain-inspired processor that, by eliminating the need for external memory, can complete AI tasks far more quickly than conventional chips.

Because calculations rely on data held in external RAM, even the fastest CPUs stall as data shuttles back and forth between processor and memory, a problem known as the von Neumann bottleneck. With its NorthPole chip, IBM hopes to address that bottleneck, as reported by Nature.

The NorthPole processor's 256 cores are interconnected in a manner akin to the white matter that links the different regions of the brain, and each core has a small amount of memory embedded directly in it. As a result, the chip sidesteps the bottleneck entirely.
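A rough way to picture the difference is the sketch below. It is purely illustrative, not IBM's design: the core count matches the reported 256, but the tile sizes, weights, and classes are placeholders. In the conventional layout every core fetches its weights from one shared external memory pool; in the NorthPole-style layout each core keeps its weights beside its compute units.

```python
import numpy as np

N_CORES = 256          # NorthPole reportedly has 256 cores
TILE = 64              # hypothetical per-core tile size

def von_neumann_style(inputs, shared_weights):
    """Conventional layout: every core reads its weight tile from one
    external memory pool, so all traffic funnels through the same
    processor-memory link."""
    outputs = []
    for core in range(N_CORES):
        w = shared_weights[core]          # off-chip fetch: the bottleneck
        outputs.append(inputs[core] @ w)
    return outputs

class NorthPoleStyleCore:
    """NorthPole-style layout: each core stores its own weight tile next to
    its compute units, so no off-chip round trip is needed at run time."""
    def __init__(self, weights):
        self.local_weights = weights      # embedded memory, loaded once

    def run(self, x):
        return x @ self.local_weights     # compute stays beside its data

shared_weights = [np.random.randn(TILE, TILE) for _ in range(N_CORES)]
inputs = [np.random.randn(TILE) for _ in range(N_CORES)]

baseline = von_neumann_style(inputs, shared_weights)

cores = [NorthPoleStyleCore(w) for w in shared_weights]
local = [core.run(x) for core, x in zip(cores, inputs)]

# Same results either way; only where the data lives differs.
assert all(np.allclose(a, b) for a, b in zip(baseline, local))
```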

Looking to the human brain for inspiration

IBM's NorthPole is less a finished processor ready to take on rivals such as AMD and Nvidia than a proof of concept. For example, it carries only 224MB of on-chip memory, far less than what is needed to run large language models (LLMs).
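To put that figure in perspective, here is a back-of-the-envelope calculation. The 7-billion-parameter model and 16-bit weights are illustrative assumptions, not numbers from IBM's paper.

```python
# Rough sizing sketch: memory needed just for an LLM's weights,
# compared with NorthPole's reported 224 MB of on-chip memory.

params = 7_000_000_000            # a typical "small" LLM (assumed)
bytes_per_param = 2               # 16-bit weights (assumed)
weight_bytes = params * bytes_per_param

northpole_mb = 224
print(f"LLM weights: {weight_bytes / 1e6:,.0f} MB")     # ~14,000 MB
print(f"NorthPole on-chip memory: {northpole_mb} MB")   # ~60x too small
```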

The chip can also directly run neural networks that were trained on other systems. What truly makes it stand out, though, is the energy efficiency of its distinctive architecture: according to the researchers, NorthPole would be 25 times more efficient than the fastest CPUs and GPUs if it were built using modern manufacturing techniques.

Nature quoted Damien Querlioz, a nanoelectronics researcher at the University of Paris-Saclay in Palaiseau, as saying, “Its energy efficiency is just mind-blowing.” According to him, the research, which was published in Science, demonstrates that computing and memory can be integrated on a large scale. “It seems that the paper will challenge conventional wisdom in computer architecture.”

Additionally, it can perform tasks such as image recognition faster than existing AI systems. Mirroring a neural network's architecture, its bottom layer processes raw input, such as the pixels of an image; higher layers pick out increasingly complex patterns as information moves upward; and the topmost layer outputs the final result, such as a judgment on whether the image contains a specific object.
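That layered flow can be sketched as a minimal forward pass: pixels enter at the bottom, intermediate layers extract higher-level features, and the top layer emits a score for each candidate object. The layer sizes and random weights below are placeholders; a real network would be trained on another system and then loaded onto the chip, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights for a three-layer network (illustrative sizes only).
w_low = rng.standard_normal((28 * 28, 128))   # bottom layer: raw pixels in
w_mid = rng.standard_normal((128, 64))        # middle layer: simple patterns
w_top = rng.standard_normal((64, 10))         # top layer: one score per class

def relu(x):
    return np.maximum(x, 0.0)

def classify(pixels):
    """Forward pass: information moves up through the layers."""
    h1 = relu(pixels @ w_low)    # bottom layer processes the raw input
    h2 = relu(h1 @ w_mid)        # higher layer combines simpler patterns
    scores = h2 @ w_top          # topmost layer scores each candidate object
    return int(np.argmax(scores))

image = rng.random(28 * 28)      # a dummy 28x28 image, flattened
print("Predicted class:", classify(image))
```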
