AI is running out of computing power. IBM says the answer is this new chip

A close up of the IBM Artificial Intelligence Unit chip.

Image: IBM

The hype suggests that artificial intelligence (AI) is already everywhere, but the technology that actually drives it is still evolving. Many AI applications run on chips that weren't designed for AI at all: general-purpose CPUs, and GPUs originally built for video games. That mismatch has prompted a flurry of investment in chips built expressly for AI workloads, from tech giants like IBM, Intel and Google as well as from startups and VCs.

As the technology improves, enterprise investment will surely follow. According to Gartner, total AI chip revenue topped $34 billion in 2021 and is expected to reach $86 billion by 2026. The research firm also noted that workload accelerators were deployed in less than 3% of data-center servers in 2020, a share expected to exceed 15% by 2026.

IBM Research, for its part, recently unveiled the Artificial Intelligence Unit (AIU), a prototype chip specialized for AI.

“We are running out of computing power. AI models are growing rapidly, but the hardware to train these behemoths and run them on servers in the cloud or on edge devices such as smartphones and sensors has not evolved fast enough,” IBM said.

AIU is the IBM Research AI Hardware Center’s first complete system-on-a-chip (SoC) designed expressly to run enterprise AI deep-learning models.

IBM argues that CPUs, the “workhorses of traditional computing,” were designed before the advent of deep learning. While CPUs are good for general-purpose applications, they are poorly suited to training and running deep-learning models, which demand massively parallel AI operations.

“There’s no question in our mind that AI will be a fundamental driver of IT solutions for a long, long time,” Jeff Burns, director of AI Compute for IBM Research, told ZDNET. “It’s going to pour into the computing landscape, into these complex enterprise IT infrastructures and solutions in a very broad and pervasive way.”

For IBM, it makes the most sense to build complete solutions that are effectively universal, Burns said, “so that we can integrate those capabilities across different computing platforms and support a much wider variety of enterprise AI requirements.”

The AIU is an application-specific integrated circuit (ASIC), but it can be programmed to run any kind of deep-learning task. Its 32 processing cores are built with 5 nm technology and contain 23 billion transistors. The layout is simpler than a CPU's, designed to send data directly from one compute engine to the next, which makes the chip more energy efficient. It is also designed to be as easy to use as a graphics card, and can be plugged into any computer or server with a PCIe slot.

To conserve energy and resources, the AIU leverages approximate computing, a technique developed by IBM that trades computational precision for efficiency. Traditionally, computation has relied on 64- and 32-bit floating-point arithmetic, which delivers the precision needed for scientific calculations and other applications where fine detail matters. That level of precision, however, isn't necessary for the vast majority of AI applications.

“If you’re thinking about the trajectory of an autonomous driving vehicle, there’s no exact position in the lane that the car needs to be in,” Burns said. “There are many places in the lane.”

Neural networks are fundamentally inexact: they produce outputs as probabilities. A computer-vision model, for example, might report with 98% confidence that you are looking at a picture of a cat. Even so, neural networks have traditionally been trained with high-precision arithmetic, consuming significant energy and time.
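To make that concrete, here is a toy illustration (our sketch, not IBM's code) of how a classifier turns raw scores into probabilities via softmax; the class names and scores are invented for the example:

```python
# Toy illustration: a classifier's raw scores become class probabilities
# via softmax -- the network reports confidence, not certainty.
import numpy as np

def softmax(scores):
    exp = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return exp / exp.sum()

logits = np.array([4.2, 0.3, 0.1])  # hypothetical raw scores for cat, dog, fox
probs = softmax(logits)
print(dict(zip(["cat", "dog", "fox"], probs.round(3))))
# prints something like {'cat': 0.964, 'dog': 0.02, 'fox': 0.016}
```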

AIU’s approximate-computing technique allows it to drop from 32-bit floating-point arithmetic down to bit-formats that hold a quarter as much information.
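For a rough sense of what that trade-off looks like, the sketch below (our illustration in plain NumPy, not IBM's actual number format) quantizes 32-bit floats down to 8-bit integers, a quarter of the bits, and measures the error introduced:

```python
# Minimal sketch of the precision-for-efficiency trade-off:
# squeeze 32-bit floats into 8 bits, then reconstruct and compare.
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0                      # map the value range onto int8
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

weights = np.random.randn(1000).astype(np.float32)       # stand-in for model weights
q, scale = quantize_int8(weights)
restored = q.astype(np.float32) * scale

# The rounding error is small relative to the signal -- tolerable
# for probabilistic workloads like neural-network inference.
print("max abs error:", np.abs(weights - restored).max())
```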

To ensure that the chip is truly universal, IBM has focused on more than hardware innovation. IBM Research has put considerable emphasis on foundation models, with a team of 400 to 500 people working on them. In contrast to AI models built for a specific task, foundation models are trained on a broad set of unlabeled data, creating something like a vast database. When you then need a model for a specific task, you can retrain the foundation model using a relatively small amount of labeled data.

Using this approach, IBM intends to address a variety of verticals and AI use cases. The company is building foundation models for a handful of domains, such as chemistry and time-series data. Time-series data, which simply means data collected at regular time intervals, is important for industrial companies that need to monitor how their equipment is performing. Once foundation models exist for these key areas, IBM can develop more specialized, vertical-specific offerings on top of them. The team has also ensured that the AIU's software is fully compatible with the software stack of IBM-owned Red Hat.
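The retraining step described here is commonly implemented as fine-tuning. The sketch below is a minimal, hypothetical example in PyTorch (not IBM's stack): a pretrained ResNet-50 stands in for the foundation model, its broadly trained weights are frozen, and only a small task-specific head is retrained on a modest labeled dataset. The five-class head and the `loader` are assumptions for illustration.

```python
# Hedged sketch of foundation-model reuse: freeze a pretrained backbone,
# retrain a small task head on a relatively small labeled dataset.
# ResNet-50 is a stand-in; IBM's foundation models are not public code.
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights="DEFAULT")     # broadly pretrained "foundation" stand-in
for p in backbone.parameters():
    p.requires_grad = False                # keep the pretrained weights fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new head: 5 hypothetical classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):           # `loader` yields small labeled batches (x, y)
    backbone.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(backbone(x), y)
            loss.backward()
            optimizer.step()
```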
