Intel announced on Monday that its upcoming data center chip, Sierra Forest, can handle more than double the computing work per watt of power consumed, part of a broader industry push to reduce electricity consumption.
At a semiconductor technology conference held at Stanford University in Silicon Valley, California, Intel revealed that its Sierra Forest chip will offer a 240% performance improvement per watt compared with the current generation of data center chips. This marks the first time the company has disclosed such figures.
Data centers that power the internet and its services consume vast amounts of electricity, pushing tech companies to maintain or decrease their energy usage. This has led chip manufacturers to focus on increasing the computational work each chip can perform.
Ampere Computing, a startup founded by former Intel executives, was one of the first to market with a chip designed for efficient cloud computing workloads.
Intel and its competitor AMD followed suit, announcing similar products. AMD's chips reached the market last June.
Intel, which has lost market share to both AMD and Ampere in data centers, announced on Monday that its Sierra Forest chip is on track for release next year.
The company is for the first time dividing its data center chips into two categories: the Granite Rapids chip, which prioritizes performance but consumes more power, and the Sierra Forest chip, which focuses on efficiency.
Ronak Singhal, a senior fellow at Intel, said the chip will let the company's customers consolidate legacy applications onto fewer computers inside a data center.
Singhal stated, “I might have things that are four, five, six years old. I can save power by moving something that is currently on five, 10, 15 different servers onto one new chip.”
He added, “It’s this density that drives total cost of ownership. The denser you go, the fewer systems you need.”