Saving Power in the Data Center

Past Tides | Tech Events
June 12, 2017 By Wendy Wolfson

At the May 10 Association for Computing Machinery (ACM) meeting, "Saving Power in the Data Center," Marios Papaefthymiou, professor of computer science and dean of the UCI Donald Bren School of Information and Computer Sciences (ICS), showed how redesigning the internal architecture of high-performance computer chips can increase their energy efficiency, gains that translate into significant power savings in the data centers that rely on myriads of such chips. Papaefthymiou's research focuses on energy-efficient architectures for high-performance computing. He co-founded Cyclos Semiconductor, a developer of energy-efficient resonant clocking technologies that enable high-performance computing with drastically reduced power consumption.

When chips start to melt

Over the last decade, managing data center power consumption has become more problematic as high-performance servers keep getting faster and more powerful. Papaefthymiou showed a picture of Google's Oregon data center emitting billows of steam produced by the cooling of its computer banks. "Getting the energy consumption of these data centers down is a big deal," Papaefthymiou says. Data centers currently consume between 1.5% and 2.5% of the electricity used in the U.S. annually. All of this energy is converted to heat, which must be removed from the system to prevent computer chips from overheating and malfunctioning.

Prior to the advent of cloud computing, only supercomputers required extreme cooling measures. In conventional computers, temperatures were kept in check by curtailing power consumption, primarily through voltage scaling, a power management technique in which the voltage supplied to a chip is raised or lowered as needed. However, the ability to keep decreasing voltage without sacrificing computing performance ran out about 15 years ago. Although computer chips have continued to shrink, their power consumption has not decreased at the same rate. With chips running up against their thermal limits, power consumption is now the main obstacle to higher performance.
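A well-known relation explains why voltage scaling is so effective: the dynamic power of CMOS logic grows roughly as P ~ C x V^2 x f, so power falls with the square of the voltage (and further still if frequency drops with it). The short Python sketch below illustrates the effect; the capacitance, voltage, and frequency values are illustrative assumptions, not figures from the talk.

    # Illustrative sketch of voltage scaling using the standard
    # dynamic-power model for CMOS logic: P = a * C * V^2 * f.
    # All constants below are hypothetical sample values.

    def dynamic_power(c_eff, voltage, freq_hz, activity=1.0):
        """Dynamic switching power in watts: P = a * C * V^2 * f."""
        return activity * c_eff * voltage**2 * freq_hz

    C_EFF = 1e-9  # effective switched capacitance in farads (hypothetical)

    nominal = dynamic_power(C_EFF, 1.2, 3.0e9)  # 1.2 V at 3.0 GHz
    scaled = dynamic_power(C_EFF, 1.0, 2.5e9)   # 1.0 V at 2.5 GHz

    print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W, "
          f"saving {1 - scaled / nominal:.0%}")

Dropping the voltage from 1.2 V to 1.0 V (with a matching frequency reduction) cuts power by roughly 40% in this model, which is why voltage scaling carried the industry as far as it did.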

Industry puzzles over how to lower the power consumption of high-performance chips

By most industry estimates, servers account for roughly 80% of the IT power consumed in a data center, with processor chips eating up the bulk of server power. Researchers have experimented with strategies such as turning off whatever is not in use, gradually recycling charges, and computing more slowly to use less energy. The practice of turning off whatever is not in use has been widely adopted. But once something is turned off, that resource goes unused (by definition) and processor utilization drops, a phenomenon dubbed "dark silicon." In other words, turning the hardware off keeps power consumption in check, but it wastes computing resources. Charge recycling and slow computing remained experimental for decades, but some of this research has recently been turned into practical technology. Somewhat counterintuitively, these technologies are now used in some of the fastest processors commercially available. Papaefthymiou described the first such deployment in a high-volume commercial processor chip, AMD's Piledriver processor core.
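The counterintuitive idea that computing more slowly can use less energy is easier to see with a toy model. The sketch below, which reuses the P ~ C x V^2 x f relation from above, compares a "race-to-idle" strategy against running the same workload slower at a lower voltage; every number in it is an illustrative assumption, not a figure from the talk.

    # Toy comparison of two power-management strategies for a fixed
    # amount of work within a fixed time window, using the standard
    # P = C * V^2 * f dynamic-power model. All values are hypothetical.

    C_EFF = 1e-9      # effective switched capacitance, farads (hypothetical)
    WORK = 3.0e9      # clock cycles needed to finish the job
    IDLE_POWER = 0.5  # watts still drawn while idle (gating isn't free)

    def race_to_idle(voltage, freq_hz, window_s):
        """Run fast, finish early, then sit (mostly) idle."""
        busy_s = WORK / freq_hz
        active = C_EFF * voltage**2 * freq_hz * busy_s
        return active + IDLE_POWER * (window_s - busy_s)

    def slow_and_steady(voltage, freq_hz, window_s):
        """Scale voltage and frequency down so the job fills the window."""
        busy_s = WORK / freq_hz
        assert busy_s <= window_s, "too slow to meet the deadline"
        return C_EFF * voltage**2 * freq_hz * busy_s

    WINDOW_S = 2.0  # seconds available before the result is due

    fast = race_to_idle(1.2, 3.0e9, WINDOW_S)     # 3.0 GHz at 1.2 V
    slow = slow_and_steady(0.9, 1.5e9, WINDOW_S)  # 1.5 GHz at 0.9 V

    print(f"race-to-idle: {fast:.2f} J, slow-and-steady: {slow:.2f} J")

In this model the slower run finishes the same work with roughly half the energy, because the V^2 term shrinks faster than the runtime grows; in practice, leakage, deadlines, and shared resources complicate the picture.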

Targeting the clock

Papaefthymiou observed that in high-performance computer chips, the clock, which synchronizes millions of storage elements, can consume anywhere from 10% to 30% of the chip's power. Every processor has an internal clock; like a particularly omniscient traffic cop, it synchronizes operations so that no data is lost.

Papaefthymiou and his students zeroed in on the clock, pioneering the design of high-end computer chips with resonant clock distribution networks over the past 15 years or so. In a technique similar to the regenerative braking system in hybrid cars, they used a system of inductors to capture some of the electric energy consumed by the processor's clock and return it to the system. The result is a chip that "recycles" its clock power instead of dissipating it on every clock cycle, as traditional designs do. According to Papaefthymiou, this energy recycling at the chip level saves significant amounts of power in high-end, multi-gigahertz processors for data centers: reported savings in the clock power of commercial chips run from 30% to 50%, translating into a 5% to 10% reduction in total chip power, depending on the workload (a rough sketch of that arithmetic appears at the end of this article).

Papaefthymiou then gave an overview of ICS. The only computing school in the UC system, ICS was founded in 1968 as a department and became a school in 2002. The school has three departments (computer science, informatics, and statistics), more than 70 tenure-track faculty, and a broad research portfolio ranging from data science and cybersecurity to software engineering and human-computer interaction. In the next few years, ICS plans to expand, hiring 30 new faculty.
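For readers who want to check the arithmetic behind those clock figures: the total chip power saved is simply the clock's share of chip power multiplied by the fraction of clock power recycled. The minimal Python sketch below runs that product over the 10% to 30% clock-share and 30% to 50% clock-savings ranges quoted in the talk; the particular grid points are illustrative.

    # Back-of-the-envelope check on resonant-clocking savings:
    # total chip savings = (clock's share of chip power)
    #                      x (fraction of clock power recycled).
    # The 10-30% and 30-50% ranges come from the talk; the sample
    # points below are just for illustration.

    def total_chip_savings(clock_share, clock_savings):
        """Fraction of total chip power saved by recycling clock energy."""
        return clock_share * clock_savings

    for clock_share in (0.10, 0.20, 0.30):
        for clock_savings in (0.30, 0.50):
            saved = total_chip_savings(clock_share, clock_savings)
            print(f"clock is {clock_share:.0%} of chip power, "
                  f"{clock_savings:.0%} recycled -> "
                  f"{saved:.1%} of total chip power saved")

The resulting 3% to 15% span comfortably brackets the 5% to 10% reduction reported for commercial chips; where a given chip lands depends on its clock network and workload.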