Multicore Processors


Abstract

Computer processor design has evolved at a steady pace for the last 20 years. The proliferation of computers into the mass market, and the tasks we ask of them, continue to push the need for more powerful processors. The market requirement for higher-performing processors is linked to the demand for more sophisticated software applications. E-mail, for instance, now used globally, was a limited and expensive technology only 10 years ago. Today, software applications span everything from helping large corporations manage and protect their business-critical data and networks to letting home PCs edit videos, manipulate digital photographs, and burn downloaded music to CDs. Tomorrow, software applications might create real-world simulations so vivid that it will be difficult to tell whether one is looking at a computer monitor or out a window; however, advances like this will come only with significant performance increases from readily available and inexpensive computer technologies. Multicore processors will help break through today's single-core performance limitations and provide the capacity to tackle tomorrow's more advanced software.


Introduction

The entire microprocessor industry is moving toward multicore architecture design. To take full advantage of multicore CPU chips, computer workloads must exploit thread-level parallelism. Software engineers use multiple threads of control for many reasons: to build responsive servers that communicate with many clients in parallel, to exploit parallelism on shared-memory multiprocessors, to produce sophisticated user interfaces, and to enable a variety of other program-structuring approaches.
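The thread-level parallelism described above can be sketched minimally: one large task is split into chunks, each chunk is handed to a worker thread, and the partial results are combined. This is an illustrative sketch in Python (note that CPython's global interpreter lock limits speedup for CPU-bound work, so the point here is the program structure, not the measured performance); the function names are illustrative, not from any specific library.

```python
# Minimal sketch of thread-level parallelism: partial sums are computed
# by a pool of worker threads and then combined by the caller.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Work done independently by each thread.
    return sum(chunk)

def parallel_sum(data, n_threads=4):
    # Split the input into one contiguous chunk per thread.
    size = (len(data) + n_threads - 1) // n_threads
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # map() distributes chunks across the worker threads.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```

On a true multicore machine with a runtime that allows parallel execution (e.g., the same structure in C++ or Java threads), each chunk can run on its own core.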

The design, evaluation, and optimization of multi-core architectures present a daunting set of challenges.

Workload Synthesis for Efficient Microprocessor Design Evaluation

The prohibitively long simulation times involved in processor architecture design have spurred a burst of research in recent years aimed at reducing this cost. Among these techniques, workload synthesis has been shown to be an effective methodology for accelerating architecture design evaluation. Its goal is to create miniature benchmarks that reproduce the execution characteristics of the input applications but run for a much shorter time than the original applications.

From the perspective of architectural design evaluation, it is essential that the synthetic program efficiently and accurately model the behavior of the original application.
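The core idea of workload synthesis can be illustrated with a deliberately simplified sketch: profile the operation mix of an application trace, then emit a much shorter synthetic trace that preserves those proportions. Real workload-synthesis tools model far richer behavior (branch patterns, memory access locality, instruction-level dependencies); the function names and trace format below are illustrative assumptions, not drawn from any specific tool.

```python
# Simplified workload-synthesis sketch: capture an operation mix, then
# generate a miniature synthetic trace with (approximately) the same mix.
from collections import Counter
import random

def profile_mix(trace):
    """Return each operation's fraction of the trace."""
    counts = Counter(trace)
    total = len(trace)
    return {op: n / total for op, n in counts.items()}

def synthesize(mix, length, seed=0):
    """Generate a short synthetic trace matching the profiled mix."""
    rng = random.Random(seed)
    ops, weights = zip(*sorted(mix.items()))
    return rng.choices(ops, weights=weights, k=length)

# A 1000-operation "application" reduced to a 100-operation miniature.
original = ["load"] * 400 + ["add"] * 300 + ["store"] * 200 + ["branch"] * 100
mini = synthesize(profile_mix(original), length=100)
```

The synthetic trace is an order of magnitude shorter, so simulating it is correspondingly cheaper; the open research question is how faithfully such a miniature preserves the behaviors that actually determine the original application's performance.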

Server density has grown dramatically over the past decade to keep pace with escalating performance requirements for enterprise applications. Ongoing progress in processor design has enabled servers to continue delivering increased performance, which in turn helps fuel the powerful applications that support rapid business growth. However, increased performance incurs a corresponding increase in processor power consumption, and heat is a consequence of power use. As a result, administrators must determine not only how to supply large amounts of power to systems, but also how to contend with the large amounts of heat these systems generate in the data center. As more applications move from proprietary to standards-based systems, the performance demands on industry-standard servers are spiraling upward. Today, in place of midrange and large mainframe systems, tightly packed racks of stand-alone servers and blade servers ...