Overview
The NSF Center for High-Performance Reconfigurable Computing (CHREC, pronounced “shreck”) is a national research center and consortium under the auspices of the Industry/University Cooperative Research Centers (I/UCRC) program at the National Science Foundation. Operational since 2007, and recognized by NSF as one of its best centers, CHREC consists of more than 30 industry, government, and academic partners working collaboratively to solve research challenges at the nexus of reconfigurable, high-performance, and embedded computing (i.e., RC, HPC, and HPEC).
Reconfigurable computing, a major focus of CHREC, holds tremendous promise in addressing the needs of a broad range of applications in areas such as signal and image processing, cryptology, communications processing, data and text mining, optimization, bioinformatics, and complex system simulations. Reconfigurable systems span a variety of platform types, from leading-edge machines on Earth to mission-critical machines in space. A reconfigurable approach can offer advantages in performance, power, size, cooling, cost, versatility, scalability, and dependability, to name a few. These are important facets where conventional computing infrastructure alone is proving unable to meet the needs of an increasing number of critical applications. Preliminary thrust areas for CHREC include device and core building blocks; reconfigurable systems and services; design automation and programming methods and tools; and reconfigurable and parallel algorithms and applications. Research projects in these areas are formulated annually in concert with Center partners, with a keen interest in exploring and evaluating new methods as well as key tradeoff analyses.
Although a relatively new field, reconfigurable computing (RC) has come to the forefront as an important processing paradigm for HPC, often in concert with conventional microprocessor-based computing. With RC, the full potential of the underlying electronics in a system may be better realized in an adaptive manner. At the heart of RC, field-programmable hardware in its many forms has the potential to revolutionize the performance and efficiency of systems for HPC as well as deployable systems in HPEC. One ideal of the RC paradigm is to achieve the performance, scalability, power, and cooling advantages of the “Master of a trade,” custom hardware, with the versatility, flexibility, and efficacy of the “Jack of all trades,” a general-purpose processor. As is commonplace with components for HPC such as microprocessors, memory, networking, and storage, critical technologies for RC can also be leveraged from other IT markets to achieve a better performance-cost ratio, most notably the field-programmable gate array (FPGA). These devices are inherently heterogeneous, each a predefined mixture of configurable logic cells and powerful fixed resources.
Many opportunities and challenges exist in realizing the full potential of reconfigurable hardware for HPC. Among the opportunities offered by field-programmable hardware are a high degree of on-chip parallelism that can be mapped directly from the data-flow characteristics of the application’s defining parallel algorithm, user control over low-level resource definition and allocation, and user-defined data format and precision rendered efficiently in hardware. In realizing these opportunities, there are many vertical challenges, where we seek to bridge the semantic gap between the high level at which HPC applications are developed and the low level (i.e., a hardware description language, or HDL) at which hardware is typically defined. There are also many horizontal challenges, where we seek to integrate or marry diverse resources such as microprocessors, FPGAs, and memory in optimal relationships, in essence bridging the paradigm gap between conventional and reconfigurable processing at various levels in the system and software architectures.
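As a simplified illustration of two of these opportunities, user-defined precision and on-chip parallelism mapped from data flow, consider the C sketch below. It is purely illustrative (the Q1.15 format, the names, and the hardware mapping described in the comments are assumptions made for this example, not CHREC code): a fixed-point kernel of this shape can be rendered by a reconfigurable fabric as an array of multipliers feeding an adder tree, each operating at exactly the width the application requires.

#include <stdint.h>
#include <stddef.h>

/* Illustrative Q1.15 fixed-point format: 1 sign bit, 15 fraction bits.
 * A general-purpose processor emulates this precision with 16/32-bit
 * integer units; reconfigurable hardware can implement each operation
 * at exactly the width the application needs. */
typedef int16_t q15_t;

/* Multiply two Q1.15 values, keeping the result in Q1.15. */
static inline q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
}

/* Fixed-point dot product. Aside from the accumulation, iterations are
 * independent, so the loop can be unrolled into parallel multipliers and
 * a reduction (adder) tree on an FPGA, while a CPU executes the same
 * iterations largely in sequence. */
int32_t dot_q15(const q15_t *x, const q15_t *y, size_t n)
{
    int32_t acc = 0;   /* wider accumulator to limit overflow */
    for (size_t i = 0; i < n; ++i)
        acc += q15_mul(x[i], y[i]);
    return acc;
}

Bridging the vertical challenge described above amounts to letting developers express applications at roughly this level of abstraction, or higher, while tools derive the corresponding hardware structure.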
Success is expected to come from both revolutionary and evolutionary advances. For example, at one end of the spectrum, the internal design strategies of field-programmable devices need to be reevaluated in light of a broad range of HPC and HPEC applications, not only to potentially achieve a more effective mixture of on-chip fixed resources alongside reconfigurable logic blocks, but also to serve as a prime target for higher-level programming and translation. At the other end of the spectrum, new concepts and tools are needed to analyze the algorithmic basis of applications under study (e.g., inherent control-flow vs. data-flow components, numeric format vs. dynamic range), along with new programming models to render this basis in an abstracted design strategy, so as to potentially target and exploit a combination of resources (e.g., general-purpose processors, reconfigurable processors, and special-purpose processors such as GPUs, DSPs, and NPs). While building highly heterogeneous systems composed of resources from many diverse categories can be cost-prohibitive, and the goal of uni-paradigm application design for multi-paradigm computing may be extremely difficult to perfect, one of the inherent advantages of RC is that it promises to support these goals in a more flexible and cost-effective manner. Between the two extremes of devices and programming models for multi-paradigm computing, many challenges remain in developing new concepts and tools: compilers, core libraries, system services, debugging and performance-analysis tools, and more. These and related steps will be of paramount importance for the transition of RC technologies into the mainstream of HPC and HPEC.
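As one small, hedged example of the numeric-format-versus-dynamic-range analysis mentioned above, the C sketch below (the function name and formula are illustrative assumptions, not an existing CHREC tool) estimates the fixed-point word length needed to cover a given magnitude range at a given resolution; an automated tool would apply this kind of reasoning across an entire data-flow graph when selecting formats for reconfigurable hardware.

#include <math.h>
#include <stdio.h>

/* Illustrative word-length estimate for a signed fixed-point format:
 * integer bits cover the dynamic range, fraction bits cover the required
 * resolution, plus one sign bit. */
static int fixed_point_bits(double max_magnitude, double resolution)
{
    int int_bits  = (int)ceil(log2(max_magnitude + 1.0));
    int frac_bits = (int)ceil(log2(1.0 / resolution));
    return 1 + int_bits + frac_bits;   /* sign + integer + fraction */
}

int main(void)
{
    /* Example: values in [-1000, 1000] at 0.001 resolution need
     * 1 + 10 + 10 = 21 bits, versus the 32 or 64 bits a conventional
     * processor would spend on a floating-point representation. */
    printf("%d bits\n", fixed_point_bits(1000.0, 0.001));
    return 0;
}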