International Symposium on Future of High Performance Green Computing 2018 (HPGC2018)
Towards Low-Energy Co-Design of Future Computer Systems
During the last decade, advances in computer architecture and microprocessor hardware have been hitting the so-called “energy wall” because of their excessive power demands. Existing approaches to energy-efficient computing rely heavily on power-efficient hardware in isolation, which is far from sufficient for the emerging challenges. Furthermore, hardware techniques such as dynamic voltage and frequency scaling are often limited by their granularity (very coarse power management) or by their scope (a very limited system view). More specifically, recent developments in multi-core processors recognize energy monitoring and tuning as one of the main challenges towards achieving higher performance, given the growing power and temperature constraints. To address these challenges, one needs both a suitable energy abstraction and the corresponding instrumentation, which are amongst the core topics of ongoing research and development work.
Since current methodologies and tools are limited by hardware capabilities and by their lack of information about the application code, a promising approach is to consider the characteristics of the processor and of the application-specific workload together. Indeed, it is pivotal for hardware to expose mechanisms for dynamically optimizing consumed power and thermal energy for various workloads and for reducing data motion, a major component of energy use. Therefore, our abstract model is based on application-specific parameters such as power consumption, execution time, and equilibrium temperature, as well as hardware-specific parameters such as the half-time for thermal rise or fall. Building upon this recent work, the ongoing and future research efforts involve the development of a novel tuning methodology and the evaluation of its advantages on real use cases. Experimental results demonstrate the efficient use of the model for analyzing and significantly improving the application-specific balance between power, temperature, and performance.
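To make the role of the half-time parameter concrete, the following is a minimal sketch of an exponential thermal-rise/fall curve parameterized by an equilibrium temperature and a half-time, in the style of Newton's law of cooling. It is an illustrative assumption only — the function name, parameters, and the specific exponential form are not taken from the talk's actual model.

```python
import math

def temperature(t, t_eq, t_0, t_half):
    """Illustrative thermal model (an assumption, not the talk's model):
    the chip temperature approaches the workload-specific equilibrium
    t_eq exponentially, with the hardware-specific half-time t_half
    governing how fast the remaining gap to equilibrium is halved.
    """
    return t_eq + (t_0 - t_eq) * math.pow(0.5, t / t_half)

# Thermal rise: start at 40 °C, equilibrium 80 °C, half-time 30 s.
print(temperature(0, 80.0, 40.0, 30.0))   # 40.0 — initial temperature
print(temperature(30, 80.0, 40.0, 30.0))  # 60.0 — half the gap closed
```

Under such a model, a tuning methodology could trade performance against temperature by choosing how long a workload may run before the projected temperature crosses a thermal constraint.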
Vladimir Getov is professor of distributed and high-performance computing and research group leader at the University of Westminster, London. His career spans both industrial research and academia. After completing his PhD, Getov was project manager of an IBM PC/XT-compatible computer (1984). In 1989, he joined the Concurrent Computations Group at the University of Southampton. Since 1995, he has been an academic staff member at the University of Westminster, where he was awarded the title Professor (2001) and served as research director and coordinator of research in Parallel and Distributed Computing. Getov is recognized by his peers for his commitment to service, leadership skills, and dedication to research and related professional activities. He has received several prestigious awards, including the IEEE CS Golden Core Award (2016), Honorary Professor (TU-Sofia, Bulgaria, 2012), an IBM Faculty Award (2010), the Bulgarian "Pythagoras" Science Award (2009), and the Outstanding Executive Committee Contribution Award (EU CoreGRID, 2008).
Vladimir Getov has an extensive track record of international collaboration and achievements, including founding contributions to the PARKBENCH Committee, the Java Grande Forum, and the Open Grid Forum. An active IEEE Computer Society (CS) volunteer since the mid-1990s, Getov is currently a member of the IEEE CS Board of Governors (2016–2018) and secretary of the IEEE CS Publications Board. He has been Computer’s area editor for high-performance computing since 2008 and has served as general and program chair of several IEEE conferences. He is also a Standing Committee member and co-chair of publications for IEEE COMPSAC, an Executive Committee member of the IEEE CS Technical Consortium on High-Performance Computing, and an active member of the IEEE International Roadmap for Devices and Systems. His main research interests include performance analysis and evaluation, energy-efficient computing, parallel and distributed computer architectures, cloud and services computing, component-oriented design, extreme-scale computer systems, autonomous computing, message-passing environments, and hybrid programming models and paradigms.