Exploiting interference-aware GPU container concurrency learning from resource usage of application execution
- Authors
- Kim, Sejin; Kim, Yoonhee
- Issue Date
- Sep-2020
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Keywords
- Container; GPU Virtualization; Interference; Interference-aware Scheduling; Machine Learning; Profiling; Resource Metrics
- Citation
- APNOMS 2020 - 2020 21st Asia-Pacific Network Operations and Management Symposium: Towards Service and Networking Intelligence for Humanity, pp 173 - 178
- Pages
- 6
- Journal Title
- APNOMS 2020 - 2020 21st Asia-Pacific Network Operations and Management Symposium: Towards Service and Networking Intelligence for Humanity
- Start Page
- 173
- End Page
- 178
- URI
- https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/1133
- DOI
- 10.23919/APNOMS50412.2020.9236964
- ISSN
- 2576-8565
- Abstract
- The advent of GPGPU (General-Purpose Graphics Processing Unit) containers broadens opportunities for acceleration and ease of use in clouds. However, there is still a lack of research on efficiently utilizing GPU resources and managing multiple applications at the same time. Co-executing applications without understanding their execution characteristics may result in low performance caused by interference. To solve this problem, this paper defines resource metrics that cause performance degradation when resources are shared. We calculate the degree of interference during concurrent execution of multiple applications using an ML (Machine Learning) method with these metrics. The experiments show that executing interference-aware groups improves overall execution time by 7% compared to non-interference-aware groups. For a workload consisting of several applications, the overall performance was improved by 18% and 25% compared to SJF (Shortest Job First) and random scheduling, respectively. © 2020 KICS.
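The interference-aware grouping described in the abstract can be illustrated with a minimal sketch. The paper trains an ML model on profiled resource metrics; here a hand-written linear contention score stands in for that model, and all metric names, profile values, and function names are illustrative assumptions, not the authors' actual implementation:

```python
from itertools import combinations

# Hypothetical profiled per-application resource usage, as fractions of
# GPU capacity. Metric names (sm_util, mem_bw, pcie) are assumptions,
# not the paper's exact resource metrics.
profiles = {
    "matmul": {"sm_util": 0.9, "mem_bw": 0.6, "pcie": 0.1},
    "stream": {"sm_util": 0.2, "mem_bw": 0.9, "pcie": 0.3},
    "encode": {"sm_util": 0.3, "mem_bw": 0.2, "pcie": 0.8},
    "train":  {"sm_util": 0.8, "mem_bw": 0.7, "pcie": 0.2},
}

def interference(a, b):
    """Predicted contention when a and b share a GPU: per metric,
    count only the demand that exceeds capacity (sum > 1.0)."""
    return sum(max(0.0, profiles[a][m] + profiles[b][m] - 1.0)
               for m in profiles[a])

def pair_apps(apps):
    """Greedily co-locate the pair with the lowest predicted
    interference, repeating until fewer than two apps remain."""
    apps = list(apps)
    pairs = []
    while len(apps) >= 2:
        a, b = min(combinations(apps, 2),
                   key=lambda p: interference(*p))
        pairs.append((a, b))
        apps.remove(a)
        apps.remove(b)
    return pairs
```

With the illustrative profiles above, `pair_apps(["matmul", "stream", "encode", "train"])` groups the compute-heavy `train` with the PCIe-heavy `encode`, avoiding the high-contention `matmul`/`train` pairing, which mirrors the interference-aware grouping idea at a toy scale.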
- Appears in Collections
- College of Engineering > Division of Software > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.