Data Provenance for Experiment Management of Scientific Applications on GPU
Abstract

Graphics Processing Units (GPUs) are widely used for general-purpose applications in order to exploit highly parallel computation. Because memory virtualization in GPU nodes does not efficiently accommodate the diverse memory usage patterns of these applications, successful execution depends on exclusive and limited use of physical memory in GPU environments. It is therefore important to predict changes in an application's GPU memory usage pattern during runtime execution. Data provenance, extracted from application characteristics, GPU runtime environments, inputs, and execution patterns obtained through runtime monitoring, is defined to support application management: setting runtime configurations, predicting experimental results, and sharing resources with co-located applications. In this paper, we define the data provenance of an application on GPUs and manage the data by profiling the execution of CUDA scientific applications. Data provenance management helps predict the execution patterns of similar experiments and plan efficient resource configurations. © 2019 IEICE.
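The abstract describes provenance records built from application characteristics, the GPU runtime environment, inputs, and execution patterns gathered by runtime monitoring, then used to plan resource configurations. A minimal sketch of such a record is shown below; the field names, structure, and the `fits_on` check are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class GpuProvenanceRecord:
    """Illustrative provenance record for one GPU application run.

    Fields mirror the categories named in the abstract: application
    characteristics, runtime environment, input description, and
    memory-usage samples collected by runtime monitoring.
    """
    app_name: str                   # application characteristic
    gpu_model: str                  # runtime environment
    input_size_mb: float            # input description
    mem_samples_mb: list = field(default_factory=list)  # monitored usage

    def record_sample(self, used_mb: float) -> None:
        # Append one memory-usage sample from runtime monitoring.
        self.mem_samples_mb.append(used_mb)

    def peak_mem_mb(self) -> float:
        return max(self.mem_samples_mb) if self.mem_samples_mb else 0.0

    def mean_mem_mb(self) -> float:
        return mean(self.mem_samples_mb) if self.mem_samples_mb else 0.0

    def fits_on(self, device_mem_mb: float) -> bool:
        # Hypothetical configuration check: would this run's observed
        # peak usage fit within a candidate GPU's physical memory?
        return self.peak_mem_mb() <= device_mem_mb

# Usage: log samples from a hypothetical run, then query the record
# when planning a similar experiment.
rec = GpuProvenanceRecord("matmul", "Tesla V100", input_size_mb=512.0)
for used in (1200.0, 2400.0, 1800.0):
    rec.record_sample(used)
print(rec.peak_mem_mb())    # 2400.0
print(rec.fits_on(16000.0)) # True
```

Keeping such records per run is one plausible way to compare a new experiment's configuration against the observed behavior of earlier, similar runs, which is the kind of prediction the abstract attributes to provenance management.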

Keywords

data provenance; GPU; scientific workflow; Computer graphics; Forecasting; Graphics processing unit; Program processors; Application management; Data provenance; Experiment management; Memory virtualization; Resource configurations; Run-time configuration; Scientific applications; Scientific workflows; Information management
Title
Data Provenance for Experiment Management of Scientific Applications on GPU
Authors
Kim, Sejin; Oh, Jisun; Kim, Yoonhee
DOI
10.23919/APNOMS.2019.8892997
Publication Date
2019-09
Type
Conference Paper
Published In
2019 20th Asia-Pacific Network Operations and Management Symposium: Management in a Cyber-Physical World, APNOMS 2019