Analyzing Data Locality on GPU Caches using Static Profiling of Workloads

Abstract

The diversity of workloads drives studies on using GPUs more effectively despite their limited memory. In particular, understanding and exploiting the data locality of workloads is essential for efficient use of GPU memory and caches, which are smaller than a CPU's. It is also important to understand the GPU memory hierarchy in order to use it efficiently in a multi-threaded environment. Previous approaches to analyzing data locality on GPUs have focused on the global memory and L2 cache levels, with profiling at the thread-block level; data locality at the warp level has received little attention. In particular, the concept of coalescing has been defined, but a method for measuring the degree of coalescing has not been discussed. In this paper, we analyze data locality at the L1 cache level, the smallest but fastest level of the cache hierarchy, to assess its impact. To achieve this, we profile data locality at the warp level, the smallest unit of GPU thread grouping. Beyond static profiling of data locality, we define the degree of coalescing and estimate refined locality from profiles of L1 cache data access patterns. As a proof of concept, estimates produced with the proposed method are evaluated against the measured performance of diverse real-world GPU benchmarks such as Rodinia and Polybench. In the experiments, the locality metrics combined with coalescing showed a meaningful correlation with cache utilization for performance enhancement.
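The abstract introduces a "degree of coalescing" for warp-level memory accesses but does not state its formula here. As an illustration only (the function name, the 128-byte line size, and the ratio-based definition are assumptions of this sketch, not the paper's metric), one simple way to quantify coalescing is to compare the minimum number of cache-line transactions a warp's accesses could require with the number of lines they actually touch:

```python
# Illustrative sketch only: not the paper's metric. Model: a warp issues one
# memory access per thread; accesses falling in the same cache line are served
# by one transaction. The degree of coalescing is modeled as the ratio of the
# minimum possible number of transactions to the number actually needed.

def degree_of_coalescing(addresses, line_size=128, access_size=4):
    """Estimate how well one warp's accesses coalesce into cache lines.

    addresses:   byte addresses accessed by the threads of one warp
    line_size:   transaction size in bytes (128 B is an assumption here)
    access_size: bytes accessed per thread
    """
    lines_touched = {addr // line_size for addr in addresses}
    # Fewest lines that could possibly hold all the requested bytes.
    ideal = max(1, (len(addresses) * access_size) // line_size)
    return ideal / len(lines_touched)

# Fully coalesced: 32 threads read consecutive 4-byte words -> one 128 B line.
print(degree_of_coalescing([4 * t for t in range(32)]))    # 1.0
# Fully strided: each thread touches its own line -> 32 transactions.
print(degree_of_coalescing([128 * t for t in range(32)]))  # 0.03125
```

A value of 1.0 means the warp's traffic is as dense as possible, while lower values indicate wasted cache-line bandwidth, which is the kind of access-pattern information a static profile of PTX loads and stores could expose.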

Keywords

Codes; Correlation; Data locality; Estimation; GPGPU workload analysis; GPU cache; GPU profiling; Graphics processing units; Instruction sets; Memory management; Message systems; PTX code
Title
Analyzing Data Locality on GPU Caches using Static Profiling of Workloads
Authors
Kim, Jieun; Eom, Hyeonsang; Kim, Yoonhee
DOI
10.1109/ACCESS.2023.3307315
Publication Date
2023-08
Type
Article
Journal
IEEE Access
Volume
11
Pages
95939-95947