{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T04:07:40Z","timestamp":1750306060380,"version":"3.41.0"},"reference-count":42,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2017,5,26]],"date-time":"2017-05-26T00:00:00Z","timestamp":1495756800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"TCS Ph.D. fellowship"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2017,6,30]]},"abstract":"<jats:p>\n            General-Purpose Graphics Processing Unit (GPGPU) applications exploit the on-chip scratchpad memory available in Graphics Processing Units (GPUs) to improve performance. The amount of thread-level parallelism (TLP) present in a GPU is limited by the number of resident threads, which in turn depends on the availability of scratchpad memory in each streaming multiprocessor (SM). Since scratchpad memory is allocated at thread block granularity, part of the memory may remain unutilized. In this article, we propose architectural and compiler optimizations to improve scratchpad memory utilization. Our approach, called\n            <jats:italic>Scratchpad Sharing<\/jats:italic>\n            , addresses scratchpad under-utilization by launching additional thread blocks in each SM. These thread blocks use unutilized scratchpad memory and also share scratchpad memory with other resident blocks. To improve the performance of scratchpad sharing, we propose\n            <jats:italic>Owner Warp First (OWF)<\/jats:italic>\n            scheduling, which schedules warps from the additional thread blocks effectively. 
The performance of this approach, however, is limited by the availability of the part of scratchpad memory that is shared among thread blocks.\n          <\/jats:p>\n          <jats:p>\n            We propose compiler optimizations to improve the availability of shared scratchpad memory. We describe an allocation scheme that helps in allocating scratchpad variables such that shared scratchpad is accessed only for a short duration. We introduce a new hardware instruction,\n            <jats:italic>relssp<\/jats:italic>\n            , that, when executed, releases the shared scratchpad memory. Finally, we describe an analysis for the optimal placement of\n            <jats:italic>relssp<\/jats:italic>\n            instructions, such that shared scratchpad memory is released as early as possible, but only after its last use, along every execution path.\n          <\/jats:p>\n          <jats:p>\n            We implemented the hardware changes required for scratchpad sharing and the\n            <jats:italic>relssp<\/jats:italic>\n            instruction using the GPGPU-Sim simulator and implemented the compiler optimizations in the Ocelot framework. We evaluated the effectiveness of our approach on 19 kernels from 3 benchmark suites: CUDA-SDK, GPGPU-Sim, and Rodinia. 
The kernels that under-utilize scratchpad memory show an average improvement of 19% and a maximum improvement of 92.17% in terms of the number of instructions executed per cycle when compared to the baseline approach, without affecting the performance of the kernels that are not limited by scratchpad memory.\n          <\/jats:p>","DOI":"10.1145\/3075619","type":"journal-article","created":{"date-parts":[[2017,5,31]],"date-time":"2017-05-31T19:32:40Z","timestamp":1496259160000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Scratchpad Sharing in GPUs"],"prefix":"10.1145","volume":"14","author":[{"given":"Vishwesh","family":"Jatala","sequence":"first","affiliation":[{"name":"Indian Institute of Technology, Kanpur, Uttar Pradesh, India"}]},{"given":"Jayvant","family":"Anantpur","sequence":"additional","affiliation":[{"name":"Indian Institute of Science, Bangalore, Karnataka, India"}]},{"given":"Amey","family":"Karkare","sequence":"additional","affiliation":[{"name":"Indian Institute of Technology, Kanpur, Uttar Pradesh, India"}]}],"member":"320","published-online":{"date-parts":[[2017,5,26]]},"reference":[{"volume-title":"Proceedings of the Conference on Compiler Construction (CC\u201914)","author":"Anantpur Jayvant","key":"e_1_2_1_1_1","unstructured":"Jayvant Anantpur and R. Govindarajan. 2014. Taming control divergence in GPUs through control flow linearization. In Proceedings of the Conference on Compiler Construction (CC\u201914)."},{"volume-title":"Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software.","author":"Bakhoda A.","key":"e_1_2_1_2_1","unstructured":"A. Bakhoda, G. L. Yuan, W. W. L. Fung, H. Wong, and T. M. Aamodt. 2009. 
Analyzing CUDA workloads using a detailed GPU simulator. In Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.5555\/2337159.2337166"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/IISWC.2009.5306797"},{"key":"e_1_2_1_5_1","unstructured":"CUDA 2012. CUDA C Programming Guide. (2012). Retrieved from http:\/\/docs.nvidia.com\/cuda\/pdf\/CUDA_C_Programming_Guide.pdf."},{"key":"e_1_2_1_6_1","unstructured":"CUDA-SDK 2014. CUDA-SDK. (2014). Retrieved from http:\/\/docs.nvidia.com\/cuda\/cuda-samples."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1854273.1854318"},{"volume-title":"Proceedings of the Conference on High Performance Computer Architecture.","author":"Wilson W.","key":"e_1_2_1_8_1","unstructured":"Wilson W. L. Fung and Tor M. Aamodt. 2011. Thread block compaction for efficient SIMT control flow. In Proceedings of the Conference on High Performance Computer Architecture."},
{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2007.12"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2012.18"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPDS.2012.319"},{"key":"e_1_2_1_12_1","unstructured":"GPGPUSIM 2014. GPGPU-Sim Simulator. (2014). Retrieved from http:\/\/www.gpgpu-sim.org."},{"volume-title":"Proceedings of the International Meeting on High-Performance Computing for Computational Science.","author":"Gutierrez Eladio","key":"e_1_2_1_13_1","unstructured":"Eladio Gutierrez, Sergio Romero, Maria A. Trenas, and Emilio L. Zapata. 2008. Memory locality exploitation strategies for FFT on the CUDA architecture. In Proceedings of the International Meeting on High-Performance Computing for Computational Science."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/1964179.1964184"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.1977.231133"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2597652.2597685"},{"volume-title":"Proceedings of the Conference on High Performance Computing.","author":"Huo Xin","key":"e_1_2_1_17_1","unstructured":"Xin Huo, V. T. Ravi, Wenjing Ma, and G. Agrawal. 2010. Approaches for parallelizing reductions on modern GPUs. In Proceedings of the Conference on High Performance Computing."},
{"key":"e_1_2_1_18_1","volume-title":"The more we share, the more we have: Improving GPU performance through register sharing. CoRR abs\/1503.05694","author":"Jatala Vishwesh","year":"2015","unstructured":"Vishwesh Jatala, Jayvant Anantpur, and Amey Karkare. 2015. The more we share, the more we have: Improving GPU performance through register sharing. CoRR abs\/1503.05694 (2015)."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/2907294.2907298"},{"key":"e_1_2_1_20_1","volume-title":"Scratchpad sharing in GPUs. CoRR abs\/1607.03238","author":"Jatala Vishwesh","year":"2016","unstructured":"Vishwesh Jatala, Jayvant Anantpur, and Amey Karkare. 2016b. Scratchpad sharing in GPUs. CoRR abs\/1607.03238 (2016)."},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2451116.2451158"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/321921.321938"},{"volume-title":"Proceedings of the Conference on Parallel Architectures and Compilation Techniques.","author":"Kayiran O.","key":"e_1_2_1_23_1","unstructured":"O. Kayiran, A. Jog, M. T. Kandemir, and C. R. Das. 2013. Neither more nor less: Optimizing thread-level parallelism for GPGPUs. In Proceedings of the Conference on Parallel Architectures and Compilation Techniques."},
{"key":"e_1_2_1_24_1","volume-title":"Data Flow Analysis: Theory and Practice","author":"Khedker Uday","unstructured":"Uday Khedker, Amitabha Sanyal, and Bageshri Karkare. 2009. Data Flow Analysis: Theory and Practice (1st ed.). CRC Press, Inc., Boca Raton, FL.","edition":"1"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2014.6835937"},{"key":"e_1_2_1_26_1","volume-title":"Proceedings of the Conference on High Performance Computer Architecture.","author":"Lee Sangpil","year":"2016","unstructured":"Sangpil Lee, Won Woo Ro, Keunsoo Kim, Gunjae Koo, Myung Kuk Yoon, and Murali Annavaram. 2016. Warped-preexecution: A GPU pre-execution approach for improving latency hiding. In Proceedings of the Conference on High Performance Computer Architecture."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/2749469.2750418"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/2628071.2628107"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.5555\/2738600.2738604"},{"volume-title":"Proceedings of the Conference on High Performance Computer Architecture.","author":"Li Dong","key":"e_1_2_1_30_1","unstructured":"Dong Li, Minsoo Rhu, Daniel R. Johnson, Mike O\u2019Connor, Mattan Erez, Doug Burger, Donald S. Fussell, and Stephen W. Redder. 2015a. Priority-based cache allocation in throughput processors. In Proceedings of the Conference on High Performance Computer Architecture."},
{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPP.2011.88"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/1854273.1854348"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2155620.2155656"},{"key":"e_1_2_1_34_1","unstructured":"OpenCL 2009. Retrieved from https:\/\/www.khronos.org\/opencl\/. Accessed 2012."},{"key":"e_1_2_1_35_1","unstructured":"PTX 2014. Parallel Thread Execution. (2014). Retrieved from http:\/\/docs.nvidia.com\/cuda\/parallel-thread-execution\/."},{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/CGO.2013.6494996"},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2012.16"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2015.7056031"},{"key":"e_1_2_1_39_1","first-page":"238","article-title":"On demand register allocation and deallocation for a multithreaded processor. Retrieved from http:\/\/www.google.com\/patents\/US20110161616. (2011)","volume":"12","author":"Tarjan D.","year":"2011","unstructured":"D. Tarjan and K. Skadron. 2011. On demand register allocation and deallocation for a multithreaded processor. U.S. Patent App. 12\/649,238. Retrieved from http:\/\/www.google.com\/patents\/US20110161616.","journal-title":"U.S. Patent App."},
{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/HPCA.2014.6835939"},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1145\/2830772.2830813"},{"key":"e_1_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2370816.2370858"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3075619","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3075619","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T03:03:42Z","timestamp":1750215822000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3075619"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,5,26]]},"references-count":42,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2017,6,30]]}},"alternative-id":["10.1145\/3075619"],"URL":"https:\/\/doi.org\/10.1145\/3075619","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"type":"print","value":"1544-3566"},{"type":"electronic","value":"1544-3973"}],"subject":[],"published":{"date-parts":[[2017,5,26]]},"assertion":[{"value":"2016-07-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2017-03-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2017-05-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}