
What is schedGPU?

schedGPU [1, 2] is a framework for intra-node GPU scheduling that enables multiple applications to execute concurrently on the same GPU. With schedGPU, applications request GPU memory at run time, and the framework safely co-schedules them by taking their memory requirements into account, thereby avoiding allocation failures caused by insufficient free memory on the GPU. schedGPU is designed and developed for CUDA-based GPU applications.
[1] C. Reaño, F. Silla, D. S. Nikolopoulos and B. Varghese, "Intra-Node Memory Safe GPU Co-Scheduling," IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 5, pp. 1089-1102, May 2018. https://doi.org/10.1109/TPDS.2017.2784428
[2] C. Reaño, F. Silla and M. J. Leslie, "schedGPU: Fine-grain dynamic and adaptative scheduling for GPUs," 2016 International Conference on High Performance Computing & Simulation (HPCS), Innsbruck, 2016, pp. 993-997. https://doi.org/10.1109/HPCSim.2016.7568444