GPUfs: The case for operating system services on GPUs

Mark Silberstein, Bryan Ford, Emmett Witchel

Research output: Contribution to journal › Article › peer-review

Abstract

The GPUfs file system layer for GPU software makes core operating system abstractions available to GPU code. By letting developers access files directly from GPU programs, GPUfs demonstrates the productivity and performance benefits of allowing GPUs to guide the flow of data in a system. GPUfs distributes its buffer cache across all CPU and GPU memories to enable idioms such as process pipelines that produce and consume files across multiple processors. GPUfs guarantees that local file changes propagate to other processors when the file is first closed on the modifying processor and subsequently opened on the other processors. Organizing GPUfs without daemon threads has important design consequences, including the need to optimize the page replacement algorithm for speed. File descriptors in GPUfs are global to a GPU kernel, just as they are global to a CPU process: each GPU open returns a distinct file descriptor, available to all GPU threads, that must be released with close. Although GPUfs must invoke the replacement algorithm synchronously, writing modified pages from GPU memory back to the CPU can be done asynchronously. GPUfs balances programmer convenience with implementation efficiency by layering its API.
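To make the kernel-global file descriptors and close-to-open propagation described above concrete, the sketch below shows how a CUDA kernel might read a file through a GPUfs-style API. It is a minimal illustration, not the authors' code: the call names gopen, gread, and gclose are modeled on the paper's API naming, but the exact signatures, the flag name, and the header are assumptions made here for the example.

// Minimal sketch (assumed signatures): each thread block reads one chunk of a
// file from the GPU-resident buffer cache through a GPUfs-style interface.
#include "gpufs_device.cu.h"   // hypothetical device-side GPUfs header

__global__ void scan_file(const char* filename, unsigned char* out, size_t chunk)
{
    __shared__ int fd;

    // One thread per block opens the file; the descriptor it receives is
    // valid for every thread in the kernel, so the whole block shares it.
    if (threadIdx.x == 0)
        fd = gopen(filename, O_GRDONLY);   // flag name assumed, POSIX-like
    __syncthreads();

    // Each block pulls its own region of the file; GPUfs faults missing
    // pages into GPU memory from the CPU on demand.
    size_t offset = (size_t)blockIdx.x * chunk;
    gread(fd, offset, chunk, out + offset);

    // Closing the file is what makes any modifications visible to other
    // processors that subsequently open it (close-to-open semantics).
    __syncthreads();
    if (threadIdx.x == 0)
        gclose(fd);
}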

Original language: English
Pages (from-to): 68-79
Number of pages: 12
Journal: Communications of the ACM
Volume: 57
Issue number: 12
DOIs
State: Published - 1 Dec 2014

All Science Journal Classification (ASJC) codes

  • General Computer Science
