LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Lars Wrenger
  • Florian Rommel
  • Alexander Halbuer
  • Christian Dietrich
  • Daniel Lohmann

External Research Organisations

  • Hamburg University of Technology (TUHH)

Details

Original language: English
Title of host publication: Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023
Pages: 897-914
Number of pages: 18
ISBN (electronic): 9781939133359
Publication status: Published - 2023
Event: 2023 USENIX Annual Technical Conference, ATC 2023 - Boston, United States
Duration: 10 Jul 2023 – 12 Jul 2023

Abstract

Within the operating-system’s memory-management subsystem, the page-frame allocator is the most fundamental component. It administers the physical-memory frames, which are required to populate the page-table tree. Although the appearance of heterogeneous, nonvolatile, and huge memories has drastically changed the memory hierarchy, we still manage our physical memory with the seminal methods from the 1960s. With this paper, we argue that it is time to revisit the design of page-frame allocators. We demonstrate that the Linux frame allocator not only scales poorly on multi-core systems, but it also comes with a high memory overhead, suffers from huge-frame fragmentation, and uses scattered data structures that hinder its usage as a persistent-memory allocator. With LLFREE, we provide a new lock- and log-free allocator design that scales well, has a small memory footprint, and is readily applicable to nonvolatile memory. LLFREE uses cache-friendly data structures and exhibits antifragmentation behavior without inducing additional performance overheads. Compared to the Linux frame allocator, LLFREE reduces the allocation time for concurrent 4 KiB allocations by up to 88 percent and for 2 MiB allocations by up to 98 percent. For memory compaction, LLFREE decreases the number of required page movements by 64 percent.
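The abstract describes a lock- and log-free allocator built on cache-friendly data structures. As a rough, hypothetical illustration of that general technique class only (it is not LLFREE's actual design; the BitField type, its methods, and the 64-frame granularity are invented for this sketch), the following Rust snippet hands out page frames from a single atomic 64-bit bit field: each allocation claims a free bit with a compare-and-swap retry loop, so concurrent cores coordinate without locks or logs.

use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical sketch: one AtomicU64 tracks 64 page frames;
// bit i set to 1 means frame i is allocated.
struct BitField {
    bits: AtomicU64,
}

impl BitField {
    const fn new() -> Self {
        Self { bits: AtomicU64::new(0) }
    }

    // Lock-free allocation: pick the first free bit and claim it with
    // compare-and-swap; if another core raced us, reload and retry.
    fn alloc(&self) -> Option<usize> {
        let mut old = self.bits.load(Ordering::Relaxed);
        loop {
            if old == u64::MAX {
                return None; // all 64 frames are in use
            }
            let idx = (!old).trailing_zeros() as usize; // first free frame
            match self.bits.compare_exchange_weak(
                old,
                old | (1u64 << idx),
                Ordering::AcqRel,
                Ordering::Relaxed,
            ) {
                Ok(_) => return Some(idx),
                Err(current) => old = current, // lost the race, retry
            }
        }
    }

    // Lock-free free: clear the frame's bit with an atomic AND.
    fn free(&self, idx: usize) {
        self.bits.fetch_and(!(1u64 << idx), Ordering::AcqRel);
    }
}

fn main() {
    let field = BitField::new();
    let a = field.alloc().expect("frame available");
    let b = field.alloc().expect("frame available");
    assert_ne!(a, b);
    field.free(a);
    println!("allocated frames {a} and {b}, then freed {a}");
}

A full allocator would stack many such bit fields under higher-level counters to cover whole memory zones and to track huge-frame availability; the retry loop above is only meant to show why a lost race costs one reload instead of a lock acquisition.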

Cite this

LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation. / Wrenger, Lars; Rommel, Florian; Halbuer, Alexander et al.
Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023. 2023. p. 897-914.

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Wrenger, L, Rommel, F, Halbuer, A, Dietrich, C & Lohmann, D 2023, LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation. in Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023. pp. 897-914, 2023 USENIX Annual Technical Conference, ATC 2023, Boston, United States, 10 Jul 2023.
Wrenger, L., Rommel, F., Halbuer, A., Dietrich, C., & Lohmann, D. (2023). LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation. In Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023 (pp. 897-914).
Wrenger L, Rommel F, Halbuer A, Dietrich C, Lohmann D. LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation. In Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023. 2023. p. 897-914
Wrenger, Lars ; Rommel, Florian ; Halbuer, Alexander et al. / LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation. Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023. 2023. pp. 897-914
BibTeX
@inproceedings{38af39330ee146459c8cd2e31d9130e8,
title = "LLFREE: Scalable and Optionally-Persistent Page-Frame Allocation",
abstract = "Within the operating-system{\textquoteright}s memory-management subsystem, the page-frame allocator is the most fundamental component. It administers the physical-memory frames, which are required to populate the page-table tree. Although the appearance of heterogeneous, nonvolatile, and huge memories has drastically changed the memory hierarchy, we still manage our physical memory with the seminal methods from the 1960s. With this paper, we argue that it is time to revisit the design of page-frame allocators. We demonstrate that the Linux frame allocator not only scales poorly on multi-core systems, but it also comes with a high memory overhead, suffers from huge-frame fragmentation, and uses scattered data structures that hinder its usage as a persistent-memory allocator. With LLFREE, we provide a new lock- and log-free allocator design that scales well, has a small memory footprint, and is readily applicable to nonvolatile memory. LLFREE uses cache-friendly data structures and exhibits antifragmentation behavior without inducing additional performance overheads. Compared to the Linux frame allocator, LLFREE reduces the allocation time for concurrent 4 KiB allocations by up to 88 percent and for 2 MiB allocations by up to 98 percent. For memory compaction, LLFREE decreases the number of required page movements by 64 percent.",
author = "Lars Wrenger and Florian Rommel and Alexander Halbuer and Christian Dietrich and Daniel Lohmann",
note = "Funding Information: This work was funded by the Deutsche Forschungsgemein-schaft (DFG, German Research Foundation) – 468988364, 501887536. ; 2023 USENIX Annual Technical Conference, ATC 2023 ; Conference date: 10-07-2023 Through 12-07-2023",
year = "2023",
language = "English",
pages = "897--914",
booktitle = "Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023",

}

RIS

TY - GEN

T1 - LLFREE

T2 - 2023 USENIX Annual Technical Conference, ATC 2023

AU - Wrenger, Lars

AU - Rommel, Florian

AU - Halbuer, Alexander

AU - Dietrich, Christian

AU - Lohmann, Daniel

N1 - Funding Information: This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 468988364, 501887536.

PY - 2023

Y1 - 2023

N2 - Within the operating-system’s memory-management subsystem, the page-frame allocator is the most fundamental component. It administers the physical-memory frames, which are required to populate the page-table tree. Although the appearance of heterogeneous, nonvolatile, and huge memories has drastically changed the memory hierarchy, we still manage our physical memory with the seminal methods from the 1960s. With this paper, we argue that it is time to revisit the design of page-frame allocators. We demonstrate that the Linux frame allocator not only scales poorly on multi-core systems, but it also comes with a high memory overhead, suffers from huge-frame fragmentation, and uses scattered data structures that hinder its usage as a persistent-memory allocator. With LLFREE, we provide a new lock- and log-free allocator design that scales well, has a small memory footprint, and is readily applicable to nonvolatile memory. LLFREE uses cache-friendly data structures and exhibits antifragmentation behavior without inducing additional performance overheads. Compared to the Linux frame allocator, LLFREE reduces the allocation time for concurrent 4 KiB allocations by up to 88 percent and for 2 MiB allocations by up to 98 percent. For memory compaction, LLFREE decreases the number of required page movements by 64 percent.

AB - Within the operating-system’s memory-management subsystem, the page-frame allocator is the most fundamental component. It administers the physical-memory frames, which are required to populate the page-table tree. Although the appearance of heterogeneous, nonvolatile, and huge memories has drastically changed the memory hierarchy, we still manage our physical memory with the seminal methods from the 1960s. With this paper, we argue that it is time to revisit the design of page-frame allocators. We demonstrate that the Linux frame allocator not only scales poorly on multi-core systems, but it also comes with a high memory overhead, suffers from huge-frame fragmentation, and uses scattered data structures that hinder its usage as a persistent-memory allocator. With LLFREE, we provide a new lock- and log-free allocator design that scales well, has a small memory footprint, and is readily applicable to nonvolatile memory. LLFREE uses cache-friendly data structures and exhibits antifragmentation behavior without inducing additional performance overheads. Compared to the Linux frame allocator, LLFREE reduces the allocation time for concurrent 4 KiB allocations by up to 88 percent and for 2 MiB allocations by up to 98 percent. For memory compaction, LLFREE decreases the number of required page movements by 64 percent.

UR - http://www.scopus.com/inward/record.url?scp=85176927514&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85176927514

SP - 897

EP - 914

BT - Proceedings of the 2023 USENIX Annual Technical Conference, ATC 2023

Y2 - 10 July 2023 through 12 July 2023

ER -