Game of Cross Cache: Let's win it in a more effective way!
This talk demonstrates an advanced, data-only exploitation technique for the Linux kernel using cross-cache attacks to achieve privilege escalation. The researcher presents a novel 'race-style slab move' primitive to overcome constraints in the SLUB allocator, enabling reliable exploitation of use-after-free (UAF) vulnerabilities. The technique is applied to a specific NPU driver vulnerability (CVE-2023-21400) on Samsung devices, successfully bypassing SELinux and kernel mitigations. The presentation provides a detailed methodology for heap shaping and manipulating the page allocator to facilitate stable cross-cache exploitation.
Mastering Cross-Cache Attacks: A New Primitive for Linux Kernel Exploitation
TLDR: Researchers have developed a novel "race-style slab move" primitive that significantly improves the reliability of cross-cache attacks in the Linux kernel. By manipulating the SLUB allocator and page allocator, this technique enables stable exploitation of use-after-free vulnerabilities, even under strict memory constraints. This research provides a repeatable methodology for heap shaping that security researchers can apply to bypass modern kernel protections like KASLR and SELinux.
Kernel exploitation is often a game of inches. You find a use-after-free (UAF) vulnerability, but the object you need to overwrite is in a different cache, or the allocator is too noisy to allow for a reliable heap spray. Most researchers treat cross-cache attacks as a "best effort" endeavor, hoping the allocator eventually places their controlled data where they need it. This approach is fragile and rarely survives in production environments where memory pressure and kernel mitigations are active.
The recent research presented at Black Hat Asia 2024 changes this dynamic. By focusing on the mechanics of the SLUB allocator and the page allocator, the team demonstrated how to turn an unstable, probabilistic attack into a deterministic one. They specifically targeted CVE-2023-21400, an NPU driver vulnerability affecting Samsung devices, to prove that even with modern mitigations, the kernel's memory management remains a viable target for those who understand its internal state.
The Mechanics of the Race-Style Slab Move
At the heart of this technique is the "race-style slab move" primitive. Traditional cross-cache attacks rely on triggering a UAF, freeing a victim object, and then spraying the heap to reclaim that memory with a controlled object. The problem is that the SLUB allocator’s per-CPU partial lists are often unpredictable. If your spray doesn't land exactly when and where the allocator expects, the exploit fails.
The researchers identified that they could force the allocator into a specific state by manipulating the per-CPU partial lists. By pinning tasks to specific CPUs and forcing the allocation of objects until a slab is full, they can trigger a "flush" of the partial list. This is where the race condition comes in. By running multiple tasks that simultaneously allocate and release objects, they can force the kernel to move a slab from one CPU's partial list to another.
This gives the attacker a window of time to ensure that the "victim" slab is in a known, empty state. Once the slab is empty, it is returned to the page allocator. From there, the attacker can use a secondary heap spray—in this case, using user-space page tables or pipe buffers—to reclaim that specific memory region.
Deterministic Heap Shaping
The most impressive part of this research is how they handle the page allocator. Even if you can free a slab, you still need to ensure that your controlled data lands in that exact physical memory. The team used a side-channel approach to monitor the state of the page allocator by reading from /proc/meminfo and /proc/pagetypeinfo.
By monitoring these files, they can detect when the kernel is under memory pressure and when specific migration types are being used. This allows them to "shape" the heap by allocating and releasing large numbers of order-0 pages until the allocator is forced to use the specific memory region they want to target.
For example, when targeting a Samsung device, they used the following logic to ensure their controlled data would land in the right place:
// Simplified heap shaping logic (sketch; trigger_memory_pressure() is a
// placeholder for the exploit's actual pressure primitive)
int pipefd[2];
for (int i = 0; i < TARGET_ALLOCATIONS; i++) {
    // Each new pipe consumes kernel order-0 pages, filling the free area
    if (pipe(pipefd) < 0)
        break;
    // Push the page allocator toward the targeted state
    trigger_memory_pressure();
}
This isn't just a theoretical exercise. By combining this heap shaping with the race-style slab move, they achieved a 65% success rate in exploiting the NPU driver vulnerability from an untrusted application. This is a massive jump from the sub-10% success rates typically associated with these types of kernel exploits.
Real-World Implications for Pentesters
If you are performing a mobile security assessment or a deep-dive kernel audit, you should stop viewing the SLUB allocator as a black box. The ability to monitor page allocator states through procfs is a powerful tool that is often overlooked. When you encounter a UAF in a driver, don't just spray and pray. Look at the cache the object belongs to, check its slab size, and determine if you can force a slab move to make your exploit deterministic.
The impact of this technique is significant. It allows for data-only attacks that can bypass SELinux policies and other kernel-level protections. Because the exploit doesn't rely on executing shellcode in kernel space, it is much harder for traditional integrity-checking mechanisms to detect.
Defensive Considerations
Defenders should focus on the root cause: the memory management vulnerabilities that allow these primitives to exist. While the researchers mentioned SLAB_VIRTUAL as a potential mitigation, the reality is that hardening the allocator is a cat-and-mouse game. The most effective defense remains rigorous code review of driver-level memory management and the implementation of Kernel Address Sanitizer (KASAN) during the development and testing phases to catch UAFs before they reach production.
The game of cross-cache is far from over. As long as the kernel relies on complex, performance-oriented allocators, there will be ways to manipulate them. The next time you are looking at a kernel crash, ask yourself if you are looking at a random failure or a predictable state you can control.