Breaking Theoretical Limits: The Gap Between Virtual NICs and Physical Network Cards
This research demonstrates how discrepancies between virtual network interface card (vNIC) implementations and physical network card specifications can be exploited to trigger memory corruption vulnerabilities. The study focuses on the Hyper-V network stack, specifically how malformed packets bypass length constraints enforced by the physical and data link layers. The researchers successfully triggered integer overflows and out-of-bounds read vulnerabilities, resulting in kernel-level crashes and potential information disclosure. The talk provides a methodology for using code review and fuzzing to identify similar vulnerabilities in other virtualization platforms.
Virtualization is the bedrock of modern infrastructure, but we often treat the virtual network interface card (vNIC) as a black box that behaves exactly like its physical counterpart. That assumption is dangerous. When a hypervisor simulates a physical network card, it must implement complex features like Large Send Offload (LSO) and UDP Segmentation Offload (USO) in software. If the software implementation doesn't perfectly mirror the hardware's constraints, you get a gap. That gap is where the bugs live.
The Mechanics of the Mismatch
The core issue identified in this research is that vNICs often lack the rigid, hardware-level validation that physical NICs provide. In a physical environment, the network card firmware or hardware logic naturally enforces packet length limits. In a virtualized environment, the hypervisor's vSwitch module handles these packets. If the vSwitch assumes the guest operating system has already performed necessary validation, or if it fails to account for the specific way the host processes these packets, an attacker can inject malformed traffic that triggers memory corruption.
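This trust gap can be sketched in a few lines. The following is an illustrative model, not the actual Hyper-V code: a handler that trusts a guest-supplied length field versus one that enforces the constraint a physical NIC would impose. All function names here are hypothetical.

```python
# Illustrative sketch of the validation gap described above.

def handle_packet_trusting(buf: bytes, claimed_len: int) -> bytes:
    # Unsafe: assumes the guest already validated claimed_len.
    # In Python this merely truncates; in the C code of a vSwitch,
    # copying claimed_len bytes would read past the buffer.
    return buf[:claimed_len]

def handle_packet_validating(buf: bytes, claimed_len: int) -> bytes:
    # Safe: enforce the same length constraint a physical NIC would.
    if claimed_len > len(buf):
        raise ValueError("claimed length exceeds actual frame size")
    return buf[:claimed_len]
```

The entire bug class reduces to that missing `if`: the hypervisor inherits a validation duty that hardware used to perform for free.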
The researchers focused on the Hyper-V network stack, specifically how it handles incoming packets from the guest. By using Ghidra to reverse engineer the vSwitch module, they identified that the code path for processing certain packet types—specifically ICMPv6—did not properly validate length fields before performing memory operations.
Exploiting the Length Constraint Failure
The research highlights two specific vulnerabilities that demonstrate the danger of this implementation gap. The first is CVE-2021-24074, an integer overflow in the Windows TCP/IP stack. By sending a single, malformed ICMPv6 packet with a length exceeding 65535, the attacker can force the system to miscalculate buffer sizes. Because the length field is treated as a 16-bit unsigned integer in some contexts but handled differently in others, the overflow leads to a heap-based memory corruption.
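The 16-bit truncation at the heart of this pattern is easy to demonstrate. This is a minimal sketch of the arithmetic, not the actual Windows TCP/IP code: the packet's real length exceeds 65535, but one code path stores it in a 16-bit field before sizing a buffer.

```python
# Sketch of a 16-bit length truncation leading to an undersized buffer.

def alloc_size_16bit(real_len: int) -> int:
    # Emulates a C `uint16_t` assignment: the high bits are silently dropped.
    return real_len & 0xFFFF

real_len = 65535 + 8           # payload just over the 16-bit maximum
buf_size = alloc_size_16bit(real_len)
print(buf_size)                # 7 -- far smaller than the data it must hold
# A later copy of `real_len` bytes into a `buf_size` buffer corrupts the heap.
```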
The second vulnerability, CVE-2022-30223, is an out-of-bounds read in Windows Hyper-V. This occurs when a small, 15-byte ARP packet is processed. The system expects a minimum length of 28 bytes for an ARP packet, but the vSwitch logic fails to enforce this. When the code attempts to read the sender's hardware address or other fields, it reads past the end of the allocated buffer, potentially leaking sensitive kernel memory.
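The out-of-bounds read follows directly from the ARP layout: an IPv4-over-Ethernet ARP body is a fixed 28 bytes (8-byte header, then 6-byte sender MAC, 4-byte sender IP, 6-byte target MAC, 4-byte target IP). This hypothetical parser shows the check that the vulnerable logic skipped; it is a sketch, not the Hyper-V code.

```python
import struct

ARP_MIN_LEN = 28  # fixed body size for IPv4-over-Ethernet ARP

def parse_arp(pkt: bytes):
    if len(pkt) < ARP_MIN_LEN:          # the check the vSwitch logic omitted
        raise ValueError("short ARP packet")
    htype, ptype, hlen, plen, oper = struct.unpack_from("!HHBBH", pkt, 0)
    sender_hw = pkt[8:8 + hlen]         # in C, indexing a 15-byte packet
    sender_ip = pkt[8 + hlen:8 + hlen + plen]  # here reads past the buffer
    return oper, sender_hw, sender_ip
```

Without the length check, a 15-byte packet still satisfies the 8-byte header read, and the subsequent field reads walk off the end of the allocation, leaking whatever adjacent kernel memory happens to follow it.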
These aren't just theoretical crashes. The researchers demonstrated a Blue Screen of Death (BSOD) on a Windows Server 2016 host by simply sending these malformed packets from a guest CentOS VM. For a pentester, this is a goldmine. If you have access to a guest VM, you aren't just limited to the guest's privilege level; you have a direct line to the hypervisor's memory management logic.
Practical Implications for Pentesters
During a red team engagement or a cloud-based penetration test, the ability to escape a guest VM is the ultimate goal. While these specific vulnerabilities are patched, the methodology remains highly relevant. When you are testing a virtualized environment, look for the "offload" features. If you can identify a vNIC that supports LSO or USO, you have found a complex software implementation that is likely to have edge cases.
Use tools like KAFO to assist in your fuzzing efforts. Focus your attention on the interface between the guest and the host, specifically the virtual bus (vmbus in Hyper-V). Any time the guest sends a command or a packet that the host must interpret, you are looking at a potential entry point. If the host code is written in C or C++, and it handles variable-length data structures without strict bounds checking, you have a high probability of finding an integer overflow or an out-of-bounds read.
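A minimal mutation strategy for this kind of target can be sketched as follows. This is an illustrative helper, not part of any tool named above: take a valid frame and mutate only its length-related fields toward boundary values, since both bugs above live at exactly those boundaries (15 vs. 28 bytes, 65535 vs. 65536).

```python
import struct

# Boundary values chosen around the limits discussed above (illustrative).
BOUNDARY_LENGTHS = [0, 1, 14, 15, 27, 28, 0xFFFF, 0xFFFF + 1, 0x10000 + 7]

def mutate_length_field(frame: bytes, offset: int) -> list:
    # Overwrite the 16-bit length field at `offset` with each boundary value,
    # yielding one test case per value.
    cases = []
    for v in BOUNDARY_LENGTHS:
        mutated = bytearray(frame)
        struct.pack_into("!H", mutated, offset, v & 0xFFFF)
        cases.append(bytes(mutated))
    return cases
```

Feeding each mutated frame to the guest-to-host interface and watching for a host crash is the crudest possible harness, but it is exactly the class of input that surfaced both CVEs discussed here.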
Defensive Hardening
Defending against these attacks is difficult because the vulnerability exists in the hypervisor's core logic, not in the guest's configuration. However, the most effective mitigation is to minimize the attack surface by disabling unnecessary offload features on vNICs if they aren't required for performance. Furthermore, ensure that the host operating system is running the latest security updates, as these vulnerabilities are typically addressed through kernel-level patches.
Blue teams should also monitor for anomalous traffic patterns originating from guest VMs. A guest VM sending malformed ICMPv6 packets or undersized ARP requests is a clear indicator of an attempt to probe the hypervisor's network stack. Implementing strict ingress filtering at the virtual switch level can prevent these malformed packets from ever reaching the vulnerable processing routines.
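A filter of that kind reduces to a per-EtherType minimum-size check. The sketch below uses illustrative thresholds (the 28-byte ARP body and 40-byte IPv6 fixed header); real vSwitch filtering hooks are platform-specific and not shown here.

```python
ETH_HDR = 14                              # destination MAC + source MAC + EtherType
MIN_BODY = {0x0806: 28,                   # ARP: fixed IPv4-over-Ethernet body
            0x86DD: 40}                   # IPv6: fixed header before any ICMPv6

def should_drop(frame: bytes) -> bool:
    # Drop any frame too short to hold its protocol's minimum header.
    if len(frame) < ETH_HDR:
        return True
    ethertype = int.from_bytes(frame[12:14], "big")
    min_body = MIN_BODY.get(ethertype, 0)
    return len(frame) - ETH_HDR < min_body
```

A 15-byte ARP body like the one in CVE-2022-30223 fails this check immediately, so the malformed frame never reaches the vulnerable parsing routine.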
The gap between virtual and physical is not going away. As we move more workloads to the cloud, the complexity of these software-defined networks will only increase. For researchers, this means the next big exploit is likely hiding in the code that tries to make a virtual device look and feel like a real one. Keep digging into those vSwitch drivers. The next time you see a BSOD during a test, don't just reboot and move on—look at what you were sending right before the crash. You might have just found your way out of the sandbox.