Commit messages (newest first):
- Somehow one of the previous patches made dummy_null_handler() behave
  like uexit_irq_handler(). Restore the original handler behavior.
- Use UEXIT_END to indicate normal guest termination, and
  UEXIT_INVALID_MAIN to indicate a malformed guest program.
- Correct the GDT setup for the data and TSS segments in L1. Previously
  the data segment incorrectly used the TSS base address, and the TSS
  base was never properly set. The data segment base is now 0, as
  required for a flat 64-bit model, and the TSS descriptor in the GDT
  now points to X86_SYZOS_ADDR_VAR_TSS using the full 64-bit address,
  with the attributes updated to mark the TSS as busy. Additionally,
  the TSS region is now explicitly copied from L1 to L2 to ensure the
  L2 environment has a valid TSS.
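The descriptor packing involved can be sketched in C. The layout follows the standard x86-64 16-byte system-segment descriptor format; the struct and function names here are illustrative, not taken from the tree:

```c
// Packing a 64-bit TSS descriptor: the base address is split across
// four fields, including a 32-bit upper half unique to system segments.
#include <stdint.h>

struct tss_descriptor {
	uint16_t limit_lo;      // limit bits 15:0
	uint16_t base_lo;       // base bits 15:0
	uint8_t base_mid;       // base bits 23:16
	uint8_t access;         // present + type 0xB (busy 64-bit TSS)
	uint8_t limit_hi_flags; // limit bits 19:16, flags in high nibble
	uint8_t base_hi;        // base bits 31:24
	uint32_t base_upper;    // base bits 63:32
	uint32_t reserved;
};

static struct tss_descriptor make_tss_descriptor(uint64_t base, uint32_t limit)
{
	struct tss_descriptor d = {0};
	d.limit_lo = limit & 0xffff;
	d.base_lo = base & 0xffff;
	d.base_mid = (base >> 16) & 0xff;
	d.access = 0x8b; // P=1, type 0xB: busy 64-bit TSS
	d.limit_hi_flags = (limit >> 16) & 0xf;
	d.base_hi = (base >> 24) & 0xff;
	d.base_upper = base >> 32;
	return d;
}

// Reassemble the base from the descriptor fields (for verification).
static uint64_t descriptor_base(const struct tss_descriptor* d)
{
	return (uint64_t)d->base_lo | ((uint64_t)d->base_mid << 16) |
	       ((uint64_t)d->base_hi << 24) | ((uint64_t)d->base_upper << 32);
}
```

Getting the split of the base right is exactly the kind of bug the commit describes: using only the low 32 bits silently truncates the TSS address.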
- Introduce the `SYZOS_API_NESTED_LOAD_SYZOS` command to enable running
  full SYZOS programs within a nested L2 guest, enhancing fuzzing of
  nested virtualization. Key changes:
  - Nested SYZOS execution: the new command loads a SYZOS program into
    an L2 VM and sets up its execution environment.
  - ABI refinement: the program size is now passed via the shared
    `syzos_globals` memory region instead of registers, standardizing
    the ABI for L1 and L2.
  - L2 state management: improved saving and restoring of L2 guest GPRs
    across VM-exits using inline assembly wrappers for Intel and AMD.
  - Nested UEXIT propagation: intercept EPT/NPT faults on the exit page
    to capture the L2 exit code from the saved registers and forward it
    to L0 with an incremented nesting level.
  - L2 memory management: update the L2 page table setup, including
    skipping NO_HOST_MEM regions to force exits, and add a new
    `l2_gpa_to_pa` helper.
- Refactor the SYZOS L1 guest to construct L2 page tables dynamically
  by mirroring its own memory layout (provided via boot arguments)
  instead of using a static 2MB identity map. This change introduces
  l2_map_page to allocate unique backing memory for most regions, while
  mapping X86_SYZOS_ADDR_USER_CODE and X86_SYZOS_ADDR_STACK_BOTTOM to
  specific per-VM buffers reserved in L1. This allows L1 to inject code
  and stack content into the backing buffers while the L2 guest
  executes them from standard virtual addresses. Additionally, the
  MEM_REGION_FLAG_* definitions are moved to the guest header to
  support this logic.
- Reorder include directives in SYZOS headers to follow the project's
  include ordering rules:
  https://google.github.io/styleguide/cppguide.html#Names_and_Order_of_Includes
  Signed-off-by: 6eanut <jiakaiPeanut@gmail.com>
- Enable the SYZOS guest (L1) to dynamically allocate memory for nested
  L2 page tables, replacing the previous rigid static layout. Move the
  mem_region and syzos_boot_args struct definitions to the guest header
  (common_kvm_amd64_syzos.h) to allow the guest to parse the memory map
  injected by the host. Introduce a bump allocator, guest_alloc_page(),
  which targets the X86_SYZOS_ADDR_UNUSED heap; it relies on a new
  struct syzos_globals located at X86_SYZOS_ADDR_GLOBALS to track the
  allocation offset. Refactor setup_l2_page_tables() to allocate
  intermediate paging levels (PDPT, PD, PT) via guest_alloc_page()
  instead of using fixed contiguous offsets relative to the PML4. This
  allows for disjoint memory usage and supports future recursion
  requirements.
- Introduce SYZOS_API_NESTED_AMD_VMLOAD and SYZOS_API_NESTED_AMD_VMSAVE.
  These primitives let the L1 guest execute the VMLOAD and VMSAVE
  instructions, which load/store additional guest state (FS, GS, TR,
  LDTR, etc.) from/to the VMCB specified by the 'vm_id' argument. This
  stresses the KVM L0 instruction emulator, which must validate the
  L1-provided physical address in RAX and perform the state transfer.
- Introduce SYZOS_API_NESTED_AMD_SET_INTERCEPT, which enables the
  fuzzer to surgically modify intercept vectors in the control area of
  the AMD VMCB (Virtual Machine Control Block). It implements a
  read-modify-write operation on 32-bit VMCB offsets, allowing the L1
  hypervisor (SYZOS) to deterministically set or clear specific
  intercept bits (e.g., for RDTSC, HLT, or exceptions) for the L2
  guest. This lets syzkaller systematically explore KVM's nested SVM
  emulation logic by toggling intercepts on and off, rather than
  relying on static defaults or random memory corruption.
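The read-modify-write operation amounts to flipping one bit in a 32-bit word at a caller-chosen VMCB offset. A sketch, with an illustrative helper name (the memcpy avoids unaligned/aliasing issues when treating the VMCB as a byte buffer):

```c
// Set or clear a single intercept bit in a 32-bit word inside a
// VMCB-like byte buffer, leaving all other bits untouched.
#include <stdint.h>
#include <string.h>

static void vmcb_set_intercept_bit(uint8_t* vmcb, uint32_t offset,
				   uint32_t bit, int enable)
{
	uint32_t word;
	memcpy(&word, vmcb + offset, sizeof(word)); // read
	if (enable)
		word |= (1u << bit); // set the intercept
	else
		word &= ~(1u << bit); // clear the intercept
	memcpy(vmcb + offset, &word, sizeof(word)); // write back
}
```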
- Implement SYZOS_API_NESTED_AMD_INJECT_EVENT to allow the L1 guest to
  inject events (interrupts, NMIs, exceptions) into L2 via the VMCB
  EVENTINJ field. This primitive abstracts the VMCB bit-packing logic
  (vector, type, valid bit, error code) into a high-level API, enabling
  the fuzzer to semantically mutate event injection parameters. It
  targets KVM's nested event merging logic, specifically where L0 must
  reconcile L1-injected events with host-pending events.
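The EVENTINJ bit-packing follows the AMD64 VMCB layout (vector in bits 7:0, type in 10:8, error-code-valid in bit 11, valid in bit 31, error code in 63:32). A sketch with an illustrative function name:

```c
// Pack an event for injection into L2 via the VMCB EVENTINJ field.
#include <stdint.h>

static uint64_t pack_eventinj(uint8_t vector, uint8_t type,
			      int has_error_code, uint32_t error_code)
{
	uint64_t v = vector;                  // bits 7:0
	v |= ((uint64_t)(type & 0x7)) << 8;   // 0=INTR, 2=NMI, 3=exception
	if (has_error_code)
		v |= 1ull << 11;              // EV: error code valid
	v |= 1ull << 31;                      // V: event is valid
	v |= ((uint64_t)error_code) << 32;    // bits 63:32
	return v;
}
```

Exposing vector/type/error-code as separate API arguments, as the commit does, lets the fuzzer mutate each field independently instead of flipping opaque bits in a packed 64-bit value.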
- Implement the SYZOS_API_NESTED_AMD_STGI and SYZOS_API_NESTED_AMD_CLGI
  primitives to toggle the Global Interrupt Flag (GIF). These commands
  execute the STGI and CLGI instructions respectively and require no
  arguments. Also add a test checking that CLGI correctly masks NMI
  injection from L0.
- Implement the SYZOS_API_NESTED_AMD_INVLPGA primitive to execute the
  INVLPGA instruction in the L1 guest. This allows the fuzzer to target
  KVM's shadow MMU and Nested Paging (NPT) logic by invalidating TLB
  entries for specific ASIDs. Also add a simple syzlang
  seed/regression test.
- Florent Revest reported ThinLTO builds failing with the following
  error:
    <inline asm>:2:1: error: symbol 'after_vmentry_label' is already defined
    after_vmentry_label:
    ^
    error: cannot compile inline asm
  This turned out to be caused by the compiler not respecting
  `noinline`. Adding __attribute__((optnone)) (or optimize("O0") on
  GCC) fixes the issue.
- The new command allows mutation of the AMD VMCB with plain 64-bit
  writes. In addition to a VM ID and a VMCB offset,
  @nested_amd_vmcb_write_mask takes three 64-bit numbers: the set mask,
  the unset mask, and the flip mask. This makes it possible to apply
  bitwise modifications to the VMCB without disturbing execution too
  much. Also add sys/linux/test/amd64-syz_kvm_nested_amd_vmcb_write_mask
  to test the new command's behavior.
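The three-mask scheme can be sketched as a single expression; the application order shown (set, then unset, then flip) is our assumption, not spelled out in the commit message:

```c
// Apply set/unset/flip masks to a 64-bit value: force some bits to 1,
// force some to 0, and toggle others, leaving the rest untouched.
#include <stdint.h>

static uint64_t apply_write_mask(uint64_t value, uint64_t set_mask,
				 uint64_t unset_mask, uint64_t flip_mask)
{
	value |= set_mask;    // force these bits to 1
	value &= ~unset_mask; // force these bits to 0
	value ^= flip_mask;   // toggle these bits
	return value;
}
```

Compared with a raw overwrite, this keeps most of the field intact, so a mutated VMCB is far more likely to still run and exercise deeper KVM paths.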
- The new command allows mutation of Intel VMCS fields using the
  VMWRITE instruction. In addition to a VM ID and a field ID,
  @nested_intel_vmwrite_mask takes three 64-bit numbers: the set mask,
  the unset mask, and the flip mask. This makes it possible to apply
  bitwise modifications to the VMCS without disturbing execution too
  much. Also add sys/linux/test/amd64-syz_kvm_nested_vmwrite_mask to
  test the new command's behavior.
- Enable basic RDTSCP handling. Ensure that Intel hosts exit on RDTSCP
  in L2, and that both Intel and AMD can handle RDTSCP exits. Add
  amd64-syz_kvm_nested_vmresume-rdtscp to test that.
- While at it, fix a bug in rdmsr() that apparently lost the top 32
  bits of the result. Also fix a bug in the handling of Intel's
  Secondary Processor-Based Controls: we were incorrectly using the top
  32 bits of X86_MSR_IA32_VMX_PROCBASED_CTLS2 to enable all the
  available controls without additional setup. This only worked because
  rdmsr() zeroed out those top bits.
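For context on the rdmsr() bug: RDMSR returns the MSR value split across EDX:EAX, and the two 32-bit halves must be recombined with a widening shift. The helper below shows just that arithmetic (the surrounding inline asm is omitted):

```c
// Recombine the EDX:EAX halves returned by RDMSR into a 64-bit value.
// The bug class described above is equivalent to returning only eax,
// or shifting edx without first widening it to 64 bits.
#include <stdint.h>

static uint64_t combine_edx_eax(uint32_t edx, uint32_t eax)
{
	return ((uint64_t)edx << 32) | eax;
}
```

Without the cast, `edx << 32` is undefined on a 32-bit operand, and compilers commonly produce 0 — silently zeroing the top half, exactly the symptom the commit describes.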
- Enable basic RDTSC handling. Ensure that Intel hosts exit on RDTSC in
  L2, and that both Intel and AMD can handle RDTSC exits. Add
  amd64-syz_kvm_nested_vmresume-rdtsc to test that.
- Ensure L2 correctly exits to L1 on CPUID and resumes properly. Add a
  test.
- Provide the SYZOS API command to resume L2 execution after a VM exit,
  using VMRESUME on Intel and VMRUN on AMD. For testing purposes,
  implement basic handling of the INVD instruction:
  - enable INVD interception on AMD (set all bits in VMCB offset 00Ch);
  - map EXIT_REASON_INVD and VMEXIT_INVD to
    SYZOS_NESTED_EXIT_REASON_INVD;
  - advance the L2 RIP to skip to the next instruction.
  While at it, perform minor refactorings of L2 exit reason handling.
  sys/linux/test/amd64-syz_kvm_nested_vmresume tests the new command by
  executing two instructions, INVD and HLT, in the nested VM.
- It was useful initially for vendor-agnostic tests, but given that we
  have guest_uexit_l2() right before it, we can save an extra L2-to-L1
  exit. Perhaps this will increase the probability of executing more
  complex payloads (fewer KVM_RUN calls are needed to reach the same
  point in L2 code).
- Provide a SYZOS API command to launch the L2 VM using the VMLAUNCH
  (Intel) or VMRUN (AMD) instruction. For testing purposes, each L2->L1
  exit is followed by a guest_uexit_l2() returning the exit code to L0.
  Common exit reasons (like HLT) are mapped into a shared exit code
  space (0xe2e20000 | reason), so that a single test can be used for
  both Intel and AMD. Vendor-specific exit codes are returned using the
  0xe2110000 mask for Intel and 0xe2aa0000 for AMD.
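The exit-code mapping can be sketched as follows; the `is_common`/`is_intel` flags stand in for the real reason-classification logic, which the commit does not detail:

```c
// Fold an L2 exit reason into one of three exit-code spaces so that
// vendor-agnostic tests see identical codes on Intel and AMD.
#include <stdint.h>

#define EXIT_BASE_COMMON 0xe2e20000u // shared Intel/AMD space
#define EXIT_BASE_INTEL 0xe2110000u  // Intel-specific reasons
#define EXIT_BASE_AMD 0xe2aa0000u    // AMD-specific reasons

static uint32_t map_l2_exit_code(uint16_t reason, int is_common,
				 int is_intel)
{
	if (is_common)
		return EXIT_BASE_COMMON | reason;
	return (is_intel ? EXIT_BASE_INTEL : EXIT_BASE_AMD) | reason;
}
```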
- The new command loads an instruction blob into the specified L2 VM.
- Now that we are using volatiles in guest_main(), there is no
  particular need to base the command numbers on primes (which did not
  work well with Clang anyway). Instead, group the commands logically
  and leave some space between the groups for future additions.
- Provide basic setup for registers, page tables, and segments to
  create Intel/AMD-based nested virtual machines. Note that the
  machines are not started yet.
- Add vendor-specific code to turn on nested virtualization on Intel
  and AMD. Also provide get_cpu_vendor() to pick the correct
  implementation.
- Not having these results in three copies of every KVM-related
  #define appearing in each reproducer.
- Apply __addrspace_guest to every guest function and use a C++
  template to statically validate that host functions are not passed to
  executor_fn_guest_addr(). This only works in Clang builds of
  syz-executor, because GCC does not support address spaces and C
  reproducers cannot use templates. The static check allows us to drop
  the dynamic checks in DEFINE_GUEST_FN_TO_GPA_FN(). While at it,
  replace DEFINE_GUEST_FN_TO_GPA_FN() with explicit declarations of
  host_fn_guest_addr() and guest_fn_guest_addr().
- Somehow Clang still manages to emit a jump table for it.
- The new API call allows initializing the handler with one of three
  possible values:
  - NULL (should cause a page fault);
  - dummy_null_handler (should execute IRET);
  - uexit_irq_handler (should perform guest_uexit(UEXIT_IRQ)).
  Also add a test for uexit_irq_handler().
- Untangle the SYZOS GDT setup from the legacy one. Drop the LDT and
  TSS for now.
- To distinguish SYZOS addresses from other x86 definitions, change
  them to start with X86_SYZOS_ADDR_. No functional change.
- Add SYZOS calls that correspond to the IN and OUT x86 instructions,
  which perform port I/O. These instructions have several variants; for
  now we only implement the one that takes the port number from DX
  instead of encoding it in the opcode.
- Add a SYZOS call to write to one of the debug registers (DR0-DR7).
- When compiling the executor in syz-env-old, -fstack-protector may
  kick in and introduce the global accesses that tools/check-syzos.sh
  reports. To prevent this, introduce the __no_stack_protector macro
  attribute, which disables stack protection for the function in
  question, and use it for guest code. While at it, factor out some
  common definitions into common_kvm_syzos.h.
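A plausible shape for such a macro is sketched below; the exact spelling in the tree may differ, and the fallback branch is our assumption. Clang (and recent GCC) accept a dedicated `no_stack_protector` attribute, while older GCC can get the same effect through the `optimize` attribute:

```c
// Per-function opt-out from -fstack-protector, so guest code emits no
// accesses to the host's stack-canary globals.
#if defined(__clang__)
#define __no_stack_protector __attribute__((no_stack_protector))
#else
#define __no_stack_protector \
	__attribute__((__optimize__("-fno-stack-protector")))
#endif

// A local buffer like this would normally trigger stack-protector
// instrumentation; the attribute suppresses it for this function only.
__no_stack_protector static int guest_entry_sketch(int x)
{
	char buf[64];
	buf[0] = (char)x;
	return buf[0] + 1;
}
```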
- Replace the switch statement in guest_handle_wr_crn() with a series
  of if statements.
- Add a SYZOS call to write to one of the system registers (CR0, CR2,
  CR3, CR4, CR8).
- Let's try to stick to the convention of naming every SYZOS API
  handler syzos_handle_something(). No functional change.
- Let SYZOS execute RDMSR and WRMSR on x86.
- Like we already do on ARM, use prime numbers multiplied by 10 for
  SYZOS API IDs to prevent the compiler from emitting a jump table in
  guest_main().
- Add support for the CPUID instruction on AMD64, along with a relevant
  test.
- Add the actual SyzOS fuzzer for x86-64 and a small test. Also update
  some necessary parts of the ARM version and add some glue for i386.