Talks

Rain: Microarchitectural Cloud Leakage
Mathé Hertogh

In 2017, the discovery of Spectre and Meltdown kicked off a whirlwind of attention and spawned a new research field around "transient execution attacks". Now, eight years and many new attacks and mitigations later, most of the microarchitectural dust has settled. While these attacks still keep a few research groups around the globe busy, many people in the field, including top-notch security experts from industry, question the threat they pose in real-world scenarios.
We, as microarchitectural security researchers, always find it funny when people ask questions like "Does Spectre actually work?", "Are those attacks real?", or "Doesn't that only work in a lab setting?". Funny, but also worrying, because we know the risks. The goal of this project was to convince (many of) these people that Spectre & friends *do* pose serious and realistic threats. In particular, we show that they can enable a malicious public cloud user to find and leak highly sensitive private data from other customers, without even knowing those customers a priori.
To stay as close to the real world as possible, we performed these attacks on the production clouds of AWS and Google Cloud, using their standard public interfaces to spawn VMs. By combining two old, and supposedly mitigated, transient execution vulnerabilities, "L1TF" and "Half-Spectre", we built an arbitrary read primitive into the host's virtual address space. On Google Cloud, this enabled us to (1) discover co-located victim VMs, (2) list their processes, and, as an example, (3) leak the private TLS key of an nginx webserver: reliably, with perfect accuracy, within hours, even under extreme system noise.
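For readers who want a concrete feel for the mechanics: the final step of attacks in this family is typically a cache side channel such as Flush+Reload, which turns a transient, architecturally invisible memory access into an observable timing signal. Below is a minimal, self-contained C sketch of that flush-and-reload phase. It is a generic illustration of the building block, not the Rain exploit itself; the actual L1TF/Half-Spectre gadgetry runs from inside a guest VM and is far more involved.

```c
// Minimal Flush+Reload probe (x86-64, GCC/Clang on Linux).
// Generic cache side channel used by transient-execution attacks to
// observe which cache line a speculative access touched. Illustrative
// only; this is NOT the Rain/L1TF exploit.
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[256 * 4096]; // one page per possible byte value

// Time a single load of *addr using serialized timestamp reads.
static inline uint64_t time_access(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    // Touch each page once so they get distinct physical backing
    // (otherwise untouched pages all share the kernel's zero page).
    for (int i = 0; i < 256; i++)
        probe[i * 4096] = 1;

    // Flush phase: evict every candidate line from the cache.
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 4096]);
    _mm_mfence();

    // In a real attack this access happens speculatively, indexed by a
    // secret byte the attacker cannot read architecturally. We simulate
    // it with a known value.
    volatile uint8_t secret = 42;
    *(volatile uint8_t *)&probe[secret * 4096];

    // Reload phase: the one still-cached line loads much faster and
    // thereby reveals the byte. (A real attack would permute the scan
    // order to defeat the hardware prefetcher; omitted for brevity.)
    int best = -1;
    uint64_t best_time = UINT64_MAX;
    for (int i = 0; i < 256; i++) {
        uint64_t t = time_access(&probe[i * 4096]);
        if (t < best_time) { best_time = t; best = i; }
    }
    printf("recovered byte: %d (%" PRIu64 " cycles)\n", best, best_time);
    return 0;
}
```

Compiled with plain gcc on an x86-64 Linux machine, this recovers the simulated byte; in a real attack the marked access happens transiently, on a secret the attacker has no architectural right to read.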

It’s Giving Insecure Vibes: Secure Coding Literacy for Vibe Coders
Betta Lyon Delsordo

Vibe coding has a time and a place: it is great for quick prototypes, and it is very tempting for less technical folks. However, those who don’t understand their own code will be blissfully unaware of the many security vulnerabilities that AI assistants can introduce. In this presentation, I will cover a variety of common vulnerabilities introduced by vibe coding, and then how to recognize and fix them. I will also cover how to prompt genAI tools to code more securely and to help you review your code, as well as how to take a hybrid approach with AI-advised coding. This is a crucial topic for anyone venturing into vibe coding, as well as for any team leads who are starting to see AI-generated code introduced by more junior members.
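As a taste of the kind of bug the talk targets, here is one illustrative example of my own choosing (not necessarily one from the presentation): command injection, a perennial favorite in AI-generated glue code. The unsafe version hands user input to a shell; the fix passes it as a single literal argument so the shell never parses it.

```c
// Command injection: a classic flaw in AI-generated glue code.
// Hypothetical scenario: shelling out to ImageMagick's `convert`
// to make a thumbnail from a user-supplied filename.
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

// Vulnerable pattern assistants often produce: user input spliced
// into a shell command line. Input such as "photo.png; rm -rf ~"
// makes the shell run the attacker's command.
void thumbnail_unsafe(const char *filename) {
    char cmd[512];
    snprintf(cmd, sizeof cmd, "convert %s thumb.png", filename);
    system(cmd); // the shell interprets metacharacters in filename
}

// Fix: bypass the shell entirely. The filename travels as one
// literal argv element, so it can only ever be a filename.
void thumbnail_safe(const char *filename) {
    pid_t pid = fork();
    if (pid == 0) {
        execlp("convert", "convert", filename, "thumb.png", (char *)NULL);
        _exit(127); // only reached if exec failed
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0); // error handling trimmed for brevity
    }
}

int main(int argc, char **argv) {
    if (argc > 1)
        thumbnail_safe(argv[1]);
    return 0;
}
```

The same principle carries over to higher-level languages: parameterized queries instead of string-built SQL, and subprocess argument arrays instead of shell command strings.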