
Modern hardware is moving to more cores rather than faster ones. For application developers, this means that further performance gains require _parallelism_: making progress on many tasks at the same time. But as anyone with experience writing multi-threaded code will tell you, this is easier said than done — those threads inevitably have to coordinate, and coordination is "expensive."
But _why_ is it expensive? Sure, a mutual exclusion lock forces sequential execution, which limits overall speedup (per Amdahl's law). But is the lock itself expensive? Do reader-writer locks help when the coordination is mostly required for reads? In this talk, we'll dive deep into what makes concurrency coordination costly, and explore some pathways to mitigate that cost. By the end, you'll leave with a mental model that goes all the way down to the CPU cache line, and a newfound appreciation for a protocol from the 1980s.
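
To make the question concrete: Amdahl's law caps the speedup of a program with parallel fraction p on n cores at 1 / ((1 - p) + p/n), so any serialized coordination eats directly into scaling. Below is a minimal sketch (my illustration, not code from the talk; the thread count and iteration count are arbitrary assumptions) of a read-mostly workload where even "cheap" lock acquisitions dominate:

```rust
use std::hint::black_box;
use std::sync::{Arc, Mutex, RwLock};
use std::thread;
use std::time::Instant;

const READERS: usize = 8;
const ITERS: usize = 200_000;

// Spawn READERS threads that each perform `read` ITERS times,
// and time the whole run.
fn bench(name: &str, read: impl Fn() -> u64 + Send + Sync + 'static) {
    let read = Arc::new(read);
    let start = Instant::now();
    let handles: Vec<_> = (0..READERS)
        .map(|_| {
            let read = Arc::clone(&read);
            thread::spawn(move || {
                let mut sum = 0u64;
                for _ in 0..ITERS {
                    sum += read();
                }
                // Keep the loop from being optimized away.
                black_box(sum)
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("{name}: {:?}", start.elapsed());
}

fn main() {
    // Every reader must take the exclusive lock, so reads serialize
    // even though nobody ever writes.
    let m = Arc::new(Mutex::new(0u64));
    bench("Mutex ", move || *m.lock().unwrap());

    // Readers may hold the lock concurrently, but acquiring it still
    // writes to the shared lock word, which bounces between core caches.
    let rw = Arc::new(RwLock::new(0u64));
    bench("RwLock", move || *rw.read().unwrap());
}
```

On a typical multi-core machine, neither variant scales linearly with the number of readers: the Mutex serializes them outright, and even the RwLock's read acquisitions contend on a single shared lock word — the kind of cache-line-level cost the talk unpacks.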
This talk explains how Rust debugging actually works: how compiler-generated debuginfo (DWARF/PDB) maps binaries back to source, and how LLDB/GDB interpret that data in practice.
For infrastructure engineers, SREs, platform teams, and Rust developers who've felt the pain of configuration drift, failed deployments, and infrastructure code that doesn't scale safely.
In 2024, I added the `Option::as_slice` and `Option::as_mut_slice` methods to libcore. This talk covers what motivated the addition and examines the no fewer than four different implementations the methods went through. It also shows that, even without a deep understanding of compiler internals, it is possible to contribute changes to both the compiler and the standard library.
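
For context, a quick sketch of what the stabilized methods do (illustrative usage, not taken from the talk): they view an `Option<T>` as a slice of length zero or one, so `Option` values can flow directly into slice-based APIs.

```rust
fn main() {
    let some = Some(42);
    let none: Option<i32> = None;

    // `Option::as_slice` returns a one-element slice for `Some`
    // and an empty slice for `None`.
    assert_eq!(some.as_slice(), &[42]);
    assert!(none.as_slice().is_empty());

    // `Option::as_mut_slice` gives mutable access through the same view.
    let mut opt = Some(1);
    for x in opt.as_mut_slice() {
        *x += 1;
    }
    assert_eq!(opt, Some(2));
}
```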
I'll share a few tricks to help you write cleaner, more powerful declarative macros. You'll also get a sneak peek at nightly features to see what's coming next in the macro_rules! world.
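
For a flavor of what declarative macros can do, here is a small self-contained example (my illustration, not necessarily one of the talk's tricks): a recursive `macro_rules!` macro that counts its arguments at compile time.

```rust
// Recursive declarative macro: peel off one expression per step,
// adding 1 for each, and bottom out at 0 for the empty case.
macro_rules! count {
    () => { 0usize };
    ($head:expr $(, $tail:expr)*) => { 1usize + count!($($tail),*) };
}

fn main() {
    assert_eq!(count!(), 0);
    assert_eq!(count!(1, 2, 3), 3);
    println!("counted {} items", count!("a", "b"));
}
```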