Modern hardware is moving to more cores rather than faster ones. For application developers, this means that further performance gains require _parallelism_: making progress on many tasks at the same time. But as anyone with experience writing multi-threaded code will tell you, this is easier said than done — those threads inevitably have to coordinate, and coordination is "expensive."
But _why_ is it expensive? Sure, a mutual exclusion lock forces sequential execution, which limits overall speedup (per Amdahl's law). But is the lock itself expensive? Do reader-writer locks help when the coordination is mostly required for reads? In this talk, we'll dive deep into what makes concurrency coordination costly, and explore some pathways to mitigate that cost. By the end, you'll leave with a mental model that goes all the way down to the CPU cache line, and a newfound appreciation for a protocol from the 1980s.
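
To make Amdahl's bound concrete, here is a quick back-of-the-envelope sketch in Rust (the 5% serial fraction is an illustrative number, not a measurement):

```rust
// Amdahl's law: if a fraction `serial` of the work cannot be parallelized,
// the best possible speedup on `cores` cores is 1 / (serial + (1 - serial) / cores).
fn amdahl_speedup(serial: f64, cores: u32) -> f64 {
    1.0 / (serial + (1.0 - serial) / cores as f64)
}

fn main() {
    // Even 5% serial work (e.g. time spent holding a lock) hurts badly:
    // 32 cores yield ~12.5x rather than 32x, and no core count can beat 1/0.05 = 20x.
    let s = amdahl_speedup(0.05, 32);
    assert!((s - 12.55).abs() < 0.01);
    println!("speedup with 5% serial work on 32 cores: {:.2}x", s);
}
```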

In this talk, we’ll re-create the core ideas of Karpathy’s micrograd, a tiny scalar-valued autograd engine, entirely in Rust.
The talk explores how Rust’s type system and memory safety can enforce mandatory guardrails at the infrastructure level, where traditional frameworks often fall short.
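
As a taste of what that looks like, here is a minimal sketch of a micrograd-style scalar value with reverse-mode autodiff (the names `Value`, `add`, and `mul` follow micrograd's spirit but are our own; a full implementation would topologically sort a general DAG before back-propagating):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A micrograd-style scalar: a value plus the gradient of the final output
// with respect to it, linked back to the inputs that produced it.
#[derive(Clone)]
struct Value(Rc<RefCell<Node>>);

struct Node {
    data: f64,
    grad: f64,
    prev: Vec<Value>,      // inputs of the op that produced this value
    local_grads: Vec<f64>, // d(this)/d(prev[i]) for that op
}

impl Value {
    fn new(data: f64) -> Self {
        Value::op(data, vec![], vec![])
    }
    fn op(data: f64, prev: Vec<Value>, local_grads: Vec<f64>) -> Self {
        Value(Rc::new(RefCell::new(Node { data, grad: 0.0, prev, local_grads })))
    }
    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, rhs: &Value) -> Value {
        Value::op(self.data() + rhs.data(), vec![self.clone(), rhs.clone()], vec![1.0, 1.0])
    }
    fn mul(&self, rhs: &Value) -> Value {
        Value::op(self.data() * rhs.data(), vec![self.clone(), rhs.clone()], vec![rhs.data(), self.data()])
    }

    // Naive recursive chain rule. Correct here because the only shared node is
    // a leaf; a full version would visit nodes in reverse topological order.
    fn backward(&self) {
        self.0.borrow_mut().grad = 1.0;
        fn propagate(v: &Value) {
            let children: Vec<(Value, f64)> = {
                let n = v.0.borrow();
                let g = n.grad;
                n.prev.iter().cloned().zip(n.local_grads.iter().map(|l| g * l)).collect()
            };
            for (child, contribution) in children {
                child.0.borrow_mut().grad += contribution;
                propagate(&child);
            }
        }
        propagate(self);
    }
}

// d = a*b + a, so dd/da = b + 1 and dd/db = a.
fn demo() -> (f64, f64, f64) {
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let d = a.mul(&b).add(&a);
    d.backward();
    (d.data(), a.grad(), b.grad())
}

fn main() {
    let (d, da, db) = demo();
    assert_eq!((d, da, db), (8.0, 4.0, 2.0));
    println!("d = {d}, dd/da = {da}, dd/db = {db}");
}
```

The `Rc<RefCell<…>>` wrapper is the design choice worth discussing: the computation graph shares nodes, so ownership is reference-counted and mutation of gradients is checked at runtime rather than compile time.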

I'll initiate you in the art of 'CAN bus sniffing': connecting to the central nervous system of a modern car, interpreting the data, and seeing what we can build as enthusiastic amateurs.
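
To give a flavour of the "interpreting" step, here is a sketch that decodes one raw frame. The 16-byte layout matches Linux SocketCAN's classic `can_frame`; the `0x244` ID and the speed scaling are invented for illustration — real IDs and scalings differ per manufacturer.

```rust
// A classic CAN frame as Linux SocketCAN lays it out (16 bytes):
// 4-byte little-endian CAN ID (upper bits are flags), 1-byte payload length,
// 3 bytes of padding, then up to 8 data bytes.
#[derive(Debug, PartialEq)]
struct CanFrame {
    id: u32,
    data: Vec<u8>,
}

fn parse_frame(buf: &[u8; 16]) -> CanFrame {
    // Mask off the error/RTR/extended-frame flag bits, keeping the 29-bit ID space.
    let id = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) & 0x1FFF_FFFF;
    let len = (buf[4] as usize).min(8);
    CanFrame { id, data: buf[8..8 + len].to_vec() }
}

fn main() {
    // Pretend we sniffed a frame with (made-up) ID 0x244 carrying vehicle speed
    // as a big-endian u16 in units of 0.01 km/h in the first two payload bytes.
    let raw: [u8; 16] = [0x44, 0x02, 0, 0, 8, 0, 0, 0, 0x13, 0x88, 0, 0, 0, 0, 0, 0];
    let frame = parse_frame(&raw);
    assert_eq!(frame.id, 0x244);
    let speed_kmh = u16::from_be_bytes([frame.data[0], frame.data[1]]) as f64 * 0.01;
    assert!((speed_kmh - 50.0).abs() < 1e-9);
    println!("frame 0x{:X}: speed = {speed_kmh} km/h", frame.id);
}
```

Most of the hobbyist work is exactly this: capturing frames, guessing which bytes move when the car does something, and building up a decoding table.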

In this talk, we’ll explore battle-tested best practices for integrating Claude Code into a professional Axum development workflow without compromising on Rust’s core values: correctness, clarity, and maintainability.