Modern hardware is moving to more cores rather than faster ones. For application developers, this means that further performance gains require _parallelism_: making progress on many tasks at the same time. But as anyone with experience writing multi-threaded code will tell you, this is easier said than done — those threads inevitably have to coordinate, and coordination is "expensive."
But _why_ is it expensive? Sure, a mutual exclusion lock forces sequential execution, which limits overall speedup (per Amdahl's law). But is the lock itself expensive? Do reader-writer locks help when the coordination is mostly required for reads? In this talk, we'll dive deep into what makes concurrency coordination costly, and explore some pathways to mitigate that cost. By the end, you'll leave with a mental model that goes all the way down to the CPU cache line, and a newfound appreciation for a protocol from the 1980s.
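As a taste of the mental model the talk builds, here is a minimal, std-only sketch (thread and iteration counts are arbitrary) contrasting a counter behind a single `Mutex`, where every increment drags the same cache line between cores, with per-thread counters that are merged once at the end:

```rust
use std::hint::black_box;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Instant;

const THREADS: usize = 8;
const ITERS: usize = 1_000_000;

fn main() {
    // One counter behind one lock: every increment pulls the same
    // cache line to the incrementing core, so the threads serialize.
    let shared = Arc::new(Mutex::new(0u64));
    let start = Instant::now();
    let handles: Vec<_> = (0..THREADS)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                for _ in 0..ITERS {
                    *shared.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("coordinated:   {:?}", start.elapsed());

    // Per-thread counters merged once at the end: no shared state in
    // the hot loop, hence no coordination cost until the final sum.
    let start = Instant::now();
    let handles: Vec<_> = (0..THREADS)
        .map(|_| {
            thread::spawn(|| {
                let mut local = 0u64;
                for _ in 0..ITERS {
                    // black_box keeps the compiler from folding the
                    // whole loop into a single constant.
                    local = black_box(local + 1);
                }
                local
            })
        })
        .collect();
    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("uncoordinated: {:?} (total = {})", start.elapsed(), total);
}
```

On a typical multi-core machine the uncoordinated version wins by an order of magnitude or more; that gap is the coordination cost the talk dissects.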
This technical talk examines the most prevalent pain points facing Rust web developers today and explores how the community is addressing them.
As Rust projects grow, managing private crates becomes a real headache. Teams struggle with inconsistent versioning, fragile dependencies, and cumbersome workflows that slow down development. In this talk, I’ll walk through how these challenges can be solved with Rust and CrabHub.
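For concreteness, the integration point for any private registry is Cargo's alternate-registries mechanism. A hedged sketch follows, in which the registry name, index URL, and crate name are illustrative placeholders rather than CrabHub's documented endpoints:

```toml
# .cargo/config.toml: register the private registry with Cargo.
# "crabhub" and the URL are illustrative placeholders.
[registries.crabhub]
index = "sparse+https://crabhub.example.com/index/"

# Cargo.toml: depend on an internal crate from that registry.
# "internal-auth" is a hypothetical crate name.
[dependencies]
internal-auth = { version = "1.2", registry = "crabhub" }
```

Publishing goes through the same mechanism (`cargo publish --registry crabhub`), which is where versioning and workflow policy can be enforced.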
I have contributed link-time optimization (LTO) changes to many open-source projects, and along the way had a lot of interesting discussions with their maintainers. In this talk, I want to share that experience with you.
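As background, LTO in a Rust project is typically enabled through the Cargo release profile; the snippet below shows the standard knobs (the values chosen here are one reasonable configuration, not a universal recommendation):

```toml
# Cargo.toml: opt the release profile into link-time optimization.
[profile.release]
lto = "thin"        # cross-crate optimization at moderate link-time cost;
                    # "fat" (or `true`) optimizes harder but links slower
codegen-units = 1   # one codegen unit gives the optimizer the widest
                    # view, at the cost of compile-time parallelism
```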
In this talk, we’ll explore battle-tested best practices for integrating Claude Code into a professional Axum development workflow without compromising on Rust’s core values: correctness, clarity, and maintainability.
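To anchor the scale involved, here is a minimal Axum service (axum 0.7-style API; the route, port, and handler are arbitrary), the kind of small, line-by-line reviewable unit such a workflow produces and checks:

```rust
// Assumed Cargo deps: axum = "0.7", tokio = { version = "1", features = ["full"] }
use axum::{routing::get, Router};

// A deliberately tiny handler: easy to review, easy to test.
async fn health() -> &'static str {
    "ok"
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/health", get(health));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .expect("failed to bind");
    axum::serve(listener, app).await.expect("server error");
}
```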
This talk is for infrastructure engineers, SREs, platform teams, and Rust developers who've felt the pain of configuration drift, failed deployments, and infrastructure code that simply doesn't scale safely.
This talk explores building a complete self-hosted LLM stack in Rust: Paddler, a distributed load balancer for serving LLMs at scale, and Poet, a static site generator that consumes those LLMs for AI-powered content features.
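To give a flavor of the load-balancing half, here is a toy, self-contained round-robin selector over upstream endpoints; it illustrates the concept only and is not Paddler's actual implementation (the struct and endpoints are hypothetical):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical illustration of round-robin upstream selection;
// not Paddler's real data structures or policy.
struct Upstreams {
    endpoints: Vec<String>,
    next: AtomicUsize,
}

impl Upstreams {
    /// Pick the next endpoint in rotation; Relaxed ordering is fine
    /// because the counter carries no synchronization duties.
    fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.endpoints.len();
        &self.endpoints[i]
    }
}

fn main() {
    let pool = Upstreams {
        endpoints: vec![
            "http://10.0.0.1:8080".into(),
            "http://10.0.0.2:8080".into(),
        ],
        next: AtomicUsize::new(0),
    };
    for _ in 0..4 {
        println!("routing request to {}", pool.pick());
    }
}
```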