Concurrency in Rust is powerful, but safe and efficient communication between tasks can be tricky to master. In this talk, we’ll take a deep dive into Rust channels — from synchronous to asynchronous — to explore how message passing enables reliable concurrent programming.
We’ll start with simple synchronous channels and examine common patterns for multi-producer and multi-consumer setups. Then we’ll move to asynchronous channels in Rust’s async ecosystem, showing how channels coordinate tasks in real-world systems. Along the way, we’ll discuss backpressure, error handling, and performance trade-offs, with practical examples inspired by cloud-native workloads like Kubernetes operators, runtime schedulers, and event pipelines.
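As a taste of the multi-producer pattern mentioned above, here is a minimal sketch using the standard library's `std::sync::mpsc` channel (the thread count and payloads are illustrative, not from the talk):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // An mpsc channel: the Sender can be cloned for multiple producers,
    // while a single Receiver consumes all messages.
    let (tx, rx) = mpsc::channel();

    let handles: Vec<_> = (0..3)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(id).unwrap())
        })
        .collect();

    // Drop the original sender so the receiver's iterator ends
    // once all producer clones are dropped.
    drop(tx);

    let mut received: Vec<i32> = rx.iter().collect();
    received.sort();
    println!("{received:?}"); // prints [0, 1, 2]

    for h in handles {
        h.join().unwrap();
    }
}
```

Dropping the original `Sender` is the key detail: the receiving loop only terminates when every sender handle is gone.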
Attendees will see live demonstrations of channels in action and learn how Rust’s ownership and type system ensure safety while enabling scalable, reliable concurrency. We’ll also compare different channel strategies and patterns to illustrate where design choices impact performance and simplicity.
Attendees will leave with:
- A clear understanding of how synchronous and asynchronous channels work in Rust
- Patterns for multi-producer, multi-consumer messaging
- Insights into handling backpressure and errors safely
- Practical knowledge for building scalable, concurrent systems
- Confidence applying Rust channels in real-world cloud-native and async workloads
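To illustrate the backpressure point from the list above: with a bounded channel, `send` blocks once the buffer is full, so a fast producer is automatically slowed to the consumer's pace. A minimal sketch with the standard library's `sync_channel` (capacity and timings chosen for illustration):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // A bounded channel with capacity 1: send() blocks when the buffer
    // is full — this blocking is the backpressure mechanism.
    let (tx, rx) = mpsc::sync_channel(1);

    let producer = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i).unwrap(); // blocks while the consumer lags behind
        }
    });

    // A deliberately slow consumer drains one message at a time.
    for msg in rx {
        println!("got {msg}");
        thread::sleep(Duration::from_millis(10));
    }

    producer.join().unwrap();
}
```

The same idea carries over to async channels such as `tokio::sync::mpsc`, where a full bounded channel makes `send().await` yield instead of blocking a thread.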
This talk provides a practical, approachable exploration of Rust channels, empowering developers to write safer and more efficient concurrent Rust programs.
This technical talk examines the most prevalent pain points facing Rust web developers today and explores how the community is addressing them.
I’ll share what the Rust job market really looks like in 2025 — where companies are hiring, which skills stand out, and how the recruitment process actually works behind the scenes.
In my session, I want to present the hotpath and channels-console libraries and explain how they compare to other available profiling tools.
This talk explores building a complete self-hosted LLM stack in Rust: Paddler, a distributed load balancer for serving LLMs at scale, and Poet, a static site generator that consumes those LLMs for AI-powered content features.
This talk explores what it means to write scientific software that lives up to the standards we expect of science itself.
During this talk we'll build a basic, working async runtime using nothing more than the standard library. The point? To show that it's approachable for mere mortals.
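As a hint of what such a runtime involves, here is one possible sketch (not necessarily the talk's implementation) of a single-threaded `block_on` executor built only from `std`, using the `std::task::Wake` trait and a condition variable to park the thread between polls:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// The waker's job: flip a flag and wake the parked executor thread.
struct Parker {
    notified: Mutex<bool>,
    cvar: Condvar,
}

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.notified.lock().unwrap() = true;
        self.cvar.notify_one();
    }
}

// Drive a single future to completion on the current thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let parker = Arc::new(Parker {
        notified: Mutex::new(false),
        cvar: Condvar::new(),
    });
    let waker = Waker::from(parker.clone());
    let mut cx = Context::from_waker(&waker);

    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        // Pending: sleep until the waker fires, then poll again.
        let mut notified = parker.notified.lock().unwrap();
        while !*notified {
            notified = parker.cvar.wait(notified).unwrap();
        }
        *notified = false;
    }
}

fn main() {
    let value = block_on(async { 21 * 2 });
    println!("{value}"); // prints 42
}
```

A real runtime adds a task queue, spawning, and I/O integration on top, but this poll/park loop is the core of the contract between futures and executors.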