What happens when we take Andrej Karpathy’s legendary “The spelled-out intro to neural networks and backpropagation” — and rebuild it line by line in Rust?
In this talk, we’ll re-create the core ideas of Karpathy’s micrograd, but entirely in Rust. Together, we’ll build a tiny automatic differentiation engine and a simple neural network library, all from first principles. Along the way, we’ll uncover what backpropagation really is, why it works, and how it feels to express these mathematical concepts using Rust’s types, ownership, and safety guarantees.
This session is not just about Rust, and not just about neural networks — it’s where the two meet.
If you know Rust but have never built a neural net from scratch, you’ll finally understand how gradients flow and models learn.
If you know machine learning but not Rust, you’ll see how Rust’s design leads to clear, correct, and fast numerical code.
And if you love both, you’ll walk away inspired to experiment with tch-rs and other Rust ML tools.
Prerequisites: basic Rust (or comfort with any programming language) and a vague memory of high-school calculus.
Outcome: You’ll leave understanding both how backpropagation works and how Rust helps you express it safely and efficiently.
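To give a flavour of what building such an engine from first principles involves, here is a minimal, hypothetical sketch (not the talk's actual code) of a micrograd-style Value node in Rust. It assumes an Rc<RefCell<...>> representation of the compute graph and supports only addition and multiplication:

```rust
// Hypothetical minimal sketch of a micrograd-style autodiff Value (not the talk's code).
use std::cell::RefCell;
use std::rc::Rc;

// One node in the compute graph: a value, its gradient, and its parents
// together with the local derivative of this node w.r.t. each parent.
struct Inner {
    data: f64,
    grad: f64,
    prev: Vec<(Value, f64)>,
}

#[derive(Clone)]
struct Value(Rc<RefCell<Inner>>);

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Inner { data, grad: 0.0, prev: vec![] })))
    }

    fn data(&self) -> f64 {
        self.0.borrow().data
    }

    fn grad(&self) -> f64 {
        self.0.borrow().grad
    }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        // d(a+b)/da = 1 and d(a+b)/db = 1
        out.0.borrow_mut().prev = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }

    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        // d(a*b)/da = b and d(a*b)/db = a
        out.0.borrow_mut().prev =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Backpropagation: walk the graph in reverse topological order so each
    // node's gradient is complete before it is pushed to its parents.
    fn backward(&self) {
        fn build(v: &Value, topo: &mut Vec<Value>, seen: &mut Vec<*const ()>) {
            let key = Rc::as_ptr(&v.0) as *const ();
            if !seen.contains(&key) {
                seen.push(key);
                for (parent, _) in v.0.borrow().prev.iter() {
                    build(parent, topo, seen);
                }
                topo.push(v.clone());
            }
        }
        let mut topo = Vec::new();
        let mut seen = Vec::new();
        build(self, &mut topo, &mut seen);

        self.0.borrow_mut().grad = 1.0;
        for v in topo.iter().rev() {
            let (grad, prev) = {
                let inner = v.0.borrow();
                (inner.grad, inner.prev.clone())
            };
            for (parent, local) in prev {
                // Chain rule: accumulate d(out)/d(parent) = local * d(out)/d(v).
                parent.0.borrow_mut().grad += local * grad;
            }
        }
    }
}

fn main() {
    // f = a * b + a, so df/da = b + 1 and df/db = a.
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let f = a.mul(&b).add(&a);
    f.backward();
    println!("f = {}", f.data());     // 8
    println!("df/da = {}", a.grad()); // 4
    println!("df/db = {}", b.grad()); // 2
}
```

Running this prints f = 8, df/da = 4 and df/db = 2, which you can check by hand with the chain rule; a fuller engine of the kind the talk describes would add more operations (tanh, powers) and neuron/layer types on top.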
I’ll share what the Rust job market really looks like in 2025 — where companies are hiring, which skills stand out, and how the recruitment process actually works behind the scenes.
I'll initiate you into the art of 'CAN bus sniffing': connecting to the central nervous system of a modern car, interpreting the data, and seeing what we can build as enthusiastic amateurs.
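As a taste of the "interpreting the data" part, here is a small, hypothetical Rust sketch (not the talk's code) that decodes a candump-style log entry of the form ID#DATA into an arbitration ID and payload bytes; the ID 0x123 below is made up for illustration:

```rust
// Hypothetical sketch: parse one candump-style entry ("123#DEADBEEF")
// into a CAN ID and payload bytes. Not the talk's actual tooling.
fn parse_candump(entry: &str) -> Option<(u32, Vec<u8>)> {
    let (id_part, data_part) = entry.split_once('#')?;
    let id = u32::from_str_radix(id_part, 16).ok()?;
    let data = (0..data_part.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(data_part.get(i..i + 2)?, 16).ok())
        .collect::<Option<Vec<u8>>>()?;
    Some((id, data))
}

fn main() {
    // 0x123 is an arbitrary example ID; real IDs depend on the car.
    if let Some((id, data)) = parse_candump("123#DEADBEEF") {
        println!("id = 0x{:03X}, data = {:02X?}", id, data);
    }
}
```

In practice you would first capture frames with something like can-utils' candump or a SocketCAN-capable crate; the snippet only shows the decoding step.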
I have contributed LTO-related (link-time optimization) changes to many open-source projects and had a lot of interesting discussions with their maintainers about LTO. In this talk, I want to share that experience with you.
This technical talk examines the most prevalent pain points facing Rust web developers today and explores how the community is addressing them.