What happens when we take Andrej Karpathy’s legendary “The spelled-out intro to neural networks and backpropagation” — and rebuild it line by line in Rust?
In this talk, we’ll re-create the core ideas of Karpathy’s micrograd, but entirely in Rust. Together, we’ll build a tiny automatic differentiation engine and a simple neural network library, all from first principles. Along the way, we’ll uncover what backpropagation really is, why it works, and how it feels to express these mathematical concepts using Rust’s types, ownership, and safety guarantees.
This session is not just about Rust, and not just about neural networks — it’s where the two meet.
If you know Rust but have never built a neural net from scratch, you’ll finally understand how gradients flow and models learn.
If you know machine learning but not Rust, you’ll see how Rust’s design leads to clear, correct, and fast numerical code.
And if you love both, you’ll walk away inspired to experiment with tch-rs and other Rust ML tools.
Prerequisites: Basic Rust (or any programming language) and a vague memory of high-school calculus.
Outcome: You’ll leave understanding both how backpropagation works and how Rust helps you express it safely and efficiently.
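To give a flavor of what the session builds, here is a minimal sketch of a micrograd-style scalar autograd engine in Rust. The names (`Value`, `backward`) follow micrograd's conventions, but the specific design here, shared graph nodes via `Rc<RefCell<...>>` with each node storing its parents and local derivatives, is one possible approach, not the talk's actual code:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node in the computation graph: a value, its gradient, and its
// parents together with the local derivative w.r.t. each parent.
#[derive(Clone)]
struct Value(Rc<RefCell<Node>>);

struct Node {
    data: f64,
    grad: f64,
    parents: Vec<(Value, f64)>,
}

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Node { data, grad: 0.0, parents: vec![] })))
    }

    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        // d(a+b)/da = 1, d(a+b)/db = 1
        out.0.borrow_mut().parents = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }

    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        // d(a*b)/da = b, d(a*b)/db = a
        out.0.borrow_mut().parents =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Reverse-mode pass: topologically order the graph so each node's
    // gradient is complete before it is pushed to its parents.
    fn backward(&self) {
        fn visit(v: &Value, seen: &mut Vec<*const RefCell<Node>>, topo: &mut Vec<Value>) {
            let ptr = Rc::as_ptr(&v.0);
            if seen.contains(&ptr) { return; }
            seen.push(ptr);
            for (p, _) in &v.0.borrow().parents {
                visit(p, seen, topo);
            }
            topo.push(v.clone());
        }
        let mut topo = vec![];
        visit(self, &mut vec![], &mut topo);

        self.0.borrow_mut().grad = 1.0; // seed: d(out)/d(out) = 1
        for v in topo.iter().rev() {
            let (grad, parents) = {
                let n = v.0.borrow();
                (n.grad, n.parents.clone())
            };
            for (p, local) in parents {
                // Chain rule: accumulate local derivative times upstream grad.
                p.0.borrow_mut().grad += local * grad;
            }
        }
    }
}

fn main() {
    // y = a * b + a  =>  dy/da = b + 1, dy/db = a
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let y = a.mul(&b).add(&a);
    y.backward();
    println!("y = {}", y.data());     // 8
    println!("dy/da = {}", a.grad()); // 4
    println!("dy/db = {}", b.grad()); // 2
}
```

Note how `a` is used twice in the expression; the topological ordering ensures its gradient accumulates both contributions exactly once, which is the subtlety backpropagation has to get right.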

In this talk, we’ll explore battle-tested best practices for integrating Claude Code into a professional Axum development workflow without compromising on Rust’s core values: correctness, clarity, and maintainability.

For infrastructure engineers, SREs, platform teams, and Rust developers who've felt the pain of configuration drift, failed deployments, and infrastructure code that simply doesn't scale safely.

This talk puts popular Rust rewrites to the test. We'll examine how these tools stack up against their battle-tested predecessors, looking at real-world performance, compilation times, binary sizes, feature completeness, and ecosystem maturity.