What happens when we take Andrej Karpathy’s legendary “The spelled-out intro to neural networks and backpropagation” — and rebuild it line by line in Rust?
In this talk, we’ll re-create the core ideas of Karpathy’s micrograd, but entirely in Rust. Together, we’ll build a tiny automatic differentiation engine and a simple neural network library, all from first principles. Along the way, we’ll uncover what backpropagation really is, why it works, and how it feels to express these mathematical concepts using Rust’s types, ownership, and safety guarantees.
This session is not just about Rust, and not just about neural networks — it’s where the two meet.
If you know Rust but have never built a neural net from scratch, you’ll finally understand how gradients flow and models learn.
If you know machine learning but not Rust, you’ll see how Rust’s design leads to clear, correct, and fast numerical code.
And if you love both, you’ll walk away inspired to experiment with tch-rs and other Rust ML tools.
Prerequisites: Basic Rust (or any programming language) and a vague memory of high-school calculus.
Outcome: You’ll leave understanding both how backpropagation works and how Rust helps you express it safely and efficiently.
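To give a flavor of what the session builds, here is a rough, hypothetical sketch of a micrograd-style scalar `Value` in Rust (not the talk’s actual code): each node records its children together with the local derivative, and `backward` walks the graph in reverse topological order applying the chain rule.

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::Rc;

// A scalar node in the computation graph, shared via Rc<RefCell<..>>.
#[derive(Clone)]
struct Value(Rc<RefCell<Inner>>);

struct Inner {
    data: f64,
    grad: f64,
    // Children paired with the local derivative d(self)/d(child).
    prev: Vec<(Value, f64)>,
}

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Inner { data, grad: 0.0, prev: Vec::new() })))
    }
    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, other: &Value) -> Value {
        let out = Value::new(self.data() + other.data());
        // d(a+b)/da = 1, d(a+b)/db = 1
        out.0.borrow_mut().prev = vec![(self.clone(), 1.0), (other.clone(), 1.0)];
        out
    }
    fn mul(&self, other: &Value) -> Value {
        let out = Value::new(self.data() * other.data());
        // d(a*b)/da = b, d(a*b)/db = a
        out.0.borrow_mut().prev =
            vec![(self.clone(), other.data()), (other.clone(), self.data())];
        out
    }

    // Reverse-mode pass: topological order ensures each node's gradient is
    // fully accumulated before it propagates to its children.
    fn backward(&self) {
        fn build(v: &Value, topo: &mut Vec<Value>, seen: &mut HashSet<usize>) {
            if seen.insert(Rc::as_ptr(&v.0) as usize) {
                for (child, _) in v.0.borrow().prev.iter() {
                    build(child, topo, seen);
                }
                topo.push(v.clone());
            }
        }
        let mut topo = Vec::new();
        build(self, &mut topo, &mut HashSet::new());
        self.0.borrow_mut().grad = 1.0;
        for v in topo.iter().rev() {
            let (grad, prev) = {
                let inner = v.0.borrow();
                (inner.grad, inner.prev.clone())
            };
            for (child, local) in prev {
                // Chain rule: accumulate d(out)/d(child) = local * d(out)/d(v).
                child.0.borrow_mut().grad += local * grad;
            }
        }
    }
}

fn main() {
    // f = a*b + a with a = 2, b = 3  →  f = 8, df/da = b + 1 = 4, df/db = a = 2
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let f = a.mul(&b).add(&a);
    f.backward();
    println!("f = {}, df/da = {}, df/db = {}", f.data(), a.grad(), b.grad());
}
```

The `Rc<RefCell<..>>` wrapper is one of several ways to express a shared mutable graph in safe Rust; the talk’s engine may make different design choices (e.g. an arena with indices).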
In this talk, we’ll explore battle-tested best practices for integrating Claude Code into a professional Axum development workflow without compromising on Rust’s core values: correctness, clarity, and maintainability.
In 2024, I added the `Option::as_slice` and `Option::as_mut_slice` methods to libcore. This talk covers what motivated the addition and looks at the no fewer than four different implementations the methods went through. It also shows that, even without a deep understanding of compiler internals, it is possible to contribute changes to both the compiler and the standard library.
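For context, a minimal sketch of what these methods do (they were stabilized in Rust 1.75): an `Option<T>` is viewed as a zero- or one-element slice, which lets it feed any slice-based API without pattern matching.

```rust
fn main() {
    let some: Option<i32> = Some(7);
    let none: Option<i32> = None;

    // Some(x) becomes a one-element slice, None an empty one.
    assert_eq!(some.as_slice(), &[7]);
    assert!(none.as_slice().is_empty());

    // as_mut_slice allows in-place mutation through the slice view.
    let mut opt = Some(1);
    for x in opt.as_mut_slice() {
        *x += 10;
    }
    assert_eq!(opt, Some(11));
}
```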
This talk explores what it means to write scientific software that lives up to the standards we expect of science itself.
This talk explores building a complete self-hosted LLM stack in Rust: Paddler, a distributed load balancer for serving LLMs at scale, and Poet, a static site generator that consumes those LLMs for AI-powered content features.
Rust performance debugging with TUIs and LLMs
Description: In my session, I will present the https://hotpath.rs crate and explain how it compares with other profiling tools available.
The talk explores how Rust’s type system and memory safety can be leveraged to enforce mandatory guardrails at the infrastructure level, where traditional frameworks often fall short.