This talk explores building a complete self-hosted LLM stack in Rust: Paddler, a distributed load balancer for serving LLMs at scale, and Poet, a static site generator that consumes those LLMs for AI-powered content features.
In this talk, we’ll explore battle-tested best practices for integrating Claude Code into a professional Axum development workflow without compromising on Rust’s core values: correctness, clarity, and maintainability.
In this talk, we’ll re-create the core ideas of Karpathy’s micrograd, but entirely in Rust.
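To give a flavour of the topic (this is a minimal sketch, not the speaker's code): a micrograd-style engine tracks each scalar's value and gradient, and each operation records how to push gradients back to its inputs. The names below and the single-step backward pass are simplifications for illustration.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A tiny micrograd-style scalar: forward value, accumulated gradient,
// and a closure that pushes this node's gradient back to its parents.
#[derive(Clone)]
struct Value(Rc<RefCell<Inner>>);

struct Inner {
    data: f64,
    grad: f64,
    backward: Option<Box<dyn Fn(f64)>>,
}

impl Value {
    fn new(data: f64) -> Self {
        Value(Rc::new(RefCell::new(Inner { data, grad: 0.0, backward: None })))
    }

    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn mul(&self, other: &Value) -> Value {
        let (a, b) = (self.clone(), other.clone());
        let out = Value::new(self.data() * other.data());
        out.0.borrow_mut().backward = Some(Box::new(move |grad| {
            // d(a * b)/da = b, d(a * b)/db = a
            let (da, db) = (b.0.borrow().data, a.0.borrow().data);
            a.0.borrow_mut().grad += grad * da;
            b.0.borrow_mut().grad += grad * db;
        }));
        out
    }

    // Addition and other ops follow the same pattern (unused in the demo below).
    #[allow(dead_code)]
    fn add(&self, other: &Value) -> Value {
        let (a, b) = (self.clone(), other.clone());
        let out = Value::new(self.data() + other.data());
        out.0.borrow_mut().backward = Some(Box::new(move |grad| {
            // d(a + b)/da = 1, d(a + b)/db = 1
            a.0.borrow_mut().grad += grad;
            b.0.borrow_mut().grad += grad;
        }));
        out
    }

    // Seed the output gradient and propagate one step back.
    // (A full engine would topologically sort the whole graph first.)
    fn backward(&self) {
        self.0.borrow_mut().grad = 1.0;
        if let Some(f) = self.0.borrow().backward.as_ref() {
            f(1.0);
        }
    }
}

fn main() {
    let a = Value::new(2.0);
    let b = Value::new(3.0);
    let y = a.mul(&b); // y = a * b = 6
    y.backward();
    println!("y = {}, dy/da = {}, dy/db = {}", y.data(), a.grad(), b.grad());
    // prints: y = 6, dy/da = 3, dy/db = 2
}
```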
In this talk, we'll explore the current state of AI development in Rust, highlighting key crates, frameworks, and tools, and covering the essentials from ML and NLP to integrating LLMs and agent-based automation.
In this session, we will delve into the sometimes murky world of procedural macros, showing some of the great tooling available for understanding the generated code, such as cargo expand, and the key building blocks we will need to write our own.
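As a taste of those building blocks, here is a minimal sketch of a derive macro built with syn (parsing) and quote (code generation). The `Describe` derive and its `describe()` method are invented for illustration and assume a crate with `proc-macro = true` in its Cargo.toml; it also ignores generics for brevity.

```rust
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

// A toy derive: generates a `describe()` method returning the type's name.
#[proc_macro_derive(Describe)]
pub fn derive_describe(input: TokenStream) -> TokenStream {
    // Parse the incoming tokens into a structured syntax tree.
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;
    let name_str = name.to_string();

    // Quote the generated impl back out as a token stream.
    let expanded = quote! {
        impl #name {
            pub fn describe() -> &'static str {
                #name_str
            }
        }
    };
    expanded.into()
}
```

Running `cargo expand` on a crate that derives `Describe` prints the generated `impl`, which is exactly the kind of inspection the tooling part of the talk is about.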
In this introductory talk, we will explore what it means to "Ratatuify" the Rust package manager, Cargo.
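For context, a "Ratatuified" tool is one given a terminal UI with the Ratatui crate. The sketch below is a minimal Ratatui event loop, not the actual Cargo integration; it assumes a recent Ratatui release with the `ratatui::init()` helper, and the progress line it renders is invented for illustration.

```rust
use ratatui::crossterm::event::{self, Event, KeyCode};
use ratatui::widgets::{Block, Borders, Paragraph};

fn main() -> std::io::Result<()> {
    // Set up the terminal (raw mode, alternate screen).
    let mut terminal = ratatui::init();
    loop {
        terminal.draw(|frame| {
            // Render a single bordered panel with a fake build status line.
            let body = Paragraph::new("cargo build  [#######---]  42/100 crates")
                .block(Block::default().title("cargo (TUI sketch)").borders(Borders::ALL));
            frame.render_widget(body, frame.area());
        })?;
        // Quit on 'q'.
        if let Event::Key(key) = event::read()? {
            if key.code == KeyCode::Char('q') {
                break;
            }
        }
    }
    // Restore the terminal to its original state.
    ratatui::restore();
    Ok(())
}
```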