The LLM Engineer’s Handbook is a comprehensive guide to building and deploying LLM-powered systems in the real world. It is the culmination of Paul Iusztin’s and my experience writing LLM courses and articles online.
Through an end-to-end example of creating an LLM twin, it covers everything related to data, from web crawling to generating synthetic preference datasets. We show how to train LLMs with supervised fine-tuning and preference alignment, then evaluate them on different benchmarks, including our own. We implement a RAG pipeline with advanced techniques like query expansion and reranking. Finally, we deploy it on AWS SageMaker with inference optimization techniques.
The best part is that everything follows MLOps best practices. The entire codebase is available for free in the LLM Engineer’s Handbook repo on GitHub.
I’m super grateful to Julien Chaumond, CTO of Hugging Face, and Hamza Tahir, CTO of ZenML, for endorsing this book and writing kind forewords for us. :)
Hands-On Graph Neural Networks is the result of a year’s worth of hard work, research, and collaboration with fellow experts in the field.
When I started learning about GNNs, resources were particularly scarce. This book is the guide I wish I had back then: it was carefully crafted as a step-by-step introduction for those new to the world of GNNs, while also offering advanced use cases and examples for experienced practitioners. What’s more, the entire code with examples and use cases is freely available on GitHub, making it easy for you to get started with implementing GNNs in Python.
If you’re interested in graph neural networks, I highly recommend checking it out. I’m confident that you’ll find it to be an invaluable resource as you explore this exciting and rapidly growing field.