Welcome to the Smol Course

Leo Migdal

Welcome to the comprehensive (and smollest) course on fine-tuning language models! This free course will take you on a journey, from beginner to expert, in understanding, implementing, and optimizing fine-tuning techniques for large language models. This course is smol but fast! It’s for software developers and engineers looking to fast-track their LLM fine-tuning skills. If that’s not you, check out the LLM Course.

At the end of this course, you’ll understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques. This is a practical course on aligning language models for your specific use case, and it's a handy way to get started because everything runs on most local machines: there are minimal GPU requirements and no paid services. The course is built around the SmolLM3 and SmolVLM2 models, but the skills you'll learn transfer to larger models and other small LLMs/VLMs as well. The course is open and peer reviewed.

To get involved with the course, open a pull request and submit your work for review; this should help you learn while building a community-driven course that is always improving. The course will soon be re-released on Hugging Face Learn! Stay tuned for updates. It provides a practical, hands-on approach to working with small language models, from initial training through to production deployment.

This document provides an overview of the SmolLM course, a practical guide to aligning language models for specific use cases. The course focuses on working with small language models that can run on local machines with minimal GPU requirements. While centered around the SmolLM2 series of models, the skills learned throughout the course are transferable to larger models and other small language models. For specific installation instructions and environment setup details, see Installation and Setup. The SmolLM course follows a progressive learning path, starting with basic model fine-tuning techniques and advancing to more complex topics like preference alignment, efficient training methods, and deployment strategies. The course consists of eight core modules that build upon each other in a logical progression.

The course focuses on small language models because they offer several practical advantages over larger models.

We can discuss the contribution process in this discussion thread.

Welcome to the practical section!

Here you’ll apply everything you’ve learned about chat templates and supervised fine-tuning using SmolLM3. These exercises progress from basic concepts to advanced techniques, giving you real-world experience with instruction tuning. Objective: understand how SmolLM3 handles different conversation formats and reasoning modes. SmolLM3 is a hybrid reasoning model that can either follow instructions directly or generate tokens that ‘reason’ through a complex problem. When post-trained effectively, the model reasons on hard problems and generates direct responses on easy problems.
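To make the idea of a conversation format concrete, here is a minimal sketch of a ChatML-style chat template of the kind a tokenizer's `apply_chat_template` method renders for chat models. The specific special tokens (`<|im_start|>`, `<|im_end|>`) and the `/think` reasoning flag are illustrative assumptions, not SmolLM3's verbatim template; in practice you would call `tokenizer.apply_chat_template(...)` from the `transformers` library rather than formatting prompts by hand.

```python
# Minimal sketch of a ChatML-style chat template, similar in spirit to what
# tokenizer.apply_chat_template() produces for chat models like SmolLM3.
# NOTE: the exact special tokens and the "/think" reasoning toggle below are
# assumptions for illustration, not the model's actual template.

def apply_chat_template(messages, enable_thinking=False):
    """Render a list of {"role", "content"} dicts into one prompt string."""
    parts = []
    # Hypothetical reasoning toggle: a system flag switches the model into
    # its extended "reasoning" mode instead of direct answering.
    if enable_thinking:
        parts.append("<|im_start|>system\n/think<|im_end|>")
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [{"role": "user", "content": "What is 2 + 2?"}]
print(apply_chat_template(messages, enable_thinking=True))
```

The key point the exercises build on: the same list of messages can be rendered with or without the reasoning flag, so one model can serve both direct-response and chain-of-thought behavior depending on how the prompt is templated.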
