Publish NeMo Model on Hugging Face Hub (.ipynb / Colab)

Leo Migdal


Anyone can now publish their models to the Hugging Face Hub! Notice there is now a “NeMo” option under the “Libraries” filter on the left :) Once you think your .nemo model is useful to the community, we encourage you to publish it on the Hugging Face Hub under a permissive license. We recommend the CC-BY-4.0 license, but the choice of license is yours. If you follow the tutorial below, your model will be available via the .from_pretrained API for all NeMo users and discoverable on the Hugging Face Hub.
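As a sketch of what users gain once a model is published: it can be pulled down with .from_pretrained. The repo id below is just an example (an existing NVIDIA ASR model); substitute your own once it is on the Hub.

```python
# A minimal sketch of loading a published NeMo model via .from_pretrained.
# The repo id is an example; replace it with your own <namespace>/<model> once published.
model_name = "nvidia/stt_en_conformer_ctc_large"

try:
    import nemo.collections.asr as nemo_asr  # requires: pip install "nemo_toolkit[asr]"
    asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name=model_name)
except ImportError:
    print("NeMo is not installed in this environment.")
```

The first call downloads and caches the checkpoint from the Hub; subsequent calls reuse the local copy.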

To upload models to the Hub, you’ll need to create an account at Hugging Face. Models on the Hub are Git-based repositories, which give you versioning, branches, discoverability and sharing features, integration with dozens of libraries, and more! You have control over what you want to upload to your repository, which could include checkpoints, configs, and any other files. You can link repositories with an individual user, such as osanseviero/fashion_brands_patterns, or with an organization, such as facebook/bart-large-xsum. Organizations can collect models related to a company, community, or library! If you choose an organization, the model will be featured on the organization’s page, and every member of the organization will have the ability to contribute to the repository.

You can create a new organization here. NOTE: Models do NOT need to be compatible with the Transformers/Diffusers libraries to get download metrics; any custom model is supported. There are several ways to upload models so that they are nicely integrated into the Hub and get download metrics, described below. Hugging Face also provides a wide variety of pre-trained models for tasks like text generation, summarization, translation, and more.
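Before moving on, the upload flow described above can be sketched with the huggingface_hub client. The repo id and file names below are hypothetical placeholders, and the snippet only attempts the upload when an HF_TOKEN environment variable is set (that variable name is my convention, not required by the library).

```python
import os

# Hypothetical repo id; replace with your own <user-or-org>/<model>.
repo_id = "your-username/my_nemo_model"

if os.environ.get("HF_TOKEN"):  # only attempt the upload when a token is configured
    from huggingface_hub import HfApi

    api = HfApi(token=os.environ["HF_TOKEN"])
    api.create_repo(repo_id=repo_id, exist_ok=True)
    api.upload_file(
        path_or_fileobj="my_model.nemo",  # local checkpoint to publish (placeholder name)
        path_in_repo="my_model.nemo",     # path inside the Hub repository
        repo_id=repo_id,
    )
```

Because Hub repositories are Git-based, each upload becomes a commit, so you get versioning for free.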

Google Colab, on the other hand, offers free cloud-based Jupyter notebooks with GPU support. Together, they make a powerful combination for quickly experimenting with AI models. In this guide, I’ll walk you through, step by step, how to run Hugging Face models on Google Colab, from creating an account to executing your first model. Before running any model, you’ll need a Hugging Face account and an access token. Google Colab allows you to store secrets securely. This is where you’ll put your Hugging Face token.
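A small sketch of reading that token, assuming you stored it under the secret name HF_TOKEN (the name is my convention); outside Colab it falls back to an environment variable.

```python
import os

try:
    from google.colab import userdata  # available only inside Colab
    hf_token = userdata.get("HF_TOKEN")  # reads the secret added via the key icon in the sidebar
except ImportError:
    hf_token = os.environ.get("HF_TOKEN")  # fallback when not running in Colab
```

Storing the token as a Colab secret keeps it out of the notebook itself, so sharing the notebook never leaks your credentials.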

Posted on Nov 13, 2024 • Edited on Nov 18, 2024

This guide will walk you through using Hugging Face models in Google Colab. We’ll cover everything from setting up your Colab environment with a GPU to running your first Hugging Face model. Hugging Face provides pre-trained models that make it easy to build powerful natural language processing (NLP) and machine learning applications without starting from scratch. Now, let’s log in to Hugging Face inside your Colab notebook.
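A minimal sketch of the login step, assuming the token is available as an HF_TOKEN environment variable (the variable name is my convention, not required by the library):

```python
import os

token = os.environ.get("HF_TOKEN")
if token:
    from huggingface_hub import login

    login(token=token)  # non-interactive; calling login() with no arguments prompts for a token
    print("Logged in to the Hugging Face Hub.")
else:
    print("Set HF_TOKEN first.")
```

Logging in lets you access gated models and push your own files from the notebook.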

Running Hugging Face models, especially transformer-based ones, can be resource-intensive, so setting up a GPU in Colab will help speed things up. To access Hugging Face models in Colab, you need to install the Hugging Face transformers library, which includes pre-trained models and pipelines.
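The setup above can be sketched as follows. The sentiment-analysis model name is just an example, and the actual download is gated behind a RUN_HF_DEMO environment variable (my convention) so the cell is safe to run offline:

```python
import os

import torch
from transformers import pipeline

# Use GPU 0 when the Colab runtime has one, otherwise fall back to CPU (-1).
device = 0 if torch.cuda.is_available() else -1
print("Running on", "GPU" if device == 0 else "CPU")

if os.environ.get("RUN_HF_DEMO"):  # guard the model download for offline environments
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",  # example model choice
        device=device,
    )
    print(classifier("Hugging Face models run nicely on Colab!"))
```

In Colab, enable the GPU first via Runtime → Change runtime type, then install the library with `!pip install transformers` at the top of the notebook.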

Jupyter Notebooks on the Hugging Face Hub

Jupyter notebooks are a very popular format for sharing code and data analysis for machine learning and data science. They are interactive documents that can contain code, visualizations, and text. When you visit a model page on the Hugging Face Hub, you’ll see a new “Google Colab”/“Kaggle” button in the “Use this model” dropdown. Clicking it generates a ready-to-run notebook with basic code to load and test the model. This is perfect for quick prototyping, inference testing, or fine-tuning experiments, all without leaving your browser. Users can also access a ready-to-run notebook by appending /colab to the model card’s URL.
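That URL rule is simple enough to capture in a tiny helper (the function name is mine, for illustration):

```python
def colab_url(model_card_url: str) -> str:
    """Turn a Hub model card URL into its ready-to-run Colab notebook URL."""
    return model_card_url.rstrip("/") + "/colab"

print(colab_url("https://huggingface.co/google/gemma-3-4b-it"))
# → https://huggingface.co/google/gemma-3-4b-it/colab
```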

As an example, for the latest Gemma 3 4B IT model, the corresponding Colab notebook can be reached by taking the model card URL, https://huggingface.co/google/gemma-3-4b-it, and appending /colab.

I was working on this tutorial: NeMo/tutorials/llm/function_calling/nemo2-chat-sft-function-calling.ipynb at main · NVIDIA/NeMo · GitHub. After finetuning the model, I managed to evaluate it successfully with the api.generate function. However, when I tried to generate a response with the Hugging Face-formatted model, the model did not generate anything. I can convert my other finetuned models to Hugging Face format and deploy them without an error, but I couldn’t manage to make this one work.

Hi @yusufkurt, I wonder which NeMo Docker image you are using here?

I would suggest using the latest release container for the finetuning: nvcr.io/nvidia/nemo:25.07

Thanks @aot. I managed to make it work by switching from PEFT to SFT in nemo:25.07.00.
