lighteval 0.10.0 on PyPI - Libraries.io security & maintenance data
A lightweight and configurable evaluation package: your go-to toolkit for lightning-fast, flexible LLM evaluation, from Hugging Face's Leaderboard and Evals Team.

Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends, whether your model is being served somewhere or already loaded in memory. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results to debug and see how your models stack up. Customization is at your fingertips: browse the existing tasks and metrics, or effortlessly create your own custom task and custom metric tailored to your needs. Lighteval supports 1000+ evaluation tasks across multiple domains and languages.
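To make the workflow above concrete, here is a hypothetical command-line invocation. The `accelerate` backend name, the `model_name=` argument format, and the `suite|task|few_shot|truncation` task-spec format are assumptions based on recent lighteval releases and may differ in yours; check `lighteval --help` for the exact syntax.

```bash
# Hypothetical sketch: evaluate a model loaded in memory via the
# accelerate backend on one benchmark task with zero-shot prompting.
# Exact CLI syntax varies across lighteval releases.
lighteval accelerate \
    "model_name=openai-community/gpt2" \
    "leaderboard|truthfulqa:mc|0|0"
```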
Note: lighteval is currently completely untested on Windows, and we don't support it yet. (It should be fully functional on Mac/Linux.)
Installation

Lighteval can be installed from PyPI or from source. This guide covers all installation options and dependencies.

The simplest way to install Lighteval is from PyPI. This installs the core package with all essential dependencies for basic evaluation tasks.
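For example (`pip install lighteval` is the command shown on the package page; the backend extras below are assumptions based on the backends Lighteval advertises and may differ by release, so check the release's pyproject.toml for the exact extras available):

```bash
# Core package with essential dependencies for basic evaluation tasks
pip install lighteval

# Optional backend extras; names are assumed, not quoted from the docs
pip install "lighteval[accelerate]"
pip install "lighteval[vllm]"
```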
Source installation is recommended for developers who want to contribute to Lighteval or need the latest features.
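A minimal sketch of the usual from-source workflow; the repository URL matches the project's GitHub page, while the editable-install step is a common pip convention assumed here rather than quoted from lighteval's own contributing guide:

```bash
# Clone the repository from the project's GitHub page
git clone https://github.com/huggingface/lighteval.git
cd lighteval

# Editable install so local changes take effect without reinstalling
pip install -e .
```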
Known Packaging Issue

As reported on the project's GitHub tracker: the pyproject.toml published to PyPI for 0.7.0 pinned torch in a way that caused a number of issues with vllm dependencies, especially since vllm (at the time) required torch 2.5.1, and users asked for the PyPI package to be updated. The published constraint was subsequently relaxed to torch>=2.0,<3.0 (huggingface/lighteval#526).
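If you hit this conflict on an affected release, a minimal sketch of a workaround using only standard pip commands (an assumption on my part, not an official lighteval recommendation):

```bash
# Upgrade to a lighteval release whose metadata allows torch>=2.0,<3.0
pip install --upgrade lighteval

# Or explicitly request a torch in the relaxed range so pip surfaces
# any remaining conflicts with vllm instead of silently downgrading
pip install "torch>=2.0,<3.0"
```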