| URL | Headline | Authors | Publication Date | Article Text |
|---|---|---|---|---|
https://huggingface.co/blog/leaderboard-hebrew | Introducing the Open Leaderboard for Hebrew LLMs! | Shaltiel Shmidman, Tal Geva, Omer Koren, Clémentine Fourrier | May 5, 2024 | This project addresses the critical need for advancement in Hebrew NLP. As Hebrew is considered a low-resource language, existing LLM leaderboards often lack benchmarks that accurately reflect its unique characteristics. Today, we are excited to introduce a pioneering effort to change this narrative — our new open LLM ... |
https://huggingface.co/blog/leaderboard-artificial-analysis | Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face | Micah Hill-Smith, George Cameron, Clémentine Fourrier | May 3, 2024 | Building applications with LLMs requires considering more than just quality: for many use-cases, speed and price are equally or more important. For consumer applications and chat experiences, speed and responsiveness are critical to user engagement. Users expect near-instant responses, and delays can directly lead to r... |
https://huggingface.co/blog/asr-diarization | Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints | Sergei Petrov, Vaibhav Srivastav, Pedro Cuenca, Philipp Schmid | May 1, 2024 | Whisper is one of the best open source speech recognition models and definitely the one most widely used. Hugging Face Inference Endpoints make it very easy to deploy any Whisper model out of the box. However, if you’d like to introduce additional features, like a diarization pipeline to identify speakers, or assisted g... |
https://huggingface.co/blog/evaluation-structured-outputs | Improving Prompt Consistency with Structured Generations | Will Kurt, Remi Louf, Clémentine Fourrier | April 30, 2024 | Recently, the Leaderboards and Evals research team at Hugging Face did small experiments, which highlighted how fickle evaluation can be. For a given task, results are extremely sensitive to minuscule changes in prompt format! However, this is not what we want: a model prompted with the same amount of information as in... |
https://huggingface.co/blog/sc2-instruct | StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation | Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang | April 29, 2024 | Instruction tuning is an approach of fine-tuning that gives large language models (LLMs) the capability to follow natural and human-written instructions. However, for programming tasks, most models are tuned on either human-written instructions (which are very expensive) or instructions generated by huge and proprietar... |
https://huggingface.co/blog/leaderboard-cot | Introducing the Open Chain of Thought Leaderboard | Gregor Betz, Sebastian Cacean, Clémentine Fourrier, Kyle Richardson | April 23, 2024 | Chain-of-thought prompting is emerging as a powerful and effective design pattern for LLM-based apps and agents. The basic idea of chain-of-thought prompting is to let a model generate a step-by-step solution (“reasoning trace”) before answering a question or taking a decision. With the Open CoT Leaderboard we’re track... |
https://huggingface.co/blog/jat | Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent | Quentin Gallouédec, Edward Beeching, Clément ROMAC, Thomas Wolf | April 22, 2024 | Introduction: We're excited to share Jack of All Trades (JAT), a project that aims to move in the direction of a generalist agent. The project started as an open reproduction of the Gato (Reed et al., 2022) work, which proposed to train a Transformer able to perform both vision-and-language and decision-making tasks. We ... |
https://huggingface.co/blog/llama3 | Welcome Llama 3 - Meta’s new open LLM | Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Younes Belkada, Leandro von Werra | April 18, 2024 | Introduction: Meta’s Llama 3, the next iteration of the open-access Llama family, is now released and available at Hugging Face. It's great to see Meta continuing its commitment to open AI, and we’re excited to fully support the launch with comprehensive integration in the Hugging Face ecosystem. Llama 3 comes in two size... |
https://huggingface.co/blog/leaderboard-medicalllm | The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare | Aaditya Ura (looking for PhD), Pasquale Minervini, Clémentine Fourrier | April 19, 2024 | Over the years, Large Language Models (LLMs) have emerged as a groundbreaking technology with immense potential to revolutionize various aspects of healthcare. These models, such as GPT-3, GPT-4 and Med-PaLM 2 have demonstrated remarkable capabilities in understanding and generating human-like text, making them valuabl... |
https://huggingface.co/blog/gradio-reload | AI Apps in a Flash with Gradio's Reload Mode | Freddy Boulton | April 16, 2024 | In this post, I will show you how you can build a functional AI application quickly with Gradio's reload mode. But before we get to that, I want to explain what reload mode does and why Gradio implements its own auto-reloading logic. If you are already familiar with Gradio and want to get to building, please skip to th... |
https://huggingface.co/blog/leaderboard-livecodebench | Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs | Naman Jain, Alex Gu, Tianjun Zhang, Wen-Ding Li, King Han, Fanjia Yan, Clémentine Fourrier | April 16, 2024 | We are excited to introduce the LiveCodeBench leaderboard, based on LiveCodeBench, a new benchmark developed by researchers from UC Berkeley, MIT, and Cornell for measuring LLMs’ code generation capabilities. LiveCodeBench collects coding problems over time from various coding contest platforms, annotating problems wit... |
https://huggingface.co/blog/fhe-endpoints | Running Privacy-Preserving Inferences on Hugging Face Endpoints | Benoit Chevallier-Mames | April 16, 2024 | This is a guest blog post by the Zama team. Zama is an open source cryptography company building state-of-the-art FHE solutions for blockchain and AI. Eighteen months ago, Zama started Concrete ML, a privacy-preserving ML framework with bindings to traditional ML frameworks such as scikit-learn, ONNX, PyTorch, and Tenso... |
https://huggingface.co/blog/ryght-case-study | Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face | Andrew Reed, Johnny Crupi | April 16, 2024 | This is a guest blog post by the Ryght team. Who is Ryght? Ryght is building an enterprise-grade generative AI platform tailored for the healthcare and life sciences sectors. Today marks the official launch of Ryght Preview, now publicly available to all. Life science companies are amassing a wealth of data from divers... |
https://huggingface.co/blog/idefics2 | Introducing Idefics2: A Powerful 8B Vision-Language Model for the community | Leo Tronchon, Hugo Laurençon, Victor Sanh | April 15, 2024 | We are excited to release Idefics2, a general multimodal model that takes as input arbitrary sequences of texts and images, and generates text responses. It can answer questions about images, describe visual content, create stories grounded in multiple images, extract information from documents, and perform basic arith... |
https://huggingface.co/blog/vlms | Vision Language Models Explained | Merve Noyan, Edward Beeching | April 11, 2024 | Vision language models are models that can learn simultaneously from images and texts to tackle many tasks, from visual question answering to image captioning. In this post, we go through the main building blocks of vision language models: have an overview, grasp how they work, figure out how to find the right model, h... |
https://huggingface.co/blog/google-cloud-model-garden | Making thousands of open LLMs bloom in the Vertex AI Model Garden | Philipp Schmid, Jeff Boudier | April 10, 2024 | Today, we are thrilled to announce the launch of Deploy on Google Cloud, a new integration on the Hugging Face Hub to deploy thousands of foundation models easily to Google Cloud using Vertex AI or Google Kubernetes Engine (GKE). Deploy on Google Cloud makes it easy to deploy open models as API Endpoints within your ow... |
https://huggingface.co/blog/codegemma | CodeGemma - an official Google release for code LLMs | Pedro Cuenca, Omar Sanseviero, Vaibhav Srivastav, Philipp Schmid, Mishig Davaadorj, Loubna Ben Allal | April 9, 2024 | CodeGemma is a family of open-access versions of Gemma specialized in code, and we’re excited to collaborate with Google on its release to make it as accessible as possible. 🤗 CodeGemma comes in three flavors: a 2B base model specialized in infilling and open-ended generation; a 7B base model trained with both code infill... |
https://huggingface.co/blog/hugging-face-wiz-security-blog | Hugging Face partners with Wiz Research to Improve AI Security | Josef Fukano, Guillaume Salou, Michelle Habonneau, Adrien, Luc Georges, Nicolas Patry, Julien Chaumond | April 4, 2024 | We are pleased to announce that we are partnering with Wiz with the goal of improving security across our platform and the AI/ML ecosystem at large. Wiz researchers collaborated with Hugging Face on the security of our platform and shared their findings. Wiz is a cloud security company that helps their customers build a... |
https://huggingface.co/blog/duckdb-nsql-7b | Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B | Andrea Soria, Till Döhmen, Sen Wu, Laurel Orr | April 4, 2024 | Today, integrating AI-powered features, particularly leveraging Large Language Models (LLMs), has become increasingly prevalent across various tasks such as text generation, classification, image-to-text, image-to-image transformations, etc. Developers are increasingly recognizing these applications' potential benefits,... |
https://huggingface.co/blog/setfit-optimum-intel | Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon | Daniel Korat, Tom Aarsen, Oren Pereg, Moshe Wasserblat, Ella Charlaix, Abirami Prabhakaran | April 3, 2024 | SetFit is a promising solution for a common modeling problem: how to deal with lack of labeled data for training. Developed with Hugging Face’s research partners at Intel Labs and the UKP Lab, SetFit is an efficient framework for few-shot fine-tuning of Sentence Transformers models. SetFit achieves high accuracy with l... |
https://huggingface.co/blog/policy-blog | Public Policy at Hugging Face | Irene Solaiman, Yacine Jernite, Margaret Mitchell | April 8, 2024 | AI Policy at Hugging Face is a multidisciplinary and cross-organizational workstream. Instead of being part of a vertical communications or global affairs organization, our policy work is rooted in the expertise of our many researchers and developers, from Ethics and Society Regulars and the legal team to machine learn... |
https://huggingface.co/blog/cloudflare-workers-ai | Bringing serverless GPU inference to Hugging Face users | Philipp Schmid, Jeff Boudier, Rita Kozlov, Nikhil Kothari | April 2, 2024 | Today, we are thrilled to announce the launch of Deploy on Cloudflare Workers AI, a new integration on the Hugging Face Hub. Deploy on Cloudflare Workers AI makes using open models as a serverless API easy, powered by state-of-the-art GPUs deployed in Cloudflare edge data centers. Starting today, we are integrating som... |
https://huggingface.co/blog/pollen-vision | Pollen-Vision: Unified interface for Zero-Shot vision models in robotics | Antoine Pirrone, Simon Le Goff, Rouanet, Simon Revelly | March 25, 2024 | This is a guest blog post by the Pollen Robotics team. We are the creators of Reachy, an open-source humanoid robot designed for manipulation in the real world. In the context of autonomous behaviors, the essence of a robot's usability lies in its ability to understand and interact with its environment. This understandi... |
https://huggingface.co/blog/noob_intro_transformers | Total noob’s intro to Hugging Face Transformers | Andrew Jardine | March 22, 2024 | Welcome to "A Total Noob’s Introduction to Hugging Face Transformers," a guide designed specifically for those looking to understand the bare basics of using open-source ML. Our goal is to demystify what Hugging Face Transformers is and how it works, not to turn you into a machine learning practitioner, but to enable b... |
https://huggingface.co/blog/embedding-quantization | Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval | Aamir Shakir, Tom Aarsen, SeanLee | March 22, 2024 | We introduce the concept of embedding quantization and showcase its impact on retrieval speed, memory usage, disk space, and cost. We'll discuss how embeddings can be quantized in theory and in practice, after which we introduce a demo showing a real-life retrieval scenario of 41 million Wikipedia texts. Table of Cont... |
https://huggingface.co/blog/arena-lighthouz | Introducing the Chatbot Guardrails Arena | Sonali Pattnaik, Rohan Karan, Srijan Kumar, Clémentine Fourrier | March 21, 2024 | With the recent advancements in augmented LLM capabilities, deployment of enterprise AI assistants (such as chatbots and agents) with access to internal databases is likely to increase; this trend could help with many tasks, from internal document summarization to personalized customer and employee support. However, da... |
https://huggingface.co/blog/phi2-intel-meteor-lake | A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake | Julien Simon, Ella Charlaix, Ofir Zafrir, Igor Margulis, Guy Boudoukh, Moshe Wasserblat | March 20, 2024 | Because of their impressive abilities, large language models (LLMs) require significant computing power, which is seldom available on personal computers. Consequently, we have no choice but to deploy them on powerful bespoke AI servers hosted on-premises or in the cloud. Why local LLM inference is desirable ... |
https://huggingface.co/blog/cosmopedia | Cosmopedia: how to create large-scale synthetic data for pre-training | Loubna Ben Allal, Anton Lozhkov, Daniel van Strien | March 20, 2024 | In this blog post, we outline the challenges and solutions involved in generating a synthetic dataset with billions of tokens to replicate Phi-1.5, leading to the creation of Cosmopedia. Synthetic data has become a central topic in Machine Learning. It refers to artificially generated data, for instance by large langua... |
https://huggingface.co/blog/galore | GaLore: Advancing Large Model Training on Consumer-grade Hardware | Titus von Koeller, Jiawei Zhao, Matthew Douglas, Yaowei Zheng, Younes Belkada, Zachary Mueller, Amy Roberts, Sourab Mangrulkar, Benjamin Bossan | March 20, 2024 | The integration of GaLore into the training of large language models (LLMs) marks a significant advancement in the field of deep learning, particularly in terms of memory efficiency and the democratization of AI research. By allowing for the training of billion-parameter models on consumer-grade hardware, reducing memo... |
https://huggingface.co/blog/train-dgx-cloud | Easily Train Models with H100 GPUs on NVIDIA DGX Cloud | Philipp Schmid, Jeff Boudier, Rafael Pierre, Abhishek Thakur | March 18, 2024 | Today, we are thrilled to announce the launch of Train on DGX Cloud, a new service on the Hugging Face Hub, available to Enterprise Hub organizations. Train on DGX Cloud makes it easy to use open models with the accelerated compute infrastructure of NVIDIA DGX Cloud. Together, we built Train on DGX Cloud so that Enterp... |
https://huggingface.co/blog/quanto-introduction | Quanto: a pytorch quantization toolkit | David Corvoysier, Younes Belkada, Marc Sun | March 18, 2024 | Quantization is a technique to reduce the computational and memory costs of evaluating Deep Learning Models by representing their weights and activations with low-precision data types like 8-bit integer (int8) instead of the usual 32-bit floating point (float32). Reducing the number of bits means the resulting model req... |
https://huggingface.co/blog/intel-fast-embedding | CPU Optimized Embeddings with 🤗 Optimum Intel and fastRAG | Peter Izsak, Moshe Berchansky, Daniel Fleischer, Ella Charlaix, Morgan Funtowicz, Moshe Wasserblat | March 15, 2024 | Embedding models are useful for many applications such as retrieval, reranking, clustering, and classification. The research community has witnessed significant advancements in recent years in embedding models, leading to substantial enhancements in all applications building on semantic representation. Models such as B... |
https://huggingface.co/blog/websight | From screenshots to HTML code: Introducing the WebSight dataset | Hugo Laurençon, Leo Tronchon, Victor Sanh | March 15, 2024 | In the world of web development, turning designs into functional websites usually involves a lot of coding and careful testing. What if we could simplify this process, making it possible to convert web designs into working websites more easily and quickly? WebSight is a new dataset that aims to build AI systems capa... |
https://huggingface.co/blog/leaderboard-contextual | Introducing ConTextual: How well can your Multimodal model jointly reason over text and image in text-rich scenes? | Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, NANYUN (Violet) PENG, Clémentine Fourrier | March 5, 2024 | Models are becoming quite good at understanding text on its own, but what about text in images, which gives important contextual information? For example, navigating a map, or understanding a meme? The ability to reason about the interactions between the text and visual context in images can power many real-world appli... |
https://huggingface.co/blog/community-datasets | Data is better together: Enabling communities to collectively build better datasets together using Argilla and Hugging Face Spaces | Daniel van Strien, Daniel Vila | March 4, 2024 | Recently, Argilla and Hugging Face launched Data is Better Together, an experiment to collectively build a preference dataset of prompt rankings. In a few days, we had 350 community contributors labeling data and over 11,000 prompt ratings. See the progress dashboard for the latest stats! This resulted in the release of 10k_p... |
https://huggingface.co/blog/textgen-pipe-gaudi | Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator | Siddhant Jagtap | February 29, 2024 | With the Generative AI (GenAI) revolution in full swing, text-generation with open-source transformer models like Llama 2 has become the talk of the town. AI enthusiasts as well as developers are looking to leverage the generative abilities of such models for their own use cases and applications. This article shows how... |
https://huggingface.co/blog/starcoder2 | StarCoder2 and The Stack v2 | Leandro von Werra, Loubna Ben Allal, Anton Lozhkov, Nouamane Tazi | February 28, 2024 | BigCode is releasing StarCoder2, the next generation of transparently trained open code LLMs. All StarCoder2 variants were trained on The Stack v2, a new large and high-quality code dataset. We release all models and datasets, as well as the processing and training code. Check out the paper for details. What is StarC... |
https://huggingface.co/blog/arena-tts | TTS Arena: Benchmarking Text-to-Speech Models in the Wild | mrfakename, Vaibhav Srivastav, Clémentine Fourrier, Lucain Pouget, Yoach Lacombe, Main Horse, Sanchit Gandhi | February 27, 2024 | Automated measurement of the quality of text-to-speech (TTS) models is very difficult. Assessing the naturalness and inflection of a voice is a trivial task for humans, but it is much more difficult for AI. This is why today, we’re thrilled to announce the TTS Arena. Inspired by LMSys's Chatbot Arena for LLMs, we devel... |
https://huggingface.co/blog/watermarking | AI Watermarking 101: Tools and Techniques | Sasha Luccioni, Yacine Jernite, Derek Thomas, Emily Witko, Ezi Ozoani, Josef Fukano, Vaibhav Srivastav, Brigitte Tousignant, Margaret Mitchell | February 26, 2024 | In recent months, we've seen multiple news stories involving ‘deepfakes’, or AI-generated content: from images of Taylor Swift to videos of Tom Hanks and recordings of US President Joe Biden. Whether they are selling products, manipulating images of people without their consent, supporting phishing for private informat... |
https://huggingface.co/blog/gemma-peft | Fine-Tuning Gemma Models in Hugging Face | Vaibhav Singh, Jiewen Tan, Younes Belkada, Arthur Zucker | February 23, 2024 | We recently announced that Gemma, the open weights language model from Google DeepMind, is available for the broader open-source community via Hugging Face. It’s available in 2 billion and 7 billion parameter sizes with pretrained and instruction-tuned flavors. It’s available on Hugging Face, supported in TGI, and easi... |
https://huggingface.co/blog/leaderboard-haizelab | Introducing the Red-Teaming Resistance Leaderboard | Steve Li, Richard, Leonard Tang, Clémentine Fourrier | February 23, 2024 | Content warning: since this blog post is about a red-teaming leaderboard (testing elicitation of harmful behavior in LLMs), some users might find the content of the related datasets or examples unsettling.LLM research is moving fast. Indeed, some might say too fast.While researchers in the field continue to rapidly exp... |
https://huggingface.co/blog/matryoshka | 🪆 Introduction to Matryoshka Embedding Models | Tom Aarsen, Joshua, Omar Sanseviero | February 23, 2024 | In this blogpost, we will introduce you to the concept of Matryoshka Embeddings and explain why they are useful. We will discuss how these models are theoretically trained and how you can train them using Sentence Transformers.Additionally, we will provide practical guidance on how to use Matryoshka Embedding models an... |
https://huggingface.co/blog/fetch-eap-case-study | Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS | Violette Lepercq | February 23, 2024 | If you need support in using Hugging Face and AWS, please get in touch with us here - our team will contact you to discuss your requirements! Executive Summary Fetch, a consumer rewards company, developed about 15 different AI tools to help it receive, route, read, process, analyze, and store receipts uploaded by user... |
https://huggingface.co/blog/gemma | Welcome Gemma - Google’s new open LLM | Philipp Schmid, Omar Sanseviero, Pedro Cuenca | February 21, 2024 | Gemma, a new family of state-of-the-art open LLMs, was released today by Google! It's great to see Google reinforcing its commitment to open-source AI, and we’re excited to fully support the launch with comprehensive integration in Hugging Face.Gemma comes in two sizes: 7B parameters, for efficient deployment and devel... |
https://huggingface.co/blog/leaderboard-upstage | Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem | Park, Sung Kim, Clémentine Fourrier | February 20, 2024 | In the fast-evolving landscape of Large Language Models (LLMs), building an “ecosystem” has never been more important. This trend is evident in several major developments like Hugging Face's democratizing NLP and Upstage building a Generative AI ecosystem.Inspired by these industry milestones, in September of 2023, at ... |
https://huggingface.co/blog/peft_merging | 🤗 PEFT welcomes new merging methods | Sourab Mangrulkar, Sayak Paul | February 19, 2024 | Model merging has quickly become the de facto standard for pushing the performance limits of large language models. On the Open LLM Leaderboard, we continue to notice merged models topping the charts. Our very own Omar Sanseviero made a little sprint on model merging and discovered interesting findings. The typical ... |
https://huggingface.co/blog/synthetic-data-save-costs | Synthetic data: save money, time and carbon with open source | Moritz Laurer | February 16, 2024 | tl;dr Should you fine-tune your own model or use an LLM API? Creating your own model puts you in full control but requires expertise in data collection, training, and deployment. LLM APIs are much easier to use but force you to send your data to a third party and create costly dependencies on LLM providers. This blog p... |
https://huggingface.co/blog/amd_pervasive_developer_ai_contest | AMD Pervasive AI Developer Contest | Guruprasad MP | February 14, 2024 | AMD and Hugging Face are actively engaged in helping developers seamlessly deploy cutting-edge AI models on AMD hardware. This year, AMD takes its commitment one step further by providing developers free, hands-on access to state-of-the-art AMD hardware through their recently announced Pervasive AI Developer Contest.... |
https://huggingface.co/blog/tgi-messages-api | From OpenAI to Open LLMs with Messages API on Hugging Face | Andrew Reed, Philipp Schmid, Joffrey THOMAS, David Holtz | February 8, 2024 | We are excited to introduce the Messages API to provide OpenAI compatibility with Text Generation Inference (TGI) and Inference Endpoints.Starting with version 1.4.0, TGI offers an API compatible with the OpenAI Chat Completion API. The new Messages API allows customers and users to transition seamlessly from OpenAI mo... |
https://huggingface.co/blog/segmoe | SegMoE: Segmind Mixture of Diffusion Experts | Yatharth Gupta, Vishnu V Jaddipal, Harish Prabhala | February 3, 2024 | SegMoE is an exciting framework for creating Mixture-of-Experts Diffusion models from scratch! SegMoE is comprehensively integrated within the Hugging Face ecosystem and comes supported with diffusers 🔥!Among the features and integrations being released today:Models on the Hub, with their model cards and licenses (Apa... |
https://huggingface.co/blog/leaderboard-nphardeval | NPHardEval Leaderboard: Unveiling the Reasoning Abilities of Large Language Models through Complexity Classes and Dynamic Updates | Lizhou Fan, Wenyue Hua, Haoyang Ling, Clémentine Fourrier | February 2, 2024 | We're happy to introduce the NPHardEval leaderboard, using NPHardEval, a cutting-edge benchmark developed by researchers from the University of Michigan and Rutgers University. NPHardEval introduces a dynamic, complexity-based framework for assessing Large Language Models' (LLMs) reasoning abilities. It poses 900 algor... |
https://huggingface.co/blog/constitutional_ai | Constitutional AI with Open LLMs | Shengyi Costa Huang, Lewis Tunstall, Edward Beeching, Leandro von Werra, Omar Sanseviero, Kashif Rasul, Thomas Wolf | February 1, 2024 | Since the launch of ChatGPT in 2022, we have seen tremendous progress in LLMs, ranging from the release of powerful pretrained models like Llama 2 and Mixtral, to the development of new alignment techniques like Direct Preference Optimization. However, deploying LLMs in consumer applications poses several challenges, i... |
https://huggingface.co/blog/text-generation-inference-on-inferentia2 | Hugging Face Text Generation Inference available for AWS Inferentia2 | Philipp Schmid, David Corvoysier | February 1, 2024 | We are excited to announce the general availability of Hugging Face Text Generation Inference (TGI) on AWS Inferentia2 and Amazon SageMaker. Text Generation Inference (TGI) is a purpose-built solution for deploying and serving Large Language Models (LLMs) for production workloads at scale. TGI enables high-performance... |
https://huggingface.co/blog/patchtst | Patch Time Series Transformer in Hugging Face - Getting Started | Nam Nguyen, Wesley M. Gifford, Arindam Jati, Vijay Ekambaram, Kashif Rasul | February 1, 2024 | In this blog, we provide examples of how to get started with PatchTST. We first demonstrate the forecasting capability of PatchTST on the Electricity data. We will then demonstrate the transfer learning capability of PatchTST by using the previously trained model to do zero-shot forecasting on the electrical transforme... |
https://huggingface.co/blog/leaderboard-patronus | Introducing the Enterprise Scenarios Leaderboard: a Leaderboard for Real World Use Cases | Selvan Sunitha Ravi, Rebecca Qian, Anand Kannappan, Clémentine Fourrier | January 31, 2024 | Today, the Patronus team is excited to announce the new Enterprise Scenarios Leaderboard, built using the Hugging Face Leaderboard Template in collaboration with their teams. The leaderboard aims to evaluate the performance of language models on real-world enterprise use cases. We currently support 6 diverse tasks - Fi... |
https://huggingface.co/blog/intel-starcoder-quantization | Accelerate StarCoder with 🤗 Optimum Intel on Xeon: Q8/Q4 and Speculative Decoding | Ofir Zafrir, Ella Charlaix, Igor Margulis, Jonathan Mamou, Guy Boudoukh, Oren Pereg, Moshe Wasserblat, Haihao Shen, Ahmad Yasin, FanZhao | January 30, 2024 | Introduction: Recently, code generation models have become very popular, especially with the release of state-of-the-art open-source models such as BigCode’s StarCoder and Meta AI’s Code Llama. A growing number of works focus on making Large Language Models (LLMs) more optimized and accessible. In this blog, we are hap... |
https://huggingface.co/blog/leaderboard-hallucinations | The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models | Pasquale Minervini, Ping Nie, Clémentine Fourrier, Rohit Saxena, Aryo Pradipta Gema, Xuanli He | January 29, 2024 | In the rapidly evolving field of Natural Language Processing (NLP), Large Language Models (LLMs) have become central to AI's ability to understand and generate human language. However, a significant challenge that persists is their tendency to hallucinate — i.e., producing content that may not align with real-world fac... |
https://huggingface.co/blog/leaderboard-decodingtrust | An Introduction to AI Secure LLM Safety Leaderboard | Chenhui Zhang, Chulin Xie, Mintong Kang, Chejian Xu, Bo Li | January 26, 2024 | Given the widespread adoption of LLMs, it is critical to understand their safety and risks in different scenarios before extensive deployments in the real world. In particular, the US White House has published an executive order on safe, secure, and trustworthy AI; the EU AI Act has emphasized the mandatory requirements... |
https://huggingface.co/blog/gcp-partnership | Hugging Face and Google partner for open AI collaboration | Jeff Boudier, Philipp Schmid | January 25, 2024 | At Hugging Face, we want to enable all companies to build their own AI, leveraging open models and open source technologies. Our goal is to build an open platform, making it easy for data scientists, machine learning engineers and developers to access the latest models from the community, and use them within the platfo... |
https://huggingface.co/blog/open-source-llms-as-agents | Open-source LLMs as LangChain Agents | Aymeric Roucher, Joffrey THOMAS, Andrew Reed | January 24, 2024 | TL;DR: Open-source LLMs have now reached a performance level that makes them suitable reasoning engines for powering agent workflows: Mixtral even surpasses GPT-3.5 on our benchmark, and its performance could easily be further enhanced with fine-tuning. Introduction: Large Language Models (LLMs) trained for causal language ... |
https://huggingface.co/blog/fine-tune-w2v2-bert | Fine-Tune W2V2-Bert for low-resource ASR with 🤗 Transformers | Yoach Lacombe | January 19, 2024 | New (01/2024): This blog post is strongly inspired by "Fine-tuning XLS-R on Multi-Lingual ASR" and "Fine-tuning MMS Adapter Models for Multi-Lingual ASR". Introduction: Last month, Meta AI released Wav2Vec2-BERT, as a building block of their Seamless Communication, a family of AI translation models. Wav2Vec2-BERT is the res... |
https://huggingface.co/blog/patchtsmixer | PatchTSMixer in HuggingFace - Getting Started | Arindam Jati, Vijay Ekambaram, Nam Nguyen, Wesley M. Gifford, Kashif Rasul, Niels Rogge | January 19, 2024 | PatchTSMixer is a lightweight time-series modeling approach based on the MLP-Mixer architecture. It is proposed in TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting by IBM Research authors Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.For effective minds... |
https://huggingface.co/blog/pref-tuning | Preference Tuning LLMs with Direct Preference Optimization Methods | Kashif Rasul, Edward Beeching, Lewis Tunstall, Leandro von Werra, Omar Sanseviero | January 18, 2024 | Addendum: After consulting with the authors of the IPO paper, we discovered that the implementation of IPO in TRL was incorrect; in particular, the loss over the log-likelihoods of the completions needs to be averaged instead of summed. We have added a fix in this PR and re-run the experiments. The results are now consis... |
https://huggingface.co/blog/sdxl_ort_inference | Accelerating SD Turbo and SDXL Turbo Inference with ONNX Runtime and Olive | Sophie Schoenmeyer, Tianlei Wu, Morgan Funtowicz | January 15, 2024 | Introduction: SD Turbo and SDXL Turbo are two fast generative text-to-image models capable of generating viable images in as little as one step, a significant improvement over the 30+ steps often required with previous Stable Diffusion models. SD Turbo is a distilled version of Stable Diffusion 2.1, and SDXL Turbo is a d... |
https://huggingface.co/blog/leaderboard-vectara | A guide to setting up your own Hugging Face leaderboard: an end-to-end example with Vectara's hallucination leaderboard | Ofer Mendelevitch, Bae, Clémentine Fourrier | January 12, 2024 | Hugging Face’s Open LLM Leaderboard (originally created by Ed Beeching and Lewis Tunstall, and maintained by Nathan Habib and Clémentine Fourrier) is well known for tracking the performance of open source LLMs, comparing their performance in a variety of tasks, such as TruthfulQA or HellaSwag.This has been of tremendou... |
https://huggingface.co/blog/unsloth-trl | Make LLM Fine-tuning 2x faster with Unsloth and 🤗 TRL | Daniel Han-Chen | January 10, 2024 | Pulling your hair out because LLM fine-tuning is taking forever? In this post, we introduce a lightweight tool developed by the community to make LLM fine-tuning go super fast!Before diving into Unsloth, it may be helpful to read our QLoRA blog post, or be familiar with LLM fine-tuning using the 🤗 PEFT library.Unsloth... |
https://huggingface.co/blog/amused | Welcome aMUSEd: Efficient Text-to-Image Generation | Isamu Isozaki, Suraj Patil, Will Berman, Sayak Paul | January 4, 2024 | We’re excited to present an efficient non-diffusion text-to-image model named aMUSEd. It’s so named because it’s an open reproduction of Google's MUSE. aMUSEd’s generation quality is not the best and we’re releasing a research preview with a permissive license. In contrast to the commonly used latent diffusion approach... |
https://huggingface.co/blog/sdxl_lora_advanced_script | LoRA training scripts of the world, unite! | Linoy Tsaban, Apolinário from multimodal AI art | January 2, 2024 | A community-derived guide to some of the SOTA practices for SD-XL Dreambooth LoRA fine-tuning. TL;DR: We combined the Pivotal Tuning technique used on Replicate's SDXL Cog trainer with the Prodigy optimizer used in the Kohya trainer (plus a bunch of other optimizations) to achieve very good results on training Dreambooth Lo... |
https://huggingface.co/blog/whisper-speculative-decoding | Speculative Decoding for 2x Faster Whisper Inference | Sanchit Gandhi | December 20, 2023 | OpenAI's Whisper is a general-purpose speech transcription model that achieves state-of-the-art results across a range of different benchmarks and audio conditions. The latest large-v3 model tops the OpenASR Leaderboard, ranking as the best open-source speech transcription model for English. The model also demonstrate... |
https://huggingface.co/blog/2023-in-llms | 2023, year of open LLMs | Clémentine Fourrier | December 18, 2023 | 2023 has seen a surge of public interest in Large Language Models (LLMs), and now that most people have an idea of what they are and can do, the public debates around open versus closed source have reached a wide audience as well. At Hugging Face, we follow open models with great interest, as they allow research to be ... |
https://huggingface.co/blog/mixtral | Welcome Mixtral - a SOTA Mixture of Experts on Hugging Face | Lewis Tunstall, Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Olivier Dehaene, Leandro von Werra, Younes Belkada | December 11, 2023 | Mixtral 8x7b is an exciting large language model released by Mistral today, which sets a new state-of-the-art for open-access models and outperforms GPT-3.5 across many benchmarks. We’re excited to support the launch with a comprehensive integration of Mixtral in the Hugging Face ecosystem 🔥!Among the features and int... |
https://huggingface.co/blog/moe | Mixture of Experts Explained | Omar Sanseviero, Lewis Tunstall, Philipp Schmid, Sourab Mangrulkar, Younes Belkada, Pedro Cuenca | December 11, 2023 | With the release of Mixtral 8x7B (announcement, model card), a class of transformer models has become the hottest topic in the open AI community: Mixture of Experts, or MoEs for short. In this blog post, we take a look at the building blocks of MoEs, how they’re trained, and the tradeoffs to consider when serving them for inf... |
https://huggingface.co/blog/huggingface-and-optimum-amd | AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU | Félix Marty, Ilyas Moutawwakil, Mohit Sharma, Ella Charlaix, seungrok jung, Morgan Funtowicz | December 5, 2023 | Earlier this year, AMD and Hugging Face announced a partnership to accelerate AI models during AMD's AI Day event. We have been hard at work to bring this vision to reality, and make it easy for the Hugging Face community to run the latest AI models on AMD hardware with the best possible performance.AMD is powering... |
https://huggingface.co/blog/setfit-absa | SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit | Ronen Laperdon, Tom Aarsen, Lewis Tunstall, Oren Pereg, Moshe Wasserblat | December 6, 2023 | Aspect-Based Sentiment Analysis (ABSA) is the task of detecting the sentiment towards specific aspects within the text. For example, in the sentence, "This phone has a great screen, but its battery is too small", the aspect terms are "screen" and "battery" and the sentiment polarities towards them are Positive and Nega... |
https://huggingface.co/blog/optimum-nvidia | Optimum-NVIDIA on Hugging Face enables blazingly fast LLM inference in just 1 line of code | Laikh Tewari, Morgan Funtowicz | December 5, 2023 | Large Language Models (LLMs) have revolutionized natural language processing and are increasingly deployed to solve complex problems at scale. Achieving optimal performance with these models is notoriously challenging due to their unique and intense computational demands. Optimized performance of LLMs is incredibly val... |
https://huggingface.co/blog/lora-adapters-dynamic-loading | Goodbye cold boot - how we made LoRA Inference 300% faster | raphael g | December 5, 2023 | tl;dr: We swap the Stable Diffusion LoRA adapters per user request, while keeping the base model warm, allowing fast LoRA inference across multiple users. You can experience this by browsing our LoRA catalogue and playing with the inference widget.In this blog we will go in detail over how we achieved that. We've been a... |
https://huggingface.co/blog/open-llm-leaderboard-drop | Open LLM Leaderboard: DROP deep dive | Clémentine Fourrier, Alex Cabrera, Stella Biderman, Nathan Habib, Thomas Wolf | December 1, 2023 | Recently, three new benchmarks were added to the Open LLM Leaderboard: Winogrande, GSM8k and DROP, using the original implementations reproduced in the EleutherAI Harness. A cursory look at the scores for DROP revealed something strange was going on, with the overwhelming majority of models scoring less than 10 out of ... |
https://huggingface.co/blog/lcm_lora | SDXL in 4 steps with Latent Consistency LoRAs | Pedro Cuenca, Suraj Patil, Simian Luo, Daniel Gu, Yiqin Tan, Sayak Paul, Apolinário from multimodal AI art | November 9, 2023 | Latent Consistency Models (LCM) are a way to decrease the number of steps required to generate an image with Stable Diffusion (or SDXL) by distilling the original model into another version that requires fewer steps (4 to 8 instead of the original 25 to 50). Distillation is a type of training procedure that attempts to... |
https://huggingface.co/blog/inferentia-llama2 | Make your llama generation time fly with AWS Inferentia2 | David Corvoysier | November 7, 2023 | Update (02/2024): Performance has improved even more! Check our updated benchmarks.In a previous post on the Hugging Face blog, we introduced AWS Inferentia2, the second-generation AWS Inferentia accelerator, and explained how you could use optimum-neuron to quickly deploy Hugging Face models for standard text and visi... |
https://huggingface.co/blog/prodigy-hf | Introducing Prodigy-HF | Vincent D. Warmerdam | November 7, 2023 | Prodigy is an annotation tool made by Explosion, a company well known as the creators of spaCy. It's a fully scriptable product with a large community around it. The product has many features, including tight integration with spaCy and active learning capabilities. But the main feature of the product is that it is prog... |
https://huggingface.co/blog/Lora-for-sequence-classification-with-Roberta-Llama-Mistral | Comparing the Performance of LLMs: A Deep Dive into Roberta, Llama 2, and Mistral for Disaster Tweets Analysis with Lora | mehdi iraqi | November 7, 2023 | Introduction: In the fast-moving world of Natural Language Processing (NLP), we often find ourselves comparing different language models to see which one works best for specific tasks. This blog post is all about comparing three models: RoBERTa, Mistral-7b, and Llama-2-7b. We used them to tackle a common problem - classi... |
https://huggingface.co/blog/regions | Introducing Storage Regions on the Hub | Eliott Coyac, Remy TROMPIER, Adrien, Michelle Habonneau, Violette Lepercq, Julien Chaumond | November 3, 2023 | As part of our Enterprise Hub plan, we recently released support for Storage Regions.Regions let you decide where your org's models and datasets will be stored. This has two main benefits, which we'll briefly go over in this blog post:Regulatory and legal compliance, and more generally, better digital sovereigntyPerfor... |
https://huggingface.co/blog/researcher-dataset-sharing | Creating open machine learning datasets? Share them on the Hugging Face Hub! | Daniel van Strien | October 30, 2023 | Who is this blog post for? Are you a researcher doing data-intensive research or using machine learning as a research tool? As part of this research, you have likely created datasets for training and evaluating machine learning models, and like many researchers, you may be sharing these datasets via Google Drive, OneDri... |
https://huggingface.co/blog/personal-copilot | Personal Copilot: Train Your Own Coding Assistant | Sourab Mangrulkar, Sayak Paul | October 27, 2023 | In the ever-evolving landscape of programming and software development, the quest for efficiency and productivity has led to remarkable innovations. One such innovation is the emergence of code generation models such as Codex, StarCoder and Code Llama. These models have demonstrated remarkable capabilities in generatin... |
https://huggingface.co/blog/scalable-data-inspection | Interactively explore your Huggingface dataset with one line of code | Stefan Suwelack, Alexander Druz, Dominik H, Markus Stoll | October 25, 2023 | The Hugging Face datasets library not only provides access to more than 70k publicly available datasets, but also offers very convenient data preparation pipelines for custom datasets.Renumics Spotlight allows you to create interactive visualizations to identify critical clusters in your data. Because Spotlight underst... |
https://huggingface.co/blog/inference-endpoints-embeddings | Deploy Embedding Models with Hugging Face Inference Endpoints | Philipp Schmid | October 24, 2023 | The rise of Generative AI and LLMs like ChatGPT has increased the interest and importance of embedding models for a variety of tasks, especially for retrieval-augmented generation, like search or chat with your data. Embeddings are helpful since they represent sentences, images, words, etc. as numeric vector representa... |
https://huggingface.co/blog/the_n_implementation_details_of_rlhf_with_ppo | The N Implementation Details of RLHF with PPO | Shengyi Costa Huang, Tianlin Liu, Leandro von Werra | October 24, 2023 | RLHF / ChatGPT has been a popular research topic these days. In our quest to research more on RLHF, this blog post attempts to do a reproduction of OpenAI’s 2019 original RLHF codebase at openai/lm-human-preferences. Despite its “tensorflow-1.x-ness,” OpenAI’s original codebase is very well-evaluated and benchmarked, m... |
https://huggingface.co/blog/simple_sdxl_optimizations | Exploring simple optimizations for SDXL | Sayak Paul, Steven Liu | October 24, 2023 | Stable Diffusion XL (SDXL) is the latest latent diffusion model by Stability AI for generating high-quality super realistic images. It overcomes challenges of previous Stable Diffusion models like getting hands and text right as well as spatially correct compositions. In addition, SDXL is also more context aware and re... |
https://huggingface.co/blog/gradio-lite | Gradio-Lite: Serverless Gradio Running Entirely in Your Browser | Abubakar Abid, Yuichiro Tachibana, Ali Abdalla | October 19, 2023 | Gradio is a popular Python library for creating interactive machine learning apps. Traditionally, Gradio applications have relied on server-side infrastructure to run, which can be a hurdle for developers who need to host their applications. Enter Gradio-lite (@gradio/lite): a library that leverages Pyodide to bring Gr... |
https://huggingface.co/blog/ort-accelerating-hf-models | Accelerating over 130,000 Hugging Face models with ONNX Runtime | Sophie Schoenmeyer, Morgan Funtowicz | October 4, 2023 | What is ONNX Runtime? ONNX Runtime is a cross-platform machine learning tool that can be used to accelerate a wide variety of models, particularly those with ONNX support. Hugging Face ONNX Runtime Support: There are over 130,000 ONNX-supported models on Hugging Face, an open source community that allows users to build, tr... |
https://huggingface.co/blog/sdxl_jax | Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e | Pedro Cuenca, Juan Acevedo, Alex Spiridonov, Pate Motter, Yavuz Yetim, Vaibhav Singh, Vijaya Singh, Patrick von Platen | October 3, 2023 | Generative AI models, such as Stable Diffusion XL (SDXL), enable the creation of high-quality, realistic content with wide-ranging applications. However, harnessing the power of such models presents significant challenges and computational costs. SDXL is a large image generation model whose UNet component is about thre... |
https://huggingface.co/blog/chat-templates | Chat Templates | Matthew Carrigan | October 3, 2023 | A spectre is haunting chat models - the spectre of incorrect formatting! tl;dr: Chat models have been trained with very different formats for converting conversations into a single tokenizable string. Using a format different from the format a model was trained with will usually cause severe, silent performance degradatio... |
https://huggingface.co/blog/ai-comic-factory | Deploying the AI Comic Factory using the Inference API | Julian Bilcke | October 2, 2023 | We recently announced Inference for PROs, our new offering that makes larger models accessible to a broader audience. This opportunity opens up new possibilities for running end-user applications using Hugging Face as a platform.An example of such an application is the AI Comic Factory - a Space that has proved incredi... |
https://huggingface.co/blog/ethics-soc-5 | Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings | Margaret Mitchell | September 29, 2023 | Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings |
https://huggingface.co/blog/trl-ddpo | Finetune Stable Diffusion Models with DDPO via TRL | luke meyers, Sayak Paul, Kashif Rasul, Leandro von Werra | September 29, 2023 | Introduction: Diffusion models (e.g., DALL-E 2, Stable Diffusion) are a class of generative models that are widely successful at generating images most notably of the photorealistic kind. However, the images generated by these models may not always be on par with human preference or human intention. Thus arises the align... |
https://huggingface.co/blog/Llama2-for-non-engineers | Non-engineers guide: Train a LLaMA 2 chatbot | Andrew Jardine, Abhishek Thakur | September 28, 2023 | Introduction In this tutorial we will show you how anyone can build their own open-source ChatGPT without ever writing a single line of code! We’ll use the LLaMA 2 base model, fine-tune it for chat with an open-source instruction dataset and then deploy the model to a chat app you can share with your friends. All by ju... |
https://huggingface.co/blog/llama-sagemaker-benchmark | Llama 2 on Amazon SageMaker a Benchmark | Philipp Schmid | September 26, 2023 | Deploying large language models (LLMs) and other generative AI models can be challenging due to their computational requirements and latency needs. To provide useful recommendations to companies looking to deploy Llama 2 on Amazon SageMaker with the Hugging Face LLM Inference Container, we created a comprehensive bench... |
https://huggingface.co/blog/inference-pro | Inference for PROs | Omar Sanseviero, Pedro Cuenca, Victor Mustar | September 22, 2023 | Today, we're introducing Inference for PRO users - a community offering that gives you access to APIs of curated endpoints for some of the most exciting models available, as well as improved rate limits for the usage of free Inference API. Use the following page to subscribe to PRO. Hugging Face PRO users now have acce... |
https://huggingface.co/blog/rocketmoney-case-study | Rocket Money x Hugging Face: Scaling Volatile ML Models in Production | Nico Kuzak, Chris Poirier | September 19, 2023 | Scaling and Maintaining ML Models in Production Without an MLOps Team: We created Rocket Money (a personal finance app formerly known as Truebill) to help users improve their financial wellbeing. Users link their bank accounts to the app which then classifies and categorizes their transactions, identifying recurring pat... |
https://huggingface.co/blog/gaussian-splatting | Introduction to 3D Gaussian Splatting | Dylan Ebert | September 18, 2023 | 3D Gaussian Splatting is a rasterization technique described in 3D Gaussian Splatting for Real-Time Radiance Field Rendering that allows real-time rendering of photorealistic scenes learned from small samples of images. This article will break down how it works and what it means for the future of graphics. What is 3D ... |