Update README.md
README.md (CHANGED)
@@ -16,10 +16,13 @@ In order to access models here, please visit a repo of one of the three families
 In this organization, you can find models in both the original Meta format as well as the Hugging Face transformers format. You can find:
 
 Current:
-
-* **Llama 3.
+
+* **Llama 3.3:** The Llama 3.3 is a text only instruct-tuned model in 70B size (text in/text out).
 
 History:
+
+* **Llama 3.2:** The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out).
+* **Llama 3.2 Vision:** The Llama 3.2-Vision collection of multimodal large language models (LLMs) is a collection of pretrained and instruction-tuned image reasoning generative models in 11B and 90B sizes (text + images in / text out)
 * **Llama 3.1:** a collection of pretrained and fine-tuned text models with sizes ranging from 8 billion to 405 billion parameters pre-trained on ~15 trillion tokens.
 * **Llama 3.1 Evals:** a collection that provides detailed information on how we derived the reported benchmark metrics for the Llama 3.1 models, including the configurations, prompts and model responses used to generate evaluation results.
 * **Llama Guard 3:** a Llama-3.1-8B pretrained model, aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to support Llama 3.1 capabilities.
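Since the collections above are published in the Hugging Face transformers format, here is a minimal sketch of loading one of them. It assumes the `meta-llama/Llama-3.3-70B-Instruct` repo ID and that you have already accepted the license, been granted gated access, and authenticated with `huggingface-cli login`; it is an illustration, not part of the commit.

```python
# Minimal sketch: load a transformers-format checkpoint from this organization.
# Assumed repo ID for illustration; gated access must already be granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package and spreads the weights
# across the available GPUs; torch_dtype="auto" picks the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Instruct-tuned checkpoints expect the chat template, not raw text.
messages = [{"role": "user", "content": "Summarize the Llama 3.3 release in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For the 70B checkpoint you would typically shard across several GPUs or use a quantized variant; the same pattern applies to the smaller Llama 3.1 and Llama 3.2 text models listed above.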