Update README.md
arxiv: 2304.08485
datasets:
- HuggingFaceH4/llava-instruct-mix-vsft
---



# Model Card

HuggingFaceH4/vsft-llava-1.5-7b-hf-trl is a Vision Language Model created by performing VSFT on the [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) model with 260k image and conversation pairs from the [HuggingFaceH4/llava-instruct-mix-vsft](https://huggingface.co/datasets/HuggingFaceH4/llava-instruct-mix-vsft) dataset.

Or check out our Spaces demo! [](https://huggingface.co/spaces/HuggingFaceH4/vlm-playground)
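The card doesn't show how the dataset's conversation pairs are rendered at inference time, but a minimal sketch of the prompt convention used by llava-hf 1.5 checkpoints (`USER: <image>\n... ASSISTANT:`) may help; the `build_llava_prompt` helper and its signature are assumptions for illustration, not part of this repository.

```python
# Hypothetical helper (not from this model card): format a multi-turn
# conversation into the prompt string that llava-1.5-hf checkpoints expect.
def build_llava_prompt(turns):
    """turns: list of (role, text) pairs, roles "user" or "assistant".

    The single <image> placeholder goes in the first user turn.
    """
    parts = []
    for i, (role, text) in enumerate(turns):
        if role == "user":
            prefix = "USER: <image>\n" if i == 0 else "USER: "
            parts.append(prefix + text)
        else:
            parts.append("ASSISTANT: " + text)
    # A trailing "ASSISTANT:" cues the model to generate the next reply.
    parts.append("ASSISTANT:")
    return " ".join(parts)

prompt = build_llava_prompt([("user", "What is shown in this image?")])
print(prompt)
```

The resulting string would be passed, together with the image, to the model's processor; check the checkpoint's chat template before relying on this exact format.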
## Model details

**Model type:**