---
base_model: CohereLabs/aya-vision-8b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:CohereLabs/aya-vision-8b
- lora
- sft
- transformers
- trl
- unsloth
---

# Afri-Aya Vision 8B Krio

**Afri-Aya Vision 8B Krio** is a LoRA-finetuned variant of **[Aya Vision 8B](https://huggingface.co/CohereLabs/aya-vision-8b)** that adds support for the African language Krio, trained on culturally relevant images from the **[Afri-Aya dataset](https://huggingface.co/datasets/CohereLabsCommunity/afri-aya)**. It retains the base model's general capabilities while improving image-grounded Q&A on culturally relevant content in the Krio language.

## How to Use

### 1) Quick Start: Inference with Transformers

```python
from io import BytesIO

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "Jaward/afri-aya-vision-krio-8b"

# Load processor and model
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Input message
image_url = "https://pbs.twimg.com/media/Fx7YvfQWYAIp6rZ?format=jpg&name=medium"
messages = [
    # Optional system prompt; not required, but it can help steer responses toward your language:
    # {"role": "system", "content": [{"type": "text", "text": "Reply strictly in Krio."}]},
    {
        "role": "user",
        "content": [
            {"type": "image", "url": image_url},
            {"type": "text", "text": "Wetin dis pikchɔ de sho?"},  # Krio: "What does this picture show?"
        ],
    }
]

# Apply the Aya Vision chat template
inputs = processor.apply_chat_template(
    messages,
    padding=True,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate an answer
gen_tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.3,
)

# Display the image (PIL cannot open a URL directly, so fetch the bytes first)
image = Image.open(BytesIO(requests.get(image_url).content)).convert("RGB")
image.show()

# Decode and print the response, skipping the prompt tokens
response = processor.tokenizer.decode(
    gen_tokens[0][inputs.input_ids.shape[1]:],
    skip_special_tokens=True,
)
print(response)
```

### 2) Faster Inference with Unsloth

```python
from unsloth import FastVisionModel
from PIL import Image

# Load the model in 4-bit for faster, memory-efficient inference
model, tokenizer = FastVisionModel.from_pretrained(
    "Jaward/afri-aya-vision-krio-8b",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)

# Your image + question (any supported language)
image = Image.open("example.jpg").convert("RGB")
messages = [
    # Optional system prompt; not required, but it can help steer responses toward your language:
    # {"role": "system", "content": [{"type": "text", "text": "Reply strictly in Krio."}]},
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Wetin dis pikchɔ de sho?"},  # Krio: "What does this picture show?"
        ],
    }
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(image, prompt, return_tensors="pt").to("cuda")

out = model.generate(**inputs, max_new_tokens=100, temperature=0.7, top_p=0.9)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
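
### 3) Loading the LoRA Adapter Separately with PEFT

Since this model is a LoRA finetune (per the `peft` and `lora` tags above), you can also keep the base weights separate and attach the adapter at load time. Below is a minimal sketch, assuming the repository hosts the adapter in PEFT format; if it instead ships fully merged weights, use the Transformers example above as-is.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from peft import PeftModel

base_id = "CohereLabs/aya-vision-8b"
adapter_id = "Jaward/afri-aya-vision-krio-8b"

# Load the processor and base Aya Vision model
processor = AutoProcessor.from_pretrained(base_id)
base_model = AutoModelForImageTextToText.from_pretrained(
    base_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Attach the Krio LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```

The resulting `model` can be used with the same chat-template and `generate` calls shown in the Transformers example. Keeping the adapter separate makes it easy to swap between the base model and the Krio finetune without storing two full copies of the weights.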