Instructions for using sixfingerdev/SixFinger-8B with libraries, inference providers, notebooks, and local apps.
- Libraries
- PEFT
How to use sixfingerdev/SixFinger-8B with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/meta-llama-3.1-8b-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "sixfingerdev/SixFinger-8B")
```
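To sanity-check the loaded adapter, a minimal generation sketch continuing from the snippet above; it assumes the base model's tokenizer is used, since the adapter repository may not ship its own (the Turkish prompt is the same example used later in this card):

```python
from transformers import AutoTokenizer

# Assumption: reuse the base model's tokenizer for the adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-bnb-4bit")

inputs = tokenizer("Soru: Yapay zeka nedir?\nCevap:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```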
- Transformers
How to use sixfingerdev/SixFinger-8B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="sixfingerdev/SixFinger-8B")
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("sixfingerdev/SixFinger-8B", dtype="auto")
```
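A quick smoke test of the pipeline; the prompt and sampling parameters below are illustrative, not recommendations from the model author:

```python
# Generate a short continuation; parameters are illustrative only.
result = pipe(
    "Soru: Yapay zeka nedir?\nCevap:",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```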
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use sixfingerdev/SixFinger-8B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "sixfingerdev/SixFinger-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sixfingerdev/SixFinger-8B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/sixfingerdev/SixFinger-8B
```
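Note that this repository is a LoRA adapter rather than a full model. If vLLM declines to serve it directly, one hedged alternative is to serve the base model with vLLM's LoRA support enabled (the `--enable-lora` and `--lora-modules` flags are vLLM's; the adapter label `sixfinger` is an arbitrary name chosen for this sketch):

```bash
# Serve the base model named in this card with the LoRA adapter attached.
# The base is a bitsandbytes 4-bit checkpoint, which vLLM may need extra
# quantization options to load; "sixfinger" is an arbitrary adapter label.
vllm serve unsloth/meta-llama-3.1-8b-bnb-4bit \
  --enable-lora \
  --lora-modules sixfinger=sixfingerdev/SixFinger-8B
```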
- SGLang
How to use sixfingerdev/SixFinger-8B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "sixfingerdev/SixFinger-8B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sixfingerdev/SixFinger-8B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "sixfingerdev/SixFinger-8B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sixfingerdev/SixFinger-8B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
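Since both servers expose an OpenAI-compatible API, the official `openai` Python client can be used instead of curl. A minimal sketch, assuming the SGLang server above is running on port 30000 (the `api_key` value is a placeholder; local servers typically ignore it):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="not-needed")

response = client.completions.create(
    model="sixfingerdev/SixFinger-8B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```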
- Unsloth Studio
How to use sixfingerdev/SixFinger-8B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for sixfingerdev/SixFinger-8B to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for sixfingerdev/SixFinger-8B to start chatting
```
Use Hugging Face Spaces for Unsloth
```text
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for sixfingerdev/SixFinger-8B to start chatting
```
Load model with FastModel
```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="sixfingerdev/SixFinger-8B",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use sixfingerdev/SixFinger-8B with Docker Model Runner:
```bash
docker model run hf.co/sixfingerdev/SixFinger-8B
```
```yaml
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/meta-llama-3.1-8b-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
license: apache-2.0
datasets:
- sixfingerdev/turkish-qa-multi-dialog-dataset
language:
- tr
- en
- zh
```
SixFinger-8B Adapter for LLaMA 3.1 8B
This repository contains SixFinger-8B, a LoRA adapter for LLaMA 3.1 8B.
The adapter enables fine-tuned responses on top of the base model unsloth/meta-llama-3.1-8b-bnb-4bit without modifying the base weights.
Overview
- Base Model: unsloth/meta-llama-3.1-8b-bnb-4bit
- Adapter Type: LoRA
- Quantization: 4-bit (via bitsandbytes)
- Purpose: Enhanced response generation for Turkish/English mixed datasets.
- Compatibility: Use with Hugging Face Transformers + PEFT library.
Installation
Install required dependencies:
```bash
pip install transformers accelerate bitsandbytes peft
```
Ensure you have a GPU with sufficient VRAM for 4-bit inference.
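As a rough sanity check, you can query the visible GPU's memory before loading. A minimal sketch using PyTorch (which the stack above already depends on); the 6 GiB threshold is a loose assumption for 4-bit inference of an 8B model, not a measured requirement:

```python
import torch

# bitsandbytes 4-bit inference requires a CUDA-capable GPU.
assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit inference."

total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU: {torch.cuda.get_device_name(0)}, {total_gb:.1f} GiB VRAM")

# Loose assumption: ~6 GiB for 4-bit inference of an 8B model.
if total_gb < 6:
    print("Warning: this may not be enough VRAM for 4-bit inference of an 8B model.")
```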
Loading the Model
- Load the base model:

```python
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-bnb-4bit",
    device_map="auto",
)
```

- Load the adapter:

```python
from peft import PeftModel

model = PeftModel.from_pretrained(
    base_model,
    "sixfingerdev/SixFinger-8B",
)
```

- Load the tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-bnb-4bit")
```
Example Usage
Generate text using the adapter:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Base model
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-bnb-4bit",
    device_map="auto",
)

# LoRA adapter
model = PeftModel.from_pretrained(base_model, "sixfingerdev/SixFinger-8B")

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-bnb-4bit")

# Example text generation
prompt = "Soru: Yapay zeka nedir?\nCevap:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
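Because the adapter was trained on multi-dialog data, a chat-style prompt may also be worth trying. A hedged sketch reusing `model` and `tokenizer` from the example above; it only applies if the tokenizer defines a chat template, which base (non-instruct) checkpoints often do not ship:

```python
# Hedged sketch: only runs if the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Yapay zeka nedir?"}]
if tokenizer.chat_template is not None:
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        outputs = model.generate(input_ids, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```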
Notes
- The adapter does not modify the base model; it only applies LoRA weights on top.
- 4-bit quantization significantly reduces VRAM usage. Ensure your GPU supports bitsandbytes 4-bit operations.
- You can merge the adapter into the base model for easier deployment if needed; see the sketch below.
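A minimal merge sketch using PEFT's `merge_and_unload()`. Merging LoRA weights into 4-bit quantized tensors is generally unsupported, so this assumes the base is reloaded in 16-bit precision first; the unquantized repo name and the output directory below are assumptions for illustration:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumption: an unquantized 16-bit copy of the base; merging into
# 4-bit quantized weights is generally not supported.
base_fp16 = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B",  # hypothetical unquantized base repo
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_fp16, "sixfingerdev/SixFinger-8B")

# Fold the LoRA weights into the base and drop the PEFT wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("SixFinger-8B-merged")  # arbitrary output directory
```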
License
The adapter is released under the Apache 2.0 license, as declared in the model card metadata.
Ensure compliance with the base model's license (Meta's Llama 3.1 Community License).