# Melinoe-14B

**Base model:** Qwen3-14B
**Task:** [e.g., instruction following, dialogue, code generation]
**License:** [e.g., Apache 2.0, MIT]

## Intended Use

This model is designed for [briefly describe the primary use case, e.g., "serving as a conversational chatbot on technology-related topics"]. It should not be used for high-stakes decisions or for generating harmful content. Fact-check important information.
## Limitations

The model may produce factually incorrect or biased information. Its knowledge is limited to its training data, and it can be prone to hallucination.
## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "[your-model-name-on-huggingface-or-local-path]"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text from a plain prompt
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```