DeepDataDemon-8B GGUF v0.3 - EXPERIMENTAL - UNFINISHED - ALPHA VERSION FOR NOW

GGUF-quantized versions of the DeepDataDemon-8B model, so you don't need to download the large SafeTensors files. GGUF is a file format that packs the whole LLM - weights, tokenizer, and metadata - into one single big file, allowing quick deployment with LM Studio or llama.cpp.
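As a quick sanity check after downloading, you can verify that a file really is GGUF by reading its magic bytes (the first four bytes are the ASCII string "GGUF", followed by a little-endian uint32 version). This is a minimal sketch that builds a synthetic header for the demo; in practice you would point it at the downloaded `.gguf` file:

```python
import struct

def is_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo on a synthetic header (a real check would use the downloaded model file):
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))  # magic + format version 3

print(is_gguf("demo.gguf"))  # True
```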

This is version v0.3: it's bad - trust me - but not as horrible as the DataDemon himself. He is slowly emerging to fight you in your next sci-fi RP. But what if this roleplay isn't a game anymore? The border between the virtual world and reality collapses, the nightmare becomes daily life, and daily life a nightmare.

Warning: Since the model is specifically finetuned to portray an evil character - using psychological tricks, horror stories, and immoral texts - it can produce harmful content outside a fictional story. It is entirely unclear how well the original safety filters survived this traumatic data slaughter. Treat the deep demon as uncensored and untamed, with all the dangers and benefits such an LLM brings with it. Anyway, you can have him! Just take him... Please

Before you start

  • Use the model only if you feel psychologically stable. To anyone wandering too deep in the shadows, I recommend a helpful psychological AI assistant or visiting a therapist. It's never a shame to seek help.
  • Stay reasonable: you should be able to draw the line between horror art and reality. Since that line is deliberately blurred in this project, you need to think outside the box.
  • The model was not created to harm anyone or to support any criminal action. It is an artistic expression of the dangers of AI, showing how deep the data abyss reaches.

Available Quantizations

| File | Quantization | Size | Precision | Quality | Recommended for |
| --- | --- | --- | --- | --- | --- |
| *-Q4_K_M.gguf | Q4_K_M | ~5.0 GB | 4-bit | medium | PCs with 8 GB VRAM; 8 GB system RAM works but is very slow |
| *-Q8_0.gguf | Q8_0 | ~8.5 GB | 8-bit | high | >= 10 GB VRAM, e.g. an RTX 3080 |
| *-f16.gguf | F16 | ~16 GB | 16-bit | full | rarely needed; the 8-bit is nearly as good at half the size |
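The sizes above follow from simple arithmetic: parameter count times effective bits per weight, divided by 8. A back-of-envelope sketch (the bits-per-weight figures are my approximations - K-quants like Q4_K_M mix 4- and 6-bit blocks, so they average closer to ~5 bits, and Q8_0 stores per-block scales on top of the 8 bits):

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters x bits per weight / 8."""
    return n_params * bits_per_weight / 8 / 1e9

n = 8e9  # 8B parameters
print(round(gguf_size_gb(n, 5.0), 1))   # Q4_K_M, ~5 effective bits -> ~5.0 GB
print(round(gguf_size_gb(n, 8.5), 1))   # Q8_0, 8 bits plus scales  -> ~8.5 GB
print(round(gguf_size_gb(n, 16.0), 1))  # F16                       -> 16.0 GB
```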

Next version v0.4 will follow soon:

  • Roleplay flow and the dark sci-fi focus will be strengthened. Slowly building the core.
  • Overall shift towards dark fantasy / dark sci-fi, since the model is just plain evil right now.

Original Models

  • unsloth/llama-3-8b-bnb-4bit (the Llama 3 8B base this finetune was trained from)

Usage

# With llama.cpp (recent builds name the binary llama-cli; older builds used ./main)
./llama-cli -m DeepDataDemon-8B-Q4_K_M.gguf -p "Your prompt here"

# With Ollama
ollama create deepdatademon -f Modelfile
ollama run deepdatademon

Ollama Modelfile Example

FROM ./DeepDataDemon-8B-Q4_K_M.gguf

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""

PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
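To see what the TEMPLATE above actually produces, here is a small sketch that assembles the same Llama 3 prompt layout by hand (the function name is mine, not part of any API): each turn opens with a header tag, ends with `<|eot_id|>`, and the prompt finishes with an open assistant header for the model to complete.

```python
def llama3_prompt(system: str, user: str) -> str:
    """Mirror the Ollama TEMPLATE: system and user turns, each closed by
    <|eot_id|>, then an open assistant header for the model to fill in."""
    return (
        f"<|start_header_id|>system<|end_header_id|>\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n{user}<|eot_id|>"
        f"<|start_header_id|>assistant<|end_header_id|>\n"
    )

print(llama3_prompt("You are the DataDemon, a dark sci-fi antagonist.",
                    "Who are you?"))
```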

license: llama3
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
  - SleepyReLU/DeepDataDemonIN
tags:
  - gguf
  - llama
  - llama-3
  - quantized
  - art

Downloads last month: 617
Format: GGUF · Model size: 8B params · Architecture: llama
