How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("Yntec/epiCVision", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
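The comment above suggests switching to "mps" on Apple devices. One way to make that choice automatically (a minimal sketch using only torch's own availability checks; nothing here is specific to this model):

```python
import torch

# pick the best available backend: CUDA GPU, Apple Silicon (MPS), or CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
```

The resulting `device` string can then be passed to `pipe.to(device)` in the snippet above.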

epiCVision

A mix of epicRealism and realisticVision. I don't like false modesty; I claim this is better than either model:

Comparison (click for larger)

Sample and prompt:

very cute princess with curly hair wearing choker who would marry me

Original pages:

https://civitai.com/models/25694?modelVersionId=30761

https://civitai.com/models/4201?modelVersionId=5196

Full recipe:

Add Difference 1.0

- Primary model: epicRealism
- Secondary model: epicRealism
- Tertiary model: v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
- Output model: Temporary

Weighted Sum 0.70

- Primary model: RealisticVision
- Secondary model: Temporary
- Output model: epiCVision
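The two merge steps above can be sketched in plain Python. Real checkpoint mergers apply this arithmetic elementwise to every tensor in the models' state_dicts; here plain floats stand in for the tensors, and the helper names, toy weights, and the (1 - alpha)/alpha convention for Weighted Sum (as used by the A1111 checkpoint merger) are assumptions for illustration:

```python
def add_difference(a, b, c, m=1.0):
    # Add Difference: A + m * (B - C), applied per weight
    return {k: a[k] + m * (b[k] - c[k]) for k in a}

def weighted_sum(a, b, alpha=0.70):
    # Weighted Sum: (1 - alpha) * A + alpha * B (assumed A1111-style
    # convention; at 0.70 the secondary model dominates the result)
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

# toy single-weight "checkpoints" standing in for the real models
epicRealism = {"w": 1.0}
base_no_ema = {"w": 0.2}   # stand-in for v1-5-pruned-fp16-no-ema
realistic_vision = {"w": 0.4}

# step 1: Add Difference 1.0 -> Temporary
temporary = add_difference(epicRealism, epicRealism, base_no_ema, m=1.0)

# step 2: Weighted Sum 0.70 of RealisticVision and Temporary -> epiCVision
epiCVision = weighted_sum(realistic_vision, temporary, alpha=0.70)
```

With these toy numbers, step 1 yields 1.0 + 1.0 * (1.0 - 0.2) = 1.8, and step 2 yields 0.3 * 0.4 + 0.7 * 1.8 = 1.38.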
