
Overview

HyperGen works with any diffusers-compatible model from HuggingFace. Below are popular models that have been tested and work well with HyperGen.
HyperGen uses the diffusers library under the hood, so any model with a DiffusionPipeline should work.

Image Generation Models

FLUX.1

Black Forest Labs’ state-of-the-art image generation model.

FLUX.1 Dev

Model ID: black-forest-labs/FLUX.1-dev
Best for: High-quality, detailed images
Requirements:
  • 16GB+ VRAM
  • bfloat16 dtype recommended
Usage:
m = model.load("black-forest-labs/FLUX.1-dev", torch_dtype="bfloat16")

FLUX.1 Schnell

Model ID: black-forest-labs/FLUX.1-schnell
Best for: Fast generation
Requirements:
  • 12GB+ VRAM
  • Faster than Dev variant
Usage:
m = model.load("black-forest-labs/FLUX.1-schnell", torch_dtype="bfloat16")
Recommended Settings:
  • dtype: bfloat16
  • LoRA rank: 32-64
  • Training steps: 1500-2500

Stable Diffusion XL (SDXL)

Stability AI’s flagship image generation model.

SDXL Base

Model ID: stabilityai/stable-diffusion-xl-base-1.0
Best for: General-purpose, high-quality images
Requirements:
  • 8GB+ VRAM
  • Most popular choice
Usage:
m = model.load("stabilityai/stable-diffusion-xl-base-1.0")

SDXL Turbo

Model ID: stabilityai/sdxl-turbo
Best for: Fast generation (1-4 steps)
Requirements:
  • 8GB+ VRAM
  • Optimized for speed
Usage:
m = model.load("stabilityai/sdxl-turbo")
Recommended Settings:
  • dtype: float16
  • LoRA rank: 16-32
  • Training steps: 1000-2000

Stable Diffusion 3 (SD3)

Latest generation Stability AI model with improved text rendering.

SD3 Medium

Model ID: stabilityai/stable-diffusion-3-medium-diffusers
Best for: Text rendering, detailed scenes
Requirements:
  • 12GB+ VRAM
  • Excellent prompt following
Usage:
m = model.load("stabilityai/stable-diffusion-3-medium-diffusers")
Recommended Settings:
  • dtype: float16 or bfloat16
  • LoRA rank: 16-32
  • Training steps: 1000-2000

Stable Diffusion 1.5

Classic Stable Diffusion - lightweight and fast.

SD 1.5

Model ID: runwayml/stable-diffusion-v1-5
Best for: Fast generation, low VRAM
Requirements:
  • 6GB+ VRAM
  • Lightweight and fast
Usage:
m = model.load("runwayml/stable-diffusion-v1-5")

SD 1.4

Model ID: CompVis/stable-diffusion-v1-4
Best for: Research, experimentation
Requirements:
  • 6GB+ VRAM
Usage:
m = model.load("CompVis/stable-diffusion-v1-4")
Recommended Settings:
  • dtype: float16
  • LoRA rank: 8-16
  • Training steps: 800-1500

Stable Diffusion 2.1

Successor to SD 1.5 with improved image quality.

SD 2.1

Model ID: stabilityai/stable-diffusion-2-1
Best for: Balance of quality and speed
Requirements:
  • 8GB+ VRAM
Usage:
m = model.load("stabilityai/stable-diffusion-2-1")

Video Generation Models

CogVideoX

Tsinghua University’s text-to-video generation model.

CogVideoX-2B

Model ID: THUDM/CogVideoX-2b
Best for: Short video clips (2-6s)
Requirements:
  • 16GB+ VRAM
  • Smaller, faster variant
Usage:
m = model.load("THUDM/CogVideoX-2b")

CogVideoX-5B

Model ID: THUDM/CogVideoX-5b
Best for: Higher quality videos
Requirements:
  • 24GB+ VRAM
  • Better quality than 2B
Usage:
m = model.load("THUDM/CogVideoX-5b")
Video model support requires the video optional dependencies: pip install hypergen[video]

Specialized Models

ControlNet Models

ControlNet-enabled models for conditional generation.
ControlNet support is coming in a future release.

PixArt Models

High-resolution text-to-image models.
m = model.load("PixArt-alpha/PixArt-XL-2-1024-MS")

Kandinsky Models

Bilingual (English/Russian) text-to-image models.
m = model.load("kandinsky-community/kandinsky-2-2-decoder")

Fine-Tuned Community Models

HyperGen works with community fine-tunes on HuggingFace:
# Anime style
m = model.load("stablediffusionapi/anything-v5")

# Realistic photos
m = model.load("SG161222/Realistic_Vision_V5.1_noVAE")

# Artistic style
m = model.load("dreamlike-art/dreamlike-photoreal-2.0")
Browse HuggingFace’s diffusion models for thousands of community models.

Model Comparison

Performance Comparison

Based on RTX 4090, 50 inference steps, 1024x1024:
| Model | VRAM | Generation Time | Quality | Best For |
|---|---|---|---|---|
| SD 1.5 | ~6GB | ~2s | Good | Speed, low VRAM |
| SD 2.1 | ~8GB | ~2.5s | Better | Balanced |
| SDXL | ~12GB | ~4s | Excellent | General use |
| SD3 | ~14GB | ~5s | Excellent+ | Text rendering |
| FLUX.1 Dev | ~18GB | ~8s | Outstanding | Best quality |
| FLUX.1 Schnell | ~16GB | ~3s | Excellent | Fast + quality |

Training Comparison

Approximate LoRA training times (1000 steps, 50 images, rank 16):
| Model | VRAM | Training Time | File Size |
|---|---|---|---|
| SD 1.5 | ~6GB | ~8 min | ~30MB |
| SDXL | ~12GB | ~15 min | ~50MB |
| SD3 | ~14GB | ~18 min | ~60MB |
| FLUX.1 | ~18GB | ~25 min | ~80MB |
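A LoRA checkpoint stores two rank-r matrices per adapted layer, so its file size grows roughly linearly with rank. Taking the rank-16 sizes from the table above as a baseline, a back-of-envelope estimate can be sketched like this (illustrative only; real sizes also depend on which layers the adapter targets):

```python
# Approximate rank-16 LoRA file sizes from the table above (MB).
RANK16_SIZE_MB = {"SD 1.5": 30, "SDXL": 50, "SD3": 60, "FLUX.1": 80}

def estimate_lora_size_mb(model_name: str, rank: int) -> float:
    """Estimate LoRA checkpoint size, assuming size scales linearly with rank."""
    base = RANK16_SIZE_MB[model_name]
    return base * rank / 16

# An SDXL LoRA at rank 32 comes out to roughly twice the rank-16 size.
```

This is why the recommended ranks above matter for distribution: doubling the rank roughly doubles the checkpoint you ship.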

Model Selection Guide

Choose Based on Use Case

Speed Priority

Best Choice: SDXL Turbo or SD 1.5
m = model.load("stabilityai/sdxl-turbo")
# Or
m = model.load("runwayml/stable-diffusion-v1-5")
  • Fastest generation
  • Good for prototyping
  • Lower VRAM requirements

Quality Priority

Best Choice: FLUX.1 Dev
m = model.load("black-forest-labs/FLUX.1-dev", torch_dtype="bfloat16")
  • Best output quality
  • Requires 16GB+ VRAM

Balanced

Best Choice: SDXL
m = model.load("stabilityai/stable-diffusion-xl-base-1.0")
  • Excellent general-purpose quality
  • Most popular choice

Low VRAM

Best Choice: SD 1.5
m = model.load("runwayml/stable-diffusion-v1-5")
  • Runs on 6GB GPUs
  • Lightweight and fast

Choose Based on VRAM

6-8GB VRAM

  • SD 1.5
  • SD 2.1
  • SDXL (with optimizations)

8-12GB VRAM

  • SDXL
  • SD3 (with optimizations)
  • All SD variants

12-16GB VRAM

  • SDXL
  • SD3
  • FLUX.1 Schnell

16GB+ VRAM

  • All models
  • FLUX.1 Dev
  • CogVideoX-2B

24GB+ VRAM

  • All models
  • Large batch sizes
  • CogVideoX-5B
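The tiers above are easy to encode as a small lookup helper, e.g. for scripting model selection in a setup tool. This is an illustrative sketch, not part of the HyperGen API; the thresholds and names simply mirror the list above:

```python
# VRAM tiers from the guide above, ordered from highest to lowest threshold (GB).
VRAM_TIERS = [
    (24, ["All models", "Large batch sizes", "CogVideoX-5B"]),
    (16, ["All models", "FLUX.1 Dev", "CogVideoX-2B"]),
    (12, ["SDXL", "SD3", "FLUX.1 Schnell"]),
    (8,  ["SDXL", "SD3 (with optimizations)", "All SD variants"]),
    (6,  ["SD 1.5", "SD 2.1", "SDXL (with optimizations)"]),
]

def recommend_models(vram_gb: float) -> list[str]:
    """Return the recommendations for the highest tier the GPU satisfies."""
    for threshold, models in VRAM_TIERS:
        if vram_gb >= threshold:
            return models
    return []  # below 6GB, none of the listed models are a comfortable fit
```

For example, a 10GB card falls into the 8-12GB tier, so SDXL leads the list.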

Loading Custom Models

From Local Path

m = model.load("/path/to/local/model")

From Private HuggingFace Repos

from huggingface_hub import login

login(token="your_hf_token")

m = model.load("your-org/private-model")

With Custom Configuration

m = model.load(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype="bfloat16",
    variant="fp16",  # Load fp16 variant
    use_safetensors=True,
)

Model Licensing

Always check the license for any model you use!
Common licenses:
  • CreativeML OpenRAIL-M - Most Stable Diffusion models (permissive, with restrictions)
  • Apache 2.0 - FLUX.1 Schnell (permissive)
  • FLUX.1 Dev License - FLUX.1 Dev (non-commercial unless licensed)
  • Custom Licenses - Check each model’s HuggingFace page

Testing New Models

To test if a model works with HyperGen:
from hypergen import model

try:
    m = model.load("model/id")
    m.to("cuda")
    image = m.generate("test prompt")
    print("Model works!")
except Exception as e:
    print(f"Model failed: {e}")
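To smoke-test several candidate models in one pass, the same try/except pattern can be wrapped in a loop. The checker is passed in as a callable so the loop itself stays library-agnostic; in real use you would pass a small wrapper around model.load and m.generate (the wrapper below is a hypothetical stand-in, not part of HyperGen):

```python
def smoke_test_models(model_ids, try_one):
    """Run try_one(model_id) for each ID; record None on success, else the error text."""
    results = {}
    for model_id in model_ids:
        try:
            try_one(model_id)
            results[model_id] = None
        except Exception as e:
            results[model_id] = str(e)
    return results

# Stub checker standing in for a real load-and-generate wrapper:
def fake_try(model_id):
    if "broken" in model_id:
        raise RuntimeError("no DiffusionPipeline found")

report = smoke_test_models(["org/good-model", "org/broken-model"], fake_try)
# report maps each ID to None (works) or its failure message
```

Because failures are collected rather than raised, one incompatible model does not abort the rest of the sweep.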

Next Steps