
Download ComfyUI workflows and Models with LM Downloader


FLUX Kontext

Kontext is a generative editing model in the FLUX.1 series that enables precise image modifications via text or image prompts. It supports style conversion, element addition and removal, and attribute adjustments, with core strengths in understanding image context and preserving key information. Precise prompts help ensure the edits match your expectations.

FLUX Kontext Example

⏬Download Kontext GGUF Workflow

Model Storage Location

📂 ComfyUI/
├── 📂 models/
│   ├── 📂 unet/
│   │   └── flux1-kontext-dev-Q4_K_M.gguf
│   ├── 📂 vae/
│   │   └── ae.safetensors
│   └── 📂 text_encoders/
│       ├── clip_l.safetensors
│       └── t5xxl_fp16.safetensors or t5xxl_fp8_e4m3fn_scaled.safetensors

From: bullerwins/FLUX.1-Kontext-dev-GGUF

FLUX Kontext with Nunchaku acceleration

The Nunchaku node accelerates Kontext, significantly improving generation speed on NVIDIA graphics cards. Without an NVIDIA GPU, this node cannot be used.

⏬Download Nunchaku Kontext Workflow

From: mit-han-lab/nunchaku-flux.1-kontext-dev
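Since the Nunchaku node is NVIDIA-only, it can save time to check for a usable GPU before installing the workflow. One rough, unofficial check is whether the `nvidia-smi` CLI is available and runs; `has_nvidia_gpu` is a helper name made up for this sketch, not part of Nunchaku or ComfyUI.

```python
import shutil
import subprocess

def has_nvidia_gpu() -> bool:
    """Heuristic check: does nvidia-smi exist on PATH and list at least one GPU?"""
    exe = shutil.which("nvidia-smi")
    if exe is None:
        return False
    try:
        # `nvidia-smi -L` lists detected GPUs and exits non-zero on failure.
        return subprocess.run([exe, "-L"], capture_output=True).returncode == 0
    except OSError:
        return False
```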

FLUX.1 (Text to Image)

FLUX.1 dev GGUF

This workflow loads the FLUX.1 dev GGUF quantized model, with the recommended step count set to 20.

⏬Download FLUX.1 dev GGUF Workflow

From: city96/FLUX.1-dev-gguf

📂 ComfyUI/
├── 📂 models/
│   ├── 📂 clip/
│   │   └── t5-v1_1-xxl-encoder-Q8_0.gguf
│   ├── 📂 vae/
│   │   └── ae.safetensors
│   ├── 📂 text_encoders/
│   │   └── clip_l.safetensors
│   └── 📂 unet/
│       └── flux1-dev-Q6_K.gguf // Select a model according to your VRAM capacity.
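The tree above says to pick a quantization to match your VRAM but does not say how. The mapping below is an illustrative rule of thumb, not official guidance: the thresholds are assumptions, and actual fit depends on resolution, the text encoder you load, and other models in memory.

```python
# Hypothetical VRAM-to-quantization rule of thumb for FLUX.1 GGUF files.
# Thresholds are illustrative assumptions, not official recommendations.
def pick_quant(vram_gb: float) -> str:
    if vram_gb >= 24:
        return "Q8_0"    # near-full quality
    if vram_gb >= 16:
        return "Q6_K"
    if vram_gb >= 12:
        return "Q5_K_M"
    if vram_gb >= 8:
        return "Q4_K_M"
    return "Q3_K_S"      # lowest memory, noticeable quality loss
```

For example, a 10 GB card would map to a Q4_K_M file under these assumptions.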

FLUX.1 schnell GGUF (Faster)

This workflow loads the FLUX.1 schnell GGUF quantized model, with the recommended step count set to 4. The schnell version is faster than the dev variant but sacrifices some quality.

⏬Download FLUX.1 schnell GGUF Workflow

From: city96/FLUX.1-schnell-gguf

Framepack (Image to Video)

FramePack can quickly generate stable-quality one-minute videos with as little as 6 GB of VRAM. It can generate video from a single image or from specified first and last frames.

⏬Download Framepack Workflow

From: Kijai/HunyuanVideo_comfy

FastHunyuan (Text to Video)

FastHunyuan is an accelerated HunyuanVideo model that can sample high-quality videos in only 6 diffusion steps. We provide workflows and models in GGUF format, so users with either limited or ample GPU memory can find the model files that best suit their needs.

⏬Download Fast Hunyuan Video GGUF Workflow

From: calcuis/hyvid

Hunyuan Video GGUF (Image to Video)

HunyuanVideo-I2V-gguf is a GGUF-quantized model produced by city96. The workflow below uses it and lets you choose among models of different parameter sizes.

⏬Download Hunyuan Video I2V GGUF Workflow

From: city96/HunyuanVideo-I2V-gguf

Model Storage Location

📂 ComfyUI/
├── 📂 models/
│   ├── 📂 clip_vision/
│   │   └── llava_llama3_vision.safetensors
│   ├── 📂 text_encoders/
│   │   ├── clip_l.safetensors
│   │   ├── llava_llama3_fp16.safetensors
│   │   └── llava_llama3_fp8_scaled.safetensors
│   ├── 📂 vae/
│   │   └── hunyuan_video_vae_bf16.safetensors
│   └── 📂 unet/
│       └── hunyuan-video-i2v-720p-Q4_K_M.gguf // Select a model according to your VRAM capacity.
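Sorting the downloaded files into the right subfolders is the most error-prone step of this layout. The sketch below encodes the tree above as a lookup table; `dest_path` and `DESTINATIONS` are names invented for this example, and the rule that `.gguf` diffusion weights go to `models/unet` is taken from the layout shown here, not from any ComfyUI API.

```python
from pathlib import Path

# Destination folders for the HunyuanVideo I2V files, per the layout above.
DESTINATIONS = {
    "llava_llama3_vision.safetensors": "models/clip_vision",
    "clip_l.safetensors": "models/text_encoders",
    "llava_llama3_fp16.safetensors": "models/text_encoders",
    "llava_llama3_fp8_scaled.safetensors": "models/text_encoders",
    "hunyuan_video_vae_bf16.safetensors": "models/vae",
}

def dest_path(comfy_root: str, filename: str) -> Path:
    """Return where a downloaded model file belongs under the ComfyUI root."""
    # GGUF diffusion weights go to models/unet regardless of quantization level.
    folder = "models/unet" if filename.endswith(".gguf") else DESTINATIONS[filename]
    return Path(comfy_root) / folder / filename
```

For example, `dest_path("ComfyUI", "hunyuan-video-i2v-720p-Q4_K_M.gguf")` resolves to the `models/unet/` folder shown in the tree.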

Download Model Files

In the ComfyUI runtime window, you can find the "Model Download" feature. We provide some commonly used models there; simply click the download button to start downloading, saving you the hassle of searching for models elsewhere.


For more models, check out the following websites:

Mainland China users:

Global users:

Contact us

If you still encounter issues, please contact our technical support team at tech@daiyl.com.