Fast LTX 2.3 Image-to-Video GGUF in ComfyUI
Low VRAM · 20-Second AI Video
In this tutorial, we walk through running LTX Video 2.3 in ComfyUI using the GGUF quantized model — making it accessible even on GPUs with limited VRAM. You'll learn how to generate smooth, high-quality 20-second AI videos from a single input image, all inside a custom workflow.
1. Download the Workflow: Click the green Download button above and save the .json workflow file.
2. Install ComfyUI & Required Nodes: Make sure ComfyUI is up to date and install any missing custom nodes listed in the workflow.
3. Download the LTX 2.3 GGUF Model: Grab the GGUF quantized weights from Hugging Face and place them in your ComfyUI models folder.
4. Load the Workflow in ComfyUI: Open ComfyUI, drag and drop the workflow JSON, then connect your input image.
5. Run & Generate: Hit Queue Prompt and watch LTX 2.3 generate your ~20-second video in minutes.
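The model-placement part of the setup above can be sketched in shell. All paths below are assumptions about a typical ComfyUI install (GGUF diffusion weights are commonly loaded from `models/unet`), and the download URL is a placeholder, not the real repo link — copy the actual one from the model's Files tab on Hugging Face:

```shell
# Placeholder paths — adjust to your own ComfyUI install location.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
MODEL_DIR="$COMFYUI_DIR/models/unet"   # common location for GGUF diffusion weights

mkdir -p "$MODEL_DIR"

# Download the quantized weights (URL is a placeholder — use the real
# link from the model page on Hugging Face):
# wget -P "$MODEL_DIR" "https://huggingface.co/<repo>/resolve/main/<model>.gguf"
```

After the file is in place, restart ComfyUI so the loader node can see it in its model dropdown.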
GGUF is a quantization format originally popularized by llama.cpp, now adopted for diffusion video models. By reducing weight precision, GGUF cuts VRAM requirements significantly — allowing models like LTX 2.3 to run on GPUs that would otherwise run out of memory.
The trade-off is small in practice: the Q8 variant stays close to full-precision quality, and Q5 shows only minor degradation, while both run comfortably on 6–10GB VRAM cards.
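A back-of-envelope estimate shows why lower-precision weights matter so much for VRAM. The parameter count below is purely illustrative (not LTX's actual size), and the effective bits-per-weight figures are rough approximations for GGUF quant types, which store a little scale metadata per block on top of the nominal bit width:

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GB needed for model weights alone: params x bits / 8 bytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# Illustrative 5B-parameter model at common precisions.
# Effective bits-per-weight for Q8_0 / Q5_K sit slightly above
# the nominal 8 and 5 because of per-block scale metadata.
for name, bits in [("FP16", 16.0), ("Q8_0", 8.5), ("Q5_K", 5.5)]:
    print(f"{name}: ~{weight_vram_gb(5, bits):.1f} GB")
```

On this rough model, Q5 roughly cuts weight memory to a third of FP16 — which is why a card in the 6–10GB range becomes viable. Note this counts weights only; activations, the VAE, and the text encoder add further overhead.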
