comfyui
https://github.com/comfyanonymous/comfyui
Python
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
Triage Issues!
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues.
Triage Docs!
Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're a real pro, receive undocumented methods or classes and supercharge your commit history.
Python not yet supported · 0 Subscribers
Add a CodeTriage badge to comfyui
Help out
- Issues
- Fail-fast on prompt_worker crash (prevent “accept but not execute”)
- AMD graphics cards are forced to switch to FP16 precision mode when using FP8 models.
- ComfyUI AMD GPU crash - AMD Radeon RX 6650 XT: failed to run amdgpu-arch, binary not found.
- feat: Patch SageAttention 3 Node
- VRAM Out-of-Memory Error When Using LoRA with Qwen_image_edit in ComfyUI v0.7
- No operator found for `memory_efficient_attention_forward` with inputs | 5060 Ti
- "gemma_3_12B_it.safetensors". Is there still room for the quantification? As far as I know, all the graphics cards with 24GB of video memory have been killed by it.
- Some custom nodes broken after Jan 5, 2026 updates (comfy.ldm.lightricks.model)
- Manual update of ComfyUI display on Linux client ERROR
- OOM error, PC almost unresponsive, RTX 4090, 24GB VRAM, 64GB RAM
- Docs
- Python not yet supported