F.1-GGUF-Controlnet
V1

Just because you think so doesn't mean you're right.
Please see the full introduction for the directory layout used in testing and for the related models; the models have also been uploaded to the platform for convenience (if this helps, a free like is appreciated).
Because FLUX has high hardware requirements and adding ControlNet can run out of video memory, this workflow pairs it with a quantized GGUF model to lower the configuration needed.
On a stock RTX 3060, the entire workflow takes about 220 s to run (with ControlNet).
The following nodes are used:
1. Joy_caption nodes: https://github.com/StartHua/Comfyui_CXH_joy_caption
Interrogates the image to produce an accurate text description of the picture
2. CSV Loader nodes: https://github.com/theUpsider/ComfyUI-Styles_CSV_Loader
Selects a style preset to strengthen the stylistic effect of the prompt
3. GGUF nodes: https://github.com/city96/ComfyUI-GGUF?tab=readme-ov-file
Loads the quantized version of the model to reduce the hardware requirements (pick a quantization that fits your own setup); quality may drop noticeably below Q4
UNET model: https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
My current choice is flux1-dev-Q5_K_S.
You can choose the model according to your own setup from the quantization variants listed on that page.
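If you would rather fetch the quantized UNET from a script instead of the browser, here is a minimal download sketch using the huggingface_hub Python package (the package choice and the target path are my assumptions; any download method that puts the .gguf file into models/unet works just as well):

```python
# Minimal sketch: fetch a FLUX.1-dev GGUF quantization into ComfyUI's unet folder.
# Assumes `pip install huggingface_hub` and that ComfyUI is installed at ./ComfyUI;
# adjust the path and the filename to match your own setup.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="city96/FLUX.1-dev-gguf",
    filename="flux1-dev-Q5_K_S.gguf",  # swap for a Q4/Q6/Q8 variant to fit your VRAM
    local_dir="ComfyUI/models/unet",   # assumed ComfyUI install location
)
```

The same call with a different filename fetches any of the other quantizations listed in the repository.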


Input image (for the ControlNet comparison):

Output results (ControlNet comparison):

Model configuration:
Place siglip-so400m-patch14-384 in clip/siglip-so400m-patch14-384
Place Meta-Llama-3.1-8B-bnb-4bit in LLM/Meta-Llama-3.1-8B-bnb-4bit
Place Joy_caption in models
Place styles in the ComfyUI root directory
Place flux-canny-controlnet-v3 in models/xlabs/controlnets
Place flux1-dev-Q5_K_S.gguf in models/unet
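
As a quick sanity check before queuing the workflow, the short sketch below walks the paths listed above and reports anything missing. The exact filenames, and whether the clip/ and LLM/ folders sit under models/, are assumptions, so edit the list to match what you actually downloaded:

```python
# Sketch: confirm the models described above are where the workflow expects them.
# Paths are relative to the ComfyUI root; filenames are assumptions -- adjust as needed.
from pathlib import Path

root = Path("ComfyUI")  # assumed ComfyUI root directory
expected = [
    root / "models/clip/siglip-so400m-patch14-384",        # Joy_caption vision model
    root / "models/LLM/Meta-Llama-3.1-8B-bnb-4bit",        # Joy_caption language model
    root / "models/Joy_caption",                           # Joy_caption adapter files
    root / "styles.csv",                                   # styles file for the CSV Loader (filename assumed)
    root / "models/xlabs/controlnets/flux-canny-controlnet-v3.safetensors",  # extension assumed
    root / "models/unet/flux1-dev-Q5_K_S.gguf",
]
for path in expected:
    print(f"{'OK     ' if path.exists() else 'MISSING'} {path}")
```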
