Uncensored Females (FLUX) Update
FLUX_Dev_UFV7-FP16

Uncensored Females (FLUX)
Standalone checkpoint: do NOT load a separate VAE, text encoder (TE), or CLIP unless you are using the GGUF version.
Version 7
- Google FLAN T5xxl, quantized from full FP32
- Reduced weights of some trainings to allow for more face variation
- Nipple, anus, and vulva details could use finetuning (message me if you want to use this model as a base)
In my opinion, NF4 (Schnell) has better performance and quality than FP8 in Forge.
However, the FP8 version is viable for ComfyUI.
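For anyone running this outside Forge/ComfyUI, here is a minimal sketch of loading the standalone checkpoint with diffusers, assuming a recent diffusers build with Flux single-file support; the local filename, prompt, and step count are placeholders, not part of this release. Note that no separate VAE, TE, or CLIP is passed in, matching the note above.

```python
# Minimal sketch, assuming diffusers >= 0.30 with Flux single-file support.
# The filename below is a placeholder for wherever you saved the checkpoint.
import torch
from diffusers import FluxPipeline

# Standalone checkpoint: VAE, text encoders, and CLIP are bundled inside,
# so nothing else is loaded separately.
pipe = FluxPipeline.from_single_file(
    "FLUX_Dev_UFV7-FP16.safetensors",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps fit on 8 GB cards

image = pipe(
    "portrait photo of a woman",  # placeholder prompt
    height=1216,
    width=832,
    num_inference_steps=20,
).images[0]
image.save("out.png")
```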
Links (For GGUF)
Note: the Google FLAN model has some difficulty interfacing with GGUF and is currently only available in NF4/FP8.
Text Encoders:
- Updated CLIP
- Standard CLIP-L
- FP8 CLIP-L
Version Comparison
VAE (AE.safetensors)
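Since the GGUF route loads the VAE and text encoders separately (unlike the standalone checkpoint), here is a rough sketch of that split-component setup using diffusers' GGUF support; the GGUF filename is hypothetical, and the base FLUX.1-dev components stand in for the updated CLIP/FLAN files linked above.

```python
# Sketch of the GGUF route, assuming diffusers >= 0.32 with GGUF support
# (requires the `gguf` package). The .gguf filename is hypothetical.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# GGUF holds only the transformer, so it is loaded on its own...
transformer = FluxTransformer2DModel.from_single_file(
    "FLUX_Dev_UFV7-Q4_K_S.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# ...while the VAE and text encoders come from elsewhere. Here the base
# FLUX.1-dev repo is used; the updated CLIP-L / FLAN T5 files from the
# links above would be swapped in instead.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
```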
Pros
- Ages 18-40, trained on models with 2257 forms
- Tested with adult-rated character models; works well with the ones listed
- NF4 vs FP8 speeds listed below from my testing (8 GB VRAM)
Cons
- FLUX tends to draw the same face, and this model is no exception
- X-rated males are still not trained into V5 (without LoRAs)
- XXX has no support (without LoRAs)
Per the Apache 2.0 license, FLAN is attributed to Google.
My speeds on an RTX 3050 8GB
NF4 (Tested in Forge)
- 832x1216 @ 4.5 seconds per it
- 1024x1024 @ 5 seconds per it
- 1024x1536 @ 7.5 seconds per it
- 2048x2048 @ 110 seconds per it (ComfyUI wins hands down at 2048 with FP8)
FP8
- 832x1216 @ 5.5 seconds per it
- 1024x1024 @ 6 seconds per it
- 1024x1536 @ 15 seconds per it
- 2048x2048 @ 28 seconds per it
FP16/BF16
- 832x1216 @ 8 seconds per it
- 1024x1024 @ 10 seconds per it
- 1024x1536 @ 11 seconds per it (why is this faster than FP8? Maybe the CFG?)
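To turn the per-iteration numbers above into rough wall-clock estimates, here is a quick sketch assuming a typical 20-step render (the step count is an assumption, not part of my testing):

```python
# Back-of-envelope: total render time = seconds-per-iteration x step count.
def render_time(sec_per_it: float, steps: int = 20) -> float:
    return sec_per_it * steps

for label, spi in [("NF4 1024x1024", 5.0), ("FP8 1024x1024", 6.0)]:
    print(f"{label}: ~{render_time(spi):.0f} s for 20 steps")
# NF4 1024x1024: ~100 s for 20 steps
# FP8 1024x1024: ~120 s for 20 steps
```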
This model was trained using many individuals with known ages and 2257 forms; it has also been merged to try to ensure that no known individuals can be reproduced. However, FLUX seems to like to learn faces, even from less than 10% of the data, rather than merge them into a new face.
