FluxGym Training: Enhanced LoRA Flexibility

Recently, I embarked on my second attempt at FLUX DEV training with FluxGym. This time, I took a different approach that yielded significantly improved results.

My strategy was to retrain the model on a more diverse image set that, surprisingly, contained fewer images overall. This refined approach proved to be a game-changer.

The outcome is a highly flexible LoRA, particularly in how it responds to varied prompts. It's truly exciting to discover the creative possibilities once you've successfully trained a LoRA on a specific subject; the results can get really crazy!
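
If you want to experiment with the trained LoRA outside of a UI, here is a minimal sketch of how a FluxGym-trained LoRA file can be loaded on top of FLUX.1 dev with the diffusers library. The output directory, LoRA filename, trigger word, and prompt are placeholders for illustration, not the actual names used in this training.

```python
# Minimal sketch: applying a FluxGym-trained LoRA to FLUX.1 dev via diffusers.
# The LoRA path, weight filename, and trigger word below are placeholders.
import torch
from diffusers import FluxPipeline

# Load the FLUX.1 dev base model that the LoRA was trained against.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)

# Attach the LoRA weights exported by FluxGym (placeholder paths).
pipe.load_lora_weights(
    "outputs/my-flux-lora",                  # hypothetical output folder
    weight_name="my-flux-lora.safetensors",  # hypothetical weight file
)

# Offload model parts to CPU between steps to keep VRAM usage manageable.
pipe.enable_model_cpu_offload()

# "mysubject" stands in for whatever trigger word the LoRA was trained on.
image = pipe(
    prompt="a photo of mysubject as an astronaut on the moon",
    num_inference_steps=28,
    guidance_scale=3.5,
    width=1024,
    height=1024,
).images[0]
image.save("flux-lora-test.png")
```

Swapping wildly different scenarios around the trigger word is where the flexibility gained from the smaller, more diverse training set really shows.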
