Style Transfer
Not so long ago, it took a lot of algorithmic work just to colorize a black-and-white photograph. I learned a couple of days ago that it’s now possible to turn line art into almost anything, with a high degree of control.
ControlNet is one of the most powerful features in Stable Diffusion. Through experimentation, I’ve nailed down the steps to transform line art into photos or paintings, then apply art styles to them, with consistent results.
I needed line art for this post and didn’t want to use copyrighted work, so I made one inside MJ. The one I liked had some color in it, so I converted it to B+W in Photoshop. (I put the seed here just so you can reproduce it.)
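If you’d rather script the conversion than do it in Photoshop, a grayscale-plus-threshold pass in Python does the same job. The filenames and the threshold value below are placeholders of mine, not part of the original workflow:

```python
from PIL import Image

# Load the colored line art and flatten it to single-channel grayscale.
art = Image.open("lineart_color.png").convert("L")

# Threshold to pure black and white. 128 is an arbitrary midpoint;
# nudge it until thin lines survive without the background filling in.
bw = art.point(lambda p: 255 if p > 128 else 0)
bw.save("lineart_bw.png")
```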
In Stable Diffusion, I described the photo in txt2img, together with my list of negative prompts (omitted here because it’s quite long). Resolution is set to 512x512, using the UniPC sampler at 20 steps because it runs quickly on my machine.
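For anyone working from a script instead of the WebUI, here’s roughly the same txt2img setup sketched in diffusers. The base checkpoint and both prompts are stand-ins; swap in your own:

```python
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

# Any SD 1.5 checkpoint works here; this repo id is just a stand-in.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="a photo of ...",   # your description of the target image
    negative_prompt="...",     # your (long) negative prompt list
    width=512,
    height=512,
    num_inference_steps=20,    # UniPC converges quickly, so 20 is plenty
).images[0]
image.save("base.png")
```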
Create the first ControlNet unit (CN0) with the Lineart invert preprocessor. Select the matching lineart model, set the preprocessor resolution to 1024, and set the control mode to prioritize the text prompt. The first time I did this, I was stunned. Setting the resolution to 1024 makes a huge difference, so don’t forget that.
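In code, the same step looks roughly like the sketch below. The “invert” part just means the v1.1 lineart model wants white lines on a black background, so we flip the B+W art ourselves. Two caveats: diffusers resizes the control image to the output size itself, so the WebUI’s separate preprocessor-resolution setting has no direct analogue here, and there’s no exact switch for “prioritize text prompt” either; a reduced controlnet_conditioning_scale is the closest knob I know of, so treat that value as an assumption:

```python
import torch
from PIL import Image, ImageOps
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

# The v1.1 lineart ControlNet, which expects white lines on black.
lineart_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=lineart_cn,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# "Invert" preprocessor: black-on-white line art becomes white-on-black.
line = Image.open("lineart_bw.png").convert("L")
control = ImageOps.invert(line).convert("RGB")

image = pipe(
    prompt="a photo of ...",
    negative_prompt="...",
    image=control,
    num_inference_steps=20,
    controlnet_conditioning_scale=0.8,  # assumed stand-in for "prioritize text prompt"
).images[0]
```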
For the style, I used an image I made in MJ a few days ago and put it into a second ControlNet unit (CN1) with the Shuffle preprocessor. Also set this one to 1024 res, but prioritize ControlNet.
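The shuffle step can be reproduced outside the WebUI with ContentShuffleDetector from the controlnet_aux package, which scrambles the style image spatially so the model picks up its palette and texture rather than its composition. The style image path is a placeholder:

```python
import torch
from controlnet_aux import ContentShuffleDetector
from diffusers import ControlNetModel
from diffusers.utils import load_image

# The v1.1 shuffle ControlNet, paired with its preprocessor.
shuffle_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11e_sd15_shuffle", torch_dtype=torch.float16
)

# The shuffle preprocessor: warps the style image so only its colors
# and textures survive, not its layout.
style_control = ContentShuffleDetector()(load_image("mj_style.png"))
```

As far as I know, diffusers’ guess_mode flag is the nearest analogue to the WebUI’s “prioritize ControlNet” mode, but I haven’t verified that the two behave identically.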
The results with only CN0 + CN1 are quite impressive, but let’s fix the color with better text prompts.
I used Reference Only as the third ControlNet unit (CN2). Instead of stacking CN2 on top of everything right away, I want to show what only the line art plus the RefOnly CN2 looks like.
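Reference Only isn’t a ControlNet model at all; it hooks into the attention layers. Outside the WebUI, the closest thing I know of is the community “stable_diffusion_controlnet_reference” pipeline in diffusers, so the sketch below is an approximation of this line-art-plus-RefOnly combo, not the extension itself. Paths and prompts are placeholders:

```python
import torch
from PIL import Image, ImageOps
from diffusers import ControlNetModel, DiffusionPipeline
from diffusers.utils import load_image

lineart_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
# Community pipeline combining a ControlNet with reference-only attention.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=lineart_cn,
    custom_pipeline="stable_diffusion_controlnet_reference",
    torch_dtype=torch.float16,
).to("cuda")

# The inverted line art from the CN0 step.
control = ImageOps.invert(Image.open("lineart_bw.png").convert("L")).convert("RGB")

image = pipe(
    prompt="a photo of ...",
    image=control,                          # line art constraint (CN0)
    ref_image=load_image("mj_style.png"),   # reference image (RefOnly CN2)
    reference_attn=True,                    # attention-only reference
    reference_adain=False,
    num_inference_steps=20,
).images[0]
```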
All three together, nine samples each; the partial setups are in the two posts before. No cherry-picking.
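Stacking units in code means passing parallel lists. Here’s a minimal multi-ControlNet sketch with the two actual ControlNet models (Reference Only is left out since, as noted above, it isn’t a ControlNet model); the weights are assumptions to tune, not values from this workflow:

```python
import torch
from PIL import Image, ImageOps
from controlnet_aux import ContentShuffleDetector
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
from diffusers.utils import load_image

# Both conditioning models in one pipeline.
controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11e_sd15_shuffle", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Control inputs, prepared the same way as in the single-unit steps.
control = ImageOps.invert(Image.open("lineart_bw.png").convert("L")).convert("RGB")
style_control = ContentShuffleDetector()(load_image("mj_style.png"))

image = pipe(
    prompt="a photo of ...",
    negative_prompt="...",
    image=[control, style_control],            # same order as the models list
    controlnet_conditioning_scale=[0.8, 1.2],  # assumed weights, tune to taste
    num_inference_steps=20,
).images[0]
```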
The final post has additional tweaks:
- After Detailer: face_yolov8n.pt
- Gigapixel AI: 4x / HQ
- Photoshop / Lightroom for minor adjustments
Special thanks to Ellric on Discord for the suggestions on the CN priorities.