Flowers from the First Kiss
These are the flowers that I used to create many of the scenes seen in the First Kiss - Flowers set. All of them are lilies floating underwater.
I used lilies because the lily is a special flower to me, intertwined with my own romantic history. I have been sending lilies (or, where possible, calla lilies) to men I dated for as long as I can remember.
I don’t remember when or why I started doing that, but I personally find roses overused and prefer the purity of lilies. I usually stick with pure white lilies, but will opt for other colors if those aren’t available.
Besides sending them for special occasions, they’re also extremely effective when I need to apologize for any reason: a 3 ft (90 cm) tall bouquet of lilies delivered to any office gets everyone talking for the whole day, and indirectly wins his heart back. It has never failed me. I wouldn’t be able to resist it if someone did that to me either.
I have mentioned that I used flowers created with Midjourney in the Ref-Shuffle workflow to make the images of men kissing in a field of flowers, but I know it may not be immediately apparent what kind of images I used for the ControlNets.
So I am posting the images I used so you can try them on your own with any image you like. You don’t need to generate the images with Midjourney; you can just as easily use any image you like, including ones you already own, or ones found in stock libraries or on the internet. I chose images generated with Midjourney to ensure that I hold the rights to every part of the flow (even for images never directly present in the render and only used as ControlNet inputs), which could prevent potential legal issues later on.
You can generate these flowers inside Stable Diffusion if you like. I usually use Midjourney to create them because it’s faster for me to build a large collection of variations, and because it lets me keep the Stable Diffusion setup the same throughout without constantly switching between image setups. Having two separate Stable Diffusion windows open tends to confuse the Automatic1111 backend and is not recommended.
Just as Midjourney tends to be far more imaginative with some images than with others, this Ref-Shuffle workflow also shows different levels of responsiveness to two images that look almost the same, so working with a collection of images is ideal: if one doesn’t do what you want it to, try an alternative.
All of these images went through extensive limb-fixing with Photoshop generative layers. Sometimes it isn’t possible to fix the hands directly, and I have to get very creative with the fixes, including adding extra objects to hide the issues. It’s hard to explain in text, but I will write something up or make a video about it when I have time.
Tech: Text prompts with Midjourney (for ControlNets) + text prompts with Stable Diffusion: 30 steps, Euler a, CFG 5, 512x512, Virile Fantasy v1.1, Denoising 0.5, Clip skip 2. CN0 Shuffle. CN1 Reference Only. ADetailer (mediapipe_face_full) with negative prompts. Hires 1.5x (768x768), 15 steps, 4x_NMKD-Siax_200k. Post: Topaz Gigapixel HQ 4x (3072x3072), Adobe Lightroom color correction. Adobe Photoshop AI (in-painting).
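If you prefer to drive these settings from code instead of the web UI, here is a minimal sketch of the same generation step as a call to the Automatic1111 API. It assumes the UI is running locally with --api and the sd-webui-controlnet extension installed; the ControlNet model name, the prompts, and the flowers/ folder are placeholders, field names may differ slightly between extension versions, and ADetailer plus the Gigapixel/Lightroom/Photoshop post steps are left out.

```python
import base64
import pathlib

import requests

API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # assumes a local --api instance

def b64(path: pathlib.Path) -> str:
    """Read an image file and return it as a base64 string."""
    return base64.b64encode(path.read_bytes()).decode()

def render(flower_image: pathlib.Path, prompt: str, negative: str) -> bytes:
    payload = {
        "prompt": prompt,
        "negative_prompt": negative,
        "steps": 30,
        "sampler_name": "Euler a",
        "cfg_scale": 5,
        "width": 512,
        "height": 512,
        "denoising_strength": 0.5,
        # Hires fix: 1.5x (768x768), 15 steps, Siax upscaler.
        "enable_hr": True,
        "hr_scale": 1.5,
        "hr_second_pass_steps": 15,
        "hr_upscaler": "4x_NMKD-Siax_200k",
        # Clip skip 2; the checkpoint (Virile Fantasy v1.1) is selected in the UI.
        "override_settings": {"CLIP_stop_at_last_layers": 2},
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {   # CN0: Shuffle, fed with the Midjourney flower image.
                        "input_image": b64(flower_image),
                        "module": "shuffle",
                        "model": "control_v11e_sd15_shuffle",  # placeholder name
                    },
                    {   # CN1: Reference Only (preprocessor only, no model).
                        "input_image": b64(flower_image),
                        "module": "reference_only",
                        "model": "None",
                    },
                ]
            }
        },
    }
    r = requests.post(API_URL, json=payload, timeout=600)
    r.raise_for_status()
    return base64.b64decode(r.json()["images"][0])

# Near-identical references can behave very differently, so loop over a
# whole folder of flower images and keep whichever renders work.
for ref in pathlib.Path("flowers").glob("*.png"):
    png = render(ref, "two men kissing in a field of lilies", "")
    pathlib.Path(f"out_{ref.stem}.png").write_bytes(png)
```

The loop at the bottom mirrors the advice above about working with a collection of images: rather than betting on one reference, render the whole set and pick the ones the workflow responded to.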