Fire - Part 1 - Midjourney
This series of posts (3 parts) covers how I brought images from Midjourney into Stable Diffusion using a very specific workflow. It is long, so I will be posting it in three parts as I write and post to Instagram.
Once all three parts are up, I will link them together here.
Background
I recently started posting random one-shots from my archive. I have many works that I don’t post because I either never achieved my vision or they simply aren’t coherent enough to show as sets. I like to post sets of at least 3 images so that my intentions are clear, but that often means leaving out some of the nice singles, which is why I decided to post those singles every now and then on Twitter and Instagram as stories instead.
While browsing, I found some interesting images that I made with Midjourney.
At first I thought that maybe I had made these with image prompts, but it turns out I didn’t; they’re straight-up text prompts.
Note: you can find out your prompts with the /show command in Midjourney by using the job ID.
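In the Midjourney Discord, the command looks something like this (the job ID below is only a placeholder; substitute the actual ID of your job):

/show job_id: 00000000-0000-0000-0000-000000000000

It re-posts the job along with its full prompt.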
A few days ago, someone asked me how I achieved some of what looked like special effects in my images. I told them to just use keywords like “particles, dust, fog, magic, dirt,” etc. They didn’t seem to believe me at all. I feel like people often think that I’m just making shit up to end the conversation, but that really is something that I did.
Original prompts in v5
To show that I didn’t make this up, here’s the full prompt for the first image, the one with the smoke spinning around the head:
a cloud forming the shape of a 30yo muscular man figure, with fire particles and dusts everywhere --v 5 --seed 2877480443
Or with the yellow dust around the figure:
a cloud forming the shape of a 30yo muscular man in black latex pant and yellow stripes, with fire particles and dusts everywhere --v 5 --seed 2035910909
New Renders in v5.2
These were made in v5, so I thought that I would remaster them and make some new ones with v5.2. To be frank, I have been working with Stable Diffusion so much lately that I have all these unused hours in Midjourney anyway, so I might as well use them up.
I don’t usually share prompts; I find it unnecessary. I will share the keywords for you to make your own images if you tell me which aspect of the image you want to achieve. But to fully illustrate this article, I’ll share the prompt I used for the first image:
a cloud forming the shape of a 30yo muscular man in black latex pant and yellow stripes, with fire particles and dusts everywhere, photorealistic 3d render in the style of octane render --no shirt
Unfortunately, it’s a bit hard for me to provide the seed number because I got to this image using a few variations and tweaks, and when you apply variations, the jobs don’t come with seeds anymore. If you know a way to get around this issue, let me know.
Images
Here are the rest of the images from the Midjourney v5.2 renders, all fresh from the oven yesterday.
Text prompts in Midjourney v5.2, Topaz Gigapixel HQ 2x, Adobe Lightroom color correction.