Okay, so today I wanted to mess around with Stable Diffusion and generate some cool images. I’ve been seeing everyone post their “jessica lockhart” creations, so I figured, why not give it a shot myself? Let’s dive into what I did.

Getting Started
First things first, I fired up my Stable Diffusion WebUI. I’m using the Automatic1111 version, ’cause it’s what I’m used to. It’s not the most exciting thing to look at, pretty bare-bones really, but it gets the job done.
Setting up the Basics
I made sure I had a good checkpoint loaded. I went with one of the more realistic ones, I think it was “revAnimated”. Then I started building the text prompt, you know, painting the picture.
I typed “jessica lockhart” into the positive prompt.
I wanted to start basic, so I added some simple descriptors: “beautiful woman, blonde, long hair”.

Negative Prompts are Key
The negative prompt is where the magic happens, well, kinda. It’s more like, where the “not-magic” happens. You gotta tell the AI what you don’t want.
I threw in a bunch of stuff here like: “ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face”. Just trying to head off all those common AI image fails. No extra limbs, please. (Both prompts are just strings in the end; there’s a sketch of how they slot into an API payload after the parameter list below.)
Playing with Parameters
Then came the fiddling. I set the sampling method to “Euler a”, ’cause it usually gives me pretty good results. I bumped the sampling steps up to 30. I like giving it more steps; it takes a bit longer, but the details usually come out cleaner.
- CFG scale: I usually keep this around 7, but sometimes I push it up a bit if I want the image to really stick to the prompt.
- Image size: I went with 512×768, a portrait aspect ratio.
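By the way, if you’d rather script this than click around, Automatic1111 exposes an HTTP API when you launch it with the --api flag. Here’s a minimal Python sketch of the exact settings above; I believe the endpoint and field names match recent builds, but double-check against the /docs page on your own instance since they’ve shifted between versions.

```python
import base64
import requests

# Assumes the WebUI was started with the --api flag (e.g. ./webui.sh --api)
# and is listening on the default port.
URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

payload = {
    "prompt": "jessica lockhart, beautiful woman, blonde, long hair",
    "negative_prompt": (
        "ugly, tiling, poorly drawn hands, poorly drawn feet, "
        "poorly drawn face, out of frame, extra limbs, disfigured, "
        "deformed, bad anatomy, watermark, signature, low contrast"
    ),
    "sampler_name": "Euler a",
    "steps": 30,
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
    "seed": -1,  # -1 lets the backend pick a random seed
}

resp = requests.post(URL, json=payload, timeout=300)
resp.raise_for_status()

# The API returns generated images as base64-encoded PNGs.
for i, img_b64 in enumerate(resp.json()["images"]):
    with open(f"txt2img_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))
```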
Generating and Iterating
I hit that big “Generate” button and waited. It took a few tries, which, of course, it always does.
I started generating images, and some of them were, well, let’s just say “interesting”. Some had weird eyes, some had wonky hands, so you roll the dice again, and again.

I kept tweaking the prompt. I added more descriptive words, like “detailed eyes, realistic skin texture, photorealistic”. I also messed around with different seeds, to get more variation.
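Seed sweeps are scriptable too. Continuing from the txt2img sketch above (same URL, payload, and imports), you can pin the seed on each run so any keeper is exactly reproducible; the seed values here are arbitrary.

```python
# Continues the txt2img sketch above: reuses URL, payload, requests, base64.
# Pinning the seed makes each result reproducible; the values are arbitrary.
for seed in (1111, 2222, 3333, 4444):
    payload["seed"] = seed
    resp = requests.post(URL, json=payload, timeout=300)
    resp.raise_for_status()
    with open(f"seed_{seed}.png", "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))
```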
Refining with Hires. fix
Once I got an image I liked, I enabled “Hires. fix”.
I used the “R-ESRGAN 4x+” upscaler and set the denoising strength to around 0.5. This usually cleans up the details and makes the image look noticeably sharper.
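On the API side, Hires. fix is just a few extra fields on the same txt2img payload. A sketch, with the usual caveat that these field names come from recent Automatic1111 builds and may differ on yours:

```python
# Hires. fix via the API: extra fields on the txt2img payload from above.
payload.update({
    "enable_hr": True,               # turn on the two-pass hires fix
    "hr_upscaler": "R-ESRGAN 4x+",   # same upscaler as in the UI
    "hr_scale": 2,                   # upscale factor for the second pass
    "denoising_strength": 0.5,       # how much the second pass may repaint
})
```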
Final Touches (Sometimes)
Sometimes, if I’m feeling fancy, I’ll take the image into the “img2img” tab and do some inpainting to fix any small issues. But for this one, “Hires. fix” did a pretty good job, so I skipped that step.
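For completeness, inpainting can also be driven through the img2img endpoint: you send the source image plus a black-and-white mask, and only the masked region gets repainted. A rough sketch; the filenames are made up, and the field names are my best reading of the Automatic1111 API, so verify against /docs.

```python
import base64
import requests

def b64(path):
    # Read a local file and base64-encode it, as the API expects.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Hypothetical filenames: "keeper.png" is the image to touch up,
# "mask.png" is the hand-drawn mask (white = repaint, black = keep).
inpaint_payload = {
    "init_images": [b64("keeper.png")],
    "mask": b64("mask.png"),
    "prompt": "detailed eyes, realistic skin texture, photorealistic",
    "denoising_strength": 0.4,  # keep low so only the masked area shifts
    "inpainting_fill": 1,       # 1 = start the fill from the original content
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
                     json=inpaint_payload, timeout=300)
resp.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```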
The Result
After a bunch of tries and tweaks, I finally got an image I was happy with. It’s not perfect, but it’s pretty good for a quick Stable Diffusion session! It was a fun little experiment, and I’m always amazed at what this thing can do with just a few words and some clicks.

And that’s my record for today. Simple, but it’s all mine.