Replies: 2 comments
-
Alright, it seems it might be a parameter issue and not necessarily bad code, so now I can play around with it.
-
Any luck?
-
I'm trying to modify the stable diffusion demo to add SDXL inpainting here.
I have added options to load the mask image, generate the mask and masked-image latents, and feed them into the UNet. I can compile, run, and generate an image, but the masked region doesn't contain an image based on the prompt, and the unmasked region comes out as a color-distorted version of the original. I've been comparing against the PyTorch model, but I seem to be overlooking something...
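For context, the way I'm assembling the UNet input mirrors what the diffusers SDXL inpainting pipeline does. Roughly this (a simplified sketch; prepare_inpaint_inputs is an illustrative helper, not the demo's actual code):

```python
import torch
import torch.nn.functional as F

def prepare_inpaint_inputs(latents, mask, image, vae, scaling_factor=0.13025):
    """Illustrative helper: build the 9-channel input the SDXL inpainting UNet expects.

    latents: (B, 4, h, w) noisy latents; mask: (B, 1, H, W) in [0, 1],
    where 1 marks pixels to repaint; image: (B, 3, H, W) in [-1, 1].
    """
    # Zero out the region to be repainted before VAE-encoding.
    masked_image = image * (mask < 0.5)

    # Encode the masked image and apply the SDXL VAE scaling factor (0.13025).
    masked_image_latents = vae.encode(masked_image).latent_dist.sample() * scaling_factor

    # Downsample the mask to latent resolution (H/8, W/8).
    mask_latents = F.interpolate(mask, size=latents.shape[-2:], mode="nearest")

    # The inpainting UNet has in_channels == 9:
    # 4 (noisy latents) + 1 (mask) + 4 (masked-image latents).
    # With classifier-free guidance, mask_latents and masked_image_latents
    # must also be duplicated to match the doubled latent batch.
    return torch.cat([latents, mask_latents, masked_image_latents], dim=1)
```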
The final latents look like this before replacing the unmasked region with the original pixels, and after replacing them:
[before/after screenshots]
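The replacement step itself is just a linear blend in pixel space. Roughly this (a sketch with illustrative names, not the demo's actual code):

```python
import torch

def replace_unmasked(generated: torch.Tensor, original: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    # mask is (B, 1, H, W) in [0, 1]; 1 marks the repainted region.
    # Keep original pixels where mask == 0, generated pixels where mask == 1.
    return mask * generated + (1.0 - mask) * original
```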
I'm using the model from https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 and running as follows:
[demo command]
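For the PyTorch reference I'm comparing against, the model card's diffusers usage looks roughly like this (prompt and file names below are placeholders):

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Placeholder inputs: white pixels in the mask are repainted.
init_image = load_image("input.png").resize((1024, 1024))
mask_image = load_image("mask.png").resize((1024, 1024))

image = pipe(
    prompt="a tiger sitting on a park bench",  # example prompt from the model card
    image=init_image,
    mask_image=mask_image,
    guidance_scale=8.0,
    num_inference_steps=20,
    strength=0.99,  # < 1.0 preserves some of the original image's latents
).images[0]
image.save("out.png")
```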
I'll keep debugging, but I'm wondering if anybody has any insight into what I might be missing.