Inpaint ControlNet ComfyUI (Reddit) - May 12, 2025

I recently just added the inpainting function to it; I was just working on the drawing vs. rectangles lol. Your efforts are much appreciated.

I'm trying to inpaint additional characters into a scene, but the poses aren't right. So you'll end up with stuff like backwards hands, too big/small, and other kinds of bad positioning. Keep the same size/shape/pose of the original person. They are normal models; you just copy them into the ControlNet models folder and use them.

The results from inpaint_only+lama usually look similar to inpaint_only but a bit "cleaner": less complicated, more consistent, and fewer random objects. This makes inpaint_only+lama suitable for image outpainting or object removal.

Generate all key poses / costumes with any model in low res in ComfyUI, then narrow down to 2-3 usable poses. These two values work in opposite directions, with ControlNet inpaint trying to keep the image like the original and the IPAdapter trying to swap the clothes out. See here for an example workflow. All the masking should still be done with the regular img2img at the top of the screen.

Promptless inpaint/outpaint in ComfyUI made easier with a canvas (IPAdapter + CN inpaint + reference only).

I've spent several hours trying to get OpenPose to work in the Inpaint location but haven't had any success. The only thing that kind of worked was sequencing several inpaintings, starting from generating a background, then inpainting each character in a specific region defined by a mask. I tried to combine ControlNet with a conditioning mask.

However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the interpretation of the AI model and ControlNet's enforcement can lead to a degradation in quality. However, since a recent ControlNet update, two inpaint preprocessors have appeared, and I don't really understand how to use them: ...to outpaint an image, but the caveat is that it requires an image first, and it doesn't use the amazing ControlNet inpaint module to do the outpaint.

If you want to make denseposes you can install detectron2 on Linux or WSL2 until he fixes it. Also, you can upload a custom mask by going to the (Advanced > Inpaint) tab. Anyway, this is secondary compared to the inpaint issue. Comparison: input image (this image is from Stability's post about Clipdrop). Configuration:

If you use whole-image inpaint, then the resolution for the hands isn't big enough, and you won't get enough detail.

ControlNet inpainting settings: "ControlNet model" selects which specific ControlNet model to use, each possibly trained for different inpainting tasks. "ControlNet weight" determines the influence of the ControlNet model on the inpainting result; a higher weight gives the ControlNet model more control over the inpainting. "Mask blur" mixes the inpainting area with the outer image.
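The "mask blur" setting described above is just feathering: the regenerated pixels are alpha-blended back into the untouched image through a blurred copy of the mask. A minimal illustration with Pillow/NumPy; the file names and blur radius are placeholders, not values from any of the posts above:

```python
import numpy as np
from PIL import Image, ImageFilter

# Hypothetical inputs: the original picture, the raw inpaint result,
# and the binary mask that was painted over the area to change.
original = Image.open("original.png").convert("RGB")
inpainted = Image.open("inpainted.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# "Mask blur": feather the hard mask so the inpainted pixels fade into
# the surrounding image instead of ending at a sharp seam.
blur_radius = 8  # roughly what the mask-blur slider controls
soft_mask = mask.filter(ImageFilter.GaussianBlur(blur_radius))

# Alpha-composite: 1.0 inside the mask keeps the inpainted pixels,
# 0.0 outside keeps the original, with a smooth ramp in between.
alpha = np.asarray(soft_mask, dtype=np.float32)[..., None] / 255.0
blended = (alpha * np.asarray(inpainted, dtype=np.float32)
           + (1.0 - alpha) * np.asarray(original, dtype=np.float32))

Image.fromarray(blended.astype(np.uint8)).save("blended.png")
```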
All complex workflows / additional things -> ComfyUI. Everything else - txt2img, img2img, ControlNet, IPAdapter, inpaint, etc. -> the "webUI" part of Swarm.

Fooocus' inpaint is by far the highest quality I have ever seen; finding a high-quality and easy-to-use inpaint workflow is so difficult for me. I've not completely A/B tested that, but I think ControlNet inpainting has an advantage for outpainting for sure.

Some suggest that ControlNet inpainting is much better, but in my personal experience it does things worse and with less control. Maybe I am using it wrong, so I have a few questions: when using ControlNet Inpaint (inpaint_only+lama, "ControlNet is more important"), should I use an inpaint model or a normal one?

Ultimately, I did not screenshot my other two load-image groups (similar to the one on the bottom left, but connecting to different ControlNet preprocessors and IP-Adapters), and I did not screenshot my sampling process (which has three stages, with prompt modification and upscaling between them, and toggles to preserve the mask and re-emphasize ControlNet).

Disabling the ControlNet inpaint feature results in non-deep-fried inpaints, but I really want to use ControlNet, as it promises to deliver inpaints that are more coherent with the rest of the image. I selected previously dropped images to utilize lama and the openpose editor.

ControlNet 1.1 Inpaint (not very sure about what exactly this one does), ControlNet 1.1 Shuffle, ControlNet 1.1 Instruct Pix2Pix, ControlNet 1.1 Lineart, ControlNet 1.1 Anime Lineart, ControlNet 1.1 Tile (unfinished, which seems very interesting).

It's a 0-to-1 scale that lets you express how much you want the composition to change vs. remain. Vary the IPAdapter weight and the ControlNet inpaint strength in your "clothing pass", and balance the values until you get a result you like.

Doing the equivalent of "Inpaint Masked Area Only" was far more challenging. EDIT: there is something like this already built in to WAS. See comments for more details.

Is there anything similar available in ComfyUI? I'm specifically looking for an outpainting workflow that can match the existing style and subject matter of the base image, similar to what LaMa is capable of. Also, any suggestion to get a major resemblance of the shirt? I used the Canny ControlNet because the result with HED sucked a lot.

Apr 21, 2024: You now know how to inpaint an image using ComfyUI! Inpainting with ControlNet: when making significant changes to a character, diffusion models may change key elements. An example of inpainting + ControlNet from the ControlNet paper.

Many professional A1111 users know a trick to diffuse an image with references by inpainting. For example, if you have a 512x512 image of a dog and want to generate another 512x512 image with the same dog, some users will connect the 512x512 dog image and a 512x512 blank image into a 1024x512 image, send it to inpaint, and mask out the blank 512x512 part to diffuse a dog with a similar appearance. I switched to ComfyUI and have a hard time finding a workflow that works the same way.

This is like friggin Factorio, but with AI spaghetti! So, I just set up automasking with the Masquerade node pack, but can't figure out how to use ControlNet's global_harmonious inpaint. I'll try to be brief and hit the major points, but it really is a huge topic.
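The side-by-side reference trick described above (a 512x512 reference plus a 512x512 blank stitched into one 1024x512 inpaint job) is easy to prepare outside of any UI. A small sketch with Pillow; the file names are placeholders and the gray fill for the blank half is an arbitrary choice:

```python
from PIL import Image

# Prepare the canvas for the "reference inpaint" trick described above:
# put the 512x512 reference image next to a blank 512x512 area, then
# build a mask that covers only the blank half.
reference = Image.open("dog_512.png").convert("RGB").resize((512, 512))

canvas = Image.new("RGB", (1024, 512), color="gray")   # 1024x512 working image
canvas.paste(reference, (0, 0))                        # reference on the left

mask = Image.new("L", (1024, 512), color=0)            # black = keep
mask.paste(Image.new("L", (512, 512), color=255), (512, 0))  # white = inpaint

canvas.save("inpaint_input.png")
mask.save("inpaint_mask.png")
# Feed inpaint_input.png + inpaint_mask.png into your img2img/inpaint pass;
# the model diffuses the right half while "seeing" the reference on the left.
```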
I'm wondering if it's possible to use ControlNet -> OpenPose in conjunction with Inpaint to add a virtual person to existing photos. But you won't get the best consistency between the background and the characters (in terms of lighting, for instance).

Using RealisticVision Inpaint & ControlNet Inpaint / SD 1.5. As someone relatively new to AI imagery, I started off with Automatic1111 but was tempted by the flexibility of ComfyUI, and felt a bit overwhelmed. I want to inpaint at 512p (for SD 1.5). While you have the (Advanced > Inpaint) tab open, you will need to adjust the denoising strength to find a good match for the desired outcome.

ComfyUI inpaint/outpaint/img2img made easier (updated GUI, more functionality) - workflow included.

Densepose ControlNet in ComfyUI does not do that when applied; it's just Magic Animate that does that. This works fine, as I can use the different preprocessors. Fooocus is absolutely amazing with SDXL inpainting. I'm just struggling to get ControlNet to work. I got a makeshift ControlNet/inpainting workflow started with SDXL for ComfyUI (WIP).

My approach is to split the video into individual frames, use Segment Anything to obtain the human mask, then use the VAE inpaint encoder to convert it into latent space. By that I mean it depends what you are trying to inpaint.

Hi, I am still getting the hang of ComfyUI. ControlNet inpaint global harmonious is (in my opinion) similar to img2img with low denoise and some color distortion. I am very well aware of how to inpaint/outpaint in ComfyUI - I use Krita. You can move, resize, do whatever to the boxes.

While using the Reactor node, I was wondering if there's a way to use information generated from ControlNet, i.e. OpenPose, to better control the eye look while using Reactor? This workflow obviously can be used for any source images, style images, and prompts. There are colabs out there that do it too.

How do you handle it? Any workarounds? If you use a masked-only inpaint, then the model lacks context for the rest of the body.

I've seen a lot of people asking for something similar; it can be refined, but it works great for quickly changing the image to run back through an IPAdapter or something similar. I always thought you had to use "VAE Encode (for Inpainting)"; turns out you just VAE-encode and set a latent noise mask. I usually just leave the inpaint ControlNet between 0.5 and 1. In addition I also used ControlNet inpaint_only+lama and lastly ControlNet Lineart to retain the body shape.

xinsir models are for SDXL. As there is no SDXL ControlNet support I was forced to try ComfyUI, so I tried it. For SD 1.5 in ComfyUI I compare all possible inpainting solutions in this tutorial: BrushNet, PowerPaint, Fooocus, UNet inpaint checkpoint, SDXL ControlNet inpaint, and SD 1.5 inpaint checkpoints and a normal checkpoint with and without Differential Diffusion.

Impact pack's detailer is pretty good. But so far in SD 1.5 there is ControlNet inpaint, and nothing yet for SDXL. The original deer image was created with SDXL, then I used SD 1.5 to replace the deer with a dog. It came out around the time Adobe added generative fill, and direct comparisons to that seem better with CN inpaint. I have to second the comments here that this workflow is great.

I'm reaching out for some help with using Inpaint in Stable Diffusion (SD). There is no ControlNet inpainting for SDXL. I wanted a flexible way to get good inpaint results with any SDXL model.
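A recurring complaint in the posts above is the tradeoff between whole-image inpainting (not enough resolution for small areas like hands) and masked-only inpainting (the model loses context). A common compromise, and roughly what padded "only masked" implementations and the inpaint frame/padding settings mentioned later do, is to crop a padded window around the mask, inpaint that crop, and paste it back. A rough sketch with Pillow/NumPy; the file names and padding value are placeholders:

```python
from PIL import Image
import numpy as np

# Crop a padded window around the mask, inpaint that crop at a higher
# effective resolution, then paste the result back. Padding gives the
# model some surrounding context without diffusing the whole frame.
PAD = 128

image = Image.open("original.png").convert("RGB")
mask_img = Image.open("mask.png").convert("L")
mask = np.asarray(mask_img) > 127

ys, xs = np.nonzero(mask)
left = max(int(xs.min()) - PAD, 0)
top = max(int(ys.min()) - PAD, 0)
right = min(int(xs.max()) + PAD, image.width)
bottom = min(int(ys.max()) + PAD, image.height)

crop = image.crop((left, top, right, bottom))
crop_mask = mask_img.crop((left, top, right, bottom))
crop.save("crop_to_inpaint.png")    # inpaint this crop (upscaled if needed) ...
crop_mask.save("crop_mask.png")

# ... then paste the inpainted crop back into the full image:
result = image.copy()
inpainted_crop = Image.open("crop_inpainted.png").convert("RGB").resize(crop.size)
result.paste(inpainted_crop, (left, top), crop_mask)
result.save("final.png")
```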
ComfyUI-Advanced-ControlNet: due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended.

But there is a LoRA for it, the Fooocus inpainting LoRA. Use the brush tool in the ControlNet image panel to paint over the part of the image you want to change.

I've been using StableSwarmUI; it's perfect, a mix of both ComfyUI and the WebUI. I thought it could be possible with ControlNet segmentation or some other kind of segmentation, but I have no idea how to do it. I try to add some kind of object to the scene via inpaint in ComfyUI, sometimes using a LoRA; Fooocus generates a very good quality object, while ComfyUI's is not acceptable at all. I have it installed only for this. I upscale with inpaint (I don't like hires fix), I outpaint with the inpaint model, and of course I inpaint with it.

ControlNet inpaint is probably my favorite model: the ability to use any model for inpainting is incredible, in addition to the no-prompt inpainting and its great results when outpainting, especially when the resolution is larger than the base model's resolution. My point is that it's a very helpful tool.

I was using the masking feature of the modules to define a subject in a defined region of the image, and guided its pose/action with ControlNet from a preprocessed image.

Select the ControlNet preprocessor "inpaint_only+lama". The inpaint_only+lama ControlNet in A1111 produces some amazing results. The problem with it is that the inpainting is performed on the whole-resolution image, which makes the model perform poorly on already upscaled images.

I had the same problem. Before, I had always been in the Inpaint Anything tab and then the Inpaint tab, where that problem occurred.

ControlNet Inpaint should have your input image with no masking. Which works okay-ish. Here it is, a PNG with the... Workflow - https://civitai.com/articles/4586

Download the Realistic Vision model and put it in the ComfyUI > models > checkpoints folder. If you have placed the models in their folders and do not see them in ComfyUI, you need to click Refresh or restart ComfyUI. Generate a character with PonyXL in ComfyUI (put it aside).

The "Inpaint Segments" node in the Comfy I2I node pack was key to the solution for me (it has the inpaint frame size, padding and such). It supports text-guided object inpainting, text-free object removal, shape-guided object inpainting and image outpainting. Is there any way to achieve the same in ComfyUI? Or to simply be able to use inpaint_global_harmonious?

It's a good idea to use the "Set Latent Noise Mask" node instead of the VAE inpainting node. VAE inpainting needs to be run at 1.0 denoising, but set-latent denoising can use the original background image, because it just masks with noise instead of an empty latent. So, what you did was encode the image into latent, then paste an empty latent on top of that image latent in the masked area - you get a new full latent. There are many ways to do this, but if you want to inpaint with a mask, you will usually need the "Set Latent Noise Mask" node to prepare the mask + latent.
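Several of the tips above boil down to the same wiring: VAE-encode the source image, apply Set Latent Noise Mask, and sample at less than full denoise. Below is a minimal sketch of that graph in ComfyUI's API ("prompt") format. The node class names are stock ComfyUI nodes, but the checkpoint name, file names, prompt text and sampler settings are placeholders rather than anything taken from the posts above:

```python
# Minimal sketch of the "VAE encode + Set Latent Noise Mask" wiring in
# ComfyUI's API format. With this setup the KSampler denoise can stay below
# 1.0, because the unmasked latents still hold the original image; the
# "VAE Encode (for Inpainting)" route would instead need denoise 1.0, since
# the masked region starts from empty latents (as explained above).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "realisticVision.safetensors"}},
    "2": {"class_type": "LoadImage", "inputs": {"image": "source.png"}},
    "3": {"class_type": "LoadImageMask",
          "inputs": {"image": "mask.png", "channel": "red"}},
    "4": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "5": {"class_type": "SetLatentNoiseMask",
          "inputs": {"samples": ["4", 0], "mask": ["3", 0]}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a dog sitting on the grass", "clip": ["1", 1]}},
    "7": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, deformed", "clip": ["1", 1]}},
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.6}},
    "9": {"class_type": "VAEDecode",
          "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
    "10": {"class_type": "SaveImage",
           "inputs": {"images": ["9", 0], "filename_prefix": "inpaint"}},
}
```

The same graph can of course be built directly in the node editor; the dict form is only shown because it is compact and can be queued over the API.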
Img2img + inpaint workflow, ControlNet + img2img workflow, inpaint + ControlNet workflow, img2img + inpaint + ControlNet workflow - does anyone have knowledge on how to achieve this? I want the output to incorporate these workflows in harmony, rather than simply layering them.

Since the release of SDXL, I never want to go back to 1.5. Please repost it to the OG question instead.

Attempted to address the hands using inpaint_lama, which effectively erases the original inpainting area and starts fresh.

Right-click on the Preview Bridge and select "Open in Mask Editor", colour the regions of the background you want to be animated, and click "Save to Node".

Here's what I've got going on; I'll probably open-source it eventually. All you need to do is link your ComfyUI URL, internal or external, as long as it's a ComfyUI URL. I've been using ComfyUI for about a week and am having a blast building my own workflows.

Fooocus came up with a way that delivers pretty convincing results. Furthermore, regular inpainting uses less VRAM.

May 1, 2025: xinsir_controlnet_depth_sdxl - a ControlNet model for depth-aware structure control.

I'd recommend just enabling ControlNet Inpaint, since that alone gives much better inpainting results and makes things blend better.

Hey there, I'm trying to switch from A1111 to ComfyUI, as I am intrigued by the node-based approach. Select the ControlNet model "controlnetxlCNXL_h94IpAdapter [4209e9f7]".

It's normally used in txt2img, whereas img2img has more settings, like the padding that decides how much of the surrounding image to sample, and you can also set the image resolution to do the inpainting at, whereas the ControlNet inpainting, I think... But the outset moves the area inside or outside the inpainting area, so it will prevent...

But here I used two ControlNet units to transfer style (reference_only without a model, and T2IA style with its model). Therefore, I use T2IA color_grid to control the color and replicate this video frame by frame using a ControlNet batch.

Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure.

Are there any nodes for sketching/drawing directly in ComfyUI? Of course you can always take things into an external program like Photoshop, but I want to try drawing simple shapes for ControlNet, or painting simple edits, before putting things into inpaint.

Hey everyone! Like many, I like to use ControlNet to condition my inpainting, using different preprocessors and mixing them. I used to use A1111, and ControlNet there had an inpaint preprocessor called inpaint_global_harmonious, which actually got me some really good results without ever needing to create a mask. I looked it up but didn't find any answers for what exactly the model does to improve inpainting.
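For the "img2img + inpaint + ControlNet in one pass" question above, the usual ComfyUI route is to add a ControlNet inpaint unit on top of the masked-latent graph: the InpaintPreprocessor node from the comfyui_controlnet_aux pack builds the control image from the source picture and the mask, and an inpaint ControlNet conditions the sampler. This is a hedged sketch extending the earlier graph, not a guaranteed equivalent of A1111's inpaint_global_harmonious; the ControlNet file name is a placeholder for whatever inpaint ControlNet you actually have installed:

```python
# Sketch of bolting a ControlNet inpaint unit onto the graph above.
# InpaintPreprocessor comes from the comfyui_controlnet_aux node pack and
# must be installed separately.
controlnet_nodes = {
    "11": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11p_sd15_inpaint.pth"}},
    "12": {"class_type": "InpaintPreprocessor",   # builds the masked control image
           "inputs": {"image": ["2", 0], "mask": ["3", 0]}},
    "13": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["6", 0], "negative": ["7", 0],
                      "control_net": ["11", 0], "image": ["12", 0],
                      "strength": 0.8, "start_percent": 0.0, "end_percent": 1.0}},
}
# Then point the KSampler at the ControlNet-conditioned prompts instead:
#   "positive": ["13", 0], "negative": ["13", 1]
```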
I solved it by going to the Inpaint Anything tab, then the ControlNet Inpaint tab, and clicking "Run ControlNet Inpaint".

On the first run of your generation, bypass the KSampler node. I know you can do that by adding ControlNet OpenPose in...

Yes, these are the settings. - You are using txt2img. I've installed ComfyUI Manager, through which I installed ComfyUI's ControlNet Auxiliary Preprocessors.

I've found that A1111 + Regional Prompter + ControlNet provided better image quality out of the box, and I was not able to replicate the same quality in ComfyUI. I'd like to go from text2image, then pad the output image, then use that image as input to the ControlNet inpaint.

I got a workflow working for inpainting (the tutorial that shows the inpaint encoder should be removed because it's misleading). However, if you get weird poses or extra legs and arms, adding the ControlNet nodes can help. I really need the inpaint model too much, and especially the ControlNet model has not yet come out. The ControlNet conditioning is applied through positive conditioning as usual. It takes the pixel image and the inpaint mask as the input, and outputs to the Apply ControlNet node. Now, no more tricks or tailored workflows are required for better inpainting results. Forcing ComfyUI to properly inpaint SDXL is a steady fight too.

Hi, let me begin with the fact that I have already watched countless videos about correcting hands; the most detailed are on SD 1.5. This is what I was doing, but I'm pretty sure the second use of the KSampler is incorrect! The KSampler, of course, will affect the whole image. ComfyUI provides more flexibility in theory, but in practice I've spent more time changing samplers and tweaking denoising factors to get images with unstable quality. In your case, I think it would be better to use ControlNet and a face LoRA.

Bring it into Fooocus for faceswap multiple times (no upscale, using different models), then bring it back into ComfyUI to upscale/prompt.

Inpainting-specific model for optimal repairs. Settings for Stable Diffusion SDXL Automatic1111 ControlNet inpainting. IPAdapter (PLUS): style transfer by injecting reference image features.

People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc.

Let's say I like an overall image, but I want to change the entire style; in cases like that I'll go inpainting, "inpaint not masked", whole picture, then choose the appropriate checkpoint. It was more helpful before ControlNet came out but probably still helps in certain scenarios. As you can see, the results are indeed coherent, just deep-fried.

If you're talking about the union model, then it already has tile, canny, openpose, inpaint (but I've heard it's buggy or doesn't work) and something else. Sep 22, 2024: I don't see any benefit in using this Flux inpainting ControlNet over regular inpainting with the InpaintModelConditioning node, which is supported for Flux and other models in ComfyUI. In addition, Differential Diffusion also works with InpaintModelConditioning. Hope it helps you guys <3
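The InpaintModelConditioning route mentioned just above can be sketched in the same API format. This is an assumption-level sketch of how that node is commonly wired: it takes the prompts, the VAE, the source pixels and the mask, and returns inpaint-ready conditioning plus the latent to sample from. Node ids and file names follow the earlier sketches:

```python
# InpaintModelConditioning as an alternative to the VAEEncode +
# SetLatentNoiseMask pair from the earlier sketch (useful with inpaint
# checkpoints, and the route the Flux comment above refers to).
inpaint_conditioning = {
    "14": {"class_type": "InpaintModelConditioning",
           "inputs": {"positive": ["6", 0], "negative": ["7", 0],
                      "vae": ["1", 2], "pixels": ["2", 0], "mask": ["3", 0]}},
}
# Wire the KSampler to it instead:
#   "positive": ["14", 0], "negative": ["14", 1], "latent_image": ["14", 2]
```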
Link to my setup.

In this case, I am using "Modify Content", since "Improve Details" often adds human parts in the inpaint.

Jan 20, 2024: But you use the Inpaint Preprocessor node. An image of the node graph might help (although those aren't that useful to scan at thumbnail size), but the ability to search by nodes or features used, and by the generation of models, would help.

Jul 20, 2024: controlnet++ is for SD 1.5. I use ControlNet Inpaint for basically everything after the low-res text2img step. You can further experiment by using the source image or the style image itself instead of an empty latent. ControlNet Inpaint should be used in img2img.

0.5, image-to-image 70%: the result seems as expected - the hand is regenerated (ignore the ugliness) and the rest of the image seems the same. However, when we look closely, there are many subtle changes in the whole image, usually decreasing the quality/detail. More an experiment / proof of concept than a workflow. The developer of the controlnet_aux preprocessors acknowledged that there is a bug.

I don't know why every other UI sucks at SDXL inpaint. Without it, SDXL feels incomplete.

Select "ControlNet is more important". Performed detail expansion using upscale and ADetailer techniques.

I use ControlNet to capture the actions of the characters in the original video, then feed the new character portraits with removed backgrounds into IPAdapter. I used the preprocessed image to define the masks.

Using text has its limitations in conveying your intentions to the AI model; ControlNet, on the other hand, conveys them in the form of images. If you want to use your own mask, use "Inpaint Upload".

I used a Photon checkpoint, grow mask and blur mask, the InpaintModelConditioning node, and the Inpaint ControlNet, but the results are like the images below. How to use ControlNet with Inpaint in ComfyUI: it's all situational. MAT_Places512_G_fp16.

Option a) t2i + low denoising strength + ControlNet tile resample; option b) i2i inpaint + ControlNet tile resample (if you want to maintain all the text). Know that ControlNet inpainting has unique preprocessors (inpaint_only+lama and inpaint_global_harmonious). The Inpaint Model Conditioning node will leave the original content in the masked area.
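Once a graph like the ones sketched above exists in API format, it can be queued on a running ComfyUI instance over HTTP, which is presumably what tools that ask you to "link your ComfyUI URL" are doing under the hood. A small helper, assuming the default local address and port:

```python
import json
import urllib.request

# Queue an API-format workflow on a running ComfyUI instance.
# The default local address is an assumption; swap in your own ComfyUI URL.
COMFYUI_URL = "http://127.0.0.1:8188"

def queue_prompt(graph: dict) -> dict:
    """POST an API-format workflow to ComfyUI's /prompt endpoint."""
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response contains the queued prompt_id

# e.g. queue_prompt(workflow) with the dict built in the earlier sketch
```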