ComfyUI nodes — examples and tips compiled from Reddit

The subject and background are rendered separately, blended, and then upscaled together.

Are there any ComfyUI nodes (i.e. extensions) that you know of that have a button on them? I was thinking about making my extension compatible with ComfyUI, but I am at a loss when it comes to placing a button on a node.

I wanted to share a summary here in case anyone is interested in learning more about how text conditioning works under the hood.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai.

First, use the FastMuter to switch all the attached nodes off.

I have like 500 LoRAs tagged and organized, and if you add a keyword at the end of your prompt, e.g. <Dungeons and Dragons>, it can activate a LoRA.

I have them stored in a text file at ComfyUI\custom_nodes\comfyui-dynamicprompts\nodes\wildcards\cameraView.txt, but I'm just at a loss right now; I'm not sure if I'm missing something else or what.

The node itself (or better, the LLM inside of it) writes the Python code that runs the process.

When I dragged the photo into ComfyUI, there were two nodes in the bottom left called "PrimitiveNode" (under the "Text Prompts" group). Now, if I go to Add Node -> utils -> Primitive, it adds a completely different node, even though the node itself is called "PrimitiveNode". Same thing for the "CLIP Text Encode" node.

I had some success with zooming in to a doorknob on a house. Anyway, have fun!

I ran some tests this morning.

Been playing around with ComfyUI and got really frustrated with trying to remember what base model a LoRA uses and its trigger words, so I wrote a custom node that shows a LoRA's trigger words, examples, and what base model it uses.

Two nodes are selectors for style and effect, each with its own weight control slider.

For your all-in-one workflow, use the Generate tab. Queue the flow and you should get a yellow image from the Image Blank.

A checkpoint is your main model, and LoRAs then add smaller models to vary the output in specific ways. And remember: SDXL does not play well with 1.5, so that may give you a lot of your errors.

It's basically just a mirror. Or, at least, kinda. If so, you can follow the high-res example from the GitHub.

I had implemented a similar process in the A1111 WebUI back then, and the results were good, but the code wasn't suitable for publication.

I've been using A1111 for almost a year. In this guide I will try to help you with starting out, and give you some starting workflows to work with.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

Nodes are not always better — for many tasks, yes, but nodes can also make things way more complicated. For example, try creating some shader effects in a node-based shader editor: some things are such that a few lines of code become a huge graph mess.

edit: I'm hearing a lot of arguments for nodes. I hated node design in Blender and I hate it here too; please don't make ComfyUI any sort of community standard.

I'm keen to have a go at making custom nodes. I didn't think I'd have any chance of writing one without docs, but after viewing a few random GitHub repos of some of those custom nodes, I think I could do all but the more complicated ones just by following those examples.
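If you're in the same boat (learning by reading repos), the pattern nearly all of them follow is small enough to sketch. This is a minimal, made-up node — the class attributes and the NODE_CLASS_MAPPINGS export are ComfyUI's conventions, while the node's behavior itself is purely illustrative:

```python
# Minimal sketch of a ComfyUI custom node. INPUT_TYPES / RETURN_TYPES /
# FUNCTION / CATEGORY and the NODE_CLASS_MAPPINGS export are the
# conventions ComfyUI expects; the behavior here is a toy example.

class LoraInfoExample:
    @classmethod
    def INPUT_TYPES(cls):
        # Each input is declared as (type, options).
        return {"required": {"lora_name": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)  # one STRING output socket
    FUNCTION = "run"            # name of the method ComfyUI calls
    CATEGORY = "utils"          # where it shows up under Add Node

    def run(self, lora_name):
        # A real node would look up trigger words, base model, etc.
        return (f"no info recorded for: {lora_name}",)

# ComfyUI discovers nodes through these module-level mappings.
NODE_CLASS_MAPPINGS = {"LoraInfoExample": LoraInfoExample}
NODE_DISPLAY_NAME_MAPPINGS = {"LoraInfoExample": "LoRA Info (example)"}
```

Dropped into custom_nodes, a file like this shows up in the Add Node menu after a restart.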
However, the other day I accidentally discovered this: comfyui-job-iterator (ali1234/comfyui-job-iterator on GitHub — a for loop for ComfyUI). As usual with custom nodes: download the folder, put it in custom_nodes, and just launch Comfy.

Vid2QR2Vid: You can see another powerful and creative use of ControlNet by Fictiverse here.

Txt/Img2Vid + Upscale/Interpolation: This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. Also, if this is new and exciting to you, feel free to post.

I checked the documentation of a few nodes and found that there is missing as well as wrong information, unfortunately. The description of a lot of parameters is "unknown". I agree that we really ought to see some documentation.

I would like a way to view image examples of the checkpoint I have selected in the checkpoint loader node. I think A1111 has this feature by default or as an extension.

Here are some places where you can find custom nodes: ComfyUI Custom Node Manager.

Also, the hand and face detection have never worked.

One tool I would really like is something like the CLIP interrogator, where you would give it a song or a sound sample and it would return a string describing that song in a language and vocabulary that the AI understands.

Idea: a custom loop button on the side menu — set how many times you want it to loop, like Auto Queue with a cap — plus a controller node by which the loop count can be driven by values coming from inside the workflow.

(Stuff that really should be in main rather than a plugin, but eh, shrugs.) I think a sharpen node would also be great to add to a post-pro workflow. Thank you for your attention.

Something laid out like the webui.

That will get you up and running with all the ComfyUI-Annotation example nodes installed, and you can start editing from there.

But when using it, I find some tasks require a lot of repetitive clicking — mostly moving nodes out of the way.

A node hub: a node that accepts any input (including inputs of the same type) from any node in any order, able to transport that set of inputs across the workflow (a bit like u/rgthree's Context node does, but without the explicit definition of each input, and without the restriction to the existing set of inputs).

Then find example workflows. Start with simple workflows.

I messed with the conditioning combine nodes but wasn't having much luck, unfortunately.

The example given on that page shows how to wire up the nodes. An example is FaceDetailer / FaceDetailerPipe.

You can take a look at my AP Workflow for ComfyUI, which makes extensive use of Context and Context Big nodes, together with the Any Switch node, the Reroute node, and the new Fast Groups Muters/Bypassers.

So nodes are not better singularly, but they have their place.

Here's a very interesting node 👍 However, I have three small criticisms: you need to run the workflow once to get the number of the node you want information about, and then a second time to get the information (or two more times if you make a mistake).

I noticed that ComfyUI is only able to load workflows saved with the "Save" button, not with the "Save API Format" button.
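A note on that "Save API Format" confusion: the API-format JSON is not meant to be loaded back into the editor — it's the graph format ComfyUI's HTTP API accepts. A minimal sketch, assuming a default local server on 127.0.0.1:8188 and a workflow you exported as workflow_api.json:

```python
# Submit an API-format workflow to a running ComfyUI instance.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    graph = json.load(f)  # maps node id -> {"class_type": ..., "inputs": ...}

payload = json.dumps({"prompt": graph}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # returns a prompt_id once queued
```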
Virtuoso Nodes for ComfyUI: this set of nodes is designed to give some Photoshop-like functionality within ComfyUI. Every conceivable blend mode is available. The nodes available include Blend Modes, which applies an image to another image using a blend mode operation.

Sometimes the node order is one sequence (step 1: generate preview; step 2: VAE encode), but sometimes the order becomes another (step 1: VAE encode; step 2: upscale; and so on).

Ah, I'm sorry — I was pretty new to ComfyUI and didn't know how to share workflows.

Also has colorization options for workflow nodes via regex, groups, and per node.

Am I missing something? ComfyUI seems to have downloaded some models for face/hand detection on using this node for the first time, but I'm not seeing their output.

I also needed to edit the WAS_Node_Suite.py file. Now my WAS Node Suite Load Image Batch and Save Image Extended nodes are working lovely again.

My gripe with nodes is that they inherently add redundancy to any design workflow.

A few new nodes and some functionality for rgthree-comfy went in recently. Fast Groups Muter & Fast Groups Bypasser: like their "Fast Muter" and "Fast Bypasser" counterparts, but collecting groups automatically from your workflow.

Seems like a tool someone could make a really useful node with — a node that could inject the trigger words into a prompt for a LoRA, show a view of sample images, or all kinds of things.

Two nodes are used to manage the strings: in the input fields you can type the portions of the prompt, and with the sliders you can easily set the relative weights.

That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well.

AnimateDiff in ComfyUI is an amazing way to generate AI videos.

Also, some of my KSamplers from other node packs were having issues loading the new samplers, which was interesting but not too big of a deal.

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins.

I love downloading new nodes and trying them out.

So you want to make a custom node? You looked it up online and found very sparse or intimidating resources? I love ComfyUI, but it has to be said: despite being several months old, its documentation surrounding custom nodes is god-awful tier. One could even say "satan tier".

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

If I drag and drop the image, it is supposed to load the workflow, right? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

What are your favorite custom nodes (or node packs), and what do you use them for? If there was a preset menu in Comfy, it would be much better.

Is it possible to create with nodes a sort of "prompt template" for each model and have it selectable via a switch in the workflow? For example: 1 - Enable Model SDXL BASE -> this would auto-populate my starting positive and negative prompts and the sampler settings that work best with that model.
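That prompt-template idea maps naturally onto a small custom node. A sketch under stated assumptions — the template values, node name, and settings below are all invented for illustration:

```python
# Hypothetical "prompt template per model" node: pick a model name from a
# combo box and get back matching prompts and sampler settings.

TEMPLATES = {
    "SDXL BASE": {
        "positive": "highly detailed, sharp focus",
        "negative": "lowres, watermark",
        "steps": 30, "cfg": 7.0,
    },
    "SD1.5": {
        "positive": "masterpiece, best quality",
        "negative": "bad anatomy, blurry",
        "steps": 25, "cfg": 8.0,
    },
}

class PromptTemplateSwitch:
    @classmethod
    def INPUT_TYPES(cls):
        # A combo input is declared as a list of allowed values.
        return {"required": {"model": (list(TEMPLATES.keys()),)}}

    RETURN_TYPES = ("STRING", "STRING", "INT", "FLOAT")
    RETURN_NAMES = ("positive", "negative", "steps", "cfg")
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, model):
        t = TEMPLATES[model]
        return (t["positive"], t["negative"], t["steps"], t["cfg"])

NODE_CLASS_MAPPINGS = {"PromptTemplateSwitch": PromptTemplateSwitch}
```

Wire its outputs into the text-encode and sampler inputs, and switching the combo swaps the whole "template" at once.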
Having a computer science background, I feel that the potential of ComfyUI is huge if some basic branching and looping components are added, to unleash the creativity of developers.

The DWPreprocessor node can be found at: Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor. That node can be obtained by installing Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors custom node.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

Whenever I create connections between nodes, as shown in the image above, the order of the nodes becomes completely randomized.

When you right-click on a node, the menu is similarly generated by node.getExtraMenuOptions — but instead of returning an options object, this one gets it passed in…

Luckily, you have the plugin manager installed: open that up and click "Install Missing Nodes" and it will try to grab them for you. It's usually pretty good at automatically getting the right stuff.

I'm afraid I'm not using this any more! The basic setup is there, though.

But I never used a node-based system before, and I also want to understand the basics of ComfyUI.

Some nodes might be called "Mask Refinement" or "Edge Refinement".
- Background Input Node: in a parallel branch, add a node to input the new background you want to use.
- Composite Node: use a compositing node like "Blend," "Merge," or "Composite" to overlay the refined masked image of the person onto the new background.

For example, I've trained a LoRA of "txwx woman". In A1111, I would invoke the LoRA in the prompt and also write "a photo of txwx woman". Are you saying that in ComfyUI you do NOT need to state "txwx woman" in the prompt?

Copy that (clipspace) and paste it (clipspace) into the load image node directly above (assuming you want two subjects).

Are you looking for an alternative to sd-webui faceswaplab? If so, ComfyUI has face-swapping nodes which you can install from the ComfyUI Manager. However, if you are looking for a more extensive lab- or studio-like interface, there is an interesting project called 'facefusion' with the MIT License.

Hey everyone. It's installable through ComfyUI and lets you have a song or other audio files drive the strengths on your prompt scheduling. It uses the amplitude of the frequency band and normalizes it to strengths that you can add to the Fizz nodes. You can also listen to the music inside ComfyUI. Here's a basic example of using a single frequency band range to drive one prompt: Workflow.

It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.

It could be that the Impact basic pipe node allows for the switch between widget and input as well.

Using a 'CLIP Text Encode (Prompt)' node you can specify a subfolder name in the text box. Then go into the properties (right-click) and change the 'Node name for S&R' to something simple like 'folder'. Now, in your 'Save Image' nodes, include %folder.text%, and whatever you entered in the 'folder' prompt text will be pasted in.

Workspace Templates do help a ton to bring in some preconfigured noodles, but knowing how Blender does it, I ju…

Quite uninspired by AI audio at this point. I would like to hear my favorite artists produce music by any means, but I can't perceive a real heart and soul — a message, a feeling, a human emotion — in something produced entirely by a device without a soul (even if it's inspired by accident).

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.

Hey everyone — got a lot of interest in the documentation we did of 1600+ ComfyUI nodes, and wanted to share the workflow + nodes we used to do it using GPT-4. The rest should be self-explanatory.

masquerade-nodes-comfyui

A way to prune a big graph: disable all nodes; identify the useful nodes that were executed; then iterate through all useful nodes, walking backwards through the graph and enabling all the parent nodes. Any node that is part of a branch that is not useful stays disabled. Certain nodes will stop execution of the entire graph if they are missing inputs, while others play nice and let your workflow continue — examples of "mean" nodes: KSampler, VAE Decode, Upscale with Model; example of a "nice" node: Preview Image. Maybe the problem is figuring out if a node is useful? It could be more than just the nodes that output an image. See the sketch below.
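That walk-backwards idea is easy to express in plain Python. A minimal sketch, assuming a toy graph format that just maps each node id to the ids of its parents:

```python
# Hedged sketch of the pruning idea above: start from the nodes whose
# output you actually want, and re-enable every ancestor.
def useful_subgraph(graph: dict, wanted: set) -> set:
    """graph maps node_id -> list of parent node_ids (its inputs)."""
    enabled = set()
    stack = list(wanted)
    while stack:
        node = stack.pop()
        if node in enabled:
            continue
        enabled.add(node)
        stack.extend(graph.get(node, []))  # enable all parents, recursively
    return enabled

workflow = {
    "save_image": ["vae_decode"],
    "vae_decode": ["ksampler"],
    "ksampler": ["checkpoint", "positive", "negative", "empty_latent"],
    "preview_debug": ["vae_decode"],  # not needed for the final image
}
print(useful_subgraph(workflow, {"save_image"}))
# preview_debug stays disabled; everything feeding save_image is enabled.
```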
Frequency-separation controls: High Frequency Strength, High Frequency Size, Low Frequency Strength, Low Frequency Size.

ComfyUI question: does anyone know how to use ControlNet (one or multiple) with the Efficient Loader & ControlNet Stacker node? A picture example of a workflow would help a lot. I have LoRA working, but I just don't know how to do ControlNet with this.

Lastly, it generates the first preview.

But standard A1111 inpaint works mostly the same as this ComfyUI example you provided.

I'm trying to get used to ComfyUI.

It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

(This post is addressed to ComfyUI users — unless you're interested too, of course ^^) Hey guys! The other day on the ComfyUI subreddit, I published my LoRA Captioning custom nodes, very useful for creating captions directly from ComfyUI. The LoRA Caption custom nodes, just like their name suggests, allow you to caption images so they are ready for LoRA training. I hope you'll enjoy the custom nodes.

I absolutely 100% do not care how clever the author of the workflow is. The workflow posted here relies heavily on useless third-party nodes from unknown extensions. It's a pain in the ass to be forced to download weird anime checkpoints and a dozen obscure custom nodes, struggle to figure out why this thousand-node spaghetti soup doesn't work, and isolate the tiny section that I want to learn.

I also had issues with this workflow with unusually sized images.

I haven't seen a tutorial on this yet.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file: open the .bat file with Notepad, make your changes, then save it. Every time you run the .bat file, it will load the arguments.

I know a bit of Python and understand how the provided example works.

You can also use the UpscaleImageBy node so you don't have to enter sizes, or, to decrease the number of nodes (but not the spaghetti), there's the Ultimate SD Upscale node — but making it more efficient than what you already did is going to be a challenge.

Batch on the latent node offers more options when working with custom nodes, because it is still part of the same workflow.

Trying to make a node that selects terms for a prompt (similar to the Preset Text node, but with different terms per node). I have two string lists in my node.

I'm using ComfyUI portable and had to install it into the embedded Python install: going to python_embedded and using python -m pip install compel got the nodes working.

You can use any input type for these switches; the important thing is that the input in the switch matches the input required in the subsequent node. Then you connect them to a switch node (on/off or boolean).

This is great for prompts, so you don't have to manually change the prompt in every field (for upscalers, for example).

So as long as you use the same prompt and the LLM gets to the same conclusion, that's the whole workflow.

Finding what nodes you need to do X or Y can be a massive headache, and there are many nodes that either lack documentation entirely or have completely worthless documentation. Why? Because fuck you, that's why.

I was getting frustrated by the amount of overhead involved in wrapping simple Python functions to expose as new ComfyUI nodes, so I decided to make a new decorator type to remove all the hassle. For anyone still looking for an easier way, I've created a @ComfyFunc annotator that you can add to your regular Python functions to turn them into ComfyUI operations. You just have to annotate your function so the decorator can inspect it and auto-create the ComfyUI node definition.
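I don't know @ComfyFunc's real internals, but the mechanism described — inspecting a plain function's signature and generating the node boilerplate from it — can be sketched in a few lines. Everything below is an assumption-level illustration, not the actual library:

```python
# Sketch of a decorator that builds a ComfyUI node class from a plain,
# type-annotated Python function. Not the real @ComfyFunc.
import inspect

PY_TO_COMFY = {int: "INT", float: "FLOAT", str: "STRING"}

def comfy_node(category="utils"):
    def wrap(fn):
        sig = inspect.signature(fn)
        required = {}
        for name, param in sig.parameters.items():
            ctype = PY_TO_COMFY[param.annotation]
            opts = {} if param.default is param.empty else {"default": param.default}
            required[name] = (ctype, opts)
        # Build the node class ComfyUI expects, fields derived from fn.
        return type(fn.__name__, (), {
            "INPUT_TYPES": classmethod(lambda cls: {"required": required}),
            "RETURN_TYPES": (PY_TO_COMFY[sig.return_annotation],),
            "FUNCTION": "run",
            "CATEGORY": category,
            "run": staticmethod(lambda **kw: (fn(**kw),)),
        })
    return wrap

@comfy_node()
def repeat_prompt(text: str = "", times: int = 2) -> str:
    return ", ".join([text] * times)

print(repeat_prompt.INPUT_TYPES())               # derived from the signature
print(repeat_prompt().run(text="cat", times=3))  # ('cat, cat, cat',)
```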
Legally, the nodes can be shipped under any license, because they are packaged separately from the main software, and nothing stops someone from writing their own non-GPL ComfyUI from scratch that is license-compatible with those nodes.

The way any node works is that the node is the workflow.

(My Python skills are appalling ;-)

Can someone please explain or provide a picture of how to connect two positive prompts to a model? 1st prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2), Anime Style, Manga Style, Hand drawn, cinematic, Sharp focus, humorous illustration, big depth of field, Masterpiece, concept art, trending on ArtStation, Vivid colors, Simplified style, trending on CGSociety.

Then, a maths operation of x2 to plug into a widget-converted-to-input on the Upscaler.

Soon there will also be examples showing what can be achieved with advanced workflows.

I know that several samplers allow having, for example, the number of steps as an input instead of a widget, so you can supply it from a Primitive node and control the steps on multiple samplers at the same time.

This is a question for any node developer out there.

You just need to use Queue Prompt multiple times (Batch Count in the Extra options). If you want to loop images, I built a Cache Node.

Now, with each generation, you can automatically or manually get the desired image as input for the next node (e.g., KSampler img2img).

So is there any suggestion on where to start — any tips or resources for me? Like they said, though, A1111 will be better if you don't understand how to use the nodes in Comfy. But I highly suggest learning the nodes — it's actually a lot of fun! lol, that's silly; it's a chance to learn stuff you don't know, and that's always worth a look.

From the first KSampler you take the latent to a VAE Decode node (converting it to a normal image); from the VAE Decode node you take the image to an Image Preview node. If you look at the Refiner's KSampler, you'll see the same process.

For example: switching prompts, switching checkpoints, switching controls, loading images foreach, and much more.

Efficiency Nodes, Ultimate SD Upscale, ComfyUI roop. Checkpoint: epiCRealismSin with the add_detail and epiCRealismHelper LoRAs, but those are just my preference — any SD1.5 models will do. RgThree's nodes, and probably some other stuff too; CivitAI is a great place to "shop"!

With some nervous trepidation, I release my first node for ComfyUI: an implementation of the DemoFusion iterative mixing sampling process.

My research didn't yield much result, so I might ask here before I start creating my custom nodes.

edited again to add: if you find yourself wondering how to run this experiment because you can't set your CFG below 0, do a little basic hacking to your nodes — all you have to do is set the minimum CFG for a basic KSampler from 0 (hard-coded default, last I checked) to a negative value.

For example, the KSamplerAdvanced has inputs like 'steps' and 'end_at_step' which are set using another node's output (via the spaghetti), while 'cfg' or 'noise_seed' are set using input fields. Yet they all look the same in the sampler's class definition: they're all defined as INT/FLOAT with default, min, and max values.
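For reference, this is roughly what those declarations look like inside a sampler-style node's INPUT_TYPES — a sketch following ComfyUI's conventions, not the actual KSampler source. Whether a given entry becomes a widget or a linked input is decided in the UI ("convert widget to input"), which is why they look identical here:

```python
# Sketch of sampler-style input declarations.
class MySamplerSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
                # Lowering "min" here is exactly the "basic hacking" the
                # CFG comment above describes: it lets the value go negative.
                "cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
                "noise_seed": ("INT", {"default": 0, "min": 0, "max": 2**64 - 1}),
            }
        }
```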
Plus a quick run-through of an example ControlNet workflow.

You can get it here: a comfyUI node layout for nesting latents within latents (github.com). I provide one example JSON to demonstrate how it works. But it requires lots of fiddling to get the latents to line up nicely, and how many steps you run at the end without recombining the latents is a balancing act.

Aside from it being in Japanese, the underlying concepts were not easily understood even after translating. This tutorial does a good job of breaking it down.

I'm not sure that custom script allows you to select a new checkpoint, but what it is doing can be done manually with more nodes.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Honestly, it wouldn't be a bad idea to have an A1111-style node workflow for easier onboarding.

PromptToSchedule and the prompt parser node can help carry the LoRAs to the sampler.

EDIT: I must warn people that some of my settings in several nodes are probably incorrect. Suggestions?

The Assembler node collects all incoming strings and combines them into a single final prompt.

New tutorial: how to rent 1-8x 4090 GPUs and install ComfyUI (+ Manager, custom nodes, models, etc.).

ComfyUI Manager will identify what is missing and download it for you; if a box is red, then it's missing.

A ComfyUI node suite for composition: stream webcams or media files in and out, animation, flow control, making masks, shapes, and textures like Houdini and Substance Designer, and reading MIDI devices.

I don't know A1111, but I guess your AND was the equivalent of one of those.

The "Attention Couple" node lets you apply a different prompt to different parts of the image by computing the cross-attentions for each prompt, where each prompt corresponds to an image segment.

I got ChatGPT to help me understand what this node does.

Mirrored nodes: if you change anything in the node or its mirror, the other linked node reflects the change.

Lots of pieces to combine with other workflows.

I love your new nodes! Regarding denoise levels: I tried lowering the denoise in the KSampler, and it just gives me a blank area where the inpainting was supposed to happen.

KSampler to VAE Decode to Image Save.

Since LoRAs are a patch on the model weights, they can also be merged into the model. You can also subtract model weights and add them, as in this example used to create an inpaint model from a non-inpaint model, with the formula: (inpaint_model - base_model) * 1.0 + other_model. If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI.
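The same math outside the graph: a minimal PyTorch sketch of the add-difference formula applied to raw state dicts. (In the graph itself you would wire up ComfyUI's model-merge nodes — ModelMergeSubtract and ModelMergeAdd, if I recall — instead.)

```python
# Sketch: merged = (inpaint - base) * strength + other, key by key.
import torch

def add_difference(inpaint_sd, base_sd, other_sd, strength=1.0):
    merged = {}
    for key, other_w in other_sd.items():
        if key in inpaint_sd and key in base_sd:
            merged[key] = (inpaint_sd[key] - base_sd[key]) * strength + other_w
        else:
            merged[key] = other_w  # keys the inpaint patch doesn't cover
    return merged

# Toy tensors standing in for real checkpoint weights:
demo = lambda v: {"w": torch.full((2, 2), v)}
print(add_difference(demo(3.0), demo(1.0), demo(10.0))["w"])  # all 12.0
```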
Hi, I am new to ComfyUI and this may be a bit of a dumb question: I am looking for a way to run a single node without running "the entire thing", so to speak. Is there any way to achieve this? Or should I look for a different node?

Try Civitai.

Really like graph editors in general, and find it works great for SD.

I'm just curious if anyone has any ideas.

Anyways, neat idea — hope to see further updates.

Hey everyone! Looking to see if anyone has any working examples of BREAK being used in ComfyUI (be it node-based or prompt-based). If you are unfamiliar with BREAK, it is part of Automatic1111.

Filter and sort nodes by their properties (right-click on the node and select "Node Help" for more info).

Custom nodes/extensions: ComfyUI is extensible, and many people have written some great custom nodes for it.

The way you set it up in your example workflow is pretty straightforward — the basic setup.

I just published a video where I explore how the ClipTextEncode node works behind the scenes in ComfyUI.

Unless someone did a node with this option, you can't.

I found it extremely difficult to wrap my head around initially, but after a few days of going through example nodes and the ComfyUI source, I started being productive.

Just reading the custom node repos' code shows the authors have a lot of knowledge of how ComfyUI works and how to interface with it, but I am a bit lost (in the large amount of code in ComfyUI's repo and the large number of custom node repos) as to how to get started.

MoonDream and LLaVA are for prompting back text (summarize this, make me a list of keywords for that) and outputting text. This is more like it becomes the node you ask it to be, coding itself: you can connect any type of input and any type of output, so you can input an image and output some text about that image, or input a number and get some random text out of that number.

On the Load Image Batch node, connect the filename_text output field to the text input of the Text to Conditioning node, then connect the CONDITIONING output from that same node to the positive input of the sampler. (You don't actually need to use the Text to Conditioning node here.)

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers, in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

Either you maintain a ComfyUI install with every custom node on the planet installed (don't do this), or you steal some code that consumes the JSON and draws the workflow & noodles (without the underlying functionality that the custom nodes bring) and saves it as a JPEG next to each image you upload. I'm not sure that approach is feasible if you are not an experienced programmer.

Wan2.1 ComfyUI workflow (May 12, 2025): The Wan2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation. It is licensed under the Apache 2.0 license and offers two versions: 14B (14 billion parameters) and 1.3B (1.3 billion parameters), covering various tasks including text-to-video (T2V) and image-to-video (I2V).

Were the two KSamplers needed? I feel that I could have used a bunch of ConditioningCombine nodes so everything leads to one node that goes into the KSampler.

It looks like a cool project.

Update the VLM Nodes from GitHub.

I ended up building a custom node that is very custom for the exact workflow I was trying to make, but it isn't good for general use.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

It's similar to the concept of inheritance.

Hope this helps you guys as much as it's helping me.

We need to generate a blank image to paint masks onto before doing anything else.

comfy_clip_blip_node

Note that I am not responsible if one of these breaks your workflows, your ComfyUI install, or anything else.

But it gave better results than I thought.

I'm a basic user for now, but I want the deep dive.

The ComfyUI node already exists (I've added it to the upcoming AP Workflow 7.0), but it doesn't yet have the capability to transfer style from a single source image.

For example, with the "quality of life" nodes there is one that lets you choose which picture from the batch you want to process further.

It grabs all the keywords and tags, sample prompts, lists the main triggers by count, and downloads sample images from Civitai.

Save it as safety_checker.py in the custom nodes directory and it will be in your images/postprocessing node list. Note that it will return a black image and an NSFW boolean.

I haven't tried it yet, but it seems it can do pretty much what Node-RED does; Node-RED (an event-driven, node-based programming language) has this functionality, so it could definitely work in a node-based environment such as ComfyUI.

Yeah, I was looking for one too, so I ported the Auto1111 plugin into a custom node.

The KSampler node is the same node for both txt2img and img2img; for txt2img you send it an empty latent image, which is what the EmptyLatentImage node generates.

If you look at the ComfyUI examples for Area Composition, you can see that they're just using Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler. Even with 4 regions and a global condition, they just combine them two at a time until it becomes a single positive condition to plug into the sampler.

I am at the point where I need to filter out images based on a tag list: I should be able to skip the image if some tags are or are not in the list — for example, if tag "2girl" is in the list, do not save; if tag "looking at viewer" is in the list, save. I've searched for such a node or method but haven't found anything, and I tried to write a node to do it myself, but so far I haven't gotten far with it.
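The rule itself is tiny; the hard part is packaging it as a node. A hedged sketch of just the gate logic, with the tag names taken from the example above:

```python
# Save-gate logic for tag-based filtering.
def should_save(image_tags, skip_tags, require_tags):
    tags = set(image_tags)
    if tags & set(skip_tags):               # e.g. "2girl" present -> do not save
        return False
    return bool(tags & set(require_tags))   # e.g. "looking at viewer" -> save

print(should_save(["1girl", "looking at viewer"], ["2girl"], ["looking at viewer"]))  # True
print(should_save(["2girl", "looking at viewer"], ["2girl"], ["looking at viewer"]))  # False
```

Wrapped in a node, this would sit between a tagger and the Save Image node and simply pass the image through (or not).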
Fernicles SDTools V3 - ComfyUI nodes.

First off, it's a good idea to get the custom nodes off git — specifically WAS Suite, Derfuu's nodes, and Davemane's nodes.

The first example is the panda with a red scarf, with less prompt bleeding of the red color thanks to conditioning concat. The third example is the anthropomorphic dragon-panda with conditioning average.

However, I would probably start with learning just the basic nodes before you move on to more complicated examples.

In this article (Feb 12, 2025), we delve into the realm of ComfyUI's best custom nodes, exploring their functionalities and how they enhance the image-generation experience.

LLaVA -> LLM -> AudioLDM-2: example workflow in the examples folder on GitHub.

ComfyUI Neural Network Latent Upscale (NNLatentUpscale): a custom ComfyUI node designed for rapid latent upscaling using a compact neural network, eliminating the need for VAE-based decoding and encoding.

For example, if someone wanted to make multiple images at once in, say, A1111, they could just move the batch size slider (9 images, for example).

r/comfyui: I made a composition workflow, mostly to avoid prompt bleed. 😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt with an image, generating a color gradient, and batch-loading images.

The constant noise for a whole batch doesn't exist in base Comfy yet (there's a PR about it), so I made a simple node to generate the noise instead, which can then be used as the latent input of the advanced/custom sampler nodes with "add_noise" off.
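The idea behind such a node is simple enough to sketch: draw one noise tensor and repeat it across the batch. An assumption-level illustration, not the actual node:

```python
# Constant noise for a whole batch: every sample gets identical noise,
# for use as latent input with the sampler's add_noise turned off.
import torch

def constant_batch_noise(batch, channels=4, height=64, width=64, seed=0):
    gen = torch.Generator().manual_seed(seed)
    one = torch.randn((1, channels, height, width), generator=gen)
    return one.repeat(batch, 1, 1, 1)  # same noise for every batch element

noise = constant_batch_noise(4)
print(noise.shape, torch.equal(noise[0], noise[3]))  # torch.Size([4, 4, 64, 64]) True
```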
WAS node suite has a great high-pass filter I always use in a blend node overlay.

ComfyUI-paint-by-example

I think it has something to do with this, from GitHub - Gourieff/comfyui-reactor-node (Fast and Simple Face Swap Extension Node for ComfyUI): scroll down to Troubleshooting.

I had a similar issue last night with WAS Node and did the following, and it seemed to fix my issue.

Totally newbie in node development here, and I'm hitting a wall.

I generated images from ComfyUI. I want to connect a node that outputs a string to this CLIP Text Encode node (instead of manually inputting text for the prompt).

I see that ComfyUI is a better way to create.

For saving from a folder: a WAS node for saving output plus a concatenate-text node. Like this, I just have one "title" node for the full project, and this creates a new root folder for any new project; then I have a different name node (so, a folder) for every output I need to save. And to avoid spaghetti, I use SET and GET nodes.
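The folder-per-project convention is also easy to reproduce outside the graph if you post-process saves with a script — a small sketch, with all names hypothetical:

```python
# One "title" drives the project root; each named output gets a subfolder.
from pathlib import Path

def output_dir(title: str, output_name: str, root: str = "output") -> Path:
    path = Path(root) / title / output_name
    path.mkdir(parents=True, exist_ok=True)
    return path

print(output_dir("my_project", "upscaled"))  # output/my_project/upscaled
```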
Only the LCM Sampler extension is needed, as shown in this video.

I tried some of the maths nodes, but nothing wants to connect to anything: pulling a connecting noodle from the INT output (Primitive) highlights the 'a' input of the maths -> INT -> IntBinaryOperation node, but it fails to connect.