ComfyUI workflow examples from Reddit


  1. ComfyUI install guidance, workflows, and examples. Merging 2 images together. This is an example of an image that I generated with the advanced workflow; you can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

But standard A1111 inpainting works mostly the same as this ComfyUI example you provided. Please share your tips, tricks, and workflows for using this software to create your AI art. Create animations with AnimateDiff. Sure, it's not 2. But it separates the LoRA into another workflow (and it's not based on SDXL either). ComfyUI Fooocus Inpaint with Segmentation Workflow. Hi Antique_Juggernaut_7, this could help me massively. No, because it's not there yet. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. You sound very angry.

Join the largest ComfyUI community. SDXL Default ComfyUI workflow. This repo contains examples of what is achievable with ComfyUI. Please keep posted images SFW. Img2Img ComfyUI workflow. You would feel less of a need to build some massive super workflow because you've created yourself a subseries of tools with your existing workflows. The first one is very similar to the old workflow and is just called "simple". In this guide I will try to help you get started and give you some starting workflows to work with. Aug 2, 2024 · Flux Dev.
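On "other sizes compatible with SDXL": the model behaves best near the resolutions it was trained on (roughly one megapixel, with both sides divisible by 64). A minimal sketch for picking a compatible size — the resolution list here is the commonly cited SDXL bucket set, so treat it as an assumption rather than an official spec:

```python
# Commonly cited SDXL training resolutions (~1 megapixel, both sides
# divisible by 64). Treat this exact list as an assumption.
SDXL_SIZES = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def closest_sdxl_size(width, height):
    """Pick the trained resolution whose aspect ratio is closest to w/h."""
    target = width / height
    return min(SDXL_SIZES, key=lambda wh: abs(wh[0] / wh[1] - target))
```

For example, a 16:9 request maps onto the widescreen bucket rather than being generated off-distribution.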
Both of the workflows in the ComfyUI article use a single image as the input/prompt for the video creation and nothing else. The WAS suite has some workflow material in its GitHub links somewhere as well, but mine do include workflows, for the most part, in the video description. An example of the images you can generate with this workflow:

4 - The best workflow examples are found through the GitHub examples pages. Infinite Zoom: It covers the following topics: ComfyUI Examples. Welcome to the unofficial ComfyUI subreddit. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. The best external source will be the @comfyui-chat website, which I believe is from the official ComfyUI team. But this workflow should also help people learn about modular layouts, control systems, and a bunch of modular nodes I use in conjunction to create good images. I normally dislike providing workflows, because I feel it's better to teach someone to catch a fish than to give them one.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and so on. Potential use cases include:

- Streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow
- Creating programmatic experiments for various prompt/parameter values

Say, for example, you made a ControlNet workflow for copying the pose of an image. You feed it an image, and it runs through OpenPose, Canny, LineArt, whatever you decide to include. The workflow posted here relies heavily on useless third-party nodes from unknown extensions. I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic.
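"Programmatic experiments for various prompt/parameter values" usually means mutating one input across copies of the exported workflow JSON. A minimal sketch — the node id and input name below are hypothetical placeholders; look up the real ids in your own exported API-format JSON:

```python
import copy

def sweep(workflow, node_id, input_name, values):
    """Yield (value, workflow-copy) pairs with one node input swept.

    `workflow` is an API-format ComfyUI workflow dict; node_id and
    input_name are whatever your exported JSON actually uses.
    """
    for v in values:
        wf = copy.deepcopy(workflow)  # don't mutate the shared base graph
        wf[node_id]["inputs"][input_name] = v
        yield v, wf
```

Each yielded copy can then be queued as its own job, giving an XY-plot-style experiment without any extra nodes in the graph.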
A good place to start if you have no idea how any of this works is the: This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. I want a ComfyUI workflow that's compatible with SDXL, with the base model, refiner model, hi-res fix, and one LoRA, all in one go. Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. I meant using an image as input, not video. I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. That's the one I'm referring to.

To make random (but realistic) examples: the moment you start to want ControlNet in 2 different workflows out of your 10, or you need to fix 4 workflows out of 10 that use the Efficiency Nodes because v2.0 released yesterday removes the on-board switch to include/exclude the XY Plot input, or you need to manually copy some generation parameters. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be skipped.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Now, because I'm not actually an asshole, I'll explain some things. I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com/. It would require many specific image-manipulation nodes to cut out the image region, pass it through the model, and paste it back. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow.
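Whether "save the image and drag it into ComfyUI" works depends on the file still carrying ComfyUI's embedded metadata: the graph is stored in the PNG's text chunks, typically under the keys "workflow" and "prompt", and image hosts that re-encode uploads strip them (which is why some downloads "don't load anything"). A rough stdlib-only check, assuming the standard PNG tEXt chunk layout:

```python
import json
import struct

def embedded_workflow(path):
    """Return the ComfyUI workflow dict embedded in a PNG, or None.

    Walks the PNG chunk list looking for tEXt chunks keyed
    "workflow" (full graph) or "prompt" (API format).
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        return None
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and b"\x00" in chunk:
            key, _, value = chunk.partition(b"\x00")
            if key in (b"workflow", b"prompt"):
                return json.loads(value.decode("latin-1"))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return None
```

If this returns None on a downloaded picture, the metadata was stripped somewhere and dragging it into ComfyUI will load nothing.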
These people are exceptional. It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but this can be changed to whatever you like. Belittling their efforts will get you banned. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Two workflows included. So: lots of pieces to combine with other workflows. You can then load or drag the following image in ComfyUI to get the workflow: Flux. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow.

Jul 28, 2024 · You can adapt ComfyUI workflows to show only the needed input params in the Visionatrix UI (see docs: https://visionatrix.github.io/VixFlowsDocs/ComfyUI2VixMigration.html). It's nothing spectacular, but it gives good, consistent results. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but using the default workflow CLIPText on the right. You may need to do some external searching, as most missing custom nodes that are outdated relative to the latest ComfyUI can't be detected or shown by the Manager. But it is extremely light as we speak. The second workflow is called "advanced", and it uses an experimental way to combine prompts for the sampler. (Same seed, etc., etc., of course.) I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?

I played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1. My primary goal was to fully utilise the 2-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. Everything else is the same. ControlNet Depth ComfyUI workflow. There you just search for the custom node. ComfyUI's inpainting and masking ain't perfect. SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it. It provides a workflow for SDXL (base + refiner). That being said, here's a 1024x1024 comparison also. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. You can find the Flux Dev diffusion model weights here. And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo. AnimateDiff in ComfyUI is an amazing way to generate AI videos.
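On running workflows without the UI: a running ComfyUI instance exposes an HTTP endpoint for queueing jobs. A minimal sketch — the default local address and the /prompt route are the stock ComfyUI API, but verify them against your install, and note that it expects the "Save (API Format)" export, not the UI-format workflow.json (the two schemas differ):

```python
import json
import urllib.request

def build_prompt_request(workflow, host="127.0.0.1", port=8188):
    """Build the POST request that queues an API-format workflow."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def queue_workflow(workflow):
    """Send the workflow to a locally running ComfyUI server."""
    with urllib.request.urlopen(build_prompt_request(workflow)) as resp:
        return json.loads(resp.read())  # the response includes a prompt_id
```

This is enough for a lean app or pipeline deployment: the front end never touches the graph editor, it just posts prepared JSON.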
The AP Workflow wouldn't exist without the incredible work done by all the node authors out there. And above all, BE NICE. Adding the same JSONs to the main repo would only add more hell to the commit history and an unnecessary duplicate of the already existing examples repo. For the AP Workflow 9.0, I worked closely with u/Kijai, u/glibsonoran, u/tzwm, and u/rgthree to test new nodes, optimize parameters (don't ask me about SUPIR), develop new features, and correct bugs. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so, using GPT-4.

Is there a workflow with all features and options combined together that I can simply load and use? A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. Share, discover, & run thousands of ComfyUI workflows. Only the LCM Sampler extension is needed, as shown in this video. Breakdown of workflow content. If you needed clarification, all you had to do was ask, not this rude outburst of fury. This guide is about how to set up ComfyUI on your Windows computer to run Flux. For your all-in-one workflow, use the Generate tab. Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. Second pic. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users.
If you see a few red boxes, be sure to read the Questions section on the page. Hi there. https://youtu.be/ppE1W0-LJas - the tutorial. However, we need it, unless there's a slight possibility that some other alternative or someone's node pack can do the same process. How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. To make the differences somewhat easier to see, the above image is at 512x512. Motion LoRAs w/ Latent Upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action. 7 - Upscaling ComfyUI workflow. Civitai has a few workflows as well, or try searching Reddit; the ComfyUI manual needs updating, imo. This is just a simple node build off what's given and some of the newer nodes that have come out. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. All in one workflow would be awesome. A lot of people are just discovering this technology and want to show off what they created.