
ComfyUI upscale methods: a Reddit roundup

This page collects community tips on upscaling in ComfyUI, loosely grouped by topic.

Latent upscaling introduces noise; to combat it you must increase the denoising value of any sampler you feed the upscale into, which is why you need at least 0.5+ denoise after a latent upscale, and why a latent upscale is inherently lossy. When using tiled upscalers with the right settings you can get enhancements in detail without using latent upscaling at all, but it does take longer.

One poster's layered approach: a regular upscale first (different models for different situations; Remacri is a reliable choice, and there are newer models that work well for other styles), then an img2img upscale (with either SD Upscale or Ultimate SD Upscale; each has its use cases), then an inpaint upscale for faces, hands, or other details you want to improve. Change the positive and negative prompts in the upscale passes to match the primary positive and negative prompts.

For video, one user sends the output of AnimateDiff (the mm_sd_v15_v2.ckpt motion model with Kosinkadink's AnimateDiff Evolved) to UltimateSDUpscale at 2x with a ControlNet Tile model and 4xUltraSharp; the final node is where ComfyUI takes those images and turns them into a video. The result is more consistency, higher resolutions, and much longer videos. Another user is looking for a way to upscale Stable Video Diffusion output beyond its native 1024x576, and there are related posts expanding on a temporal-consistency method ("PLANET OF THE APES - Stable Diffusion Temporal Consistency", a 30-second, 2048x4096 pixel total-override animation).

Initial setup for upscaling in ComfyUI: you will first want the Ultimate SD Upscale custom node. Launch the ComfyUI Manager (click "Manager" in the ComfyUI window), go to the custom nodes installation section, search for "ultimate" to find the Ultimate SD Upscale node, and install it. For upscale models, click "Install Models" in the Manager menu, search for "upscale", and click Install on the models you want, then start ComfyUI again. To install ComfyUI itself, choose your platform and method of install and follow the instructions; for Intel GPUs, start by installing the drivers or kernel listed on the IPEX installation page for Windows or Linux if needed.

An alternative, staggered method (from a self-described non-expert): make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first low-resolution pass; then feed a 1.5x upscale back to the source image and upscale again to 2x. Look up the latent upscale method as well; this performs a staggered upscale to your desired resolution in one workflow queue.

Other scattered tips: a hires-fix pass with an add-detail LoRA is a simple way to sharpen a render; one showcase image is the result of repeated upscaling from 512 -> 1024 -> 2048 -> 3072 -> 4096; SUPIR works well, but one user with only 4GB of VRAM hasn't gotten it running locally.

From a May 5, 2024 post (translated from Japanese): "Hello, this is Hakana-dori. Last time I covered the clarity-upscaler approach for A1111 and Forge; this time it's the ComfyUI version. clarity-upscaler isn't a single extension; it's ControlNet, LoRA, and several other features combined and working together."

A common request: replicating the "upscale" feature inside "extras" in A1111, where you select a model and the final size of the image. In the same vein: "Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. I have made a workflow with an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs. Is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?"
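To make the latent-upscale advice above concrete, here is a minimal PyTorch sketch of what a latent upscale amounts to. This is not ComfyUI's actual node code; the tensor is a stand-in, and the point is only that the "upscale" is plain interpolation of the 4-channel latent, which is why the next sampler needs a fairly high denoise to regenerate real detail.

```python
# Minimal sketch: a latent upscale is just interpolation of the latent tensor.
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)   # stand-in latent for a 512x512 image (512 / 8 = 64)
upscaled = F.interpolate(latent, scale_factor=2.0, mode="bicubic", align_corners=False)
print(upscaled.shape)                # torch.Size([1, 4, 128, 128]) -> decodes to roughly 1024x1024
```

The interpolated latent decodes to a soft, slightly noisy image, so sampling it again at a very low denoise mostly just sharpens the blur instead of adding detail.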
Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like a face appearing in the bottom right instead of a teddy bear. Taking the output of a KSampler and running it straight through a latent upscaling node likewise produces major artifacts (lots of horizontal and vertical lines, and blurring), and the result can be too blurry and lacking in detail, much like upscaling any regular image with traditional methods. Latent upscale is different from pixel upscale: latent upscale looks much more detailed, but it gets rid of detail from the original image, while an image upscale is less detailed but more faithful to the image you upscale. The new detail a latent upscale invents is fine, even beneficial, for the second pass of a text-to-image process, since the miniature first pass often has some issues of its own.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back through VAE Encode and sample it again. The downside is that it takes a very long time. So if you want roughly 2x, upscale using a 4x model (e.g. UltraSharp), then downscale: just use the "Upscale Image By" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by the model. One user's current workflow: first sampling at 512x512, upscale with a 4x ESRGAN model, downscale the image to 1024x1024 and sample it again, as the docs suggest, then upscale with a 2x ESRGAN, sample the 2048x2048 again, and finally upscale once more with the 4x ESRGAN.

There are a lot of options in this area, such as iterative upscale; in one user's experience they are all either too intensive for weak GPUs or too inconsistent. Ultimate SD Upscale is good and plays nice with lower-end graphics cards; SUPIR is great but very resource-intensive (another user runs SUPIR only to sharpen images on the first pass). A 2x upscale using Ultimate SD Upscale and a Tile ControlNet works well; one shared workflow upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you want. There is also an img2img method that uses the Blip Model Loader from WAS to set the positive caption, and a 'latent chooser' node that works but is slightly unreliable.

For reference, the Upscale Image node can be used to resize pixel images (to upscale images using an AI model, see the Upscale Image Using Model node). Its inputs are: image (the pixel images to be upscaled), upscale_method (the method used for resizing), width (the target width in pixels), height (the target height in pixels), and crop (whether to crop the image to fit the target dimensions). A Jul 29, 2023 comment notes only five resize methods are available: nearest-exact, bilinear, area, bicubic, and bislerp.
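A minimal sketch of the "upscale with a model, then downscale, then VAE-encode and re-sample" flow above, using Pillow for the resize step. `run_upscale_model` is a hypothetical placeholder for whatever ESRGAN/UltraSharp pass you actually use, not a real API; the VAE encode and low-denoise sampling still happen in your ComfyUI graph afterwards.

```python
# Sketch: pixel upscale with a (placeholder) 4x model, then downscale to the size you want.
from PIL import Image

def upscale_then_downscale(img: Image.Image, run_upscale_model, target_scale: float = 2.0) -> Image.Image:
    big = run_upscale_model(img)                              # placeholder: most upscale models output 4x
    w, h = int(img.width * target_scale), int(img.height * target_scale)
    return big.resize((w, h), Image.LANCZOS)                  # downscale to the final target size

# The returned image would then go through VAE Encode and a low-denoise sampler pass.
```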
Tutorial 6 (upscaling) and Tutorial 7 (LoRA usage): both are quick and dirty tutorials without too much rambling, and no workflows are included because of how basic they are. In them I talk about some of the different upscale methods, show what I think is one of the better ones, and also explain how a LoRA can be used in a ComfyUI workflow.

One example render was created with Dream ShaperXL 1.0 Alpha plus the SDXL Refiner 1.0. One user's nonscientific answer on speed: A1111 can do it in around 60 seconds at 30 steps with a 1.5-based model, and in about 30 seconds using 30 steps with SD 2.

If the upscale comes out distorted, the only approach one user has seen so far is the Hires fix node, with its latent input coming from "AI upscale > downscale image" nodes; that should stop it being distorted, and you can also switch the upscale method to bilinear, which may work a bit better. Another user made a tiled sampling node for ComfyUI just to briefly show it off; it's nothing spectacular, but it gives good, consistent results.

One commenter tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node), and even bought a license from Topaz to compare the results with FastStone (which is great, by the way, for this type of work). Most of them give terrible artifacts below a certain denoise strength. If you want actual detail in a reasonable amount of time, you'll need a second pass with a second sampler; a pixel upscale using a model like UltraSharp is a bit better, and slower, but it is still fake detail when examined closely.

A question on transparency: is there any node, or any possibility, of running an RGBA image (preserving the alpha channel and its exact transparency) through iterative upscale methods? Ultimate SD Upscale has a 3-channel input and refuses alpha, and the VAE Encode (for Inpainting) node, which has a mask input, also refuses 4-channel input. Related advice for Iterative Upscale: it may be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.
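For the RGBA question above, a hedged workaround (not an existing node) is to split the alpha channel off, upscale the RGB through the SD pipeline, upscale the alpha mask with plain resampling, and recombine at the end. `sd_upscale` below is a hypothetical placeholder for whatever upscale workflow you call; only the Pillow calls are real.

```python
# Sketch: upscale an RGBA image by handling the alpha channel outside the SD pipeline.
from PIL import Image

def upscale_rgba(path: str, scale: int, sd_upscale) -> Image.Image:
    rgba = Image.open(path).convert("RGBA")
    rgb = rgba.convert("RGB")
    alpha = rgba.getchannel("A")

    rgb_up = sd_upscale(rgb, scale)                       # placeholder: Ultimate SD Upscale, SUPIR, etc.
    new_size = (rgba.width * scale, rgba.height * scale)
    rgb_up = rgb_up.resize(new_size, Image.LANCZOS)       # make sure sizes match exactly
    alpha_up = alpha.resize(new_size, Image.LANCZOS)      # plain resample for the transparency mask

    rgb_up.putalpha(alpha_up)                             # recombine into RGBA
    return rgb_up
```

The obvious caveat: the SD pass may hallucinate content in regions that were fully transparent, so compositing over a neutral background first may help.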
For the built-in options, the ComfyUI examples site (comfyanonymous.github.io) is the usual reference; there are a couple of options for the nodes involved. Several newcomers ask roughly the same question: "Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling", "I started to use ComfyUI/SD locally a few days ago and wanted to know how to get the best upscaling results", and "Are there any other methods that achieve better/faster results?" When purely upscaling, with no sampling pass, the best upscaler is called LDSR. Adding in the Iterative Mixing KSampler node, from the early work on DemoFusion, produces far more spatially consistent results. You can also change the initial image size from 1024x1024 to other sizes compatible with SDXL. One user was running Sytan's workflow with a few changed settings and replaced the last part of it with a two-step upscale using the refiner model via Ultimate SD Upscale; there are also "face detailer" workflows for faces specifically. For Ultimate SD Upscale itself, one cited setting is a denoise of 0.4 with tiles of 768x768.

Oct 21, 2023: the non-latent upscale method. This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into the latent space, and perform the sampler pass. Because the detail comes from the model upscale, this lets you upscale the image while also preserving the style of the model.

Mar 22, 2024: you have two different ways to perform a "Hires Fix" natively in ComfyUI: a Latent Upscale or an Upscaling Model (the workflows can be downloaded from the Prompting Pixels website). In the interface you will see an Upscaler setting, which can be in the latent space or an upscaling model, and an Upscale By setting, which is basically how much we want to enlarge the image. Both approaches have a denoise value that drastically changes the result; that's because a latent upscale turns the base image into noise (blur). From the ComfyUI_examples there are likewise two different 2-pass (hires fix) methods, one latent scaling and one non-latent, and now there's also a `PatchModelAddDownscale` node. One workflow tutorial includes a method to upscale images up to 5.4x the input resolution using consumer-grade hardware.

Another question: has anyone managed to implement Krea AI or Magnific AI in ComfyUI? The Krea web source appears to use SD 1.5 (plus ControlNet, PatchModel, and so on), but the poster hasn't managed to reproduce the process.

On regional prompts: one reported issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels. This means that your prompt (a.k.a. the positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description tied to an area defined by pixel coordinates, for example starting from x:0px, y:320px and extending to x:768px.
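Because those conditioning areas are defined in pixels, they do not follow the canvas when you change resolution for a hires pass; you have to rescale them yourself. Below is a hypothetical helper for that; the (x, y, width, height) tuple format and the snap-to-8 rounding are assumptions, so adjust them to whatever your MultiAreaConditioning setup actually expects.

```python
# Sketch: rescale pixel-defined conditioning areas when the canvas resolution changes.
def scale_area(area: tuple, factor: float, snap: int = 8) -> tuple:
    # area is assumed to be (x, y, width, height) in pixels
    return tuple(int(round(v * factor / snap)) * snap for v in area)

print(scale_area((0, 320, 768, 448), 2.0))   # (0, 640, 1536, 896)
```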
You end up with images anyway after KSampling, so you can use the image upscale nodes on those. What has worked best for one user has been: 1. decoding the latent, 2. running it through an image upscale on bilinear, and 3. encoding it again and doing a tiny refining step to sharpen up the edges. Another usually uses two workflows: a "latent upscale" followed by denoising, or an "upscaling with model" followed by denoising. Latent upscales require the second sampler to be set at over 0.5 denoise; if you want a fully latent upscale, make sure the sampler after your latent upscale is above that. Try a VAEDecode immediately after a latent upscale to see why. If you want more details, latent upscale is better, and noise injection will let more details in (you need noise in order to diffuse into details); even so, some users gave up on latent upscale entirely.

A few more first-hand reports: "Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both." "I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem." "Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard." "I've been wondering what methods people use to upscale all types of images, and which upscalers to use? So far I've been just using Latent (bicubic antialiased) for hires fix, then going to img2img and using ControlNet + the Ultimate SD Upscale script with the 4x UltraSharp upscaler." The upscale quality from a first pass alone is mediocre to say the least; one user then applies a tiled ControlNet and Ultimate Upscale to go 3-4x, resulting in up to 6Kx6K images that are quite crisp. Another resizes with 4x-UltraSharp set to x2, while the ComfyUI version of their workflow uses a nearest-exact latent upscale; they also use a custom image resizer that ensures the input image matches the output dimensions, and note that if they had not used the upscale-with-model step, they would have considered the Ultimate SD Upscale method instead. One caveat with tiled upscaling: depending on the noise and strength, it ends up treating each square as an individual image, so instead of one girl in the picture you get ten tiny girls stitched into one giant upscaled image. In general these approaches run a series of nodes and processes that aim to maintain the quality and detail of the original image while enhancing its resolution, and one user notes they always need such a pass to improve any image, especially while applying LoRAs.

TLDR: THE LAB EVOLVED is an intuitive, all-in-one workflow; it includes literally everything possible with AI image generation: txt2img, img2img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, and also video generation, pixelization, 360 image generation, and even live painting. It is a simple build off the stock nodes plus some of the newer nodes that have come out; click on the image below and drag and drop the full-size version onto the ComfyUI canvas to load it. Other posts: "NICE DOGGY", dusting off a temporal-consistency method again since it still seems to give more control than AnimateDiff or Pika/Gen2 (the t-shirt and face were created separately with the method and recombined, shown side by side with the original), and a ComfyUI weekly update adding DAT upscale model support and more T2I adapters.

On cleaning up old footage: is there a workflow that could clean up and upscale screenshots from a late-90s animation (like Escaflowne or Rurouni Kenshin)? The problem with simply upscaling them is that the frames are kind of 'dirty', so a plain upscale doesn't really clean them up around the lines, and the colors stay a bit dim and dark.

A cheap recipe to finish with: an x1.5 ~ x2 latent upscale needs no model at all; sample again at denoise 0.5 and you don't need that many steps. From there you can use a 4x upscale model and run the sampler again at a low denoise if you want higher resolution. To find the downscale factor in the second part, calculate factor = desired total upscale / the model's fixed upscale, e.g. 2 / 4.0 = 0.5. For example, if you start with a 512x512 empty latent image and apply a 4x model, use an "upscale by" of 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).
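That arithmetic as a tiny helper, purely the math and nothing ComfyUI-specific:

```python
# Sketch: the "upscale by" value needed after a fixed-factor upscale model.
def downscale_factor(desired_total: float, model_factor: float = 4.0) -> float:
    return desired_total / model_factor

print(downscale_factor(2.0))        # 0.5   -> 512 * 4 * 0.5 = 1024
print(downscale_factor(1.5))        # 0.375 -> 512 * 4 * 0.375 = 768
```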
Face handling: one user generally does the ReActor face swap at a lower resolution, then upscales the whole image in very small steps with very, very small denoise amounts; resampling faces at around 0.2-0.25 denoise gives a good blending of the face without changing the image too much, and if more detail is needed, some image-blend steps and advanced samplers can inject the old face back into the process. For comparison, in A1111 the same user drops the ReActor output image into the img2img tab, keeps the same latent size, uses a Tile ControlNet model, chooses the Ultimate SD Upscale script, and scales it by, say, 2.

As for why latent upscales need more denoise: you need at least 0.4 on the denoiser because upscaling the latent basically grows a bunch of dark space between each pixel, unlike an image upscale, which adds more pixels.

Finally, one shared workflow upscales to 2x and 4x in multiple steps, both with and without a sampler (all intermediate images are saved); multiple LoRAs can be added and easily turned on and off (it is currently configured for up to three LoRAs, but more can easily be added), with detail and bad-hands LoRAs loaded. The author uses it with DreamShaperXL mostly and it works like a charm.
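A sketch of that "very small steps, very small denoise" idea: plan the resolution and denoise for each step up front, then feed each pair to your img2img or upscale pass of choice. The step factor and denoise values here are illustrative only, not settings quoted from the thread.

```python
# Sketch: plan a gentle multi-step upscale that avoids regenerating faces.
def small_step_plan(start: int, target: int, step: float = 1.15, denoise: float = 0.1):
    plan, size = [], start
    while size < target:
        nxt = int(round(size * step / 8)) * 8      # keep sizes divisible by 8 for the latent grid
        size = min(target, max(nxt, size + 8))     # always make progress, never overshoot the target
        plan.append((size, denoise))
    return plan

for size, dn in small_step_plan(1024, 2048):
    print(f"resize to {size}px, img2img resample at denoise {dn}")
```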