Well, it's OK, but honestly not for advanced ComfyUI users
The AppSumo deal was pretty interesting and quite honest. However, if we have to rate MimicPC itself, a couple of things. I am mainly a ComfyUI user. I have hundreds of workflows running locally on a 4060 16GB, and I wanted to offload some of them to MimicPC.

The first thing you notice is that the application is quite easy to use and set up. The filesystem manager is pretty bare-bones, but it works. 50 GB is definitely very tight, especially if you are using SDXL and Flux custom checkpoints and/or LoRAs.

The really problematic point for me is the overall speed. ComfyUI takes roughly 2 minutes to start up, which is still acceptable. However, on the Medium size, a workflow that normally takes less than 3 minutes on a 4060 takes almost 14 here! It is pretty slow. I tested a SILVI-2-like upscale method (2x) plus the SDXL refiner: VERY slow. In a 30-minute session I could generate two images (given that all models and ControlNets were already in place). In 30 minutes on a 4060, the same workflow produced 10 images. Do your math here.

So, yeah, it can be an interesting option for FaceSwap, or, for ComfyUI, if you have simple workflows (I have not tried Flux yet, but SD1.5 and SDXL are painfully slow in Medium mode). I would definitely not recommend it as your main AI tool, unless you use bigger shapes than Medium AND you already have all your workflows set up. Remember that every minute counts: if you plan to build workflows from scratch within MimicPC, you'll end up paying quite a lot, I think. It is good to have a ComfyUI on the go that works, but as I said, do not expect too much. Or you can plan your budget using the Large flavour as your reference. I have not tried it yet, but most probably that size will give you acceptable performance, at a higher cost.
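To make the "do your math" point concrete, here is a minimal throughput comparison using only the timings reported above (30-minute session, 10 images locally on the 4060 vs. 2 images on MimicPC Medium). The 5x figure is simple arithmetic, not a benchmark of other configurations:

```python
# Throughput comparison based solely on the timings reported in this review.
session_min = 30
images_local = 10    # RTX 4060 16GB, same workflow, models already loaded
images_medium = 2    # MimicPC "Medium" tier, models already in place

rate_local = images_local / session_min      # images per minute locally
rate_medium = images_medium / session_min    # images per minute on Medium

slowdown = rate_local / rate_medium
print(f"Local: {rate_local:.2f} img/min, Medium: {rate_medium:.2f} img/min")
print(f"Medium is roughly {slowdown:.0f}x slower for this workflow")
```

That 5x factor lines up with the per-run timings too (about 3 minutes locally vs. almost 14 on Medium), so billed minutes add up quickly on the smaller tier.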
Claire_MimicPC
Nov 26, 2024
Thank you for taking the time to provide such detailed feedback on your experience with MimicPC and ComfyUI. We truly value the insights shared by our users as they are crucial for us to understand where we can improve.
We have noted your concerns regarding the initial slow loading times of models in ComfyUI, particularly when using complex workflows like SDXL and Flux custom checkpoints. Our team is actively working on optimizing this aspect, and we aim to roll out updates soon that should significantly enhance the speed and efficiency of model loading.
We acknowledge that the performance on the Medium configuration does not meet the expectations set by high-performance local setups like your 4060 16GB system. For demanding tasks, especially those involving heavier workflows, we recommend considering our Large or even Ultra configurations. These higher-tier options are designed to handle more intensive processes, ensuring better performance, though at a higher cost.
To address concerns about expenses, we're excited to introduce our Bargain Plan, which allows users to access these higher-performance GPUs at 50% of the usual cost. This plan is designed to provide more value while maintaining the capability to handle robust workflows efficiently.
We appreciate your insights and are committed to improving our service to better fit your needs. If there's anything more you'd like to discuss or need assistance with, please do not hesitate to reach out.
Thank you once again for your feedback.