Updating AI Playground to Run Flux.1 Kontext and Wan 2.1 VACE on Intel AI PCs

by Bob Duffy

The following instructions will walk you through the steps needed to update ComfyUI for AI Playground v2.5.5, allowing you to run Flux.1 Kontext [dev] or Wan 2.1 VACE workflows from ComfyUI.

About these workflows

Flux.1 Kontext [dev] from Black Forest Labs is a breakthrough image editing model and workflow that lets you edit, stylize, and combine images simply by telling the model what should change. Currently this model is released as a developer version, meaning its use is restricted. Be sure to check the terms.
See example

Wan 2.1 VACE from Alibaba is a high-quality video generation model that gives you fine control over how the action in the video is generated, letting you use both reference images and reference video to guide the output.
See example

Both workflows require a newer version of ComfyUI than AI Playground installs by default. By following these instructions, you can run these new models and workflows in ComfyUI. Flux.1 Kontext can also be run directly in AI Playground; however, this requires you to manually install the models. We are looking at including these in future releases, where everything needed for installation and generation is taken care of by AI Playground. For those interested in running them now, we've provided the instructions below. Note that if you later need to reinstall ComfyUI through AI Playground, the reinstall will remove any models and custom nodes you have installed, so it is recommended to back up the ComfyUI models and custom_nodes folders first.
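That backup is easy to script. Below is a minimal sketch; `COMFY_ROOT` and `BACKUP_DIR` are placeholder assumptions you should point at your actual install location and preferred backup folder:

```python
import shutil
from pathlib import Path

# Assumed install location -- adjust to where AI Playground is installed on your PC.
COMFY_ROOT = Path(r"C:\Program Files\AI Playground\resources\ComfyUI")
BACKUP_DIR = Path.home() / "AIPlayground-backup"

def backup_comfyui(comfy_root: Path, backup_dir: Path) -> None:
    """Copy the models and custom_nodes folders so a reinstall can't wipe them."""
    for folder in ("models", "custom_nodes"):
        src = comfy_root / folder
        if src.is_dir():
            # dirs_exist_ok lets you re-run the backup over an existing copy
            shutil.copytree(src, backup_dir / folder, dirs_exist_ok=True)
            print(f"Backed up {src} -> {backup_dir / folder}")

if __name__ == "__main__":
    backup_comfyui(COMFY_ROOT, BACKUP_DIR)
```

After a reinstall, copy the two folders back into `resources/ComfyUI` to restore your models and custom nodes.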

Instructions

Step 1: Update ComfyUI:

Both workflows require an update of ComfyUI. This is the first step to running these features.

  • Install AI Playground without ComfyUI
  • Under Basic Settings, select Manage Backend Components
  • To the far right of the ComfyUI row, select the gear icon, then Settings. Set version to “v0.3.43”
  • Click the action to Install ComfyUI
    • Workflow fix – Note that this newer version requires a change in the LTX-Video Image to Video workflow: “strength: 1.0,” must be added to the LTXVImgToVideo node. Download the updated version and place it in the [location of installation]/AI Playground/resources/workflows folder to fix this
  • After the install completes, restart AI Playground
  • Launch AI Playground, then press CTRL+SHIFT+I to open the console window and wait for all tasks to complete
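If you prefer to patch the LTX-Video workflow yourself rather than download the fixed file, the change can be applied programmatically. This is a sketch only, assuming the workflow is saved in ComfyUI's API JSON format (nodes keyed by id, each with a `class_type` and `inputs`) — verify the schema against your actual workflow file first:

```python
import json

def add_strength(workflow: dict) -> dict:
    """Add strength: 1.0 to every LTXVImgToVideo node that lacks it."""
    for node in workflow.values():
        if isinstance(node, dict) and node.get("class_type") == "LTXVImgToVideo":
            # setdefault leaves an existing strength value untouched
            node.setdefault("inputs", {}).setdefault("strength", 1.0)
    return workflow

# Tiny illustrative stand-in rather than a real workflow file:
sample = {"7": {"class_type": "LTXVImgToVideo", "inputs": {"width": 768}}}
patched = add_strength(sample)
print(json.dumps(patched["7"]["inputs"]))  # prints {"width": 768, "strength": 1.0}
```

To use it on a real file, `json.load` the workflow, pass it through `add_strength`, and `json.dump` it back out.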

Step 2a: FLUX.1 Kontext:

No additional packages are needed. Simply set up the workflow as described and run it in ComfyUI. Tested on an Intel Core Ultra 200V.
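Before loading the workflow, you can confirm the ComfyUI backend is actually up by polling the port AI Playground uses (49000, per the steps below). A small sketch — the default URL and timeout here are assumptions you can adjust:

```python
import time
import urllib.error
import urllib.request

def wait_for_comfyui(url: str = "http://localhost:49000", timeout: float = 60.0) -> bool:
    """Poll the ComfyUI server until it responds with HTTP 200 or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(1)  # server not ready yet; retry
    return False
```

Call `wait_for_comfyui()` after launching AI Playground; once it returns `True`, the web UI should load in your browser.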

  1. Launch AI Playground, then press CTRL+SHIFT+I to see the console and wait for all tasks to complete
  2. Open ComfyUI in a web browser at localhost:49000
  3. In the menu select New, Workflow, Browse Templates, FLUX, Flux Kontext Dev (Basic) – note: this model is use-restricted, for development and research purposes only
  4. The workflow will tell you which models are missing. Install all models
  5. After the models are downloaded, move them as follows:
    • flux1-dev-kontext_fp8_scaled.safetensors TO [location of AI Playground installation]/AI Playground/resources/ComfyUI/models/diffusion_models/
    • ae.safetensors TO [location of AI Playground installation]/AI Playground/resources/ComfyUI/models/vae/
    • clip_l.safetensors, t5xxl_fp16.safetensors AND t5xxl_fp8_e4m3fn_scaled.safetensors TO [location of AI Playground installation]/AI Playground/resources/ComfyUI/models/text_encoders/
  6. Refresh the browser tab where ComfyUI is open, then load a reference image to edit using the Load Image node
  7. In the Positive Prompt CLIP node, describe how the image should change: a style (e.g., anime), what in the picture should be removed or added, a different time of day, or how a character in the image should change (e.g., make them a zombie)
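Moving the downloaded model files can also be scripted. This is a sketch under two assumptions — that the models landed in your Downloads folder, and a placeholder install path — both of which you should adjust. The filename-to-folder mapping comes from step 5 above:

```python
import shutil
from pathlib import Path

# Assumed locations -- point these at your actual Downloads folder and install path.
DOWNLOADS = Path.home() / "Downloads"
COMFY = Path(r"C:\Program Files\AI Playground\resources\ComfyUI")

# Filename-to-subfolder mapping from the steps above.
DESTINATIONS = {
    "flux1-dev-kontext_fp8_scaled.safetensors": "models/diffusion_models",
    "ae.safetensors": "models/vae",
    "clip_l.safetensors": "models/text_encoders",
    "t5xxl_fp16.safetensors": "models/text_encoders",
    "t5xxl_fp8_e4m3fn_scaled.safetensors": "models/text_encoders",
}

def move_models(downloads: Path, comfy_root: Path) -> None:
    """Move each downloaded model into its expected ComfyUI subfolder."""
    for name, subdir in DESTINATIONS.items():
        src = downloads / name
        if src.is_file():
            dest = comfy_root / subdir
            dest.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest / name))
            print(f"Moved {name} -> {dest}")

if __name__ == "__main__":
    move_models(DOWNLOADS, COMFY)
```

Files that aren't present are simply skipped, so it's safe to re-run after each download finishes.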

Example Workflows: Download then drag into ComfyUI

Step 2b: Run FLUX.1 Kontext in AI Playground:

This lets you use AI Playground as the front end for Flux Kontext image editing. However, you must first manually install the models in ComfyUI using Step 2a. Once that's done, adding the workflow JSON below will let you run this directly from AI Playground.

  1. Download this AI Playground workflow (JSON) and place it in [location of AI Playground installation]/AI Playground/resources/workflows
  2. Start AI Playground
  3. Go to Settings, Image tab, select Workflows, then select Flux-Kontext1
  4. Be sure steps are set to 20 and the number of images is set to 1.
    (In my experience, small changes like removing a coffee cup or adding a mustache or sunglasses may not require many steps. Experiment with fewer steps; I've seen small changes look similar at 4 steps and at 20 steps.)
  5. Load an image into the image field
  6. In the prompt on the create tab, describe what you want to change about the photo. Then generate
    (Generation will be slower compared to Flux Schnell, but you have more control, likely requiring fewer iterations.)
Screenshot of AI Playground running Flux.1 Kontext [dev], where the prompt “remove the microphone” edited the image in 8 steps

Step 3: Wan 2.1 VACE

This solution requires additional packages and models beyond what the ComfyUI update provides. This workflow was tested on an Intel Arc A770.

Clone GGUF Nodes (outdated GGUF nodes may need to be removed and updated)

  1. Open the directory [location of AI Playground installation]/AI Playground/resources/ComfyUI/custom_nodes
  2. Click in the address bar of this window, type CMD, and press Enter to launch a Command Prompt at this location.
  3. In the CMD window run each of these (paste each line, hit Return, wait for it to complete, then do the same for the next line):
    • git clone https://github.com/city96/ComfyUI-GGUF
    • git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
    • git clone https://github.com/Fannovel16/comfyui_controlnet_aux

Install Node Dependencies

  1. Open the directory [location of AI Playground installation]/AI Playground/resources/comfyui-backend-env
  2. Click in the address bar of this window, type CMD, and press Enter to launch a Command Prompt at this location.
  3. In the CMD window run each of these (paste each line, hit Return, wait for it to complete, then do the same for the next line):
    • python -s -m pip install -r ..\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt
    • python -s -m pip install -r ..\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\requirements.txt
    • python -s -m pip install -r ..\ComfyUI\custom_nodes\comfyui_controlnet_aux\requirements.txt
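The three pip commands above follow one pattern, so they can also be expressed as a loop. A sketch assuming it is run from the comfyui-backend-env folder (the `dry_run` flag is an addition here, used only to test the path logic without touching pip):

```python
import subprocess
import sys
from pathlib import Path

# Relative path assumes the script runs from comfyui-backend-env, as in the steps above.
CUSTOM_NODES = Path("..") / "ComfyUI" / "custom_nodes"
NODES = ["ComfyUI-GGUF", "ComfyUI-VideoHelperSuite", "comfyui_controlnet_aux"]

def install_requirements(nodes_dir: Path, nodes: list[str], dry_run: bool = False) -> list[str]:
    """Run pip install -r for each custom node; return the nodes processed."""
    done = []
    for node in nodes:
        req = nodes_dir / node / "requirements.txt"
        if not req.is_file():
            print(f"Skipping {node}: no requirements.txt found")
            continue
        if not dry_run:
            # Mirrors: python -s -m pip install -r <requirements.txt>
            subprocess.run(
                [sys.executable, "-s", "-m", "pip", "install", "-r", str(req)],
                check=True,
            )
        done.append(node)
    return done
```

Run `install_requirements(CUSTOM_NODES, NODES)` to install all three in one go; any node folder that is missing is skipped with a message rather than failing.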

Download these GGUF models

Set up workflow

  1. Launch AI Playground, then press CTRL+SHIFT+I to see the console and wait for all tasks to complete
  2. Restart AI Playground one more time, press CTRL+SHIFT+I to see the console, and wait for all tasks to complete
  3. Open a browser to localhost:49000
  4. Download either of these workflows and drag it into ComfyUI
Screenshot of the reference video workflow in ComfyUI

Notes on running

  1. The following nodes should be off/pink (to toggle a node on or off, select the node and press CTRL+B)
    • 3B Nodes: Load Diffusion, Load Clip, Load Lora
    • 14B Nodes: Load Diffusion, Load Clip
    • Optional: Turn off the 14B Group LoRA for higher quality but longer generation
  2. Set values in K Sampler
    • If the LoRA is off, set steps to 20 and CFG to 6
    • If the LoRA is on, set steps to 4 and CFG to 1
  3. Add in a reference image (a solid-color background works best) and set the values of the Wan Vace node to match its resolution. Set the number of frames (49 is a good starting point)
  4. Describe what is happening in the positive clip prompt
  5. For the Reference Video version, add in a control video to guide the action – its length should match the desired clip length; the resolution doesn't need to match
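If you script your workflow edits, the KSampler values from note 2 can be captured in a tiny helper. The `steps`/`cfg` keys are assumptions matching the KSampler node's fields:

```python
def ksampler_settings(lora_enabled: bool) -> dict:
    """Recommended KSampler values from the notes above."""
    if lora_enabled:
        return {"steps": 4, "cfg": 1}  # LoRA on: few steps, low CFG
    return {"steps": 20, "cfg": 6}     # LoRA off: more steps, higher CFG

print(ksampler_settings(True))
print(ksampler_settings(False))
```

This makes the on/off tradeoff explicit: the LoRA trades sampling work (4 steps vs 20) for the longer generation and higher quality of the full 14B path.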

There you have it. We will work to make these workflows part of an upcoming release, making these steps unnecessary. Meanwhile, enjoy experimenting with these models. Be sure to adhere to their terms, and if you have questions or comments, drop us a line at http://discord.gg/intel in the ai-playground threads, or chat with me on X @bobduffy or LinkedIn – Bob Duffy.