
New Skyreels I2V workflow working better #372

Open
slmonker opened this issue Feb 18, 2025 · 22 comments

@slmonker

Much better than the 1st version.

Image

HunyuanVideo_skyreel_I2V_00068.mp4
@slmonker (Author)

HunyuanVideo_skyreel_I2V_00071.mp4

@RhapsodyHayden

I'm trying to use this but I keep getting this error.

Prompt outputs failed validation
HyVideoSampler:

  • Return type mismatch between linked nodes: stg_args, received_type(LATENT) mismatch input_type(STGARGS)
  • Return type mismatch between linked nodes: teacache_args, received_type(FETAARGS) mismatch input_type(TEACACHEARGS)

@slmonker (Author)

> I'm trying to use this but I keep getting this error.
>
> Prompt outputs failed validation HyVideoSampler:
>
> • Return type mismatch between linked nodes: stg_args, received_type(LATENT) mismatch input_type(STGARGS)
> • Return type mismatch between linked nodes: teacache_args, received_type(FETAARGS) mismatch input_type(TEACACHEARGS)

Have you updated the nodes and loaded the new workflow? I don't think the STG node is supposed to be connected in the workflow.
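For reference, the validation error above just means the wrong outputs are wired into the sampler's optional inputs: every node output carries a type tag, and a link is only accepted when that tag matches the input's declared type. An illustrative Python sketch of the idea (this is not ComfyUI's actual validation code, just the same mechanism applied to the two failing links from the error message):

```python
def validate_link(output_type: str, input_type: str) -> bool:
    """A link is valid only when the upstream output's type tag
    matches the downstream input's declared type."""
    return output_type == input_type

# The two failing links from the error message above:
# (input name, type actually wired in, type the input expects)
links = [
    ("stg_args", "LATENT", "STGARGS"),            # a latent wired into stg_args
    ("teacache_args", "FETAARGS", "TEACACHEARGS"),  # FETA args wired into teacache_args
]

errors = [
    f"{name}: received_type({got}) mismatch input_type({want})"
    for name, got, want in links
    if not validate_link(got, want)
]
for e in errors:
    print(e)
```

Reconnecting the proper STG and TeaCache argument nodes (or leaving those optional inputs empty) makes both checks pass.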

@RhapsodyHayden

I'll try updating, hopefully I don't lose ReActor lmao.

@slmonker (Author)

> I'll try updating, hopefully I don't lose ReActor lmao.

The new I2V workflow is pretty good. I think it's really close to Kling 1.0, or even better, and VRAM usage is down to about 12 GB.

@tanh609

tanh609 commented Feb 18, 2025

@slmonker are you using this workflow?

https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/main/example_workflows/hyvideo_skyreel_img2vid_example_01.json

I kept getting an out-of-VRAM error despite having 24 GB available.

@slmonker (Author)

> @slmonker are you using this workflow?
>
> https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/main/example_workflows/hyvideo_skyreel_img2vid_example_01.json
>
> I kept getting out of vram issue, despite of having 24 GB available.

Yep, I'm using this one, and you can turn on Auto CPU offload. It took about 12-14 GB of VRAM to run it.

Image
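For anyone wondering what that toggle does conceptually: CPU offloading keeps large modules (like the text encoder) in system RAM and only moves them onto the GPU while they are actually needed, which is why peak VRAM drops. A minimal PyTorch-style sketch of the pattern (illustrative only, not the wrapper's actual implementation; `text_encoder` here is a tiny stand-in module, and the snippet falls back to CPU when no GPU is present):

```python
import torch
import torch.nn as nn
from contextlib import contextmanager

@contextmanager
def offloaded(module: nn.Module, device: str):
    """Move a module to `device` only for the duration of its use,
    then return it to CPU so it stops occupying VRAM."""
    module.to(device)
    try:
        yield module
    finally:
        module.to("cpu")
        if device.startswith("cuda"):
            torch.cuda.empty_cache()

# Hypothetical stand-in for a large text encoder.
text_encoder = nn.Linear(512, 512)
device = "cuda" if torch.cuda.is_available() else "cpu"

with offloaded(text_encoder, device) as enc:
    emb = enc(torch.randn(1, 512, device=device))

emb = emb.cpu()
# After the block, the encoder's weights live on the CPU again,
# so only the (much smaller) embeddings remain on the GPU path.
```

The cost is the host-to-device transfer time on each use, which is the usual trade against the VRAM savings.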

@RhapsodyHayden

Oh, I'll give it a shot and adjust what I need to, but I only have 12 GB of VRAM. It looks like the Text Encoder is eating ALL my VRAM when it loads.

@slmonker (Author)

> Oh. I'll give it a shot and adjust what I need to but I'm using 12GB VRAM. Looks like the Text Encoder is eating ALL my VRAM when it loads to it.

Then you may turn Auto CPU offload on; it works.

@tanh609

tanh609 commented Feb 18, 2025

Wow @slmonker thank you for the quick response.

How do I turn on "Auto cpu offload"?

Oh wait I found it. I'm trying it now.

Image

@slmonker (Author)

> Wow @slmonker thank you for the quick response.
>
> How do I turn on "Auto cpu offload"?
>
> Oh wait I found it. I'm trying it now.
>
> Image

Here bro:

Image

@ronbere

ronbere commented Feb 19, 2025

DownloadAndLoadHyVideoTextEncoder

Failed to import transformers.models.timm_wrapper.configuration_timm_wrapper because of the following error (look up to see its traceback):
cannot import name 'ImageNetInfo' from 'timm.data' (D:\ComfyUI_windows_portable\venv\Lib\site-packages\timm\data\__init__.py)

@slmonker (Author)

> DownloadAndLoadHyVideoTextEncoder
>
> Failed to import transformers.models.timm_wrapper.configuration_timm_wrapper because of the following error (look up to see its traceback): cannot import name 'ImageNetInfo' from 'timm.data' (D:\ComfyUI_windows_portable\venv\Lib\site-packages\timm\data\__init__.py)

Feels like something is wrong with your environment.

@gxground

Kling works best, but it's too expensive. Compared with Kling, the open-source models still feel a bit behind.

@tanh609

tanh609 commented Feb 19, 2025

Hello @slmonker,

How did you get your video to match the original image? My video does not match my original image.

I put this in the positive prompt:

The cat extends its front paw upward, claws hooking onto the surface as its hind legs push off. Its body stretches and contracts fluidly, muscles tensing with each pull. The tail sways for balance, adjusting with every shift in weight. Its back legs find quick, sure footing, propelling it higher in a smooth, determined rhythm. With each movement, it climbs steadily, pausing only briefly before reaching for the next hold.

Original Image

Image

Result

failed_render.mp4

@slmonker (Author)

> Hello @slmonker,
>
> How did you get your video to match with the original image? My video does not match with my original image.
>
> I put this on the positive prompt
>
> The cat extends its front paw upward, claws hooking onto the surface as its hind legs push off. Its body stretches and contracts fluidly, muscles tensing with each pull. The tail sways for balance, adjusting with every shift in weight. Its back legs find quick, sure footing, propelling it higher in a smooth, determined rhythm. With each movement, it climbs steadily, pausing only briefly before reaching for the next hold.
>
> Original Image
>
> Image
>
> Result
>
> failed_render.mp4

Have you updated the nodes and workflow? And did you put "FPS-24" in front of your prompt?

@DCVirtualCosmos

> @slmonker are you using this workflow?
> https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/main/example_workflows/hyvideo_skyreel_img2vid_example_01.json
> I kept getting out of vram issue, despite of having 24 GB available.
>
> yep,i m using this ,and u can turn on the Auto cpu offload, it took me abt 12 -14g vram to run it
>
> Image

I'm trying to use that workflow, but I keep getting errors related to the model's dimensions: "Given groups=1, weight of size [3072, 32, 1, 2, 2], expected input[2, 16, 25, 64, 64] to have 32 channels, but got 16 channels instead". I got this on the HunyuanVideo Sampler node.
I have updated the HunyuanVideo wrapper and the Kijai nodes, which I had working just fine with the original Hunyuan model, but for some reason it is not working with SkyReels.

@DCVirtualCosmos

lol, I tried the most basic workflow with the SkyReels models (FP8 and BF16) using only ComfyUI native nodes (Load Diffusion, KSampler, EmptyHunyuanLatentVideo, and DualCLIPLoader), and I still got the same error with both SkyReels models ("Given groups=1, weight of size [3072, 32, 1, 2, 2], expected input[1, 16, 24, 64, 64] to have 32 channels, but got 16 channels instead"), but it works fine with the normal HunyuanVideo model.
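The 32-vs-16 channel error is consistent with the SkyReels I2V checkpoint expecting the conditioning image latent concatenated with the noise latent along the channel axis (16 + 16 = 32), while a plain `EmptyHunyuanLatentVideo` supplies only 16 channels. A small numpy sketch of the shape arithmetic (the tensor shapes come from the error message above; that the extra 16 channels are a channel-wise concatenated image latent is my assumption about how this style of I2V conditioning typically works, not something stated in the thread):

```python
import numpy as np

# From the error message: the patch-embed conv weight is
# [3072, 32, 1, 2, 2], i.e. it expects 32 input channels,
# but the plain T2V empty latent provides only 16.
weight_in_channels = 32

noise_latent = np.zeros((1, 16, 24, 64, 64))  # e.g. from EmptyHunyuanLatentVideo
image_latent = np.zeros((1, 16, 24, 64, 64))  # VAE-encoded first frame, repeated over time

# T2V-style input: 16 channels -> mismatch against the I2V weights.
assert noise_latent.shape[1] != weight_in_channels

# I2V-style input: concatenate along the channel axis -> 32 channels, matches.
i2v_input = np.concatenate([noise_latent, image_latent], axis=1)
assert i2v_input.shape[1] == weight_in_channels
print(i2v_input.shape)  # (1, 32, 24, 64, 64)
```

In other words, native T2V nodes alone can't feed this checkpoint; the sampler path needs I2V-aware conditioning that supplies the image latent, which is what the wrapper workflow does.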

@slmonker (Author)

> Hello @slmonker,
>
> How did you get your video to match with the original image? My video does not match with my original image.
>
> I put this on the positive prompt
>
> The cat extends its front paw upward, claws hooking onto the surface as its hind legs push off. Its body stretches and contracts fluidly, muscles tensing with each pull. The tail sways for balance, adjusting with every shift in weight. Its back legs find quick, sure footing, propelling it higher in a smooth, determined rhythm. With each movement, it climbs steadily, pausing only briefly before reaching for the next hold.
>
> Original Image
>
> Image
>
> Result
>
> failed_render.mp4

I tried it on my PC and the prompt was "FPS-24, a cat climbs on a steep mountain": https://github.com/user-attachments/assets/5408bdb1-bc57-42a3-b35f-18b8011a336f

@niftyflora

Is it supposed to take 50 minutes to generate a 2-second video?

Image

@slmonker (Author)

> is it supposed to take 50 minutes to generate a 2 second video?
>
> Image

Depends on your GPU.

@pftq

pftq commented Feb 22, 2025

> Hello @slmonker,
>
> How did you get your video to match with the original image? My video does not match with my original image.

This happens to me sometimes if the steps are too low. The default settings for the templates (CFG 6 and 30 steps) are a bit too low in general, imo. I usually have to push steps to 50-100 for something decent at CFG 6, or lower the CFG to 3 (at the cost of prompt adherence).
