
HunyuanImageToVideo

conditioning/video_models
HunyuanImageToVideo

Example

JSON Example
{
  "class_type": "HunyuanImageToVideo",
  "inputs": {
    "positive": [
      "node_id",
      0
    ],
    "vae": [
      "node_id",
      0
    ],
    "width": 848,
    "height": 480,
    "length": 53,
    "batch_size": 1,
    "guidance_type": null
  }
}

This example shows the required inputs only. Connection values like ["node_id", 0] should reference actual node IDs from your workflow, and guidance_type (shown here as null as a placeholder) must be set to one of the options exposed by the node's COMBO widget.
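As a minimal sketch (not part of the official docs), the JSON above can be built and sanity-checked programmatically before being added to a workflow. The constraint checks mirror the Inputs table on this page; treating step as an increment relative to the minimum is an assumption, and the COMBO options for guidance_type are not enumerated here, so the value is left to the caller.

```python
def hunyuan_image_to_video_node(positive, vae, guidance_type,
                                width=848, height=480,
                                length=53, batch_size=1):
    """Build the inputs dict for a HunyuanImageToVideo node.

    positive / vae are connection references like ["node_id", 0].
    guidance_type must be one of the node's COMBO options (not
    listed on this page).
    """
    # Assumption: "step" means the value must sit on a multiple of
    # the step offset from the minimum.
    assert 16 <= width <= 16384 and width % 16 == 0
    assert 16 <= height <= 16384 and height % 16 == 0
    assert 1 <= length <= 16384 and (length - 1) % 4 == 0
    assert 1 <= batch_size <= 4096
    return {
        "class_type": "HunyuanImageToVideo",
        "inputs": {
            "positive": positive,
            "vae": vae,
            "width": width,
            "height": height,
            "length": length,
            "batch_size": batch_size,
            "guidance_type": guidance_type,
        },
    }
```

Calling it with only the required connections and a guidance type yields the same structure as the JSON example above.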

Inputs

Name           Type          Status    Constraints                     Default
positive       CONDITIONING  required  -                               -
vae            VAE           required  -                               -
width          INT           required  min: 16, max: 16384, step: 16   848
height         INT           required  min: 16, max: 16384, step: 16   480
length         INT           required  min: 1, max: 16384, step: 4     53
batch_size     INT           required  min: 1, max: 4096               1
guidance_type  COMBO         required  -                               -
start_image    IMAGE         optional  -                               -

Outputs

Index  Name      Type          Is List  Connection Reference
0      positive  CONDITIONING  No       ["{node_id}", 0]
1      latent    LATENT        No       ["{node_id}", 1]
How to connect to these outputs

To connect another node's input to an output from this node, use the connection reference format:

["node_id", output_index]

Where node_id is the ID of this HunyuanImageToVideo node in your workflow, and output_index is the index from the table above.

Example

If this node has ID "5" in your workflow:

  • positive (CONDITIONING): ["5", 0]
  • latent (LATENT): ["5", 1]
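The references above can be sketched in a downstream node as follows. This is an illustrative fragment only: KSampler is assumed as a typical consumer, and its remaining inputs are omitted.

```python
# Hypothetical downstream node consuming the outputs of the
# HunyuanImageToVideo node with ID "5" in the example above.
downstream = {
    "class_type": "KSampler",       # assumed consumer node
    "inputs": {
        "positive": ["5", 0],       # CONDITIONING output, index 0
        "latent_image": ["5", 1],   # LATENT output, index 1
        # ...remaining KSampler inputs omitted for brevity
    },
}
```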