TextEncodeHunyuanVideo_ImageToVideo
Category: advanced/conditioning
JSON Example
```json
{
  "class_type": "TextEncodeHunyuanVideo_ImageToVideo",
  "inputs": {
    "clip": ["node_id", 0],
    "clip_vision_output": ["node_id", 0],
    "prompt": "a beautiful landscape, high quality, detailed",
    "image_interleave": 2
  }
}
```

This example shows required inputs only. Connection values like `["node_id", 0]` should reference actual node IDs from your workflow.
Inputs
| Name | Type | Status | Constraints | Default |
|---|---|---|---|---|
| clip | CLIP | required | - | - |
| clip_vision_output | CLIP_VISION_OUTPUT | required | - | - |
| prompt | STRING | required | - | - |
| image_interleave | INT | optional | min: 1, max: 512 | 2 |
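The inputs above can be assembled programmatically before submitting a workflow. Below is a minimal sketch; the helper name `build_inputs` and the node IDs `"4"` and `"12"` are illustrative assumptions, not part of this node's API. It enforces the `image_interleave` range (min 1, max 512) from the table.

```python
# Hypothetical helper: build the "inputs" dict for a
# TextEncodeHunyuanVideo_ImageToVideo node, validating the
# image_interleave constraint (1..512) listed in the table above.
def build_inputs(clip_ref, clip_vision_ref, prompt, image_interleave=2):
    if not 1 <= image_interleave <= 512:
        raise ValueError("image_interleave must be between 1 and 512")
    return {
        "clip": list(clip_ref),                       # ["node_id", output_index]
        "clip_vision_output": list(clip_vision_ref),  # ["node_id", output_index]
        "prompt": prompt,
        "image_interleave": image_interleave,
    }

# Example node entry; "4" and "12" stand in for real upstream node IDs.
node = {
    "class_type": "TextEncodeHunyuanVideo_ImageToVideo",
    "inputs": build_inputs(("4", 0), ("12", 0), "a beautiful landscape"),
}
```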
Outputs
| Index | Name | Type | Is List | Connection Reference |
|---|---|---|---|---|
| 0 | CONDITIONING | CONDITIONING | No | ["{node_id}", 0] |
How to connect to these outputs
To connect another node's input to an output from this node, use the connection reference format:
["node_id", output_index]Where node_id is the ID of this TextEncodeHunyuanVideo_ImageToVideo node in your workflow, and output_index is the index from the table above.
Example
If this node has ID "5" in your workflow:
CONDITIONING (CONDITIONING): `["5", 0]`
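Putting the connection format into context, the sketch below wires this node's CONDITIONING output (node ID "5", output index 0) into a downstream node's input. The downstream `KSampler` node, its `positive` input, and the upstream IDs are assumptions for illustration only.

```python
# Illustrative workflow fragment: node "6" references node "5"'s
# output 0 via the ["node_id", output_index] connection format.
workflow = {
    "5": {
        "class_type": "TextEncodeHunyuanVideo_ImageToVideo",
        "inputs": {
            "clip": ["4", 0],                # assumed upstream CLIP loader
            "clip_vision_output": ["12", 0], # assumed CLIP vision encoder
            "prompt": "a beautiful landscape, high quality, detailed",
            "image_interleave": 2,
        },
    },
    "6": {
        "class_type": "KSampler",  # assumed consumer of the conditioning
        "inputs": {
            "positive": ["5", 0],  # CONDITIONING output of node "5"
        },
    },
}
```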