OpenPose Pose
ControlNet Preprocessors/Faces and Poses Estimators
OpenposePreprocessor
Example
JSON Example
{
"class_type": "OpenposePreprocessor",
"inputs": {
"image": [
"node_id",
0
]
}
}

This example shows required inputs only. Connection values like ["node_id", 0] should reference actual node IDs from your workflow.
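For context, here is a minimal sketch of how that reference looks inside a full API-format prompt, assuming a hypothetical upstream LoadImage node with ID "10" supplying the image (the node IDs and filename are placeholders, not part of this node's definition):
{
"10": {
"class_type": "LoadImage",
"inputs": {
"image": "pose_reference.png"
}
},
"11": {
"class_type": "OpenposePreprocessor",
"inputs": {
"image": ["10", 0]
}
}
}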
Inputs
| Name | Type | Status | Constraints | Default |
|---|---|---|---|---|
| image | IMAGE | required | - | - |
| detect_hand | ENUM | optional | 2 options | "enable" |
| detect_body | ENUM | optional | 2 options | "enable" |
| detect_face | ENUM | optional | 2 options | "enable" |
| resolution | INT | optional | min: 64, max: 16384, step: 64 | 512 |
| scale_stick_for_xinsr_cn | ENUM | optional | 2 options | "disable" |
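As a sketch, the same node with every optional input written out explicitly, using the default values from the table above (the ["node_id", 0] connection is still a placeholder):
{
"class_type": "OpenposePreprocessor",
"inputs": {
"image": ["node_id", 0],
"detect_hand": "enable",
"detect_body": "enable",
"detect_face": "enable",
"resolution": 512,
"scale_stick_for_xinsr_cn": "disable"
}
}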
Outputs
| Index | Name | Type | Is List | Connection Reference |
|---|---|---|---|---|
| 0 | IMAGE | IMAGE | No | ["{node_id}", 0] |
| 1 | POSE_KEYPOINT | POSE_KEYPOINT | No | ["{node_id}", 1] |
How to connect to these outputs
To connect another node's input to an output from this node, use the connection reference format:
["node_id", output_index]Where node_id is the ID of this OpenposePreprocessor node in your workflow, and output_index is the index from the table above.
Example
If this node has ID "5" in your workflow:
IMAGE (IMAGE):["5", 0]POSE_KEYPOINT (POSE_KEYPOINT):["5", 1]