AnimalPose Estimator (AP10K)
ControlNet Preprocessors/Faces and Poses Estimators
AnimalPosePreprocessor
JSON Example
```json
{
  "class_type": "AnimalPosePreprocessor",
  "inputs": {
    "image": [
      "node_id",
      0
    ]
  }
}
```

This example shows required inputs only. Connection values like `["node_id", 0]` should reference actual node IDs from your workflow.
Inputs
| Name | Type | Status | Constraints | Default |
|---|---|---|---|---|
| image | IMAGE | required | - | - |
| bbox_detector | ENUM (6 options) | optional | - | "yolox_l.torchscript.pt" |
| pose_estimator | ENUM (2 options) | optional | - | "rtmpose-m_ap10k_256_bs5.torchscript.pt" |
| resolution | INT | optional | min: 64, max: 16384, step: 64 | 512 |
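The inputs above map directly onto the `"inputs"` object of the node's JSON entry. A minimal sketch of building such an entry in Python, with the optional parameters filled in from the defaults in the table — the `animal_pose_node` helper is hypothetical, written here only to illustrate the shape and constraints, and is not part of any library:

```python
import json

# Hypothetical helper: build an AnimalPosePreprocessor entry for a
# ComfyUI API-format workflow. Parameter names and defaults mirror the
# Inputs table above.
def animal_pose_node(image_ref,
                     bbox_detector="yolox_l.torchscript.pt",
                     pose_estimator="rtmpose-m_ap10k_256_bs5.torchscript.pt",
                     resolution=512):
    # Enforce the documented constraints: 64 <= resolution <= 16384, step 64.
    if not (64 <= resolution <= 16384) or resolution % 64 != 0:
        raise ValueError("resolution must be in [64, 16384] and a multiple of 64")
    return {
        "class_type": "AnimalPosePreprocessor",
        "inputs": {
            "image": image_ref,  # ["node_id", output_index] connection
            "bbox_detector": bbox_detector,
            "pose_estimator": pose_estimator,
            "resolution": resolution,
        },
    }

# Example: image comes from output 0 of node "3" (a placeholder ID).
node = animal_pose_node(["3", 0], resolution=768)
print(json.dumps(node, indent=2))
```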
Outputs
| Index | Name | Type | Is List | Connection Reference |
|---|---|---|---|---|
| 0 | IMAGE | IMAGE | No | ["{node_id}", 0] |
| 1 | POSE_KEYPOINT | POSE_KEYPOINT | No | ["{node_id}", 1] |
How to connect to these outputs
To connect another node's input to an output from this node, use the connection reference format:
`["node_id", output_index]`

Where `node_id` is the ID of this AnimalPosePreprocessor node in your workflow, and `output_index` is the index from the table above.
Example
If this node has ID "5" in your workflow:
- IMAGE (IMAGE): `["5", 0]`
- POSE_KEYPOINT (POSE_KEYPOINT): `["5", 1]`
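Putting the two sides together, a minimal sketch of a workflow fragment where a downstream node consumes this node's IMAGE output. The node IDs ("3", "5", "6") and the upstream/downstream node choices are placeholders for illustration; `PreviewImage` is used here as an assumed image consumer:

```python
# Fragment of a ComfyUI API-format workflow (node IDs are placeholders).
workflow = {
    "5": {
        "class_type": "AnimalPosePreprocessor",
        "inputs": {"image": ["3", 0]},  # image from some upstream node "3"
    },
    "6": {
        "class_type": "PreviewImage",   # assumed consumer of the IMAGE output
        "inputs": {"images": ["5", 0]},  # output index 0 of node "5" (IMAGE)
    },
}
```

A node consuming the pose data would instead reference `["5", 1]`, the POSE_KEYPOINT output.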