AnimalPose Estimator (AP10K)

ControlNet Preprocessors/Faces and Poses Estimators
AnimalPosePreprocessor

Example

JSON Example
```json
{
  "class_type": "AnimalPosePreprocessor",
  "inputs": {
    "image": [
      "node_id",
      0
    ]
  }
}
```

This example shows required inputs only. Connection values like ["node_id", 0] should reference actual node IDs from your workflow.
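The same node entry can also be written with the optional inputs filled in. The sketch below builds a complete node dictionary in Python using the documented defaults from the Inputs table; the node ids `"5"` and `"9"` are placeholders for illustration, not values from any real workflow.

```python
import json

# Sketch: an AnimalPosePreprocessor entry for a ComfyUI API-format workflow,
# with optional inputs set to their documented defaults.
node = {
    "class_type": "AnimalPosePreprocessor",
    "inputs": {
        # Output 0 of a hypothetical upstream image node with id "9".
        "image": ["9", 0],
        "bbox_detector": "yolox_l.torchscript.pt",
        "pose_estimator": "rtmpose-m_ap10k_256_bs5.torchscript.pt",
        # INT constrained to min 64, max 16384, step 64.
        "resolution": 512,
    },
}

# Workflows map node ids (strings) to node entries.
workflow = {"5": node}
print(json.dumps(workflow, indent=2))
```

Omitting an optional input lets the node fall back to its default, so the minimal example above and this full version behave identically.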

Inputs

| Name | Type | Status | Constraints | Default |
|------|------|--------|-------------|---------|
| image | IMAGE | required | - | - |
| bbox_detector | ENUM | optional | 6 options: None, yolox_l.torchscript.pt, yolox_l.onnx, yolo_nas_l_fp16.onnx, yolo_nas_m_fp16.onnx, yolo_nas_s_fp16.onnx | "yolox_l.torchscript.pt" |
| pose_estimator | ENUM | optional | 2 options: rtmpose-m_ap10k_256_bs5.torchscript.pt, rtmpose-m_ap10k_256.onnx | "rtmpose-m_ap10k_256_bs5.torchscript.pt" |
| resolution | INT | optional | min: 64, max: 16384, step: 64 | 512 |

Outputs

| Index | Name | Type | Is List | Connection Reference |
|-------|------|------|---------|----------------------|
| 0 | IMAGE | IMAGE | No | ["{node_id}", 0] |
| 1 | POSE_KEYPOINT | POSE_KEYPOINT | No | ["{node_id}", 1] |

How to connect to these outputs

To connect another node's input to an output from this node, use the connection reference format:

["node_id", output_index]

Where node_id is the ID of this AnimalPosePreprocessor node in your workflow, and output_index is the index from the table above.

Example

If this node has ID "5" in your workflow:

  • IMAGE (IMAGE): ["5", 0]
  • POSE_KEYPOINT (POSE_KEYPOINT): ["5", 1]
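Putting this together, the sketch below wires the IMAGE output of this node (id "5") into a downstream node. SaveImage is a core ComfyUI node assumed here for illustration; the node ids and the upstream source node "3" are placeholders.

```python
# Sketch: connecting AnimalPosePreprocessor outputs in an API-format workflow.
workflow = {
    "5": {
        "class_type": "AnimalPosePreprocessor",
        # "3" stands in for an upstream image-producing node.
        "inputs": {"image": ["3", 0]},
    },
    "6": {
        "class_type": "SaveImage",
        "inputs": {
            # ["5", 0] = output index 0 (IMAGE) of node "5".
            "images": ["5", 0],
            "filename_prefix": "animal_pose",
        },
    },
}

# Output index 1 (["5", 1]) would carry the POSE_KEYPOINT data instead,
# for nodes that accept pose keypoints as input.
```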