Stability AI Audio Inpaint
api node/audio/Stability AI
StabilityAudioInpaint transforms part of an existing audio sample using text instructions.
Inputs
| Name | Type | Status | Constraints | Default |
|---|---|---|---|---|
| model | COMBO | required | - | - |
| prompt | STRING | required | - | "" |
| audio | AUDIO | required | - | - |
| duration? | INT | optional | min: 1, max: 190, step: 1 | 190 |
| seed? | INT | optional | min: 0, max: 4294967294, step: 1 | 0 |
| steps? | INT | optional | min: 4, max: 8, step: 1 | 8 |
| mask_start? | INT | optional | min: 0, max: 190, step: 1 | 30 |
| mask_end? | INT | optional | min: 0, max: 190, step: 1 | 190 |
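For orientation, here is a minimal sketch of how these inputs could appear for one StabilityAudioInpaint node in a ComfyUI API-format workflow, written as a Python dict. The node ID "5", the model string, the prompt text, and the upstream audio node "3" are illustrative assumptions, not values defined on this page:

```python
# One node entry in ComfyUI API-format workflow JSON, expressed as a Python dict.
# Literal values below are illustrative assumptions, not documented defaults of the API.
stability_audio_inpaint_node = {
    "5": {  # assumed ID of this node in the workflow
        "class_type": "StabilityAudioInpaint",
        "inputs": {
            "model": "stable-audio-2.5",  # assumed COMBO choice; use whatever your install lists
            "prompt": "replace this section with a mellow piano bridge",
            "audio": ["3", 0],            # assumed upstream node "3", output index 0 (AUDIO)
            "duration": 190,              # within the 1-190 constraint above (presumably seconds)
            "seed": 0,
            "steps": 8,
            "mask_start": 30,             # region to inpaint, within the 0-190 constraint
            "mask_end": 190,
        },
    }
}
```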
Outputs
| Index | Name | Type | Is List | Connection Reference |
|---|---|---|---|---|
| 0 | AUDIO | AUDIO | No | ["{node_id}", 0] |
How to connect to these outputs
To connect another node's input to an output from this node, use the connection reference format `["node_id", output_index]`, where `node_id` is the ID of this StabilityAudioInpaint node in your workflow and `output_index` is the index from the table above.
Example
If this node has ID "5" in your workflow:
AUDIO (AUDIO): `["5", 0]`
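Continuing that example, the sketch below shows a downstream node consuming this node's AUDIO output via `["5", 0]`. The downstream node ID "6" and the "SaveAudio" node with its input names are assumptions for illustration; substitute any node that accepts an AUDIO input:

```python
# Downstream node entry that consumes the AUDIO output of node "5".
# "SaveAudio" and its input names are assumed here, not specified by this page.
downstream_node = {
    "6": {
        "class_type": "SaveAudio",
        "inputs": {
            "audio": ["5", 0],  # ["node_id", output_index]: output 0 of the StabilityAudioInpaint node "5"
            "filename_prefix": "inpainted_audio",
        },
    }
}
```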