End Recording CUDA Memory History
KJNodes/experimental
EndRecordCUDAMemoryHistory records CUDA memory allocation history between the start and end nodes and saves it to a file that can be analyzed at https://docs.pytorch.org/memory_viz or with the VisualizeCUDAMemoryHistory node.
Example
JSON Example

```json
{
  "class_type": "EndRecordCUDAMemoryHistory",
  "inputs": {
    "input": [
      "node_id",
      0
    ],
    "output_path": "comfy_cuda_memory_history"
  }
}
```

This example shows required inputs only. Connection values like `["node_id", 0]` should reference actual node IDs from your workflow.
Inputs
| Name | Type | Status | Constraints | Default |
|---|---|---|---|---|
| input | * | required | - | - |
| output_path | STRING (URL: File) | required | - | "comfy_cuda_memory_history" |
Outputs
| Index | Name | Type | Is List | Connection Reference |
|---|---|---|---|---|
| 0 | input | * | No | ["{node_id}", 0] |
| 1 | output_path | STRING | No | ["{node_id}", 1] |
How to connect to these outputs
To connect another node's input to an output from this node, use the connection reference format:
["node_id", output_index]Where node_id is the ID of this EndRecordCUDAMemoryHistory node in your workflow, and output_index is the index from the table above.
Example
If this node has ID "5" in your workflow:
- input (*): `["5", 0]`
- output_path (STRING): `["5", 1]`
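As a sketch of how these references are used in practice, the snippet below assembles a minimal API-format workflow fragment in which a downstream node consumes this node's passthrough output. The node IDs and the downstream node name (`SomeDownstreamNode`) are placeholders for illustration, not real node names.

```python
import json

# Minimal API-format workflow fragment. Node IDs are arbitrary strings
# chosen by the workflow author; "5" matches the example above.
workflow = {
    "5": {
        "class_type": "EndRecordCUDAMemoryHistory",
        "inputs": {
            # Connection reference: output 0 of an upstream node "3"
            # (placeholder ID for illustration).
            "input": ["3", 0],
            "output_path": "comfy_cuda_memory_history",
        },
    },
    "6": {
        # Hypothetical downstream node receiving the passthrough value.
        "class_type": "SomeDownstreamNode",
        "inputs": {
            # Output index 0 of the EndRecordCUDAMemoryHistory node "5".
            "value": ["5", 0],
        },
    },
}

print(json.dumps(workflow, indent=2))
```

Every connection reference is a two-element `[node_id, output_index]` pair, so wiring nodes together is just a matter of pointing an input at the right node ID and output index.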