Workflows
cmfy.cloud executes ComfyUI workflows. This page explains the workflow format and how to prepare your workflows for the API.
What Is a Workflow?
A ComfyUI workflow is a graph of nodes that process data, where each node performs one operation. A typical text-to-image workflow contains the following nodes:
- Load Checkpoint - Loads the AI model from disk
- CLIP Encode - Converts your text prompt to embeddings
- Empty Latent - Creates a blank latent image
- KSampler - The main generation step
- VAE Decode - Converts latent to pixel image
- Save Image - Outputs the final image
Workflow Format
The API accepts workflows in ComfyUI's API format - a JSON object where each key is a node ID:
{
  "3": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "https://huggingface.co/stabilityai/sdxl/model.safetensors"
    }
  },
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a beautiful sunset over mountains",
      "clip": ["3", 1]
    }
  },
  "5": {
    "class_type": "EmptyLatentImage",
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    }
  }
}
Node Structure
Every node has:
| Field | Required | Description |
|---|---|---|
| class_type | Yes | The ComfyUI node type (e.g., KSampler, VAEDecode) |
| inputs | Yes | Parameters for this node |
| _meta | No | Optional metadata (ignored by the system) |
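As a local sanity check before submitting, the required fields above can be verified with a few lines of Python. This is a minimal sketch; the function name is illustrative and not part of the cmfy.cloud API:

```python
# Minimal sketch: check that every node in an API-format workflow
# carries the required fields from the table above.

def validate_nodes(workflow: dict) -> list[str]:
    """Return a list of error messages; an empty list means the structure is valid."""
    errors = []
    for node_id, node in workflow.items():
        if "class_type" not in node:
            errors.append(f"node {node_id}: missing class_type")
        if "inputs" not in node:
            errors.append(f"node {node_id}: missing inputs")
        # _meta is optional and ignored by the system, so it is not checked
    return errors
```

Running this over a workflow dict before submission surfaces missing fields without a network round trip.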
Connecting Nodes
Nodes connect by referencing other nodes' outputs. The format is [node_id, output_index]:
{
  "6": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a cat",
      "clip": ["3", 1] // output slot 1 of node "3"
    }
  }
}
The ["3", 1] means "take output slot 1 from node 3". Output slots are zero-indexed, so slot 1 is the second output from that node (slot 0 is the first). For example, CheckpointLoaderSimple outputs MODEL in slot 0, CLIP in slot 1, and VAE in slot 2, which is why the CLIP connection above uses index 1.
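A broken connection is a common source of rejected workflows. The sketch below (the helper name is an assumption, not an API call) scans each node's inputs for [node_id, output_index] pairs and flags any that point to a node missing from the workflow:

```python
# Minimal sketch: detect connection references that point to nodes
# which do not exist in the workflow dict.

def find_dangling_links(workflow: dict) -> list[str]:
    problems = []
    for node_id, node in workflow.items():
        for name, value in node.get("inputs", {}).items():
            # A connection is a two-element list: [source_node_id, output_index]
            is_link = (isinstance(value, list) and len(value) == 2
                       and isinstance(value[1], int))
            if is_link and value[0] not in workflow:
                problems.append(f"{node_id}.{name} -> missing node {value[0]}")
    return problems
```

Plain values such as prompt strings or numeric parameters are skipped; only two-element list references are treated as links.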
Converting from ComfyUI
If you have a workflow in the ComfyUI desktop app:
Step 1: Enable Dev Mode
In ComfyUI, go to Settings > Enable Dev mode options.
Step 2: Save API Format
Click Save (API Format) instead of regular Save. This exports the workflow in the format the API expects.
Step 3: Replace Model Paths
Change local file paths to URLs:
// Before (local file)
"ckpt_name": "v1-5-pruned-emaonly.ckpt"

// After (URL)
"ckpt_name": "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt"
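If you convert workflows often, this replacement can be scripted. The sketch below assumes a hand-maintained filename-to-URL map and a guessed set of model-path field names (ckpt_name, lora_name, vae_name); both are assumptions to adjust for your own workflows, not a documented API feature:

```python
# Minimal sketch: swap local model filenames for URLs in an
# API-format workflow dict. URL_MAP and MODEL_FIELDS are
# illustrative assumptions maintained by you.

URL_MAP = {
    "v1-5-pruned-emaonly.ckpt":
        "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt",
}

MODEL_FIELDS = {"ckpt_name", "lora_name", "vae_name"}

def rewrite_model_paths(workflow: dict) -> dict:
    for node in workflow.values():
        for field, value in node.get("inputs", {}).items():
            # Only rewrite string values in known model-path fields;
            # connection references ([id, index] lists) are left alone.
            if field in MODEL_FIELDS and isinstance(value, str) and value in URL_MAP:
                node["inputs"][field] = URL_MAP[value]
    return workflow
```

Keeping one shared map also helps you use the same URL for a given model everywhere, which matters for caching (see Model URLs below).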
Model URLs
cmfy.cloud downloads models on-demand from URLs you provide. Supported sources:
| Source | URL Pattern |
|---|---|
| Hugging Face | https://huggingface.co/... |
| Civitai | https://civitai.com/api/download/... |
| Amazon S3 | https://*.s3.amazonaws.com/... |
| Google Cloud | https://storage.googleapis.com/... |
| Azure Blob | https://*.blob.core.windows.net/... |
| Cloudflare R2 | https://*.r2.cloudflarestorage.com/... |
Always use the same URL for the same model. The routing system uses URL matching to find nodes with cached models. Different URLs for the same model will be treated as different models.
Common Node Types
Here are frequently used nodes and their typical inputs:
Checkpoint Loaders
Load the main AI model:
{
  "class_type": "CheckpointLoaderSimple",
  "inputs": {
    "ckpt_name": "https://..."
  }
}
Text Encoders
Convert text to model-understandable embeddings:
{
  "class_type": "CLIPTextEncode",
  "inputs": {
    "text": "your prompt here",
    "clip": ["checkpoint_node", 1]
  }
}
Samplers
The core generation step:
{
  "class_type": "KSampler",
  "inputs": {
    "seed": 42,
    "steps": 20,
    "cfg": 7.5,
    "sampler_name": "euler",
    "scheduler": "normal",
    "denoise": 1.0,
    "model": ["checkpoint_node", 0],
    "positive": ["positive_prompt_node", 0],
    "negative": ["negative_prompt_node", 0],
    "latent_image": ["latent_node", 0]
  }
}
LoRA Loaders
Add style or character LoRAs:
{
  "class_type": "LoraLoader",
  "inputs": {
    "lora_name": "https://...",
    "strength_model": 0.8,
    "strength_clip": 0.8,
    "model": ["checkpoint_node", 0],
    "clip": ["checkpoint_node", 1]
  }
}
Workflow Limits
To ensure fair resource usage:
| Limit | Value |
|---|---|
| Maximum nodes | 500 |
| Maximum payload size | 10 MB |
| Required fields | class_type, inputs |
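A local pre-flight check against these limits can save a rejected submission. The sketch below applies the limits from the table; the function name and error message format are illustrative, not part of the API:

```python
# Minimal sketch: enforce the documented workflow limits locally
# before submitting. Limits come from the table above.
import json

MAX_NODES = 500
MAX_PAYLOAD_BYTES = 10 * 1024 * 1024  # 10 MB

def check_limits(workflow: dict) -> list[str]:
    errors = []
    if len(workflow) > MAX_NODES:
        errors.append(f"too many nodes: {len(workflow)} > {MAX_NODES}")
    payload = json.dumps(workflow).encode("utf-8")
    if len(payload) > MAX_PAYLOAD_BYTES:
        errors.append(f"payload too large: {len(payload)} bytes")
    for node_id, node in workflow.items():
        missing = {"class_type", "inputs"} - node.keys()
        if missing:
            errors.append(f"node {node_id}: missing {sorted(missing)}")
    return errors
```

Note that the payload size is measured on the serialized JSON, so long prompts and embedded metadata count toward the 10 MB cap.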
Best Practices
1. Use Minimal Workflows
Include only the nodes you need. Smaller workflows are faster to validate and execute.
2. Reuse Model URLs
When possible, use popular models that other users also use. These are more likely to be cached on GPU nodes.
3. Test Locally First
Run your workflow in ComfyUI locally before submitting to the API. This catches errors faster.
4. Use Deterministic Seeds
Set explicit seed values for reproducible results. Random seeds make debugging harder.
Example: Complete Workflow
Here's a minimal text-to-image workflow:
{
  "1": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
    }
  },
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a serene mountain landscape at sunset, digital art",
      "clip": ["1", 1]
    }
  },
  "3": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "blurry, low quality",
      "clip": ["1", 1]
    }
  },
  "4": {
    "class_type": "EmptyLatentImage",
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    }
  },
  "5": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 12345,
      "steps": 25,
      "cfg": 7.0,
      "sampler_name": "euler_ancestral",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["1", 0],
      "positive": ["2", 0],
      "negative": ["3", 0],
      "latent_image": ["4", 0]
    }
  },
  "6": {
    "class_type": "VAEDecode",
    "inputs": {
      "samples": ["5", 0],
      "vae": ["1", 2]
    }
  },
  "7": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "output",
      "images": ["6", 0]
    }
  }
}
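For completeness, here is one way such a workflow might be packaged into an HTTP request from Python. The endpoint path, request envelope, and auth scheme below are assumptions for illustration only; consult the API Reference for the actual values:

```python
# Hypothetical sketch only: the endpoint URL, the {"workflow": ...}
# envelope, and the Bearer auth header are assumptions, not
# documented cmfy.cloud API details.
import json
import urllib.request

def build_request(workflow: dict, api_key: str) -> urllib.request.Request:
    body = json.dumps({"workflow": workflow}).encode("utf-8")
    return urllib.request.Request(
        "https://api.cmfy.cloud/v1/runs",          # assumed endpoint
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )

# Sending would then be: urllib.request.urlopen(build_request(wf, key))
```

Building the request separately from sending it keeps the payload easy to inspect and test offline.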
What's Next?
- Cache-Aware Routing - Learn how model caching affects performance
- API Reference - Full API documentation