First Workflow

This guide walks through submitting a complete workflow, handling the response, and receiving results via webhook. By the end, you'll understand the full job lifecycle.

The Workflow Format

cmfy.cloud uses ComfyUI's native workflow format. A workflow is a JSON object where each key is a node ID:

{
  "1": {
    "class_type": "NodeType",
    "inputs": { ... }
  },
  "2": {
    "class_type": "AnotherNode",
    "inputs": { ... }
  }
}

Nodes connect by referencing other nodes' outputs using [node_id, output_index] notation.
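
For example, an input that consumes the second output (index 1) of node "1" is written like this:

"inputs": {
  "clip": ["1", 1]
}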

Building a Text-to-Image Workflow

Let's build a workflow step by step.

Step 1: Load the Model

Every image generation workflow starts with loading a checkpoint model:

{
  "1": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
    }
  }
}

This node outputs:

  • Output 0: The model
  • Output 1: The CLIP text encoder
  • Output 2: The VAE decoder

Step 2: Encode the Prompts

Convert your text prompts into embeddings the model understands:

{
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a majestic lion in a savanna at golden hour, photorealistic",
      "clip": ["1", 1]
    }
  },
  "3": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "blurry, low quality, watermark, text",
      "clip": ["1", 1]
    }
  }
}

  • Node 2 encodes the positive prompt (what you want)
  • Node 3 encodes the negative prompt (what to avoid)
  • Both reference output 1 from node 1 (the CLIP encoder)

Step 3: Create the Latent Space

Create an empty latent image to generate into:

{
  "4": {
    "class_type": "EmptyLatentImage",
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    }
  }
}

Step 4: Sample the Image

The sampler is where generation happens:

{
  "5": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 12345,
      "steps": 25,
      "cfg": 7.0,
      "sampler_name": "euler_ancestral",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["1", 0],
      "positive": ["2", 0],
      "negative": ["3", 0],
      "latent_image": ["4", 0]
    }
  }
}

Key parameters:

Parameter       Description
seed            Random seed for reproducibility
steps           Number of denoising steps (higher = more refined)
cfg             Classifier-free guidance scale (higher = more prompt adherence)
sampler_name    Sampling algorithm
denoise         Denoising strength (1.0 for full generation)
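
If you want a different image on each run rather than a reproducible one, randomize the seed before submitting. A minimal sketch in Python, assuming the workflow is built as a dict (as in the Python example later in this guide) with the KSampler at node "5":

import random

def randomize_seed(workflow, sampler_node="5"):
    """Replace the KSampler seed with a fresh random value so each run produces a new image."""
    workflow[sampler_node]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    return workflow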

Step 5: Decode and Save

Convert the latent to pixels and save:

{
  "6": {
    "class_type": "VAEDecode",
    "inputs": {
      "samples": ["5", 0],
      "vae": ["1", 2]
    }
  },
  "7": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "output",
      "images": ["6", 0]
    }
  }
}

The Complete Workflow

Here's everything combined:

{
  "1": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
    }
  },
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a majestic lion in a savanna at golden hour, photorealistic",
      "clip": ["1", 1]
    }
  },
  "3": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "blurry, low quality, watermark, text",
      "clip": ["1", 1]
    }
  },
  "4": {
    "class_type": "EmptyLatentImage",
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    }
  },
  "5": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 12345,
      "steps": 25,
      "cfg": 7.0,
      "sampler_name": "euler_ancestral",
      "scheduler": "normal",
      "denoise": 1.0,
      "model": ["1", 0],
      "positive": ["2", 0],
      "negative": ["3", 0],
      "latent_image": ["4", 0]
    }
  },
  "6": {
    "class_type": "VAEDecode",
    "inputs": {
      "samples": ["5", 0],
      "vae": ["1", 2]
    }
  },
  "7": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "output",
      "images": ["6", 0]
    }
  }
}

Submitting with cURL

curl -X POST https://api.cmfy.cloud/v1/jobs \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": {
      "1": { "class_type": "CheckpointLoaderSimple", "inputs": { "ckpt_name": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors" } },
      "2": { "class_type": "CLIPTextEncode", "inputs": { "text": "a majestic lion in a savanna at golden hour, photorealistic", "clip": ["1", 1] } },
      "3": { "class_type": "CLIPTextEncode", "inputs": { "text": "blurry, low quality, watermark, text", "clip": ["1", 1] } },
      "4": { "class_type": "EmptyLatentImage", "inputs": { "width": 1024, "height": 1024, "batch_size": 1 } },
      "5": { "class_type": "KSampler", "inputs": { "seed": 12345, "steps": 25, "cfg": 7.0, "sampler_name": "euler_ancestral", "scheduler": "normal", "denoise": 1.0, "model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0], "latent_image": ["4", 0] } },
      "6": { "class_type": "VAEDecode", "inputs": { "samples": ["5", 0], "vae": ["1", 2] } },
      "7": { "class_type": "SaveImage", "inputs": { "filename_prefix": "output", "images": ["6", 0] } }
    },
    "webhook": "https://your-server.com/webhook"
  }'

Submitting with Python

import requests
import os

API_KEY = os.environ.get("CMFY_API_KEY")

workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {
            "ckpt_name": "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
        }
    },
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a majestic lion in a savanna at golden hour, photorealistic",
            "clip": ["1", 1]
        }
    },
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "blurry, low quality, watermark, text",
            "clip": ["1", 1]
        }
    },
    "4": {
        "class_type": "EmptyLatentImage",
        "inputs": {
            "width": 1024,
            "height": 1024,
            "batch_size": 1
        }
    },
    "5": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 12345,
            "steps": 25,
            "cfg": 7.0,
            "sampler_name": "euler_ancestral",
            "scheduler": "normal",
            "denoise": 1.0,
            "model": ["1", 0],
            "positive": ["2", 0],
            "negative": ["3", 0],
            "latent_image": ["4", 0]
        }
    },
    "6": {
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["5", 0],
            "vae": ["1", 2]
        }
    },
    "7": {
        "class_type": "SaveImage",
        "inputs": {
            "filename_prefix": "output",
            "images": ["6", 0]
        }
    }
}

response = requests.post(
    "https://api.cmfy.cloud/v1/jobs",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    },
    json={
        "prompt": workflow,
        "webhook": "https://your-server.com/webhook"
    }
)

data = response.json()
print(f"Job ID: {data['job_id']}")
print(f"Status: {data['status']}")
print(f"Estimated wait: {data['estimated_wait_seconds']}s")

Submitting with JavaScript

const API_KEY = process.env.CMFY_API_KEY;

const workflow = {
  "1": {
    class_type: "CheckpointLoaderSimple",
    inputs: {
      ckpt_name: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
    }
  },
  "2": {
    class_type: "CLIPTextEncode",
    inputs: {
      text: "a majestic lion in a savanna at golden hour, photorealistic",
      clip: ["1", 1]
    }
  },
  "3": {
    class_type: "CLIPTextEncode",
    inputs: {
      text: "blurry, low quality, watermark, text",
      clip: ["1", 1]
    }
  },
  "4": {
    class_type: "EmptyLatentImage",
    inputs: { width: 1024, height: 1024, batch_size: 1 }
  },
  "5": {
    class_type: "KSampler",
    inputs: {
      seed: 12345,
      steps: 25,
      cfg: 7.0,
      sampler_name: "euler_ancestral",
      scheduler: "normal",
      denoise: 1.0,
      model: ["1", 0],
      positive: ["2", 0],
      negative: ["3", 0],
      latent_image: ["4", 0]
    }
  },
  "6": {
    class_type: "VAEDecode",
    inputs: { samples: ["5", 0], vae: ["1", 2] }
  },
  "7": {
    class_type: "SaveImage",
    inputs: { filename_prefix: "output", images: ["6", 0] }
  }
};

const response = await fetch("https://api.cmfy.cloud/v1/jobs", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    prompt: workflow,
    webhook: "https://your-server.com/webhook"
  })
});

const data = await response.json();
console.log(`Job ID: ${data.job_id}`);
console.log(`Status: ${data.status}`);

Receiving Results via Webhook

When you provide a webhook URL, cmfy.cloud POSTs results to your endpoint when the job completes.

Setting Up a Webhook Endpoint

Your endpoint must:

  • Accept POST requests
  • Handle JSON payloads
  • Return 2xx status to acknowledge receipt
  • Use HTTPS

Webhook Payload

On success:

{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "completed",
  "created_at": "2024-01-15T10:30:00Z",
  "started_at": "2024-01-15T10:30:05Z",
  "completed_at": "2024-01-15T10:30:20Z",
  "execution_time_ms": 12450,
  "outputs": {
    "images": [
      "https://cdn.cmfy.cloud/outputs/550e8400/image_0.png"
    ]
  }
}

On failure:

{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "failed",
  "created_at": "2024-01-15T10:30:00Z",
  "started_at": "2024-01-15T10:30:05Z",
  "completed_at": "2024-01-15T10:30:08Z",
  "error": {
    "code": "workflow_error",
    "message": "Node 5: Invalid sampler_name 'invalid'"
  }
}

Example Webhook Handler (Express.js)

const express = require("express");
const app = express();

app.use(express.json());

app.post("/webhook", (req, res) => {
  const { job_id, status, outputs, error } = req.body;

  if (status === "completed") {
    console.log(`Job ${job_id} completed!`);
    console.log(`Images: ${outputs.images.join(", ")}`);
    // Download and process images...
  } else if (status === "failed") {
    console.error(`Job ${job_id} failed: ${error.message}`);
    // Handle error, maybe retry...
  }

  // Always return 200 to acknowledge receipt
  res.status(200).send("OK");
});

app.listen(3000);

Example Webhook Handler (Python/Flask)

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    data = request.json
    job_id = data["job_id"]
    status = data["status"]

    if status == "completed":
        images = data["outputs"]["images"]
        print(f"Job {job_id} completed with {len(images)} images")
        # Download and process images...
    elif status == "failed":
        error = data["error"]["message"]
        print(f"Job {job_id} failed: {error}")
        # Handle error...

    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=3000)
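
The handlers above only log results; in practice you will usually download the output files. The outputs.images entries are plain HTTPS URLs, so any HTTP client works. A minimal sketch in Python, assuming the URLs are publicly fetchable and the outputs are PNGs:

import requests

def download_images(image_urls, dest_prefix="result"):
    """Download each output image URL to a local file like result_0.png."""
    paths = []
    for i, url in enumerate(image_urls):
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        path = f"{dest_prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(resp.content)
        paths.append(path)
    return paths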

Polling for Status

If you can't use webhooks, poll the job status endpoint:

import time
import requests

def wait_for_job(job_id, api_key, timeout=300, poll_interval=5):
    """Poll until job completes or timeout."""
    start = time.time()

    while time.time() - start < timeout:
        response = requests.get(
            f"https://api.cmfy.cloud/v1/jobs/{job_id}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        data = response.json()

        if data["status"] in ("completed", "failed", "cancelled"):
            return data

        time.sleep(poll_interval)

    raise TimeoutError(f"Job {job_id} did not complete within {timeout}s")
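
Usage looks like this, reusing data and API_KEY from the Python submission example and assuming the status endpoint returns the same fields as the webhook payload shown above:

result = wait_for_job(data["job_id"], API_KEY)
if result["status"] == "completed":
    print(result["outputs"]["images"])
else:
    print(result.get("error"))
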
Use Exponential Backoff

For production polling, use exponential backoff to be kind to the API:

poll_interval = 2
while True:
    # ... check status ...
    time.sleep(poll_interval)
    poll_interval = min(poll_interval * 1.5, 30)  # Cap at 30s
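
Combining the two, a variant of wait_for_job with backoff might look like this sketch (same assumptions as above):

import time
import requests

def wait_for_job_with_backoff(job_id, api_key, timeout=300):
    """Poll the job status with exponential backoff, capped at 30s between requests."""
    start = time.time()
    poll_interval = 2
    while time.time() - start < timeout:
        response = requests.get(
            f"https://api.cmfy.cloud/v1/jobs/{job_id}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        data = response.json()
        if data["status"] in ("completed", "failed", "cancelled"):
            return data
        time.sleep(poll_interval)
        poll_interval = min(poll_interval * 1.5, 30)  # Cap at 30s
    raise TimeoutError(f"Job {job_id} did not complete within {timeout}s")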

Troubleshooting

"Invalid node connection"

Check that node references use the correct format: ["node_id", output_index]

// Wrong - node_id should be a string
"clip": [1, 1]

// Correct
"clip": ["1", 1]

"Model download failed"

Verify your model URL:

  • Must be HTTPS
  • Must be from an allowed domain (Hugging Face, Civitai, S3, etc.)
  • Must be publicly accessible or include authentication tokens

"Webhook delivery failed"

Ensure your webhook endpoint:

  • Is publicly accessible (not localhost)
  • Uses HTTPS
  • Returns 200-299 status codes
  • Responds within 30 seconds

What's Next?
