CLIP Text Encode (Prompt)

Category: conditioning
Class type: CLIPTextEncode

Encodes a text prompt using a CLIP model into an embedding that can be used to guide the diffusion model towards generating specific images.

Example

JSON Example
{
  "class_type": "CLIPTextEncode",
  "inputs": {
    "text": "example text",
    "clip": [
      "node_id",
      0
    ]
  }
}

This example shows required inputs only. Connection values like ["node_id", 0] should reference actual node IDs from your workflow.
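For context, here is a minimal sketch of how this node might sit inside a workflow fragment. The node IDs ("4", "5") and the checkpoint filename are hypothetical; the clip input is wired to a CheckpointLoaderSimple node, whose CLIP output is at index 1.

{
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "example.safetensors"
    }
  },
  "5": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a photo of a cat",
      "clip": ["4", 1]
    }
  }
}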

Inputs

Name | Type   | Status   | Constraints | Default
text | STRING | required | -           | -
clip | CLIP   | required | -           | -

Outputs

Index | Name         | Type         | Is List | Connection Reference
0     | CONDITIONING | CONDITIONING | No      | ["{node_id}", 0]
How to connect to these outputs

To connect another node's input to an output from this node, use the connection reference format:

["node_id", output_index]

Where node_id is the ID of this CLIPTextEncode node in your workflow, and output_index is the index from the table above.

Example

If this node has ID "5" in your workflow:

  • CONDITIONING (CONDITIONING): ["5", 0]
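Downstream nodes consume this reference in their own inputs. As a sketch (the node IDs "4", "6", and "7" are assumed to be a checkpoint loader, a negative-prompt CLIPTextEncode, and a latent source, and the sampler settings are illustrative), a KSampler node could take this node's output as its positive conditioning:

{
  "8": {
    "class_type": "KSampler",
    "inputs": {
      "model": ["4", 0],
      "positive": ["5", 0],
      "negative": ["6", 0],
      "latent_image": ["7", 0],
      "seed": 0,
      "steps": 20,
      "cfg": 8.0,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1.0
    }
  }
}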