Chat
Run text through the AI chat model.
Fields
Model-
The applicable model for this operation type.
Token input-
The data input for AI. Please provide the token name only, without brackets.
Token result-
The response from the AI will be stored in this token so it can be used in future steps. Please provide the token name only, without brackets.
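For illustration only, assuming later steps reference tokens with the usual square-bracket syntax and using ai_response as a hypothetical name:

Token result field: ai_response
In a later step: Summarize this reply: [ai_response]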
Specific configuration for the model-
Some models require specific configuration settings, like temperature, voice or response_format.
The "profile" helps set the behavior of the LLM response. You can change/influence how it response by adjusting the system prompt. Eg.system_name: system
system_prompt: you are a helpful assistant Prompt-
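Other settings mentioned above, such as temperature or response_format, follow the same key: value form. As a sketch only (the values below are illustrative; the exact keys and accepted values depend on the selected provider and model):

temperature: 0.7
response_format: json_object
system_name: system
system_prompt: you are a helpful assistant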
Prompt-
Enter your text here. When submitted, the AI will generate a response from its chat endpoint. Depending on the complexity of your text, AI traffic, and other factors, a response can take up to 10-15 seconds to complete. Please allow the operation to finish. Be careful not to exceed the requests-per-minute quota (20 per minute by default), or you may be temporarily blocked.
Schema-
Provide an optional schema for the output.
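As a sketch only (the exact schema format expected here is not defined above and may differ; the property names summary and sentiment are hypothetical), a simple schema constraining the output to an object with a summary string and a sentiment value could look like:

type: object
properties:
  summary:
    type: string
  sentiment:
    type: string
    enum: [positive, neutral, negative]
required: [summary, sentiment]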