OpenAI Text Operations#
Use this operation to message a model in OpenAI or classify text for violations. For more information on the OpenAI node itself, refer to OpenAI.
Previous node versions#
n8n version 1.117.0 introduced the OpenAI node V2 with support for the OpenAI Responses API. It renames the "Message a Model" operation to "Generate a Chat Completion" to clarify its association with the Chat Completions API, and introduces a separate "Generate a Model Response" operation that uses the Responses API.
Generate a Chat Completion#
Use this operation to send a message or prompt to an OpenAI model using the Chat Completions API and receive a response.
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Generate a Chat Completion.
- Model: Select the model you want to use. If you're not sure which model to use, try `gpt-4o` if you need high intelligence or `gpt-4o-mini` if you need the fastest speed and lowest cost. Refer to Models overview | OpenAI Platform for more information.
- Messages: Enter a Text prompt and assign a Role that the model will use to generate responses. Refer to Prompt engineering | OpenAI for more information on how to write a better prompt by using these roles. Choose from one of these roles:
- User: Sends a message as a user and gets a response from the model.
- Assistant: Tells the model to adopt a specific tone or personality.
- System: By default, there is no system message. You can define instructions in the user message, but the instructions set in the system message are more effective. You can set more than one system message per conversation. Use this to set the model's behavior or context for the next user message.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
- Output Content as JSON: Turn on to attempt to return the response in JSON format. Compatible with `GPT-4 Turbo` and all `GPT-3.5 Turbo` models newer than `gpt-3.5-turbo-1106`.
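The three roles above can be sketched as a Chat Completions `messages` array. This is a minimal illustration, not the node's internal implementation; the model name and prompt texts are placeholder values.

```python
# Sketch of a Chat Completions "messages" array using the roles above.
# All content strings and the model name are illustrative placeholders.
messages = [
    # System: sets the model's behavior for the following user message.
    {"role": "system", "content": "You are a concise technical assistant."},
    # Assistant: a prior turn that establishes tone or personality.
    {"role": "assistant", "content": "Understood. I will keep answers short."},
    # User: the prompt the model actually responds to.
    {"role": "user", "content": "Summarize what an n8n workflow is."},
]

# The messages array travels in the request body alongside the model name.
request_body = {"model": "gpt-4o-mini", "messages": messages}
```

Chat Completions allows more than one system message per conversation, so you could prepend several system entries to layer instructions.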
Options#
- Frequency Penalty: Apply a penalty to reduce the model's tendency to repeat similar lines. The range is between `0.0` and `2.0`.
- Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters for standard English text. Use this to limit the length of the output.
- Number of Completions: Defaults to 1. Set the number of completions you want to generate for each prompt. Use carefully since setting a high number will quickly consume your tokens.
- Presence Penalty: Apply a penalty to influence the model to discuss new topics. The range is between `0.0` and `2.0`.
- Output Randomness (Temperature): Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they're too chaotic or off-track, decrease it. Defaults to `1.0`.
- Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to `1.0`.
Refer to Chat Completions | OpenAI documentation for more information.
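As a rough guide to how the options above correspond to Chat Completions request fields, the following sketch builds a request body by hand. The field names follow the OpenAI API; the specific values are illustrative, and the mapping to n8n's internal request is an assumption.

```python
# Hedged sketch: node options mapped to Chat Completions request fields.
# Values are illustrative; this is not the node's own request-building code.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "frequency_penalty": 0.5,   # Frequency Penalty, range 0.0-2.0
    "max_tokens": 256,          # Maximum Number of Tokens
    "n": 1,                     # Number of Completions (high values consume tokens fast)
    "presence_penalty": 0.3,    # Presence Penalty, range 0.0-2.0
    "temperature": 0.7,         # Output Randomness (Temperature)
    "top_p": 1.0,               # Output Randomness (Top P); alter this OR temperature
    "response_format": {"type": "json_object"},  # Output Content as JSON
}
```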
Generate a Model Response#
Use this operation to send a message or prompt to an OpenAI model using the Responses API and receive a response.
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Generate a Model Response.
- Model: Select the model you want to use. Refer to Models overview | OpenAI Platform for an overview.
- Messages: Choose one of these Message Types:
- Text: Enter a Text prompt and assign a Role that the model will use to generate responses. Refer to Prompt engineering | OpenAI for more information on how to write a better prompt by using these roles.
- Image: Provide an Image either through an Image URL, a File ID (using the OpenAI Files API) or by passing binary data from an earlier node in your workflow.
- File: Provide a File in a supported format (currently: PDF only), either through a File URL, a File ID (using the OpenAI Files API) or by passing binary data from an earlier node in your workflow.
- For any message type, you can choose from one of these roles:
- User: Sends a message as a user and gets a response from the model.
- Assistant: Tells the model to adopt a specific tone or personality.
- System: By default, the system message is `"You are a helpful assistant"`. You can define instructions in the user message, but the instructions set in the system message are more effective. You can only set one system message per conversation. Use this to set the model's behavior or context for the next user message.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
Built-in Tools#
The OpenAI Responses API provides a range of built-in tools to enrich the model's response:
- Web Search: Allows models to search the web for the latest information before generating a response.
- MCP Servers: Allows models to connect to remote MCP servers. Find out more about using remote MCP servers as tools here.
- File Search: Allows models to search your knowledge base from previously uploaded files for relevant information before generating a response. Refer to the OpenAI documentation for more information.
- Code Interpreter: Allows models to write and run Python code in a sandboxed environment.
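In the Responses API, the built-in tools listed above are enabled through a `tools` array on the request. The sketch below shows one plausible shape; the exact tool `type` strings can vary by API version, and the vector store ID and MCP server URL are placeholders, not real resources.

```python
# Hedged sketch of a Responses API "tools" array for the built-in tools
# above. Type strings may differ by API version; IDs/URLs are placeholders.
tools = [
    {"type": "web_search"},                        # Web Search
    {"type": "file_search",                        # File Search over uploads
     "vector_store_ids": ["vs_example123"]},       # placeholder vector store ID
    {"type": "code_interpreter",                   # sandboxed Python execution
     "container": {"type": "auto"}},
    {"type": "mcp",                                # remote MCP server as a tool
     "server_label": "example",
     "server_url": "https://example.com/mcp"},     # placeholder URL
]
```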
Options#
- Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters for standard English text. Use this to limit the length of the output.
- Output Randomness (Temperature): Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they're too chaotic or off-track, decrease it. Defaults to `1.0`.
- Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to `1.0`.
- Conversation ID: The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after the response completes.
- Previous Response ID: The ID of the previous response to continue from. Can't be used in conjunction with Conversation ID.
- Reasoning: The level of reasoning effort the model should spend to generate the response. Includes the ability to return a Summary of the reasoning performed by the model (for example, for debugging purposes).
- Store: Whether to store the generated model response for later retrieval via API. Defaults to `true`.
- Output Format: Whether to return the response as Text, in a specified JSON Schema, or as a JSON Object.
- Background: Whether to run the model in background mode. This allows executing long-running tasks more reliably.
Refer to Responses | OpenAI documentation for more information.
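The options above map onto Responses API request fields roughly as follows. Field names follow the OpenAI API; the values are illustrative, and treating this as the node's exact request shape is an assumption.

```python
# Hedged sketch: node options mapped to Responses API request fields.
# Values are illustrative placeholders, not recommendations.
request_body = {
    "model": "gpt-4o-mini",
    "input": [{"role": "user", "content": "Hello"}],
    "max_output_tokens": 256,            # Maximum Number of Tokens
    "temperature": 0.7,                  # Output Randomness (Temperature)
    "top_p": 1.0,                        # Output Randomness (Top P)
    "previous_response_id": None,        # Previous Response ID; can't combine with a conversation
    "reasoning": {"effort": "medium"},   # Reasoning effort level
    "store": True,                       # Store the response for later API retrieval
    "background": False,                 # Background mode for long-running tasks
}
```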
Classify Text for Violations#
Use this operation to identify and flag content that might be harmful. The OpenAI model will analyze the text and return a response containing:
- `flagged`: A boolean field indicating whether the content is potentially harmful.
- `categories`: A list of category-specific violation flags.
- `category_scores`: Scores for each category.
Enter these parameters:
- Credential to connect with: Create or select an existing OpenAI credential.
- Resource: Select Text.
- Operation: Select Classify Text for Violations.
- Text Input: Enter the text to check against the moderation policy.
- Simplify Output: Turn on to return a simplified version of the response instead of the raw data.
Options#
- Use Stable Model: Turn on to use the stable version of the model instead of the latest version; accuracy may be slightly lower.
Refer to Moderations | OpenAI documentation for more information.
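The response fields described above can be sketched as a plain structure. This is a trimmed illustration of the shape, not a real API result: the category names shown are a subset and the scores are made up.

```python
# Hedged sketch of a moderation result matching the fields above.
# Scores and category names are illustrative, not real API output.
moderation_result = {
    "flagged": False,  # True when the content is potentially harmful
    "categories": {"harassment": False, "violence": False},
    "category_scores": {"harassment": 0.001, "violence": 0.0004},
}

def is_safe(result):
    """Return True when the moderation check did not flag the text."""
    return not result["flagged"]
```

In a workflow, a check like `is_safe` could gate whether the flagged text continues to downstream nodes.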
Common issues#
For common errors or issues and suggested resolution steps, refer to Common issues.