diff --git a/_snippets/integrations/openai-api-issues.md b/_snippets/integrations/openai-api-issues.md
new file mode 100644
index 00000000000..093a60cd13b
--- /dev/null
+++ b/_snippets/integrations/openai-api-issues.md
@@ -0,0 +1,146 @@
+## The service is receiving too many requests from you
+
+This error displays when you've exceeded [OpenAI's rate limits](https://platform.openai.com/docs/guides/rate-limits){:target=_blank .external-link}.
+
+There are two ways to work around this issue:
+
+1. Split your data into smaller chunks using the [Loop Over Items](/integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/) node and add a [Wait](/integrations/builtin/core-nodes/n8n-nodes-base.wait/) node at the end of each loop, set to a wait time that keeps you under your rate limit. Copy the code below and paste it into a workflow to use as a template.
+    ```
+    {
+      "nodes": [
+        {
+          "parameters": {},
+          "id": "35d05920-ad75-402a-be3c-3277bff7cc67",
+          "name": "When clicking ‘Test workflow’",
+          "type": "n8n-nodes-base.manualTrigger",
+          "typeVersion": 1,
+          "position": [
+            880,
+            400
+          ]
+        },
+        {
+          "parameters": {
+            "batchSize": 500,
+            "options": {}
+          },
+          "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6",
+          "name": "Loop Over Items",
+          "type": "n8n-nodes-base.splitInBatches",
+          "typeVersion": 3,
+          "position": [
+            1120,
+            420
+          ]
+        },
+        {
+          "parameters": {
+            "resource": "chat",
+            "options": {},
+            "requestOptions": {}
+          },
+          "id": "a519f271-82dc-4f60-8cfd-533dec580acc",
+          "name": "OpenAI",
+          "type": "n8n-nodes-base.openAi",
+          "typeVersion": 1,
+          "position": [
+            1380,
+            440
+          ]
+        },
+        {
+          "parameters": {
+            "unit": "minutes"
+          },
+          "id": "562d9da3-2142-49bc-9b8f-71b0af42b449",
+          "name": "Wait",
+          "type": "n8n-nodes-base.wait",
+          "typeVersion": 1,
+          "position": [
+            1620,
+            440
+          ],
+          "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13"
+        }
+      ],
+      "connections": {
+        "When clicking ‘Test workflow’": {
+          "main": [
+            [
+              {
+                "node": "Loop Over Items",
+                "type": "main",
+                "index": 0
+              }
+            ]
+          ]
+        },
+        "Loop Over Items": {
+          "main": [
+            null,
+            [
+              {
+                "node": "OpenAI",
+                "type": "main",
+                "index": 0
+              }
+            ]
+          ]
+        },
+        "OpenAI": {
+          "main": [
+            [
+              {
+                "node": "Wait",
+                "type": "main",
+                "index": 0
+              }
+            ]
+          ]
+        },
+        "Wait": {
+          "main": [
+            [
+              {
+                "node": "Loop Over Items",
+                "type": "main",
+                "index": 0
+              }
+            ]
+          ]
+        }
+      },
+      "pinData": {}
+    }
+    ```
+2. Use the [HTTP Request](/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node with its built-in batching options against the [OpenAI API](https://platform.openai.com/docs/quickstart){:target=_blank .external-link} instead of the OpenAI node.
+
+## Insufficient quota
+
+This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit.
+
+To troubleshoot this error, on your [OpenAI settings](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} page:
+
+* Select the correct organization for your API key in the first selector in the upper-left corner.
+* Select the correct project for your API key in the second selector in the upper-left corner.
+* Check the organization-level [billing overview](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} page to ensure that the organization has enough credit. Double-check that you've selected the correct organization for this page.
+* Check the organization-level [usage limits](https://platform.openai.com/settings/organization/limits){:target=_blank .external-link} page. Double-check that you've selected the correct organization for this page and scroll to the **Usage limits** section to verify that you haven't exceeded your organization's usage limits.
+* Check your OpenAI project's usage limits. Double-check that you've selected the correct project in the second selector in the upper-left corner. Select **Project** > **Limits** to view or change the project limits.
+* Check the [OpenAI status page](https://status.openai.com/){:target=_blank .external-link} to verify that the API is operating as expected.
+
+/// note | Balance waiting period
+After topping up your balance, there may be a delay before your OpenAI account reflects the new balance.
+///
+
+In n8n:
+
+* Check that the [OpenAI credentials](/integrations/builtin/credentials/openai/) use a valid [OpenAI API key](https://platform.openai.com/api-keys){:target=_blank .external-link} for the account you've added money to.
+* Ensure that you connect the [OpenAI node](/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/) to the correct [OpenAI credentials](/integrations/builtin/credentials/openai/).
+
+If you find yourself frequently running out of account credits, consider turning on auto recharge in your [OpenAI billing settings](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} to automatically reload your account with credits when your balance reaches $0.
+
+## Bad request - please check your parameters
+
+This error displays when the request results in an error but n8n wasn't able to interpret the error message from OpenAI.
+
+To begin troubleshooting, try running the same operation using the [HTTP Request](/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node, which should provide a more detailed error message.
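As background for reviewers: the Loop Over Items + Wait pattern in the snippet above is client-side throttling, and the same idea is often paired with retry-and-backoff when a 429 does slip through. A minimal sketch, assuming a hypothetical `RateLimitError` and a stand-in `flaky_call` in place of a real OpenAI client call (neither is part of the snippet):

```python
import time

class RateLimitError(Exception):
    """Stand-in for the 429 'too many requests' error a real client would raise."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), doubling the wait after each rate-limit error before retrying."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Fake API call that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: too many requests")
    return "ok"

result = call_with_backoff(flaky_call, base_delay=0.01)
```

In workflow terms, the Wait node plays the role of the `time.sleep` here, with the batch size chosen so each loop iteration stays under the limit.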
diff --git a/docs/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/common-issues.md b/docs/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/common-issues.md
index 128fa010abb..d05e3689c5d 100644
--- a/docs/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/common-issues.md
+++ b/docs/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/common-issues.md
@@ -10,151 +10,5 @@ priority: critical
 
 Here are some common errors and issues with the [OpenAI node](/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/) and steps to resolve or troubleshoot them.
 
-## The service is receiving too many requests from you
-
-This error displays when you've exceeded [OpenAI's rate limits](https://platform.openai.com/docs/guides/rate-limits){:target=_blank .external-link}.
-
-There are two ways to work around this issue:
-
-1. Split your data up into smaller chunks using the [Loop Over Items](/integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/) node and add a [Wait](/integrations/builtin/core-nodes/n8n-nodes-base.wait/) node at the end for a time amount that will help. Copy the code below and paste it into a workflow to use as a template.
-    ```
-    {
-      "nodes": [
-        {
-          "parameters": {},
-          "id": "35d05920-ad75-402a-be3c-3277bff7cc67",
-          "name": "When clicking ‘Test workflow’",
-          "type": "n8n-nodes-base.manualTrigger",
-          "typeVersion": 1,
-          "position": [
-            880,
-            400
-          ]
-        },
-        {
-          "parameters": {
-            "batchSize": 500,
-            "options": {}
-          },
-          "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6",
-          "name": "Loop Over Items",
-          "type": "n8n-nodes-base.splitInBatches",
-          "typeVersion": 3,
-          "position": [
-            1120,
-            420
-          ]
-        },
-        {
-          "parameters": {
-            "resource": "chat",
-            "options": {},
-            "requestOptions": {}
-          },
-          "id": "a519f271-82dc-4f60-8cfd-533dec580acc",
-          "name": "OpenAI",
-          "type": "n8n-nodes-base.openAi",
-          "typeVersion": 1,
-          "position": [
-            1380,
-            440
-          ]
-        },
-        {
-          "parameters": {
-            "unit": "minutes"
-          },
-          "id": "562d9da3-2142-49bc-9b8f-71b0af42b449",
-          "name": "Wait",
-          "type": "n8n-nodes-base.wait",
-          "typeVersion": 1,
-          "position": [
-            1620,
-            440
-          ],
-          "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13"
-        }
-      ],
-      "connections": {
-        "When clicking ‘Test workflow’": {
-          "main": [
-            [
-              {
-                "node": "Loop Over Items",
-                "type": "main",
-                "index": 0
-              }
-            ]
-          ]
-        },
-        "Loop Over Items": {
-          "main": [
-            null,
-            [
-              {
-                "node": "OpenAI",
-                "type": "main",
-                "index": 0
-              }
-            ]
-          ]
-        },
-        "OpenAI": {
-          "main": [
-            [
-              {
-                "node": "Wait",
-                "type": "main",
-                "index": 0
-              }
-            ]
-          ]
-        },
-        "Wait": {
-          "main": [
-            [
-              {
-                "node": "Loop Over Items",
-                "type": "main",
-                "index": 0
-              }
-            ]
-          ]
-        }
-      },
-      "pinData": {}
-    }
-    ```
-2. Use the [HTTP Request](/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node with the built-in batch-limit option against the [OpenAI API](https://platform.openai.com/docs/quickstart){:target=_blank .external-link} instead of using the OpenAI node.
-
-## Insufficient quota
-
-This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit.
-
-To troubleshoot this error, on your [OpenAI settings](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} page:
-
-* Select the correct organization for your API key in the first selector in the upper-left corner.
-* Select the correct project for your API key in the second selector in the upper-left corner.
-* Check the organization-level [billing overview](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} page to ensure that the organization has enough credit. Double-check that you select the correct organization for this page.
-* Check the organization-level [usage limits](https://platform.openai.com/settings/organization/limits){:target=_blank .external-link} page. Double-check that you select the correct organization for this page and scroll to the **Usage limits** section to verify that you haven't exceeded your organization's usage limits.
-* Check your OpenAI project's usage limits. Double-check that you select the correct project in the second selector in the upper-left corner. Select **Project** > **Limits** to view or change the project limits.
-* Check that the [OpenAI API](https://status.openai.com/){:target=_blank .external-link} is operating as expected.
-
-/// note | Balance waiting period
-After topping up your balance, there may be a delay before your OpenAI account reflects the new balance.
-///
-
-In n8n:
-
-* check that the [OpenAI credentials](/integrations/builtin/credentials/openai/) use a valid [OpenAI API key](https://platform.openai.com/api-keys){:target=_blank .external-link} for the account you've added money to
-* ensure that you connect the [OpenAI node](/integrations/builtin/app-nodes/n8n-nodes-langchain.openai/) to the correct [OpenAI credentials](/integrations/builtin/credentials/openai/)
-
-If you find yourself frequently running out of account credits, consider turning on auto recharge in your [OpenAI billing settings](https://platform.openai.com/settings/organization/billing/overview){:target=_blank .external-link} to automatically reload your account with credits when your balance reaches $0.
-
-## Bad request - please check your parameters
-
-This error displays when the request results in an error but the OpenAI node wasn't able to interpret the error message from OpenAI.
-
-To begin troubleshooting, try running the same operation using the [HTTP Request](/integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node, which should provide a more detailed error message.
-
+--8<-- "_snippets/integrations/openai-api-issues.md"
 --8<-- "_snippets/integrations/referenced-node-unexecuted.md"
diff --git a/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/common-issues.md b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/common-issues.md
new file mode 100644
index 00000000000..05f36db050b
--- /dev/null
+++ b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/common-issues.md
@@ -0,0 +1,21 @@
+---
+#https://www.notion.so/n8n/Frontmatter-432c2b8dff1f43d4b1c8d20075510fe4
+title: OpenAI Chat Model node common issues
+description: Documentation for common issues and questions in the OpenAI Chat Model node in n8n, a workflow automation platform. Includes details of the issue and suggested solutions.
+contentType: integration
+priority: high
+---
+
+# OpenAI Chat Model node common issues
+
+Here are some common errors and issues with the [OpenAI Chat Model node](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/) and steps to resolve or troubleshoot them.
+
+## Processing parameters
+
+The OpenAI Chat Model node is a sub-node. Sub-nodes behave differently than other nodes when processing multiple items using expressions.
+
+Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression `{{ $json.name }}` resolves to each name in turn.
+
+In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression `{{ $json.name }}` always resolves to the first name.
+
+--8<-- "_snippets/integrations/openai-api-issues.md"
diff --git a/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai.md b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/index.md
similarity index 92%
rename from docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai.md
rename to docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/index.md
index c3790f927c6..e0c516aa854 100644
--- a/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai.md
+++ b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/index.md
@@ -76,4 +76,9 @@ Use this option to set the probability the completion should use. Use a lower va
 Refer to [LangChains's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/){:target=_blank .external-link} for more information about the service.
 
 --8<-- "_snippets/integrations/builtin/cluster-nodes/langchain-overview-link.md"
+
+## Common issues
+
+For common questions or issues and suggested solutions, refer to [Common issues](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/common-issues/).
+
 --8<-- "_glossary/ai-glossary.md"
diff --git a/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/common-issues.md b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/common-issues.md
index 006499d883b..b11133ac9c2 100644
--- a/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/common-issues.md
+++ b/docs/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/common-issues.md
@@ -12,7 +12,7 @@ Here are some common errors and issues with the [Window Buffer Memory node](/int
 
 ## Single memory instance
 
-[[% include "_includes/integrations/cluster-nodes/memory-shared.html" %]]
+If you add more than one Window Buffer Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.
 
 ## Managing the Session ID
 
diff --git a/mkdocs.yml b/mkdocs.yml
index 492c48a4937..22a4afd182f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -867,7 +867,9 @@ nav:
           - Groq Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgroq.md
           - Mistral Cloud Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatmistralcloud.md
           - Ollama Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatollama.md
-          - OpenAI Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai.md
+          - OpenAI Chat Model:
+              - OpenAI Chat Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/index.md
+              - Common issues: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/common-issues.md
           - Cohere Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmcohere.md
           - Ollama Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmollama.md
           - Hugging Face Inference Model: integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmopenhuggingfaceinference.md
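A note for reviewers of the new OpenAI Chat Model common-issues page above: the root-node vs sub-node expression behavior it describes can be illustrated in miniature. This is a hedged sketch in plain Python of the documented behavior, not n8n's actual expression engine:

```python
# Five input items, each with a "name" field, like the docs example.
items = [{"name": n} for n in ["Ana", "Bo", "Cy", "Di", "Ed"]]

def resolve_per_item(items, key):
    """Root-node style: an expression like {{ $json.name }} resolves once per item."""
    return [item[key] for item in items]

def resolve_first_item(items, key):
    """Sub-node style: the expression always resolves against the first item."""
    return [items[0][key] for _ in items]

per_item = resolve_per_item(items, "name")      # each name in turn
first_only = resolve_first_item(items, "name")  # the first name, every time
```

The practical consequence is the one the page states: in a sub-node, per-item expressions silently collapse to the first item's value.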