Hey @1057504487, I'm here to help with any bugs, questions, or contributions you have. Let's tackle this together!
To address the issue of the max_token limit affecting data output in Scrapegraph-ai, you can try the following approaches:
1. **Adjust the `max_token` parameter**: If the maximum supported size is 16384 and you are currently using 8192, consider raising it closer to the maximum if your use case allows. This may let the model return more data per request.
2. **Optimize data requests**: Break your data requests into smaller chunks that fit within the `max_token` limit, either by segmenting the source data or by making multiple requests and merging the results.
3. **Refine the output schema**: Use a more concise output schema to reduce token usage per record, so more records fit within the limit.
4. **Use efficient models**: Choose a model optimized for your task; some models use their context and output budgets more efficiently.
5. **Configure other parameters**: Adjust settings such as `TEMPERATURE` and `MAX_SEARCH_RESULTS` to optimize the retrieval process [1][2][3][4][5].
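As a starting point for items 1 and 5, here is a minimal configuration sketch. The exact key names (`model_tokens`, `temperature`) and the model identifier are assumptions and may differ across Scrapegraph-ai versions, so check the docs for your installed release:

```python
# Hypothetical Scrapegraph-ai graph configuration sketch.
# Key names and the model identifier below are assumptions, not a
# verified API -- adapt them to your installed version.
graph_config = {
    "llm": {
        "model": "openai/gpt-4o-mini",  # assumed model identifier
        "temperature": 0,               # deterministic extraction
        "model_tokens": 16384,          # raised from 8192 toward the stated max
    },
    "verbose": False,
}
```

This config would then typically be passed to a graph class such as `SmartScraperGraph` along with your prompt and source URL.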
These strategies can help you manage the max_token limit and improve data output.
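For the chunking approach in item 2, one way to split work is to batch your inputs by an estimated token budget before issuing separate requests. This is a generic sketch, not a Scrapegraph-ai API; the 4-characters-per-token estimate is a rough heuristic, and a real tokenizer would be more accurate:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption, not exact).
    return max(1, len(text) // 4)

def batch_by_token_budget(items: list[str], budget: int) -> list[list[str]]:
    """Group items so each batch's estimated token count stays under budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for item in items:
        cost = estimate_tokens(item)
        if current and used + cost > budget:
            batches.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        batches.append(current)
    return batches

# Each batch could then be scraped in its own request and the
# structured results concatenated afterwards.
```

Running each batch as a separate request keeps every response under the `max_token` ceiling at the cost of more API calls.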
With `max_token` set to 8192, only about 5 records are returned.
Even at the maximum `max_token` size of 16384, not all of the data can be retrieved.
How should we deal with this?