Regarding OpenAI token #90
Hello,
Playing around with the project, after running nmap with OpenAI using profile 5 or 12, I get the error:
"This model's maximum context length is 16385 tokens, however you requested 17216 tokens (14716 in your prompt, 2500 for the completion). Please reduce your prompt or completion length."
I'm getting this message after changing the model from gpt-3.5-turbo-0613 to gpt-3.5-turbo-0125.
What would be the best approach here?
Thank you for your time.
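For reference, the figures in that error can be checked directly: the prompt's token count plus the max_tokens completion budget must stay within the model's context window. Below is a minimal sketch of that check using tiktoken, OpenAI's tokenizer library; fits_context is a hypothetical helper name, not part of the project:

```python
# Illustrative check of the numbers in the error: prompt tokens plus the
# max_tokens completion budget must not exceed the model's context window.
import tiktoken

CONTEXT_LIMIT = 16385   # gpt-3.5-turbo-0125 window, per the error message
MAX_COMPLETION = 2500   # the max_tokens value used in the failing request

def fits_context(prompt: str, model: str = "gpt-3.5-turbo-0125") -> bool:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(prompt)) + MAX_COMPLETION <= CONTEXT_LIMIT

# In the reported run: 14716 (prompt) + 2500 (completion) = 17216 > 16385.
```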
Comments
So I am looking into this issue. The thing is, OpenAI has a token limit, and that is common to all models. The only way to mitigate this is to make the request streamable, and I am working on that correction. This is only an issue with nmap and its related output; I will keep working on it, and it should be corrected in the next update. Thanks for letting me know 👍
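As a rough illustration of that mitigation, one could split the nmap output into token-bounded chunks and send each chunk as a separate request. This is a minimal sketch, assuming the raw scan output is already held in a string; chunk_nmap_output and TOKEN_BUDGET are hypothetical names, not part of this project:

```python
# Illustrative sketch (not the project's code): split long nmap output into
# token-bounded chunks so each request stays under the model's context limit.
import tiktoken

TOKEN_BUDGET = 12000  # assumed headroom left for the instruction prompt + completion

def chunk_nmap_output(raw_output: str, model: str = "gpt-3.5-turbo-0125"):
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(raw_output)
    # Decode fixed-size token slices back into text chunks.
    for start in range(0, len(tokens), TOKEN_BUDGET):
        yield enc.decode(tokens[start:start + TOKEN_BUDGET])
```

Note that a streaming response (stream=True) only changes how the completion is delivered; it does not raise the input limit, so chunking or truncating the prompt is still needed on the input side.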
Thank you for the feedback.
I guess yes. It boils down to how much the AI can process, and what the optimal parameters for accuracy are. I am pretty sure that's what is going on here: maybe OpenAI limits the input tokens to maintain consistently accurate output 🤷‍♂️
In my error I can see that OpenAI actually reports the limit: the maximum context length is 16385 tokens, but I requested 17216 tokens (14716 in the prompt, plus 2500, the max_tokens value, for the completion).
So the thing is, the output and input limits are different. GPT-4 has more input and output capacity, but that can't be leveraged properly with direct prompts like the ones I am using, so I need to modify that.
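One way to handle that mismatch, sketched here under the assumption that tiktoken is used for counting, is to cap max_tokens to whatever room the prompt leaves in the chosen model's window. safe_max_tokens and the CONTEXT_LIMITS table are illustrative names; the window sizes come from OpenAI's published model specs:

```python
# Illustrative sketch: shrink max_tokens so prompt + completion fit the window.
import tiktoken

CONTEXT_LIMITS = {  # context window sizes as published by OpenAI
    "gpt-3.5-turbo-0125": 16385,
    "gpt-4": 8192,
}

def safe_max_tokens(prompt: str, model: str, desired: int = 2500) -> int:
    enc = tiktoken.encoding_for_model(model)
    room = CONTEXT_LIMITS[model] - len(enc.encode(prompt))
    # Never request more completion tokens than the window leaves free.
    return max(0, min(desired, room))
```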
Hello, as tested on multiple models this is still the same, so you are correct.
I was kinda busy with uni exams and stuff, so I was not able to work on this 😅. I will work on it when I get the time.