Fixed Typo in gpt-5_prompting_guide.ipynb #2060

Open · wants to merge 6 commits into main
examples/gpt-5/gpt-5_prompting_guide.ipynb (8 changes: 4 additions & 4 deletions)
@@ -27,15 +27,15 @@
"#### Prompting for less eagerness\n",
"GPT-5 is, by default, thorough and comprehensive when trying to gather context in an agentic environment to ensure it will produce a correct answer. To reduce the scope of GPT-5’s agentic behavior—including limiting tangential tool-calling action and minimizing latency to reach a final answer—try the following: \n",
"- Switch to a lower `reasoning_effort`. This reduces exploration depth but improves efficiency and latency. Many workflows can be accomplished with consistent results at medium or even low `reasoning_effort`.\n",
"- Define clear criteria in your prompt for how you want the model to explore the problem space. This reduces the model’s need to explore and reason about too many ideas:\n",
"- Define clear criteria in your prompt for how you want the model to explore the problem space. This reduces the model’s need to explore and reason about too many ideas.\n",
"\n",
"```\n",
"<context_gathering>\n",
"Goal: Get enough context fast. Parallelize discovery and stop as soon as you can act.\n",
"\n",
"Method:\n",
"- Start broad, then fan out to focused subqueries.\n",
"- In parallel, launch varied queries; read top hits per query. Deduplicate paths and cache; don’t repeat queries.\n",
"- In parallel, launch varied queries; read top hits per query. Deduplicate paths and cache, and don’t repeat queries.\n",
"- Avoid over searching for context. If needed, run targeted searches in one parallel batch.\n",
"\n",
"Early stop criteria:\n",
@@ -292,7 +292,7 @@
"```\n",
"\n",
"By resolving the instruction hierarchy conflicts, GPT-5 elicits much more efficient and performant reasoning. We fixed the contradictions by:\n",
"- Changing auto-assignment to occur after contacting a patient, auto-assign the earliest same-day slot after informing the patient of your actions. to be consistent with only scheduling with consent.\n",
"- Changing auto-assignment to occur after contacting a patient, auto-assign the earliest same-day slot after informing the patient of your actions. To be consistent with only scheduling with consent.\n",
"- Adding Do not do lookup in the emergency case, proceed immediately to providing 911 guidance. to let the model know it is ok to not look up in case of emergency.\n",
"\n",
"We understand that the process of building prompts is an iterative one, and many prompts are living documents constantly being updated by different stakeholders - but this is all the more reason to thoroughly review them for poorly-worded instructions. Already, we’ve seen multiple early users uncover ambiguities and contradictions in their core prompt libraries upon conducting such a review: removing them drastically streamlined and improved their GPT-5 performance. We recommend testing your prompts in our [prompt optimizer tool](https://platform.openai.com/chat/edit?optimize=true) to help identify these types of issues.\n",
@@ -305,7 +305,7 @@
"1. Prompting the model to give a brief explanation summarizing its thought process at the start of the final answer, for example via a bullet point list, improves performance on tasks requiring higher intelligence.\n",
"2. Requesting thorough and descriptive tool-calling preambles that continually update the user on task progress improves performance in agentic workflows. \n",
"3. Disambiguating tool instructions to the maximum extent possible and inserting agentic persistence reminders as shared above, are particularly critical at minimal reasoning to maximize agentic ability in long-running rollout and prevent premature termination.\n",
"4. Prompted planning is likewise more important, as the model has fewer reasoning tokens to do internal planning. Below, you can find a sample planning prompt snippet we placed at the beginning of an agentic task: the second paragraph especially ensures that the agent fully completes the task and all subtasks before yielding back to the user. \n",
"4. Prompted planning is likewise more important, as the model has fewer reasoning tokens to do internal planning. Below, you can find a sample planning prompt snippet we placed at the beginning of an agentic task. The second paragraph especially ensures that the agent fully completes the task and all subtasks before yielding back to the user. \n",
"\n",
"```\n",
"Remember, you are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user. Decompose the user's query into all required sub-request, and confirm that each is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure that the problem is solved. You must be prepared to answer multiple queries and only finish the call once the user has confirmed they're done.\n",
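As a companion to the minimal-reasoning list in the last hunk, a similarly hedged sketch: it pairs a `minimal` reasoning effort with an upfront-summary request (point 1) and the persistence reminder quoted above (point 4). Again the OpenAI Python SDK and Responses API are assumed, and the task is an illustrative placeholder.

```python
# Minimal sketch, not the notebook's code: at minimal reasoning effort the
# prompt carries more of the planning burden, so the instructions request a
# brief upfront summary and include the persistence reminder from the hunk above.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

instructions = (
    "Begin your final answer with a short bullet-point summary of your approach.\n"
    "Remember, you are an agent - please keep going until the user's query is "
    "completely resolved, before ending your turn and yielding back to the user."
)

response = client.responses.create(
    model="gpt-5",                       # illustrative model name
    reasoning={"effort": "minimal"},     # minimal effort: fastest, least internal planning
    instructions=instructions,
    input="Rename the helper `fetch_user` across the project and update all call sites.",
)
print(response.output_text)
```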