OpenAI has published a comprehensive prompting guide for GPT-5.1, its newest flagship model. The guide documents technical patterns developed through internal testing and production deployments with enterprise partners. Here's what changed and what it means for teams building AI systems.
Here's what's new.
GPT-5.1 dynamically adjusts reasoning token consumption based on query difficulty. Simple questions use fewer tokens; complex problems receive deeper computational resources. This represents a shift from GPT-5's more uniform token allocation, with direct implications for operational costs at scale.
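To see adaptive allocation in practice, teams can track how many output tokens each request spends on reasoning. A minimal sketch, assuming usage records with the token-usage breakdown shape OpenAI documents (`output_tokens`, `output_tokens_details.reasoning_tokens`); verify the field names against your SDK version:

```python
# Sketch: measuring the fraction of output tokens spent on reasoning
# per request, to observe GPT-5.1's adaptive allocation and its cost impact.
# Field names mirror OpenAI's documented usage breakdown (an assumption here).

def reasoning_share(usage: dict) -> float:
    """Fraction of output tokens spent on reasoning for one response."""
    details = usage.get("output_tokens_details", {})
    reasoning = details.get("reasoning_tokens", 0)
    total = usage.get("output_tokens", 0)
    return reasoning / total if total else 0.0

# Hypothetical usage records for a simple and a hard query:
simple_q = {"output_tokens": 50, "output_tokens_details": {"reasoning_tokens": 5}}
hard_q = {"output_tokens": 900, "output_tokens_details": {"reasoning_tokens": 700}}
```

Logging this ratio across a production workload makes the cost implications of adaptive reasoning concrete: simple lookups should show a small share, complex problems a much larger one.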
The model also responds more reliably to explicit instructions.
GPT-5.1 introduces a "none" reasoning setting that eliminates reasoning tokens entirely, making it functionally equivalent to GPT-4.1 for low-latency applications. Critically, this mode still supports hosted tools like web search and file search, a capability previous minimal-reasoning modes lacked.
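A minimal sketch of what such a request could look like: a "none" reasoning effort combined with a hosted tool. The payload shape follows OpenAI's published Responses API conventions, but the exact effort value and tool type names are assumptions to check against the API docs:

```python
# Sketch: a low-latency request that disables reasoning tokens while still
# attaching a hosted tool. The "none" effort value and the "web_search" tool
# type follow the guide's description; confirm exact names in your SDK.

def build_none_mode_request(query: str) -> dict:
    return {
        "model": "gpt-5.1",
        "input": query,
        "reasoning": {"effort": "none"},   # no reasoning tokens, GPT-4.1-like latency
        "tools": [{"type": "web_search"}],  # hosted tools remain available
    }

payload = build_none_mode_request("Latest earnings date for ACME Corp?")
```

The point is that latency-sensitive endpoints no longer have to choose between fast responses and hosted-tool access.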
OpenAI also provides specific migration recommendations for teams currently on GPT-5 or GPT-4.1.
Here are the agentic updates.
The guide also details how to configure agent progress reports during long-running tasks.
OpenAI demonstrates how to define agent personas for customer-facing applications, with examples showing how to balance warmth, directness, and efficiency. The guide includes specific prompting patterns for controlling acknowledgment phrases and conversational rhythm.
To prevent premature task termination, the guide recommends explicit persistence prompting.
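The persona, progress-report, and persistence guidance all reduce to system-prompt wording. A sketch of how such a clause might be composed; the wording below is illustrative, not quoted from OpenAI's guide:

```python
# Sketch: a hypothetical persistence clause appended to an agent's system
# prompt. The wording is illustrative, not OpenAI's exact pattern.

PERSISTENCE_CLAUSE = (
    "You are working on a long-running task. "
    "Post a brief progress update before each tool call. "
    "Do not stop or hand control back to the user until every part of the "
    "task is verifiably complete; if you are blocked, state the blocker "
    "and continue with the remaining work."
)

def with_persistence(base_prompt: str) -> str:
    """Append the persistence clause to an existing system prompt."""
    return base_prompt.rstrip() + "\n\n" + PERSISTENCE_CLAUSE

prompt = with_persistence("You are a concise, warm customer-support agent.")
```

Keeping clauses like this as named constants makes it easy to A/B test acknowledgment phrasing and conversational rhythm across agent configurations.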
Parallel Tool Calling
GPT-5.1 executes parallel tool calls more efficiently, and the guide includes recommendations for taking advantage of this in prompts.
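On the application side, a turn that returns several independent tool calls can be dispatched concurrently. A minimal sketch with a simplified tool-call shape (real responses carry ids and JSON-encoded arguments):

```python
# Sketch: executing a batch of independent tool calls from one model turn
# concurrently. Tool-call shape is simplified for illustration.

from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(calls: list, registry: dict) -> list:
    """Execute independent tool calls in parallel, preserving order."""
    def run_one(call):
        fn = registry[call["name"]]
        return {"call_id": call["id"], "output": fn(**call["arguments"])}
    with ThreadPoolExecutor(max_workers=len(calls)) as pool:
        return list(pool.map(run_one, calls))

# Hypothetical tools and a hypothetical batch of calls:
registry = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
calls = [
    {"id": "c1", "name": "add", "arguments": {"a": 2, "b": 3}},
    {"id": "c2", "name": "upper", "arguments": {"s": "ok"}},
]
results = run_tool_calls(calls, registry)
```

Threads suit I/O-bound tools (HTTP calls, database lookups); for CPU-bound tools, a process pool would be the analogous choice.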
New Named Tools
apply_patch: Creates, updates, and deletes files using structured diffs instead of full rewrites. Available as a named tool type without custom descriptions. Internal testing showed a 35% reduction in failure rates compared to custom implementations.
shell: Enables controlled command-line interactions through a plan-execute loop. The model proposes commands; your system executes them and returns outputs.
Both tools use the Responses API and require specific implementation patterns documented in the guide.
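Since apply_patch is described as a named tool type needing no custom description, declaring both tools can be as simple as listing their types in the request. A sketch under that assumption; confirm the exact type strings against the Responses API documentation:

```python
# Sketch: declaring the new named tools in a Responses API request. The bare
# {"type": ...} shape follows the guide's claim that apply_patch needs no
# custom description; exact type names are assumptions to verify.

def build_agent_request(task: str) -> dict:
    return {
        "model": "gpt-5.1",
        "input": task,
        "tools": [
            {"type": "apply_patch"},  # structured-diff file edits
            {"type": "shell"},        # plan-execute command loop
        ],
    }

request = build_agent_request("Fix the failing unit test in utils.py")
```

Your system remains responsible for the execution half of the shell loop: running each proposed command in a sandbox and returning its output to the model.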
For coders:
For medium-to-large coding tasks, OpenAI recommends implementing a planning tool that maintains and updates task status as work progresses.
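A minimal sketch of the state such a planning tool might maintain behind a hypothetical update_plan function; the statuses and structure here are illustrative, not the guide's specification:

```python
# Sketch: an in-memory plan store an agent could update through a
# hypothetical planning tool. Statuses and shape are illustrative.

class Plan:
    STATUSES = ("pending", "in_progress", "done")

    def __init__(self, steps: list):
        self.steps = {s: "pending" for s in steps}

    def set_status(self, step: str, status: str) -> None:
        if status not in self.STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.steps[step] = status

    def is_complete(self) -> bool:
        return all(s == "done" for s in self.steps.values())

plan = Plan(["read failing test", "patch module", "re-run tests"])
plan.set_status("read failing test", "done")
```

Exposing `is_complete` to the agent loop also reinforces the persistence guidance above: the agent has a concrete signal for when stopping is actually justified.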
The guide includes patterns for constraining visual outputs to match brand guidelines, particularly for frontend development using Tailwind CSS. Instructions emphasize token-based color systems over hard-coded values.
When using "none" reasoning mode, the guide notes that custom function-calling performance improves substantially compared to GPT-5's minimal reasoning setting.
OpenAI documents a two-step process for systematic prompt debugging:
1. Provide the model with your system prompt and failure logs. Request root-cause analysis identifying contradictions and problematic phrasing.
2. Use the analysis to generate surgical prompt edits that resolve conflicts without complete rewrites. Focus on clarifying ambiguous guidance and removing redundancies.
This approach treats prompt optimization as an iterative engineering process rather than guesswork.
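The two steps above can be sketched as a pair of request builders. The instruction wording is illustrative; only the two-step structure follows the guide's described process:

```python
# Sketch: composing the two debugging requests as message lists.
# Prompt wording is illustrative, not OpenAI's exact text.

def root_cause_messages(system_prompt: str, failure_logs: str) -> list:
    """Step 1: ask the model to diagnose the prompt against failures."""
    return [{
        "role": "user",
        "content": (
            "Here is a system prompt and logs of cases where it failed.\n"
            "Identify contradictions, ambiguous guidance, and phrasing "
            "likely to cause these failures.\n\n"
            f"SYSTEM PROMPT:\n{system_prompt}\n\nFAILURE LOGS:\n{failure_logs}"
        ),
    }]

def edit_messages(system_prompt: str, analysis: str) -> list:
    """Step 2: ask for surgical edits that resolve the diagnosed issues."""
    return [{
        "role": "user",
        "content": (
            "Given this analysis, propose surgical edits to the prompt that "
            "resolve the conflicts without a full rewrite.\n\n"
            f"SYSTEM PROMPT:\n{system_prompt}\n\nANALYSIS:\n{analysis}"
        ),
    }]

step1 = root_cause_messages("Always be brief.", "Model wrote five paragraphs.")
```

Feeding step 1's output into step 2 closes the loop, so each iteration produces a small, reviewable diff to the prompt rather than a wholesale rewrite.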
For teams running AI systems in production, the guide represents operational refinement rather than architectural change. Teams currently using GPT-5 can migrate incrementally; those on GPT-4.1 may find "none" mode offers a cleaner upgrade path.