OpenAI Announces GPT-5: Multimodal Reasoning at Human Expert Level

February 15, 2026

OpenAI has announced GPT-5, its most advanced AI model to date, claiming performance that matches or exceeds human experts across a wide range of professional domains.

Revolutionary Capabilities

GPT-5 introduces several breakthrough features:

Native Multimodal Understanding: Unlike previous models that processed different modalities separately, GPT-5 understands text, images, video, and audio through a unified architecture. This enables seamless reasoning across media types.

Extended Context: With a 500K token context window, GPT-5 can process entire books, lengthy codebases, or hours of video in a single conversation.

Real-Time Learning: GPT-5 can update its understanding during conversations, learning from corrections and new information without requiring model retraining.
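To put the 500K-token window in perspective, a rough back-of-the-envelope check shows how much text it holds. The sketch below uses the common heuristic of roughly 4 characters per token for English prose; the heuristic, the function name, and the example document sizes are illustrative assumptions, not figures from the announcement.

```python
# Rough check of whether a document fits in a 500K-token context window.
# Uses the ~4 characters-per-token heuristic for English text (an
# assumption; actual tokenization varies by model and content).

CONTEXT_WINDOW = 500_000   # tokens, per the announcement
CHARS_PER_TOKEN = 4        # rough heuristic for English prose

def fits_in_context(char_count: int) -> bool:
    """Estimate whether text of char_count characters fits in the window."""
    estimated_tokens = char_count / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW

# A ~100,000-word novel is roughly 600,000 characters, i.e. ~150,000
# tokens by this heuristic, so it fits with room to spare.
print(fits_in_context(600_000))
```

By this estimate, several full-length books fit in a single conversation, consistent with the claim above.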

Benchmark Performance

OpenAI reports that GPT-5 achieves:

  • 95% on the GPQA Diamond benchmark (graduate-level science)
  • 98% on professional bar exam questions
  • 92% on complex mathematical reasoning tasks
  • Near-human performance on creative writing evaluations

Availability

GPT-5 will roll out to ChatGPT Plus subscribers in March 2026, with API access following in April. Enterprise customers will receive early access starting next week.

Pricing

OpenAI announced GPT-5 API pricing at $30 per million input tokens and $60 per million output tokens—roughly double GPT-4o pricing but with significantly enhanced capabilities.
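Those per-million-token rates translate into per-call costs as follows. The function below is a minimal illustration using only the two rates quoted above; the example token counts are hypothetical.

```python
# Estimate GPT-5 API cost from the per-token rates quoted in the article:
# $30 per 1M input tokens, $60 per 1M output tokens.

INPUT_RATE = 30.0 / 1_000_000   # dollars per input token
OUTPUT_RATE = 60.0 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token response.
# 10,000 * $0.00003 + 2,000 * $0.00006 = $0.30 + $0.12 = $0.42
cost = estimate_cost(10_000, 2_000)
```

At these rates, even a prompt filling the full 500K-token window would cost about $15 in input tokens alone, which may matter for long-document workloads.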

Industry Reaction

Competitors have remained largely silent, though analysts note this announcement puts pressure on Anthropic’s Claude and Google’s Gemini to demonstrate competitive capabilities. The AI industry’s rapid advancement shows no signs of slowing.