Appreciation of OpenAI's Naming Art

All OpenAI models are documented here: https://platform.openai.com/docs/models/.

Without delving too far into the past, let’s start with the GPT-4 series and contemporary models. Here is my compiled table:

| Name | Description | Model Card Link |
| --- | --- | --- |
| ChatGPT-4o | GPT-4o model used in ChatGPT | https://platform.openai.com/docs/models/chatgpt-4o-latest |
| GPT-4 | An older high-intelligence GPT model | https://platform.openai.com/docs/models/gpt-4 |
| GPT-4 Turbo | An older high-intelligence GPT model | https://platform.openai.com/docs/models/gpt-4-turbo |
| GPT-4 Turbo Preview | Deprecated. An older fast GPT model | https://platform.openai.com/docs/models/gpt-4-turbo-preview |
| GPT-4.1 | Smartest non-reasoning model | https://platform.openai.com/docs/models/gpt-4.1 |
| GPT-4.1 mini | Smaller, faster version of GPT-4.1 | https://platform.openai.com/docs/models/gpt-4.1-mini |
| GPT-4.1 nano | Fastest, most cost-efficient version of GPT-4.1 | https://platform.openai.com/docs/models/gpt-4.1-nano |
| GPT-4.5 Preview | Deprecated large model | https://platform.openai.com/docs/models/gpt-4.5-preview |
| GPT-4o | Fast, intelligent, flexible GPT model | https://platform.openai.com/docs/models/gpt-4o |
| GPT-4o Audio | GPT-4o models capable of audio inputs and outputs | https://platform.openai.com/docs/models/gpt-4o-audio-preview |
| GPT-4o mini | Fast, affordable small model for focused tasks | https://platform.openai.com/docs/models/gpt-4o-mini |
| GPT-4o mini Audio | Smaller model capable of audio inputs and outputs | https://platform.openai.com/docs/models/gpt-4o-mini-audio-preview |
| GPT-4o mini Realtime | Smaller realtime model for text and audio inputs and outputs | https://platform.openai.com/docs/models/gpt-4o-mini-realtime-preview |
| GPT-4o mini Search Preview | Fast, affordable small model for web search | https://platform.openai.com/docs/models/gpt-4o-mini-search-preview |
| GPT-4o mini Transcribe | Speech-to-text model powered by GPT-4o mini | https://platform.openai.com/docs/models/gpt-4o-mini-transcribe |
| GPT-4o mini TTS | Text-to-speech model powered by GPT-4o mini | https://platform.openai.com/docs/models/gpt-4o-mini-tts |
| GPT-4o Realtime | Model capable of realtime text and audio inputs and outputs | https://platform.openai.com/docs/models/gpt-4o-realtime-preview |
| GPT-4o Search Preview | GPT model for web search in Chat Completions | https://platform.openai.com/docs/models/gpt-4o-search-preview |
| GPT-4o Transcribe | Speech-to-text model powered by GPT-4o | https://platform.openai.com/docs/models/gpt-4o-transcribe |
| GPT-4o Transcribe Diarize | Transcription model that identifies who’s speaking when | https://platform.openai.com/docs/models/gpt-4o-transcribe-diarize |
| o1 | Previous full o-series reasoning model | https://platform.openai.com/docs/models/o1 |
| o1-mini | A small model alternative to o1 | https://platform.openai.com/docs/models/o1-mini |
| o1-pro | Version of o1 with more compute for better responses | https://platform.openai.com/docs/models/o1-pro |
| o3 | Reasoning model for complex tasks, succeeded by GPT-5 | https://platform.openai.com/docs/models/o3 |
| o3-pro | Version of o3 with more compute for better responses | https://platform.openai.com/docs/models/o3-pro |
| o3-mini | A small model alternative to o3 | https://platform.openai.com/docs/models/o3-mini |
| o4-mini | Fast, cost-efficient reasoning model, succeeded by GPT-5 mini | https://platform.openai.com/docs/models/o4-mini |

There are dedicated models for scenarios like audio (Audio), real-time interaction (Realtime), web search (Search), speech-to-text (Transcribe), and text-to-speech (TTS).
For a given scenario, such as audio, two models are usually provided: GPT-4o Audio and GPT-4o mini Audio. Users must experiment to determine which one meets their quality expectations.
The speech-to-text (Transcribe) scenario even offers three models: GPT-4o Transcribe, GPT-4o mini Transcribe, and GPT-4o Transcribe Diarize.
ChatGPT-4o is a special version of GPT-4o used exclusively in ChatGPT and unavailable anywhere else.

It’s intuitive that GPT-4.1 is better than GPT-4, but where GPT-4o (the “o” stands for “omni”) sits relative to GPT-4.1 or GPT-4 is less obvious. Chronologically, GPT-4 came first, followed by GPT-4o, and then GPT-4.1. Even so, it’s unclear whether GPT-4o or GPT-4.1 performs better.

Here, I’ll define “better” as having higher accuracy in mathematical and reasoning tasks, excluding speed from the definition. Previously, OpenAI may have valued speed and intelligence equally, leading to confusing model comparisons. Fortunately, in the GPT-5 era, OpenAI began prioritizing intelligence over speed, eliminating ambiguity in model comparisons. A fast but incorrect answer is a waste of time and meaningless.

During the GPT-4 era, it took me nearly half a year to discover that o3 was the best model available at the time. Once I confirmed this, I used it exclusively for debugging. Its description now reads: “Reasoning model for complex tasks, succeeded by GPT-5.” In other words, OpenAI chose the o3 line as the path to its next-generation model, GPT-5. I suspect many users, like me, experimented with these models before settling on o3, a pattern OpenAI could observe in its backend usage data.

There’s also an orphan: o4-mini exists without a corresponding full-size o4. The successor to o3 is GPT-5, the successor to o4-mini is GPT-5 mini, and o3-mini has no successor at all.

The transition to the GPT-5 era marked a watershed moment for OpenAI, as the company began prioritizing intelligence over speed. Abandoning so many specialized models was undoubtedly a difficult decision for a large enterprise, but I believe it was the right one.

This is the GPT-5 series model selection interface in Cursor:

*(Image: cursor_gpt-5_models)*

The GPT-5 series models have more systematic and intuitive naming conventions.

The new naming rules: Base Version + Specialization + Scale + Reasoning Strength + Compute Resources

Example: GPT-5.1-Codex-Max-High-Fast
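The rule can be sketched as a toy parser. The field names and the assumption that every component is present, in order, are my own illustration for this article; OpenAI publishes no such parsing scheme:

```python
def parse_model_name(name: str) -> dict:
    """Split a GPT-5-style name into the parts named by the rule:
    Base Version + Specialization + Scale + Reasoning Strength + Compute.

    Illustrative only: assumes all parts appear, in fixed order.
    """
    parts = name.split("-")
    # The base version spans the first two tokens, e.g. "GPT" + "5.1".
    result = {"base_version": "-".join(parts[:2])}
    fields = ["specialization", "scale", "reasoning_strength", "compute"]
    # Remaining tokens map positionally onto the remaining fields.
    result.update(zip(fields, parts[2:]))
    return result

print(parse_model_name("GPT-5.1-Codex-Max-High-Fast"))
# {'base_version': 'GPT-5.1', 'specialization': 'Codex', 'scale': 'Max',
#  'reasoning_strength': 'High', 'compute': 'Fast'}
```

A real parser would need to know which tokens are optional: a name like GPT-5.1-Codex-High omits the scale, so purely positional mapping would mislabel it. The readability of combinational names depends entirely on the order staying fixed.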

Although the combinations still result in numerous models, at least they’re no longer as bewildering as the GPT-4 series.