Keeping track of all the new AI models getting released at the moment is practically a full-time job. I would know: I've been writing about them for the past few years, and it feels like every time I publish an article, another new model drops. OpenAI is one of the worst offenders (or most prolific innovators), and things aren't helped by how confusing all the OpenAI model names are.
So, let's try to simplify things: consider this your complete guide to all of OpenAI's major models. We'll look at what each one is good for, how much they cost, and how to make the most of them in your work.
But first, I have to apologize for the crimes against consistent naming conventions, capitalization, and hyphenation we're all going to have to endure. GPT-5 nano, gpt-oss-120b, and GPT Image 1 are three of OpenAI's latest models. Hang in there.
Why are there so many OpenAI models?
OpenAI is one of the most important AI companies. Its ChatGPT chatbot and DALL·E 2 image generator essentially kickstarted the current AI boom. Over the past few years, it's continued to develop large language models, multimodal models, and text-to-image models.
OpenAI is moving fast. As the technology advances, new models have come out, and we've naturally been left with an extensive list—including some legacy models.
Note: Some of the models below are accessible through ChatGPT, but we'll mostly be looking at the models available through OpenAI's API. That means you can also use them to develop your own AI-powered tools and connect them to thousands of other apps using Zapier.
OpenAI models at a glance
Model | Best for | Inputs | Outputs | Context window | Pricing (input / output per M tokens) | Notes |
---|---|---|---|---|---|---|
GPT-5 | Advanced reasoning and logic | Text, Images | Text | 400,000 tokens | $1.25 / $10 | OpenAI's flagship reasoning model; uses reasoning effort (minimal–high); best for coding, logic, scientific tasks |
GPT-5 mini | Affordable reasoning and logic | Text, Images | Text | 400,000 tokens | $0.25 / $2 | Faster and cheaper alternative to GPT-5; supports reasoning effort control |
GPT-5 nano | Very affordable reasoning and logic | Text, Images | Text | 400,000 tokens | $0.05 / $0.40 | Fastest, cheapest GPT-5 reasoning model; ideal for summaries and classification |
GPT-5 Chat | Chat reasoning model | Text, Images | Text | 128,000 tokens | $1.25 / $10 | ChatGPT's reasoning model; API available but standard GPT-5 often preferred |
GPT-4.1 | Complex tasks without advanced reasoning | Text, Images | Text | 1,047,576 tokens | $2 / $8 | Versatile, powerful general model |
GPT-4.1 mini | Balance of power, performance, and affordability | Text, Images | Text | 1,047,576 tokens | $0.40 / $1.60 | Great general-purpose starting point |
GPT-4.1 nano | Speed and price optimization | Text, Images | Text | 1,047,576 tokens | $0.10 / $0.40 | Fastest and cheapest GPT-4.1 model |
GPT-4o | Multimodal tasks (text, images, audio) | Text, Audio, Images | Text, Audio | 128,000 tokens | Text: $2.50 / $10; Audio: $40 / $80 | Only model with audio in/out in API |
GPT-4o mini | Multimodal on a budget | Text, Audio, Images | Text, Audio | 128,000 tokens | Text: $0.15 / $0.60; Audio: $10 / $20 | Budget-friendly multimodal model |
gpt-oss-120b | Open weight model | Text | Text | 131,072 tokens | Varies by provider | 117B params; 5.1B active; Apache 2.0; downloadable, fine-tunable, single H100 capable |
gpt-oss-20b | Medium open weight model | Text | Text | 131,072 tokens | Varies by provider | 21B params; 3.6B active; Apache 2.0; laptop/local deployable open weight model |
o3 | N/A; superseded by GPT-5 | Text, Images | Text | 200,000 tokens | $2 / $8 | Was OpenAI's top model for technical, scientific, and coding tasks |
o3-pro | Really advanced reasoning and logic | Text, Images | Text | 200,000 tokens | $20 / $80 | Boundary-pushing research and coding |
o4-mini | N/A; superseded by GPT-5 mini | Text, Images | Text | 200,000 tokens | $1.10 / $4.40 | High performance at lower cost |
Whisper | Affordable transcription | Audio | Text | N/A | $0.006/minute | Can only transcribe or translate audio |
GPT Image 1 | Image generation | Text, Images | Images | N/A | Text: $5; Image: $10 / $40 | Successor to DALL·E; top-tier image generation |
Sora | Video generation | Text, Images, Videos | Videos | N/A | $20/month (Plus); $200/month (Pro) | Video generation not available via API |
GPT-5
Best OpenAI model for advanced reasoning and logic
GPT-5 is OpenAI's flagship reasoning model and one of the most powerful AI models available right now.
Reasoning models like GPT-5 are best for logical, technical, and scientific tasks. With GPT-5, you set a reasoning effort—minimal, low, medium, or high—that determines how many tokens it uses to work through a problem.
If you need the most powerful model OpenAI has for generating computer code, analyzing text documents and images, or solving complex puzzles, GPT-5 is the one to use. But because reasoning models use additional resources to work through problems, they're more expensive for tasks that don't require advanced reasoning.
GPT-5 inputs: Text, images
GPT-5 outputs: Text
GPT-5 context window: 400,000 tokens
GPT-5 input pricing: $1.25/M tokens
GPT-5 output pricing: $10/M tokens
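To make the reasoning effort setting concrete, here's a minimal sketch of a GPT-5 call with OpenAI's Python SDK. The prompt is a placeholder, and it assumes your API key is set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: calling GPT-5 with an explicit reasoning effort
# via the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},  # "minimal", "low", "medium", or "high"
    input="Walk through this proof step by step and flag any gaps: ...",  # placeholder prompt
)

print(response.output_text)
```

Higher effort means more reasoning tokens, which helps on hard problems but costs more and takes longer.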
GPT-5 mini
Best OpenAI model for affordable reasoning and logic
GPT-5 mini is a faster and more affordable alternative to GPT-5. For well-defined tasks and chatbot-style responses, it offers a great balance of power and performance.
As with GPT-5, you set a reasoning effort for GPT-5 mini. The lower the reasoning effort, the faster and more affordable it will be.
GPT-5 mini inputs: Text, images
GPT-5 mini outputs: Text
GPT-5 mini context window: 400,000 tokens
GPT-5 mini input pricing: $0.25/M tokens
GPT-5 mini output pricing: $2/M tokens
GPT-5 nano
Best OpenAI model for very affordable reasoning and logic
GPT-5 nano is OpenAI's fastest and most affordable reasoning model. It excels at tasks like summarization and classification. As with GPT-5 and GPT-5 mini, you set a reasoning effort for GPT-5 nano.
GPT-5 nano inputs: Text, images
GPT-5 nano outputs: Text
GPT-5 nano context window: 400,000 tokens
GPT-5 nano input pricing: $0.05/M tokens
GPT-5 nano output pricing: $0.40/M tokens
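As a rough illustration of the kind of work GPT-5 nano is suited for, here's a sketch of a cheap classification call at minimal reasoning effort. The ticket text and category labels are invented for the example.

```python
# Sketch: low-cost ticket classification with GPT-5 nano at minimal reasoning effort.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-nano",
    reasoning={"effort": "minimal"},  # keeps latency and token spend as low as possible
    input=(
        "Classify this support ticket as billing, bug, or feature request. "
        "Reply with the category only.\n\n"
        "Ticket: I was charged twice for my subscription this month."
    ),
)

print(response.output_text)  # e.g., "billing"
```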
GPT-5 Chat
Best OpenAI chat model (kind of)
GPT-5 Chat is the version of GPT-5 that's used for reasoning in ChatGPT. It's available through the API, but OpenAI recommends using GPT-5, GPT-4.1, or another specialized model in most situations.
GPT-5 Chat inputs: Text, images
GPT-5 Chat outputs: Text
GPT-5 Chat context window: 128,000 tokens
GPT-5 Chat input pricing: $1.25/M tokens
GPT-5 Chat output pricing: $10/M tokens
GPT-4.1
Best OpenAI model for complex tasks that don't require advanced reasoning
GPT-4.1 is OpenAI's smartest non-reasoning multimodal LLM.
It's best to consider GPT-4.1 the ultimate Swiss Army knife of OpenAI's models. While there's very little it can't do, there are some situations where another model might be more appropriate. For example, a dedicated reasoning model will perform better on multi-step tasks like code generation, while a smaller cost-optimized model will offer better value for basic text generation.
GPT-4.1 inputs: Text, images
GPT-4.1 outputs: Text
GPT-4.1 context window: 1,047,576 tokens
GPT-4.1 input pricing: $2/M tokens
GPT-4.1 output pricing: $8/M tokens
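For comparison with the reasoning models above, here's what a plain GPT-4.1 request looks like through the Chat Completions API; the system and user messages are just placeholders.

```python
# Sketch: a general-purpose GPT-4.1 request via the Chat Completions API.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise technical editor."},  # placeholder
        {"role": "user", "content": "Rewrite this paragraph in plain English: ..."},  # placeholder
    ],
)

print(response.choices[0].message.content)
```

Because GPT-4.1 isn't a reasoning model, there's no reasoning effort to set; you simply pay for input and output tokens.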
GPT-4.1 mini
Best OpenAI model for balancing power, performance, and affordability
GPT-4.1 mini is a fast and affordable general-purpose AI model. In benchmarks, it performs almost as well as GPT-4.1 at a fifth of the price. If you aren't sure which model to use, GPT-4.1 mini is probably the sensible starting point: unless you know you need the extra power of GPT-4.1 or want to optimize further for cost, it's a very strong option.
GPT-4.1 mini inputs: Text, images
GPT-4.1 mini outputs: Text
GPT-4.1 mini context window: 1,047,576 tokens
GPT-4.1 mini input pricing: $0.40/M tokens
GPT-4.1 mini output pricing: $1.60/M tokens
GPT-4.1 nano
Best OpenAI model for speed and price
GPT-4.1 nano is the smallest GPT-4.1 model available. As a result, it's the fastest and cheapest model—though it isn't as powerful as GPT-4.1 or GPT-4.1 mini. It's the model to use for simple tasks, or when you need to prioritize speed and cost-effectiveness.
GPT-4.1 nano inputs: Text, images
GPT-4.1 nano outputs: Text
GPT-4.1 nano context window: 1,047,576 tokens
GPT-4.1 nano input pricing: $0.10/M tokens
GPT-4.1 nano output pricing: $0.40/M tokens
GPT-4o
Best OpenAI model for multimodality
A specially trained version of GPT-4o is still available in ChatGPT (though OpenAI now considers it a legacy model), but in the API, it's largely been replaced by the newer, more powerful, and more affordable GPT-4.1. Still, GPT-4o has one key API feature that its successor lacks: GPT-4o Audio supports audio inputs and outputs. It can transcribe audio into text or reply with speech.
GPT-4o inputs: Text, audio, images
GPT-4o outputs: Text, audio
GPT-4o context window: 128,000 tokens
GPT-4o input pricing: $2.50/M tokens for text; $40/M tokens for audio
GPT-4o output pricing: $10/M tokens for text; $80/M tokens for audio
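If you want to try the audio side, here's a rough sketch using the Chat Completions API. The gpt-4o-audio-preview model ID, voice, and output filename are assumptions you may need to adjust.

```python
# Sketch: asking GPT-4o Audio to reply with speech and saving the result to a file.
import base64

from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",  # assumed audio-capable GPT-4o model ID
    modalities=["text", "audio"],  # request both a text transcript and spoken audio
    audio={"voice": "alloy", "format": "wav"},
    messages=[
        {"role": "user", "content": "Read this greeting out loud: welcome to the team!"},
    ],
)

# The spoken reply arrives base64-encoded alongside the text.
with open("greeting.wav", "wb") as f:
    f.write(base64.b64decode(completion.choices[0].message.audio.data))
```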
GPT-4o mini
Best OpenAI model for multimodality on a budget
Like GPT-4o, GPT-4o mini has largely been replaced by the more affordable and more powerful GPT-4.1 mini in the API. But GPT-4o mini Audio offers audio inputs and outputs at a quarter of the cost of GPT-4o Audio. For developers who need an audio model but are cost constrained, it's the best option.
GPT-4o mini inputs: Text, images, audio
GPT-4o mini outputs: Text, audio
GPT-4o mini context window: 128,000 tokens
GPT-4o mini input pricing: $0.15/M tokens for text; $10/M tokens for audio
GPT-4o mini output pricing: $0.60/M tokens for text; $20/M tokens for audio
gpt-oss-120b
Best open weight OpenAI model
gpt-oss-120b is OpenAI's most powerful open weight model, and it's the most powerful open model that can run on a single H100 GPU. It's released under an Apache 2.0 license, so you can download it and run it on your own server or choose from any third-party inference provider. You can also fine-tune the model to suit your needs. This is a big shift for OpenAI, which has historically released mostly proprietary models.
gpt-oss-120b has 117 billion parameters. It uses a mixture-of-experts architecture, so only 5.1 billion parameters are active per token.
gpt-oss-120b inputs: Text
gpt-oss-120b outputs: Text
gpt-oss-120b context window: 131,072 tokens
gpt-oss-120b input pricing: Varies by provider
gpt-oss-120b output pricing: Varies by provider
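Because the weights are open, a common pattern is to point the OpenAI SDK at an OpenAI-compatible inference server rather than OpenAI's own API. The sketch below assumes a local Ollama install with the model pulled as gpt-oss:120b; the base URL, model tag, and placeholder API key will differ for other providers.

```python
# Sketch: querying a self-hosted gpt-oss-120b through an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local Ollama endpoint
    api_key="ollama",                      # placeholder; local servers typically ignore the key
)

response = client.chat.completions.create(
    model="gpt-oss:120b",  # assumed local model tag
    messages=[{"role": "user", "content": "Summarize the Apache 2.0 license in two sentences."}],
)

print(response.choices[0].message.content)
```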
gpt-oss-20b
Best medium open weight OpenAI model
gpt-oss-20b is OpenAI's medium open weight model. It's designed to run locally and can even run on a sufficiently powerful laptop. Like gpt-oss-120b, it's released under an Apache 2.0 license, so you can download it yourself or choose from any third-party inference provider, and you can fine-tune the model to suit your needs.
gpt-oss-20b has 21 billion parameters. It uses a mixture-of-experts architecture, so only 3.6 billion parameters are active per token.
gpt-oss-20b inputs: Text
gpt-oss-20b outputs: Text
gpt-oss-20b context window: 131,072 tokens
gpt-oss-20b input pricing: Varies by provider
gpt-oss-20b output pricing: Varies by provider
o3
Superseded by GPT-5
o3 was OpenAI's most powerful reasoning model, but it has now been superseded by GPT-5. It's still available, at least for the time being, but there's little reason to choose it over GPT-5.
OpenAI o3 inputs: Text, images
OpenAI o3 outputs: Text
OpenAI o3 context window: 200,000 tokens
OpenAI o3 input pricing: $2/M tokens
OpenAI o3 output pricing: $8/M tokens
o3-pro
Best OpenAI model for really advanced reasoning and logic
o3-pro is the same underlying model as o3, but it's allowed to reason for longer to provide more reliable responses. It hasn't yet been officially superseded by GPT-5. OpenAI claims that o3-pro consistently outperformed o3 in head-to-head tests, so in some edge cases, it may still be the best model available. For most tasks, though, the performance gains may not be worth the price premium.
OpenAI o3-pro inputs: Text, images
OpenAI o3-pro outputs: Text
OpenAI o3-pro context window: 200,000 tokens
OpenAI o3-pro input pricing: $20/M tokens
OpenAI o3-pro output pricing: $80/M tokens
o4-mini
Superseded by GPT-5 mini
o4-mini was OpenAI's most powerful small reasoning model until the release of GPT-5 mini. It's still available, at least for now, but you should probably use GPT-5 mini instead.
OpenAI o4-mini inputs: Text, images
OpenAI o4-mini outputs: Text
OpenAI o4-mini context window: 200,000 tokens
OpenAI o4-mini input pricing: $1.10/M tokens
OpenAI o4-mini output pricing: $4.40/M tokens
Whisper
Best OpenAI model for affordable audio transcription
Whisper is an older audio transcription and translation model. While GPT-4o is far more powerful, Whisper costs just $0.006 per minute of audio transcribed or translated, making it a solid budget option for low-stakes audio work.
Whisper inputs: Audio
Whisper outputs: Text
Whisper input pricing: $0.006/minute
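Transcription with Whisper is a single API call; the filename below is just an example.

```python
# Sketch: transcribing an audio file with Whisper.
from openai import OpenAI

client = OpenAI()

with open("meeting.mp3", "rb") as audio_file:  # example filename
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)
```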
GPT Image 1
Best OpenAI model for generating images
GPT Image 1 is the successor to OpenAI's DALL·E image models, and it's one of the best text-to-image models available.
OpenAI's pricing for image models is quite complicated because images are converted into tokens based on their size and quality. It currently costs $0.011 to generate a 1024 x 1024 low-quality image and $0.167 to generate a 1024 x 1024 high-quality image.
GPT Image 1 inputs: Text, images
GPT Image 1 outputs: Images
GPT Image 1 input pricing: $5/M text tokens; $10/M image tokens
GPT Image 1 output pricing: $40/M image tokens
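Here's a minimal sketch of generating a low-quality 1024 x 1024 image (the roughly $0.011 tier mentioned above); the prompt and output filename are placeholders.

```python
# Sketch: generating an image with GPT Image 1 and saving it to disk.
import base64

from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor fox reading a newspaper",  # placeholder prompt
    size="1024x1024",
    quality="low",  # low quality keeps cost to roughly a cent per image at this size
)

# GPT Image 1 returns the image as base64-encoded data.
with open("fox.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```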
Sora
Best OpenAI model for generating video
Sora is OpenAI's video generation model. It can take a combination of text prompts, still images, and videos, and generate a new video from them. Sora isn't available through the API.
ChatGPT Plus subscribers can generate watermarked 10-second videos in 720p resolution. ChatGPT Pro subscribers can generate unwatermarked 20-second videos in 1080p resolution.
Sora inputs: Text, images, videos
Sora outputs: Videos
Sora pricing: Available as part of ChatGPT Plus ($20/month) and ChatGPT Pro ($200/month)
Legacy OpenAI models
OpenAI has developed and deprecated dozens of models over the past few years. Here are some of the major models that are no longer supported, actively used, or state-of-the-art:
o1
o1-pro
GPT-4.5
GPT-4
GPT-3.5 Turbo
GPT-3
DALL·E 3
DALL·E 2
How to choose the right OpenAI model
OpenAI has lots of models, but they each serve a purpose. GPT-5 and GPT-4.1 are the top reasoning and non-reasoning models, respectively, with the various mini and nano versions offering a different balance between price, performance, and speed. There's certainly a lot of overlap between the different models' abilities, especially when you take into account older models that haven't yet been replaced. But in general, one should strike you as the best balance between power, price, and ongoing development.
OpenAI keeps releasing new models, changing pricing, and otherwise mixing things up. I'll do my best to keep this list as current and accurate as possible in the meantime.
Automate OpenAI models
OpenAI's models are most helpful when they're part of your existing workflows. With Zapier's ChatGPT integration, you can use OpenAI's state-of-the-art models to automate everything from intelligent sales outreach to content generation to customer support. This means doing things like automatically identifying opportunities in your CRM, summarizing business information, and prioritizing your workflow—using all the apps already in your tech stack.
Learn more about how to automate OpenAI models with Zapier, or get started with one of these pre-made templates.
Send prompts to ChatGPT for Google Forms responses and add the ChatGPT response to a Google Sheet
Automatically reply to Google Business Profile reviews with ChatGPT
Create email copy with ChatGPT from new Gmail emails and save as drafts in Gmail
Zapier is the most connected AI orchestration platform—integrating with thousands of apps from partners like Google, Salesforce, and Microsoft. Use interfaces, data tables, and logic to build secure, automated, AI-powered systems for your business-critical workflows across your organization's technology stack. Learn more.
This article was originally published in May 2025. The most recent update was in October 2025.