OpenAI's 2025 open-weight models mark a major shift in the company's approach to artificial intelligence. After more than five years of focusing on proprietary tools, OpenAI has released GPT-OSS-120B and GPT-OSS-20B, two open-weight language models that developers can run locally and fine-tune freely.
Sam Altman, OpenAI’s CEO, described the release as part of a mission to democratize AI access. “We’re excited to make this model, the result of billions of dollars of research, available to the world,” he said. These models are available for free download via Hugging Face, a leading platform for AI tools.
Open-weight models stand out because they share their internal weights, allowing researchers and developers to examine how they process information. Unlike ChatGPT, these new models can run offline, behind firewalls, and without needing to connect to OpenAI servers. Co-founder Greg Brockman emphasized their value as “complementary” to OpenAI’s proprietary API services.
Both models feature chain-of-thought reasoning, a technique that has the AI work through multiple logical steps before producing an answer. While they are text-only rather than multimodal, the models support web browsing, software navigation, code execution, and interaction with cloud-based models. The smaller model, GPT-OSS-20B, requires just 16GB of memory, making it accessible on consumer-level devices.
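For developers curious what running the smaller model locally looks like in practice, a minimal sketch using the Hugging Face `transformers` library is below. The repository id `openai/gpt-oss-20b` and the choice of `transformers` are assumptions based on the Hugging Face release described above, not details confirmed in this article; check the model page for the exact loading instructions.

```python
# Hypothetical local-inference sketch for GPT-OSS-20B.
# Assumes the weights are published under the Hugging Face repo id
# "openai/gpt-oss-20b" and that the machine has roughly 16GB of memory free.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed repo id; verify on Hugging Face
    torch_dtype="auto",          # let the library pick the weight precision
    device_map="auto",           # spread layers across available GPU/CPU
)

# Chat-style prompting; the model runs entirely offline once downloaded.
messages = [
    {"role": "user", "content": "Summarize chain-of-thought reasoning in one sentence."}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```

Because the weights are open, the same model object can also be fine-tuned or inspected layer by layer, which is exactly the kind of access the proprietary API does not offer.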
The models use the Apache 2.0 license, allowing commercial use, redistribution, and integration into other software projects. This licensing choice aligns OpenAI with other major players in the open-weight space, such as Mistral and Alibaba’s Qwen.
OpenAI initially planned to release the models in March 2025 but delayed the launch for additional safety testing. Eric Wallace, a safety researcher at OpenAI, explained that the company simulated potential misuse scenarios. “We fine-tuned the model internally on risk areas and measured how far those risks could be pushed,” he said. Even under that adversarial fine-tuning, the models scored low on OpenAI’s preparedness framework, suggesting minimal risk.
Performance-wise, GPT-OSS-120B compares well with OpenAI’s proprietary o3 and o4-mini models, and in some benchmarks it even outperforms them. Researcher Chris Koch highlighted the model’s low latency and cost-efficiency. OpenAI also published a detailed model card outlining GPT-OSS’s technical benchmarks.
The release follows a growing global race in open-weight AI development. Earlier this year, Chinese firm DeepSeek gained attention for its low-cost open-weight model. Although OpenAI did not mention DeepSeek directly, Altman underlined the importance of building an “open AI stack created in the United States… based on democratic values.”
In the U.S., Meta has led the charge in open-weight models, starting with the release of Llama in 2023 and continuing with Llama 4 in 2025. However, Meta’s recent moves suggest it may pull back from open-source development, citing growing safety concerns. CEO Mark Zuckerberg is now investing heavily in a superintelligence effort led by AI veteran Alexandr Wang.
As competition intensifies between OpenAI and Meta, the release of GPT-OSS adds fresh fuel to the AI talent war. Researchers with rare expertise are now commanding record salaries and bonuses, and open-weight models like these could shape where the best minds in AI choose to work.
With the release of GPT-OSS-120B and GPT-OSS-20B, OpenAI is signaling a new era—one where open-weight tools may sit alongside commercial products, giving developers, researchers, and companies more freedom to shape the future of AI on their own terms.