OpenAI finally launches ‘open’ AI models after over five years | Why now?

For the first time in more than five years, OpenAI has released two open-weight AI reasoning models, amid China’s rise in open source AI technology and questions about OpenAI straying from its original objective of building openly available technology.

The models released by OpenAI are free to download from Hugging Face and do not need high computing power to run. They have capabilities similar to the company’s o-series models. The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory.
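Since the weights are hosted on Hugging Face, running the smaller model locally looks roughly like the short Python sketch below, which uses the Transformers library. The repository name openai/gpt-oss-20b, the prompt, and the generation settings here are illustrative assumptions rather than instructions from OpenAI.

# Minimal sketch: loading the lighter gpt-oss-20b model with Hugging Face Transformers.
# Requires the transformers and accelerate packages; the repo id is an assumption.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face repo id for the smaller model
    device_map="auto",           # uses a GPU if one is available, otherwise the CPU
)

messages = [
    {"role": "user", "content": "Summarise the difference between open weight and open source models."}
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])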

These are the company’s first ‘open’ language models since GPT-2 in 2019.


For OpenAI, this is a shift from its focus on building primarily proprietary models, but one necessitated by the meteoric rise of China’s DeepSeek, whose open source models took the AI world by storm. DeepSeek’s success also affirmed China’s lead in open source AI, with the US taking a backseat and its administration having to urge developers to open source more of their technologies.



Open weight vs open source AI models

To be sure, the models released by OpenAI are ‘open weight’, not open source, and open weight models offer less transparency than open source ones.

Open source models provide full transparency, sharing the source code, model architecture, training algorithms, and weights under a licence that allows free use, modification, and distribution. Ideally, the training data is also disclosed, though legal constraints often limit this. In contrast, open weight models share only the trained model weights, not the source code, training data, or full architecture details. This restricts transparency and customisation: users can run the model but cannot fully modify or retrain it.
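In concrete terms, an open weight release is essentially a set of downloadable weight and configuration files. The short Python sketch below, using the huggingface_hub library, shows what that amounts to in practice; the repository name and file patterns are assumptions made for illustration.

# Download only the weight and configuration files of an open weight release.
# What you get is enough to run the model, but not the training code or data
# needed to reproduce or fully retrain it.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openai/gpt-oss-20b",                 # assumed repo id for the smaller model
    allow_patterns=["*.safetensors", "*.json"],   # weights, config and tokenizer files
)
print("Weights downloaded to:", local_dir)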


Why OpenAI is changing tack

After years of focusing on closed source technology, OpenAI changed tack following the emergence of China’s DeepSeek, which showed the world that an open source language model could be built at a fraction of the cost some of its competitors spent to develop comparable models. Meta has also found success with its open weight model, Llama, which has crossed more than a billion downloads, even though developers have complained that the model’s licence terms can be commercially restrictive.

OpenAI currently offers its AI models through a chatbot and the cloud, unlike rivals such as Meta, whose models can be downloaded and modified by users.

In a recent Reddit Q&A, OpenAI CEO Sam Altman said that the company has been on the wrong side of history when it comes to open sourcing its technologies. “[I personally think we need to] figure out a different open source strategy,” Altman said. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority… We will produce better models, but we will maintain less of a lead than we did in previous years.”

Earlier, OpenAI had published a feedback form on its website inviting “developers, researchers, and [members of] the broader community” to share their views, with questions such as, “What would you like to see in an open weight model from OpenAI?” and “What open models have you used in the past?”.
