Under the project code-named "Strawberry," OpenAI is reportedly developing advanced AI models with enhanced reasoning capabilities.
According to internal documents, Project Strawberry aims to automate deeper information gathering and to enable OpenAI's AI models to plan ahead for more complicated tasks.
OpenAI aims to empower its AI models to conduct "deep research," marking a substantial leap beyond their current capabilities. The project builds on earlier work from project Q* (pronounced "Q star"), which demonstrated an ability to answer tricky science and math questions.
While many details about Project Strawberry remain undisclosed, insiders claim that the workings of Strawberry are strictly confidential even within OpenAI.
It is unclear how far along Strawberry's development is. Some media outlets have reported that OpenAI internally demonstrated a model with "human-like" reasoning skills, but they did not specify whether that system was related to Strawberry.
Controversies Surrounding OpenAI and Project Strawberry
Recently, a group of OpenAI employees sent a seven-page letter to the U.S. Securities and Exchange Commission (SEC) detailing their concerns about the company's technology and its potential risks to humanity.
The employees accused OpenAI of silencing staff on AI risks, raising questions about transparency and accountability.