OpenAI Chief Researcher Quits: Claims The Company Not Worried About Safety Of AI Systems
OpenAI is building AI tech and chatbots at a rapid pace without paying enough attention to the safety of these systems, he warned.

OpenAI is chasing shiny products rather than focusing on the safety of its AI systems and processes, claims Jan Leike, who recently resigned from his role as an AI researcher at OpenAI. Leike has criticised the company, warning that the rapid advancement of AI must be kept under control before it becomes a danger to all of humanity.

His long post on X earlier this month highlights the concerns that OpenAI employees have raised with Sam Altman and his team over the company's priorities. Leike's departure was confirmed just a few hours after chief scientist Ilya Sutskever announced his own exit from OpenAI.

Losing Sight Of Safety Spells Danger

Leike has been a core part of OpenAI's growth in recent years and was part of the team building AGI technology at the company.

But he has been sceptical of OpenAI's approach to the technology and of its plans to pursue these futuristic use cases without giving due weight to the security risks and concerns AI poses to humans. He also described a disagreement with OpenAI over its roadmap, which appears to have fast-tracked his decision to leave the company.

“I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he mentioned in the post.

Altman appears to be a divisive figure within OpenAI, and these latest departures suggest his decision-making has come under question frequently of late. AI technology is evolving fast, and that pace is proving to be a major concern not only for governments across the globe but also for people working at companies like OpenAI.

Progress That Has Everyone Rightly Worried

Anyone who caught a glimpse of GPT-4o earlier this month can see how quickly AI is growing smarter and beginning to rival human intelligence. Leike's comments underline the need for OpenAI to ease off the so-called shiny products and instead build a safety culture and AI systems that allow AI and humans to flourish together, rather than compete with or even surpass us in the near future.

He ended the post by saying, "OpenAI must become a safety-first AGI company," and it is now up to Altman and Co. to take a more structured approach to building AI for the future.
