OpenAI CEO Sam Altman has refuted rumours that GPT-5 is already in training.

Speaking at an event at the Massachusetts Institute of Technology (MIT), Altman was questioned about the open letter circulating in the tech world calling on AI labs to put the brakes on developing any new AI systems more powerful than GPT-4.

However, with so many signatories to the letter and outside experts also weighing in on the current state of the AI world, it has proven difficult for industry leaders to reach a consensus.

The overall goal of the open letter is to safeguard humanity from harmful AI, and crucially to determine what a pause that does not compromise innovation might look like.

Sam Altman has made it very clear that the version numbers attached to the GPT models hold little meaning, and that talk of releasing GPT-5 so soon after GPT-4 makes little sense. He states that there are still plenty of opportunities to make GPT-4 smarter, more efficient, and most importantly, safer.

“An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won’t for some time. So in that sense, [the letter] was sort of silly”.

Sam Altman

This is especially true given that OpenAI recently released a plugin store to further enhance the capabilities and experience of the premium version of ChatGPT. Some plugins let it read and summarise a PDF, while others improve its ability to solve complex mathematical problems. One particularly popular plugin lets it browse the internet and answer questions about the page itself.

Privacy concerns, however, have prompted many to question how safe that last capability really is.

However, the significant leaps being built into ChatGPT can be a cause for concern. As capabilities increase, so does the risk of harm, and the safety bar will have to rise accordingly to protect against misuse.

Although Altman acknowledged that current AI systems are far from perfect, he recognised the importance of engaging people in discussions about AI. However, as time goes on, OpenAI's lack of transparency might cause some to lose faith in how the company operates and to opt instead for open-source models that show how they are built and trained.

With AI development continuing at breakneck speed, it is no surprise that we become overstimulated by each shiny new release. We are always looking for the next big thing or tool that will revolutionize the industry.

However, we must also recognise that simply increasing the parameter counts of LLMs will not mean much on its own; attention now needs to shift to the safety side of AI.

Since this article was written, Elon Musk has launched his own AI venture, named xAI. It will be interesting to see how this might change the future plans of Sam Altman and OpenAI.

That being said, given the recent breakthroughs within the open-source community, it would not be surprising to see OpenAI release GPT-5 or a beefed-up version of GPT-4. The question is: are we moving too fast, and are we even ready?