An AI task force advisor to the UK government has warned that humans have a narrow window of two years to control and regulate AI before it becomes too powerful.
Matt Clifford is an advisor to the UK government and chairs the Advanced Research and Invention Agency (ARIA). In an interview with TalkTV, he discussed these risks, warning that humans have a small window in which to better understand and regulate AI before it becomes too powerful to safeguard against significant threats.
Given the rapid pace of development within the AI industry, it comes as no surprise that numerous industry leaders and experts have signed a letter calling for a pause, or at least a slowdown, in the development of new and existing systems. The aim is to allow a robust and secure AI safety framework to be established to prevent future tragedies from occurring.
This also comes during a tense period of AI hysteria, following reports, later confirmed to be false, of an AI pilot that disobeyed orders.
Keeping up with the rapid pace of advancement poses a considerable challenge. However, some countries have taken a more pragmatic approach, with the UK being more open to cooperation with tech giants such as OpenAI and DeepMind.
Mr. Clifford later highlighted that AI could be used to create bioweapon recipes and launch large-scale cyberattacks, referring repeatedly to the previously mentioned letter signed by 350 AI experts.
However, one may argue that this rise in artificial intelligence paranoia will only stifle advancement and innovation. With fears of potential job losses circulating on social media, there are concerns that growing anxiety could erode optimism about the technology and ultimately reduce investment.
That said, Mr. Clifford does not paint a completely bleak picture, and he goes on to express reasons for optimism.
Returning to the open letter, many argue that the main reason AI heavyweights such as OpenAI, Meta and Microsoft sign such letters and pledges is the rapid emergence of open-source competitors, which are already approaching the capabilities of ChatGPT and the like.
They believe these tech giants favour a pause because it would allow them to monopolise the market and lobby for regulations that would disproportionately affect the open-source community. With open source struggling, the way would be paved for these massive corporations to dominate AI entirely.
With limited time to act and artificial intelligence growing at an exponential rate, policymakers, researchers and governments must collaborate. In doing so, they should focus their efforts on minimising risks, on AI safety, and on encouraging further development without becoming mired in bureaucracy.
They must quickly put their differences aside and recognise the significant dangers AI may pose if left unchecked for too long. But are we already too late? Is Pandora's box already open?
[…] leaps being phased into ChatGPT can be a cause for concern. As capabilities increase, so does the risk of danger, and with that increased risk more security is needed and the safety bar may have to rise […]