Artificial intelligence (AI) is currently one of the most discussed topics. Appearing in virtually every field, AI has steadily proven its usefulness and influence. The most prominent example is probably ChatGPT, an application that has demonstrated both the flexibility and the rapid progress of artificial intelligence in recent years. Surprisingly, however, OpenAI, the team behind ChatGPT, has recently called on stakeholders to join hands in controlling and restraining artificial intelligence.
In a note posted on the company’s website, senior figures at OpenAI, including co-founders Greg Brockman and Ilya Sutskever as well as CEO Sam Altman, emphasized the need for an international regulatory agency dedicated to monitoring, testing, and setting limits on the development of artificial intelligence, in order to prevent risks that may arise in the future as effectively as possible.
Specifically, the trio argues that within the next 10 years, artificial intelligence could advance to the point of surpassing expert skill levels in most fields. On the positive side, AI will contribute a great deal to the world’s development, but it also carries significant downsides, the most striking of which is that humanity may have to confront a superintelligence more powerful than any previous technology, a scenario long depicted in science-fiction films. Researchers have warned about the potential risks of superintelligence for some time, and the rapid pace of AI development in recent years has brought those warnings back into public discussion.
Consider, for example, a future scenario in which AI performs work for humans more than ever before. People could then lose their self-reliance and become completely dependent on machines. In short, as noted above, AI is valuable in many fields, but it also carries many potential risks, and in the current context it is fitting that the creators of ChatGPT are the ones sounding the warning.