
The Dilemma of AI


Sam Altman, co-founder of OpenAI, the company behind ChatGPT, was fired and then reinstated within five days, an episode that underscored the importance of ethics in AI development.


The ethical conflict surrounding Altman's dismissal ended with OpenAI choosing to push ahead with faster development and greater commercialization of AI. On the day Altman's return was announced, The New York Times interpreted the outcome as a victory for the 'capitalist team' over the 'Leviathan team' and predicted that OpenAI's commercialization strategy for ChatGPT would accelerate.



The risks of AI developing self-consciousness or surpassing human intelligence have long been discussed. Accordingly, warnings have mounted that safety and ethics should take precedence over development speed and commercial interests in AI development.


AI is becoming increasingly intelligent. The approaching singularity, at which AI surpasses human intellectual capabilities and becomes 'superhuman', could unlock unprecedented opportunities and transform the world, while also bringing unforeseen dangers. AI is a double-edged sword: a blessing if used correctly, but dangerous if misused.



According to the '2035 AI Future Outlook Report', a survey Pew Research conducted in October among some 10,000 experts, 42% of respondents held equal levels of hope and concern about AI by 2035, 37% felt more concern than hope, and 18% felt more hope than concern.


Such concerns among experts are building a consensus on the need for stringent AI regulation, and international cooperation on AI regulation is gaining momentum.


Europe is currently the most proactive in AI regulation. In June, the European Union (EU) Parliament passed a draft of the AI Regulation Act (EU AI Act) to prevent risks and discrimination caused by AI. The law, to be implemented in 2026, involves direct government intervention in regulating high-risk AI. The UK announced the establishment of an AI Safety Research Institute in October.


The United States is also strengthening AI regulation. President Joe Biden signed an executive order on AI technology development on October 30, which focuses for now on managing AI-generated content and curbing 'fake news' spread by AI.


On November 2, the UK hosted the first AI Safety Summit at Bletchley Park, the site of the World War II Enigma code-breaking operation and the birthplace of British computer science. Representatives from 28 countries, including the UK, the US, France, South Korea, and Japan, along with the EU, global AI companies, and academia, participated to discuss the balance between regulation and innovation in AI.


At this summit, the leaders agreed on the 'Bletchley Declaration', stating that AI has the potential to contribute to human prosperity, but that to realize this potential it must be designed, developed, deployed, and used safely. The declaration is the first joint statement in response to AI threats. In addition, the UK plans to publish a joint report assessing the potential risks of AI, overseen by Yoshua Bengio, a leading authority on deep learning and a prominent voice of caution about AI.


Yuval Noah Harari emphasizes that, in facing unprecedented threats like AI, the focus should be on establishing regulatory institutions rather than on specific regulations themselves.



The 'AI dilemma' is growing: AI developed for human benefit could cause significant harm if its unpredictable nature is not adequately controlled.


Could AI eventually replace humans? Is there a possibility of it attacking its creators? Could it become a significant threat to humanity?


The recent internal conflict at OpenAI is a significant event that prompts reflection on how rapidly advancing AI will affect humanity and what preparations this future requires.
