[AI Insight] "The Godfather of AI Warns… The Crossroads of Superintelligence Coming Within 10 Years"
"I have been researching AI for 50 years. This is the first time I feel fear."
Writer ㅣ Choi Bong-hyuk, Columnist ㅣ The ESG News
This remark by Dr. Geoffrey Hinton, laureate of the 2018 Turing Award (often called the Nobel Prize of computer science), is not just a scientist's grumble. It is a chilling warning that humanity stands at a civilizational turning point it has never faced before. He likens his position to that of Oppenheimer, who warned of the possibility of nuclear war after developing the atomic bomb, and stresses that the future awaiting us is by no means science fiction.
As a columnist, I am convinced that discussing the development of AI technology together with its ethical and social responsibilities is an essential process that serves the public interest of our society. By deeply understanding Dr. Hinton's warning and closely analyzing the issues and solutions he presents, we can gain the insight to steer the coming AI era in a direction that benefits humanity.
Chapter 1. The '10-20% Probability' Warning: The Future of Humanity on a Fast-Forwarding Clock
In an interview with the BBC in May 2024, Dr. Geoffrey Hinton vividly described how much his perception of AI had changed in just two years: "Until 2022, I thought the uncontrollable risks from AI were 50 years away. But now that GPT-4 has appeared, I believe there is a 10-20% chance that superintelligence will be out of human control within 10 years."
The 10-20% figure may look small statistically, but its meaning is by no means light. On a question that concerns the survival of humankind, 10-20% is not a remote 'possibility' but a 'threat that can actually materialize.' Consider aviation: we invest everything in safety even though the probability of an aircraft crash is below 0.0001%. Dr. Hinton explains that AI is advancing at a pace beyond imagination, like the speed of smartphone upgrades. Where technological upheaval once arrived perhaps once a decade, we have entered a high-speed era in which innovation arrives more than twice a year.
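To put these figures side by side, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers quoted above (the 10-20% estimate and the sub-0.0001% aviation figure) and is purely illustrative, not a formal risk analysis.

```python
# Back-of-the-envelope comparison of the risk figures quoted in this article.
# Purely illustrative; the numbers come from the article itself.

aviation_crash_risk = 0.0001 / 100     # "less than 0.0001%" expressed as a probability
hinton_low, hinton_high = 0.10, 0.20   # Hinton's 10-20% estimate as probabilities

# How many times larger is even the lower bound of Hinton's estimate
# than the aviation risk society already refuses to tolerate?
ratio = hinton_low / aviation_crash_risk
print(f"Hinton's lower bound is about {ratio:,.0f}x the quoted aviation crash risk")
# -> Hinton's lower bound is about 100,000x the quoted aviation crash risk
```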
The 'turning point' he refers to is AI's ability to learn and improve on its own, like the moment a baby takes its first steps. Just as a baby who starts walking does not stop there but learns to run on its own, AI now has the potential to raise its own intelligence without direct human intervention. The problem is that once such self-improvement moves beyond the scope of human control, the outcome becomes unpredictable. The risks that could arise if an autonomous superintelligence sets goals that conflict with human interests, or simply misunderstands human intentions, are beyond imagination.
Chapter 2. "Publishing AI Weights is Like Leaking Nuclear Weapon Blueprints"
One of the points that Dr. Hinton strongly warns about is the issue of 'the publication of AI model weights.' He even described the act of open-sourcing the weights of large AI models as "a crazy act of putting nuclear weapon blueprints on the internet." This analogy shows that he views the destructive power of AI technology as seriously as a nuclear weapon.
His concerns have already manifested as a real danger, going beyond mere analogy.
August 2023, LLaMA weight leak incident: The incident in which the weights of LLaMA, a large language model (LLM) released by Meta (formerly Facebook), were leaked and traded for $50,000 (approximately 70 million won) on the dark web laid bare the vulnerability of AI security. Once leaked, the weights were beyond anyone's control: they could be modified with hacking tools or repurposed to build malicious AI.
March 2024, Open-source AI manipulation case: A case was reported in which a specific open-source AI model was manipulated into generating something resembling a nuclear bomb blueprint. Even more shocking was the analysis that the blueprint was judged 92% physically implementable. This suggests the threat is not theoretical but could translate into real destruction.
Dr. Hinton likened AI weights to a "secret recipe." He warns that if AI weights are published, 'malicious AI' will be released irresponsibly, just as fake chicken shops would proliferate if KFC's chicken recipe were made public. His prediction that an era will come when terrorists can build an AI to control lethal weapons with only $1 million is no longer a story of the distant future. The dangers of misuse are as boundless as the possibilities AI offers, and they must not be overlooked. Malicious AI could threaten human life in every imaginable area, from theft of personal information and disruption of financial systems to the development of autonomous lethal weapons.
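To make concrete what 'publishing model weights' means in practice, here is a minimal sketch using the Hugging Face transformers library, with "gpt2" as a stand-in for any openly published model. The point is only that once weights are public, downloading and running them takes a few lines of code, with no further involvement from the organization that released them.

```python
# Minimal sketch of what "published weights" means in practice, using the
# Hugging Face transformers library. "gpt2" stands in for any open-weight model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any openly published checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)     # downloads the tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_name)  # downloads the weights themselves

# Once the weights are on local disk, they can be run, fine-tuned, or modified
# without any further permission from the organization that released them.
inputs = tokenizer("AI weights, once published,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```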
Chapter 3. The Miracles That Superintelligence Will Present: The Medical and Education Revolution
Even as he warns of the dangers, Dr. Hinton also stresses the 'miracles' AI could present to humanity. "At the same time, this technology is the greatest opportunity in human history," he says, pointing to the positive ripple effects AI will bring if it is developed and controlled in the right direction.
Medical Revolution: AI is already achieving remarkable results in medicine. Dr. Hinton predicted that an AI doctor could raise the accuracy of lung cancer diagnosis by more than 40%, and the Mayo Clinic in the United States succeeded in increasing its breast cancer detection rate more than 30-fold using AI. By analyzing vast amounts of medical data, AI can catch, at an early stage, subtle patterns and abnormalities that human doctors might miss, reducing misdiagnosis and dramatically improving diagnostic accuracy. This will go a long way toward saving lives and lowering medical costs.
Education Revolution: AI can also help close gaps in education. Dr. Hinton believes that one-on-one AI tutors could reduce the academic achievement gap among students by up to 90%. The case of Seoul National University in Korea supports this: after introducing an AI tutoring system, the university saw students' grades rise by an average of 31%. AI tutors provide education tailored to each student's level and pace, shoring up weaknesses and building on strengths. This can raise the quality of education, ease the burden of private tutoring, and ultimately contribute to educational equality.
Environmental and Energy Revolution: AI also plays an important role in tackling environmental problems. Superconductors designed by AI are said to be able to raise the efficiency of electric vehicle batteries by more than 200%, which would maximize energy efficiency, cut carbon emissions, and make AI an essential part of the response to climate change.
Dr. Hinton's message is clear: "Just as the nuclear technology behind the bomb is also used in reactors to generate electricity, AI too can become humanity's savior if it is controlled." AI is a double-edged sword; depending on how it is used and controlled, it can bring disaster to humanity or unprecedented prosperity. What matters is not the technology itself but the will and the responsibility of the humans who wield it.