
Professors Max Tegmark and David Krueger Discuss the Impact and Risks of AI Development

Time: 2023-06-12
On the morning of June 8th, the third session of the "Dialogue with Great Minds" series took place at the Turing Auditorium. The theme of this event was "The Future of AI and Society: A Conversation on the Impact and Risks of AI Development." Hosted by Professor Zhang Ya-Qin, a member of the Chinese Academy of Engineering and President of AIR, the event featured Professor Max Tegmark of MIT, founder of the Future of Life Institute and author of "Life 3.0: Being Human in the Age of Artificial Intelligence," and Professor David Krueger, who leads a research group in the Computational and Biological Learning Lab (CBL) at the University of Cambridge. The three experts held in-depth discussions on the potential impact, risks, and regulation of AI development.
Professor Zhang Ya-Qin opened the dialogue by referring to Professor Max Tegmark's book, which frames life in the age of artificial intelligence in three stages: "Life 1.0," life of biological origin; "Life 2.0," the cultural stage of human society; and "Life 3.0," the technological stage, in which systems such as artificial general intelligence (AGI) can learn and redesign their own hardware and internal structures. The book examines the short-term effects of advanced technology, including technological unemployment, AI weapons, and the pursuit of human-level AGI, as well as longer-term consequences such as changes in social structures, human-machine integration, and a range of positive and negative scenarios. So, where exactly are we in this era, and how far are we from the age of "Life 3.0"?
Professor Max Tegmark stated that we are currently in an era of both opportunity and risk, which is extremely exciting. In just a few months, we have witnessed large-scale models and their applications develop rapidly, sweep across the globe, and reshape how human society produces and lives. Everything seems to indicate that the era of AGI is imminent. However, while embracing the convenience and benefits that AGI can bring to humanity, Professors Tegmark and Krueger urge everyone to be aware of the potential risks and challenges posed by AI, and to make AI research and use more controllable and secure. Otherwise, we may be making the gravest mistake in human history.
Concerned about the impact and potential risks of AI development, Professor Max Tegmark founded the non-profit Future of Life Institute (FLI), which aims to reduce global catastrophic and existential risks facing humanity, particularly those posed by advanced artificial intelligence. Meanwhile, the Computational and Biological Learning Lab (CBL) at Cambridge, where Professor David Krueger leads his research group, takes engineering approaches to understanding the brain and developing artificial learning systems, while closely monitoring the risks that arise from the lack of interpretability in AI systems.
Furthermore, the three professors recently co-signed the "Statement on AI Risk," emphasizing that potential risks must not be overlooked in the pursuit of technological advancement. During this dialogue, they discussed the risks and challenges of AI's future development and how we should regulate and respond to them. They identified several main risks and challenges:
  1. Lack of interpretability: Despite the incredible progress in AI development, we still do not fully understand the deeper, more fundamental reasons for AI's success. This gap in understanding limits our ability to explain the mechanisms behind AI's behavior and to manage the risk of it running out of control.

  2. Possibility of AI running amok: Because of this lack of interpretability, humans cannot accurately judge or predict AI behavior, so AI may act in unexpected ways. While such unpredictability may be acceptable for large language models like GPT-4, which simply answer human questions, it would be catastrophic if we allowed AI to control nuclear weapons or biotechnology, or to take part in military operations.

  3. Mismatch between AI development and safety research: AI development is progressing much faster than research on AI safety and reliability. The actual controllers of AI development are often not policymakers but companies driven by profit, and this often leads them to neglect the safety and reliability of AI systems in favor of maximizing returns.
To address and regulate the potential risks of AI development, Professor Tegmark suggests using AI itself to improve its own interpretability. By using AI to help humans construct mathematical proofs, or to certify its own safety, we can reduce the complexity of AI systems and help people understand them better. Once we can explain AI behavior within a comprehensive theoretical framework, AI development can remain fully under human control, mitigating potential risks. This does not mean that we need to stop AI research, but rather that it must undergo sufficient scrutiny and verification during development and deployment to avoid risks such as AI running out of control.
Professor David Krueger emphasizes that AI is not a zero-sum game and that humans should slow down the "AI arms race." Once global society becomes aware of the potential risks AI may pose, it is essential that we unite to collectively address them.
Lastly, from the researchers' perspective, scientists have a responsibility not only to develop new technologies but also to work with governments to ensure that the general public can use AI safely and securely.
In conclusion, these professors highlight the importance of addressing the potential risks and challenges of AI development. They advocate for using AI to improve its own interpretability, slowing down the AI arms race, and promoting collaboration between researchers and governments to ensure the safe and secure use of AI. By taking these measures, we can harness the benefits of AI while minimizing the potential risks it may pose to humanity.
In the dialogue, Professor Zhang Ya-Qin emphasizes the 3R principles that should guide the development of artificial intelligence: Responsive, Resilient, and Responsible. He states that when researching theories, algorithms, and application models, we must consider the significance of the technology and its potential consequences, placing ethics and values above technology. Only by doing so can we better promote technological development and build a stronger relationship of trust.
At AIR, we are committed to creating responsible AI. We aim to understand the underlying foundations of different industries, analyze the impacts and consequences that technology will bring, and advance the Fourth Industrial Revolution through technological innovation, international cooperation, governance, and widespread use of artificial intelligence.
By adhering to the 3R principles and considering the ethical implications of AI, we can create a more inclusive, sustainable, and beneficial future for humanity.


Max Tegmark is a Professor at MIT researching physics and artificial intelligence. He is also President of the Future of Life Institute and Scientific Director of the Foundational Questions Institute. His research has spanned cosmology and the physics of cognitive systems, and is currently focused on the interface of physics, neuroscience, and AI. He is the author of more than 200 publications and the bestselling books Life 3.0: Being Human in the Age of Artificial Intelligence and Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. His work with the Sloan Digital Sky Survey on galaxy clustering shared first prize in Science magazine's "Breakthrough of the Year: 2003." He has received a Packard Fellowship, a Cottrell Scholar Award, and an NSF CAREER Award. He earned a PhD in physics from the University of California, Berkeley.


David Krueger is an Assistant Professor at the University of Cambridge. He is a member of Cambridge's Computational and Biological Learning Lab (CBL), where he leads a research group focused on Deep Learning and AI Alignment. David's current research interests include: 1) formalizing and testing AI Alignment concerns and approaches, especially those to do with learning reward functions; 2) understanding Deep Learning; and 3) techniques for aligning foundation models. His previous research has spanned many areas of Deep Learning, including generative modeling, Bayesian Deep Learning, empirical theory, and robustness. He is also a CSER research affiliate. He previously studied at Mila / University of Montreal and Reed College; interned at the Future of Humanity Institute, DeepMind, and Element AI; and worked as a contract writer for the Partnership on AI and as a career counselor for 80,000 Hours.


