
Dean Zhang Ya-Qin: The 3R Principles for AI Development

Published: 2020-12-30

On December 18, 2020, the inaugural Tsinghua University International AI Cooperation and Governance Forum opened in Beijing. Zhang Ya-Qin, Dean of the Institute for AI Industry Research (AIR) at Tsinghua University, attended the forum and delivered a keynote speech, stressing that the development of artificial intelligence should follow three "R" principles: Responsive, Resilient, and Responsible.


In the main session, "International Governance of AI in the Post-Pandemic Era: Achieving a Resilient Future," Dean Zhang Ya-Qin set out the 3R principles that should guide the governance of AI technology:


At AIR, we look at AI technology and industry research with the following three R principles:


01 Responsive


We are committed to researching technologies that respond to the needs of society and industry, for example:


  • Data-analytic models for predicting and preventing pandemics

  • Deep learning algorithms that accelerate drug discovery

  • Applications that improve personal health

  • Autonomous driving and smart transportation that save lives, reduce traffic accidents, and improve the environment


Responsive AI: We choose to work on technologies that are responsive to the needs of society and industry: for example, data and analytic models that predict and prevent pandemics, deep learning algorithms that accelerate drug discovery, technologies that improve personal health, and autonomous driving and smart transportation that save lives, reduce traffic, and improve our environment.


02 Resilient


  • Advance the technology so that it becomes more transparent, more explainable, and more robust

  • Reduce data bias and leakage, model vulnerabilities, and algorithmic attacks, and address the associated security risks

  • Develop new algorithms that build not only on big data and computing power, but also on knowledge of, and reasoning about, causal and logical relationships


Resilient AI: We must advance our technologies to become more transparent, explainable, and robust. We must reduce data biases and leakage, model vulnerabilities, algorithmic attacks, and security risks. We need to develop new algorithms that embrace not only big data and computing capability, but also knowledge and inference with causality and logic.


03 Responsible


  • When working on theory and algorithms, and especially on application models for different industries, we must keep one key principle in mind: understand what the technology itself means and what consequences it may bring

  • In research, people usually focus on the "how" while neglecting the "why". In the age of AI, researchers must understand how a technology is used and how it may be misused, and must place ethics and values above the technology itself


Responsible AI: When we work on theory, algorithms, and especially applications to different industries, we must keep in mind the key underlying principles and understand the consequences and implications of our technologies. As scientists and technologists, we often focus on the HOW and ignore the very questions of WHY and WHAT. That is not acceptable in the age of AI: we must understand the use and potential misuse of technologies – putting ethics and values above the technology itself.


AIR is committed to responsible AI


  • Facing the rapid development of AI, Zhang Ya-Qin proposed the 3R principles, hoping that the guidelines of Responsive, Resilient, and Responsible will help the technology empower industry.

  • The Institute for AI Industry Research (AIR) at Tsinghua University aims to build responsible AI: understanding the foundations of each industry, analyzing the impacts and consequences the technology will produce, and advancing the fourth industrial revolution through technological innovation, international cooperation and governance, and the large-scale use of AI.


About the Forum




The inaugural Tsinghua University International AI Cooperation and Governance Forum addressed the role of AI in fighting the COVID-19 pandemic, promoting sustainable development, maintaining international security, advancing international cooperation, protecting data security, and other topics.


Nearly 70 leading figures in AI and more than 60 institutions took part. The forum was held with the cooperation and support of the Tsinghua University Institute for Artificial Intelligence, the Brookings Institution in the United States, SPARK (the UNDP innovation lab for sustainable development), and other organizations.



Zhang Ya-Qin's Remarks at the Forum


It is a great honor to join today's forum. Let me begin with the topic of artificial intelligence. We are now at the end of 2020, and this has been a truly unforgettable year.


Looking back on the year, AI technology has kept advancing in algorithms and data models that represent the state of the art, trained, for example, with tens of thousands of GPUs (graphics processing units) and enormous amounts of computation.


We can see AI being applied in every field; whether in healthcare, education, transportation, or finance, it is developing rapidly. Cities across China are running pilots: Guangzhou, Beijing, Changsha, and others. Bringing AI applications into the mainstream at large scale still faces many issues, such as policy, regulation, allocation of liability, and user habits. Even so, AI has made great progress in algorithms, technology, and commercial trials.


In addition, AlphaGo has continued to iterate, and AlphaFold has achieved a major breakthrough in predicting protein structure. The corresponding experiments normally take a great deal of time, manpower, and money; with algorithms like AlphaFold, research and medicine can now advance in ways never seen before.


Tsinghua has just created a new institution of this kind, the Institute for AI Industry Research (AIR), with the goal of empowering industry and working with R&D centers from around the world to advance the fourth industrial revolution. Tsinghua's work on AI rests on three mechanisms, which complement one another and together advance fundamental research and its applications.


At AIR, the Institute for AI Industry Research at Tsinghua University, we study AI technology and industry development and aim to build responsible AI. We work on technologies that serve society and industry: for example, data-analytic models that predict and prevent pandemics such as COVID-19, deep learning algorithms that accelerate drug discovery, technologies that improve personal health, and autonomous driving and smart transportation.


We are also working to improve explainability, reduce data leakage, and defend against attacks; Academician Zhang Bo just analyzed some very interesting examples, and security must be strengthened as well. We are developing new algorithms that also perform knowledge-based reasoning, drawing on human patterns of thinking, such as the third-generation AI that Academician Zhang Bo just described. This is a key direction of our research.


Building responsible AI means that for AI applications in every industry, we must understand the underlying foundations and the impacts and consequences they will produce. In scientific work we have tended to focus on the how and forget the why. That may be fine for an engineer, but we want to look at this from multiple dimensions and consider how to keep technology from being misused. Since the theme of today's forum is AI governance, I would like to offer a few suggestions:


First, hold active discussion and exchange, with dialogue at both the national and international levels, to address the most pressing issues, whether in autonomous driving, automated systems, computing, or the impact on jobs. The Brookings Institution in the United States and the Minderoo Foundation in Australia have worked closely together and courageously studied the risks of military uses of AI. I admire this greatly; it is the most sensitive topic and one of deep political significance.


Second, deploy new technologies for handling sensitive data, for example by computing on encrypted data, and by using privacy-preserving methods that extract a person's features and attributes without revealing their identity, so that security is better protected. These technologies are very important and let us deploy AI more responsibly; beyond them, questions such as policy and the purpose of use also deserve attention.


I see that my time is nearly up. I am very excited about AI. In short, only through technological innovation, international cooperation and governance, and the large-scale use of AI can we fully drive the development of the intelligent industries of the future.



Speech on AI Governance at Tsinghua University

December 18, 2020, Ya-Qin Zhang


Good morning, ladies and gentlemen.


First of all, let me just say how relevant and timely it is to have a forum on this critical topic of AI governance – as we approach the end of 2020, certainly the most eventful and difficult year in our recent memory.


We continued to advance on the AI technology front with large models, large data, and large computing. An example is OpenAI's GPT-3, a pre-trained transformer for natural language processing that represents state-of-the-art AI capability with over 170 billion parameters, trained with over 10,000 GPUs and a quarter million CPU cores.


We see the rapid application of AI and deep learning in every segment of industry, whether healthcare, education, transportation, finance, or manufacturing.


For example, significant progress is being made in autonomous driving. In China, some 30 cities began commercial robotaxi trials this year. I was in Guangzhou just over a week ago to test out the Apollo robotaxi, after Changsha, Beijing, and a few other cities. While it will still take some time to reach the mainstream in large volume – as you know, there are many non-technical issues: policies, regulations, liabilities, ethics, and driving habits – I do see tremendous strides being made in algorithms, technologies, and commercial trials.


Another example is AlphaFold 2, continuing the remarkable path of AlphaGo, AlphaZero, and AlphaFold 1. AlphaFold 2 achieved a breakthrough in predicting 3D protein folding structure, with accuracy close to that of real experimentation, which usually takes long cycles and very expensive instrumentation and human resources. Indeed, new algorithms like AlphaFold can transform drug discovery and medicine at an unprecedented pace.


Just two weeks ago, we officially started a new institute at Tsinghua University, the Institute for AI Industry Research (AIR). AIR aims to become an open and global R&D center that develops core technologies, empowers industries, and advances society, leveraging the new wave of the fourth industrial revolution.


As President Qiu Yong envisioned and articulated, there are three key pillars for AI at Tsinghua University: one for academic research (the AI Research Institute), led by Prof. Zhang Bo and Prof. Zhang Yaoxue; one for international governance (A-IIG), led by Madam Fu Ying and Prof. Xue Lan; and one for AI industry research (AIR), headed by myself and a few colleagues. The three complementary institutes address fundamental research, applied industry technologies, and their societal impacts and implications in a holistic way.


For AIR, we look at AI technology and industry research with the following three R principles:


Responsive AI: We choose to work on technologies that are responsive to the needs of society and industry: for example, data and analytic models that predict and prevent pandemics, deep learning algorithms that accelerate drug discovery, technologies that improve personal health, and autonomous driving and smart transportation that save lives, reduce traffic, and improve our environment.


Resilient AI: We must advance our technologies to become more transparent, explainable, and robust. We must reduce data biases and leakage, model vulnerabilities, algorithmic attacks, and security risks. We need to develop new algorithms that embrace not only big data and computing capability, but also knowledge and inference with causality and logic.


Responsible AI: When we work on theory, algorithms, and especially applications to different industries, we must keep in mind the key underlying principles and understand the consequences and implications of our technologies. As scientists and technologists, we often focus on the HOW and ignore the very questions of WHY and WHAT. That is not acceptable in the age of AI: we must understand the use and potential misuse of technologies – putting ethics and values above the technology itself.


On today's topic of AI governance, let me also offer the following two suggestions:


First, conduct active discussion and candid dialogue at the global level, through industry forums, think tanks, and international standardization efforts, to tackle some of the most pressing issues facing all of us, whether in autonomous driving, biological computing, face recognition, the elimination of jobs, and so on. I applaud the leadership of Madam Fu Ying at Tsinghua, along with the Brookings Institution in the US and the Minderoo Foundation in Australia; they have courageously worked on the boundaries, risks, governance, and framework of AI for military use – arguably the most difficult and critically important topic of all.


Second, actively deploy new technologies to deal with sensitive issues of privacy, security, data sovereignty, and governance: for example, federated learning, which can learn without actually sharing the original data (e.g., medical or financial data); multi-party secure computation, which can compute on encrypted data; differential privacy, which lets you share attributes and features without disclosing identities; and of course other new encryption and authentication algorithms for privacy and security.
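To make the federated learning idea concrete, a minimal sketch follows. It is not taken from the speech; the tiny linear model and helper names such as local_update and federated_average are illustrative assumptions. Each client trains on its own private data, and only model weights, never raw records, are exchanged with the server:

# Minimal federated-averaging sketch: each client improves the model on its
# own private data; only weight vectors (never raw records) leave the client.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local gradient-descent step for a linear model y ~ X @ w.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients, rounds=10):
    # Server loop: broadcast weights, collect local updates, and average them
    # weighted by each client's data size (the FedAvg rule).
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []  # two "hospitals" whose raw data is never pooled
    for n in (40, 60):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        clients.append((X, y))
    w = federated_average(np.zeros(2), clients)
    print("recovered weights:", np.round(w, 2))  # close to [2.0, -1.0]

In practice this averaging step would be combined with the other techniques named above, such as secure aggregation or differential-privacy noise on the shared updates, so that even the weight vectors reveal little about any individual record.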


Again, I am super excited about the transformative power of AI and the fourth industrial revolution. Only through technological innovation, international cooperation, and, more importantly, the proper use and governance of AI can we fully unleash the potential and power of this new wave of the industrial revolution.
