
Dean Zhang Ya-Qin: Open Dialogue Among All Sectors Is the Prerequisite for the Responsible Development of AI

Published: 2021-02-01

Against the backdrop of the continuing COVID-19 pandemic at home and abroad, the World Economic Forum's Davos Agenda 2021 was held online from January 25 to 29. Under the theme "A Crucial Year to Rebuild Trust," more than 1,500 leaders from government, business and civil-society organizations in over 70 countries and regions discussed how to address the economic, environmental, social and technological challenges posed by the pandemic, and sought ways to cooperate in fighting COVID-19 and reviving the global economy.

A world-class AI scientist and entrepreneur, and a frequent participant and key member of the Davos forum, Zhang Ya-Qin, Dean of Tsinghua University's Institute for AI Industry Research (AIR), was invited to the Davos Agenda dialogues and delivered remarks.


Video: Dean Zhang Ya-Qin's remarks


Zhang Ya-Qin at the World Economic Forum panel "Tech for Good"

Ya-Qin: I'm very happy to be part of the panel. My industry – tech, software and the internet – has come a very long way in terms of where it stands on AI ethics. There are three things that will make this work.

(1) The level of awareness, which has to start from the very top.

When I started the Tsinghua Institute for AI Research, one of my first emails stated the 3R principles: Responsive AI, Resilient AI and Responsible AI.

  • By Responsive AI, I mean the technologies we work on have to be responsive to the needs of industry and society – for example, autonomous driving, algorithms that accelerate drug discovery, and technologies that improve personal health.

  • Resilient AI asks us to make our technologies more transparent, explainable and robust. We need to work on things that reduce data bias and leakage, model vulnerabilities and security risks.

  • Being responsible means that engineers and scientists should make sure they put ethics and values above the technology itself.

I was the president of Baidu for a few years. At Baidu there was a committee for the privacy, security and governance of data, and I was its chair. We made sure we had the right people in a cross-company initiative.

(2) The second element, which is very important, is that you have to map this into the right domain for your product and your industry.

  • You have to start with the right data and avoid data bias: data integrity, the right of use, the scope of use and the life cycle (see the sketch after this list).

  • You have to develop and apply the right types of algorithms – deep learning, which is a bit opaque, as well as other algorithms that are more transparent. You need to make sure that logic, rules and knowledge are incorporated into the overall algorithm.

  • You have to control the training, learning and inference. An AI algorithm is like a baby or your dog: you have to train it to make sure it evolves and has the right kind of environment to grow.
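
As a concrete illustration of the data-bias point above, here is a minimal Python sketch of one common first-pass check – comparing positive-label rates across groups, sometimes called a demographic-parity gap. The dataset, field names and threshold are all hypothetical; real audits use far richer metrics and tooling.

    from collections import defaultdict

    def positive_rate_by_group(records, group_key, label_key):
        # Fraction of positive labels within each group.
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for r in records:
            counts[r[group_key]][0] += int(r[label_key] == 1)
            counts[r[group_key]][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    def demographic_parity_gap(records, group_key, label_key):
        # Largest difference in positive rates between any two groups.
        rates = positive_rate_by_group(records, group_key, label_key)
        return max(rates.values()) - min(rates.values())

    # Hypothetical usage: flag the dataset if the gap exceeds a chosen threshold.
    data = [
        {"group": "A", "label": 1}, {"group": "A", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 0},
    ]
    if demographic_parity_gap(data, "group", "label") > 0.2:
        print("Warning: possible label imbalance across groups")

A check like this is only a first screen – it says nothing about why a gap exists – which is why the data controls above pair it with integrity, scope-of-use and life-cycle management.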

(3) The last one, which is super important, is to have an operational framework – the right type of workflow, toolbox and decision-making process – otherwise it won't get anywhere.

When I was at Baidu, we had data agents who owned the flow, management and accountability of the data.

I just read some excellent work from the World Economic Forum, "AI Ethics Framework: tool set and playbook". I thought that was really important because a lot of the time you talk about AI and you have that level of awareness, but you don't have the right framework or the right type of tools to execute.

Synder: We were just talking about drawing a red line for technologies. You see this from different perspectives: you have the US view, because you came from Microsoft China, and the China view. What do you, the business leaders, need to understand about how countries view these issues differently?


Zhang Ya-Qin speaking at the World Economic Forum

Ya-Qin: Obviously, there's a common set of principles, values and ethics that we've talked about – the responsibility and accountability framework. It's also important to recognize that there are differences, just as the products you build for different customers – Chinese and US customers, for example – are different.

And we need to understand the regulatory differences – the rules and regulations in different regions. I'm happy to see that the US, the EU and China have each developed, over the last few years, a set of rules, regulations and policies. In China there's a lot of work among the ministries and agencies on security, privacy and data issues.

I see some great progress, and meanwhile we should understand the differences in markets, users and industries.

Synder: At Davos last year, some called for more regulation of AI. What will be most effective in the near term to ensure the responsible use of AI: international normative red lines, informal agreements between private-sector players, something else, or all of the above?

Ya-Qin:

  • First, there has to be an open and candid dialogue between governments, NGOs, academia and industry.

  • Second, obviously, let's make sure we have the right technologies and tools to reduce data bias and deal with sensitive issues such as privacy, security and data.

For example, there's been great progress in the last few years: homomorphic encryption, which allows operations to be performed on encrypted data; federated learning, which can train models without sharing the original data; differential privacy; and also techniques that add logic to algorithms. We have a lot of scientists working on this, to make sure AI is knowledge-based instead of only data-driven.
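
To make one of these concrete, here is a minimal, illustrative Python sketch of the Laplace mechanism, a standard way to realize the differential privacy mentioned above for a simple counting query. The data, query and epsilon value are hypothetical choices for illustration, not a production recipe.

    import random

    def laplace_noise(scale):
        # Laplace(0, scale) sampled as the difference of two i.i.d. exponentials.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(values, predicate, epsilon):
        # A counting query has sensitivity 1 (adding or removing one record
        # changes the count by at most 1), so Laplace(1/epsilon) noise
        # yields epsilon-differential privacy.
        true_count = sum(1 for v in values if predicate(v))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical usage: publish how many patients are over 60 without
    # letting the released number pin down any individual record.
    ages = [34, 67, 71, 52, 80, 45]
    print(private_count(ages, lambda a: a > 60, epsilon=0.5))

The released count is deliberately noisy, so no single record can be confidently inferred from it – the same "use the data without exposing the data" idea that motivates federated learning and homomorphic encryption.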



