A world-class AI scientist and entrepreneur, and a frequent participant in and key committee member of the Davos forums, Zhang Ya-Qin, Dean of Tsinghua University's Institute for AI Industry Research (AIR), was also invited to attend this Davos Agenda dialogue, where he delivered a speech.
Ya-Qin: I'm very happy to be part of the panel. From my industry – the tech, software and internet industry – we have come a very long way in terms of where we stand on AI ethics. There are three things that will make this work.
(1) The level of awareness, which has to start from the very top.
When I started the Institute for AI Industry Research at Tsinghua, one of my first emails stated the 3R principles: Responsive AI, Resilient AI and Responsible AI.
By Responsive AI, I mean the technologies we work on have to be responsive to the needs of the industry and the society, for example, autonomous driving, algorithms that accelerate drug discovery, technologies that improve personal health.
AI Resilience asks us to make our technologies more transparent, explainable and robust. We need to work on reducing data bias and leakage, model vulnerability and security risks.
Being responsible means that engineers and scientists should make sure they put ethics and values above the technology itself.
I was the president of Baidu for a few years. At Baidu there was a committee for the privacy, security and governance of data, and I chaired it. We made sure that we had the right people in a cross-company initiative.
(2) The second element, which is very important, is that you have to map this into the right domain, for your product and your industry.
You have to start with the right data and avoid data bias: data integrity, the right of use, the scope of use, and the life cycle.
You have to develop and apply the right type of algorithms – deep learning, which is a bit opaque, but also other algorithms that are more transparent. You need to make sure that logic, rules and knowledge are incorporated into the overall algorithm.
You have to control the training, learning and inference. The AI algorithm is like a baby or your dog. You have to train it to make sure it evolves and has the right kind of environment to grow.
(3) The last one, which is super-important, is to have an operational framework – the right type of workflow, toolbox and decision-making process – otherwise it won't get anywhere.
When I was at Baidu, we had data agents who owned the flow, management and accountability of the data.
I just read some excellent work from the World Economic Forum, "AI Ethics Framework: Toolset and Playbook". I thought that was really important because a lot of the time, when you talk about AI, you have that level of awareness, but you don't have the right framework or the right type of tools to execute.
Synder: We were just talking about drawing a red line for technologies. You are seeing this from different perspectives. You have the US view, because you were at Microsoft China, and the China view. What do you, the business leaders, need to understand in terms of how countries view these issues differently?
Ya-Qin: Obviously, there's a common set of principles, values and ethics that we've talked about – the responsibility and accountability framework. It's also important to recognize that there are differences, just as the products you build for different customers – Chinese and US customers, for example – are different.
And we need to understand the regulatory differences, understand the rules and regulations in different regions. I'm happy to see that the US, EU and China have developed, over the last few years, a set of rules and regulations and policies. In China there's a lot of work among the ministries and agencies in terms of security, privacy and data issues.
I see some great progress, and meanwhile, we should understand the differences in terms of the market, the users and industries.
Synder: At Davos last year, some called for more regulation of AI. What will be most effective in the near term to ensure the responsible use of AI? International normative red lines, informal agreements among private-sector players, or something else altogether?
Ya-Qin: First, there has to be an open and candid dialogue between governments, NGOs, academia and industry.
Second, obviously, let's make sure we have the right technologies and tools to reduce data bias and deal with sensitive issues such as privacy, security and data.
For example, there's been great progress in the last few years: homomorphic encryption, which allows operations to be performed on encrypted data; federated learning, which can train models without sharing the original data; differential privacy; and also technologies that can add logic to algorithms. We have a lot of scientists working on this, to make sure that it's knowledge-based instead of only data-driven.
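To make one of these techniques concrete, the core idea of differential privacy – releasing an aggregate statistic with calibrated noise so that no single record can be inferred – can be sketched in a few lines of Python. The function name, dataset and parameters below are illustrative, not from any specific library:

```python
import math
import random

def dp_count(values, threshold, epsilon):
    """Return a differentially private count of values above `threshold`.

    The true count has sensitivity 1 (adding or removing one record
    changes it by at most 1), so adding Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    # Sample Laplace(0, 1/epsilon) noise via inverse-transform sampling.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Example: a noisy count of people over 30, without exposing any individual.
ages = [25, 40, 17, 33, 60]
print(dp_count(ages, 30, epsilon=0.5))
```

A smaller `epsilon` means more noise and therefore stronger privacy for each individual, at the cost of a less accurate count – the trade-off regulators and engineers have to tune together.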