By Samuel Curtis
Brian Tse (谢旻希) is a Senior Advisor at the Partnership on AI and a Policy Affiliate with the University of Oxford's Centre for the Governance of AI. His current work focuses on improving global coordination in AI safety and security. He has advised or collaborated with OpenAI, Google DeepMind, the Carnegie Endowment for International Peace, Tsinghua University's Institute for AI, and the Beijing Academy of AI, the leading drafter of the Beijing AI Principles. He was the first Chinese representative to deliver a keynote presentation at the Asilomar Conference on Beneficial AI. Brian also advises several major AI authors on publishing and works part-time with 80,000 Hours, providing career advice to individuals pursuing AI safety and governance work in China.
Technologies that utilize neural networks, machine learning, and ambient computing are entering our lives nearly as fast as they cross our imaginations. The “pacing problem,” originally coined by Larry Downes in his 2009 book The Laws of Disruption, describes the inability of laws and regulations to keep up with the rate of technological innovation.
To address this, think tanks and research institutes around the world are trying to forecast dangers that may emerge from technologies before they arise. This includes not only intentionally crafted malicious systems, such as malware, but also inadvertent threats that could result from poorly thought-out or hastily designed technologies.
In recent years, one major task has been identifying, compiling, and sometimes ranking the principles (transparency, security, safety, etc.) that ought to dictate the development and implementation of technologies that utilize big data and artificial intelligence.
As someone working on the ground at several such institutes, including the University of Oxford's Future of Humanity Institute and the Partnership on AI, Brian Tse says that one of the greatest challenges is translating these theoretical ethical principles into technical practice.
“So far, there are more than fifty AI principles out there, but in terms of how to translate the ‘right to be forgotten’ into something an engineer can implement… there is still quite a gap. When people talk about ensuring the safety of autonomous systems, what does that mean in terms of properties, benchmarks, and metrics that you’re measuring? I think that needs to be solved by cross-disciplinary thinking and research.”
Another challenge faced by policy advisors is bridging the differences between Western and Eastern mindsets on governance and ethics. Tse points out that these ideological differences are reflected in international norms: China is one of the few countries not to have ratified the UN International Covenant on Civil and Political Rights, which obliges countries to protect individual liberties and political rights and subjects them to monitoring and reporting. Likewise, the United States is one of the few countries not to have ratified the UN International Covenant on Economic, Social, and Cultural Rights, as it views these as desirable social goals rather than rights, and fears that ratification would obligate reforms at odds with a free market.
These values come to a head on the issues of AI ethics and governance. “There is a thriving ecosystem of civil societies in the US, with organizations such as Human Rights Watch and ACLU wanting to bring their values to the AI conversations, and that space is very different in China. When you think about cooperation between China and the US on AI, civil societies want to participate in the conversation. But when they want to exert pressure on China, how should we think about that? Should their values apply equally to China? When you deal with AI ethics and governance, it seems very quickly that you have to think about everything—economics, politics, and ideologies.”
However, Tse is optimistic that channels exist for cooperation in the scientific community, which has historically played a critical role in mitigating emergent technological threats while governments were locked in arms races. For example, the Pugwash Conferences on Science and World Affairs, founded in 1957, served as a channel for communication between scientists from the US, the USSR, and other countries throughout the Cold War, and provided essential preparatory work for the Non-Proliferation Treaty and the Anti-Ballistic Missile Treaty, among others.
“I think that type of initiative coming from scientists and from the academic community could be a counter-force to more nationalistic forces that are impossible to escape and difficult to overcome, if you don’t have an equal power favoring cooperation and coordination on critical issues.”
Data privacy stands out to Tse as one area for collaboration by researchers from around the globe. Earlier this year, the People’s Republic of China established the National Governance Committee of Next Generation Artificial Intelligence. In June, the Committee issued a document entitled “Governance Principles for the New Generation Artificial Intelligence” to provide guidance to the country’s burgeoning tech sector. It outlines key principles such as “openness and collaboration, respect for privacy, and fairness and justice.” Western audiences may be skeptical of the significance of such a document, noting that the Chinese government already uses intrusive technologies, such as facial recognition software, for surveillance. But Tse argues that these principles will shape the development of the Chinese AI community from the bottom up.
“Several companies have seen [those principles] and taken notice. They might feel as though if they don’t have sufficient mechanisms to protect data privacy, they will lose the competition for funding in smart city bids, or may be regulated by the government, which would be stricter than self-regulation… so regulators, consumers, and competitors think about these angles.”
Indeed, the day after these principles were published in June, the topic received over five million searches on Baidu, China’s most popular search engine. Then, in November, the Beijing Academy of AI hosted its first global summit, which included more than one hundred experts from around the world. One attendee remarked that significant efforts were being made by Chinese municipalities to implement the principles laid out by the Committee.
This could be an inroad into a new era of dialogue and thought leadership between the US and China on AI ethics, though it is not without its limitations. “If we use the consensus that data privacy is something the US and China can collaborate on, we can get researchers and scientists into a room and pool resources on projects to advance data privacy.” Tse acknowledges that, for now, sensitive political issues, including human rights controversies and national security investigations into Chinese-owned, U.S.-based companies, stand in the way of whole-of-government collaboration between the U.S. and China. “In order for governments to work together, [they] may have to table some issues, see how things evolve, and maybe there is a trajectory for convergence in the future.”
There is no doubt a need for greater discourse across borders on what the world ought to look like as AI technologies enter our lives. For the time being, progress will likely require cultural and political empathy, and collaboration without consensus, as Tse suggests. But this work need not be for ethicists and governance experts alone; collaborative efforts between business leaders and private citizens will serve as pivotal avenues for transnational trust-building. Spanning the themes of technology, politics, and ethics, cross-disciplinary dialogues are the necessary next steps towards responsible AI governance.