
CESL Explores the Legal Challenges of AI

On November 15, 2019, the International Academic Conference on "Artificial Intelligence and Law: Chinese and European Perspectives" was hosted by the China-EU School of Law (CESL) at the China University of Political Science and Law. The event took place at Jingyi Hotel in downtown Beijing.

Welcome speeches were given by Ma Huaide, President of the China University of Political Science and Law (CUPL), Zhang Fusen, former Minister of Justice of the People's Republic of China, Hinrich Julius, Professor of Law at the University of Hamburg, and Johan Vandromme, First Counsellor at the EU Delegation to China. This was followed by two keynote speeches by Ji Weidong, Professor and former Dean of the Koguan School of Law at Shanghai Jiao Tong University, and Friedrich-Joachim Mehmel, President of the Hamburg Higher Administrative Court and the Hamburg Constitutional Court.

The day was then divided into three sessions, each bringing together a notable panel of experts to focus on different cutting-edge issues relating to the legal opportunities and challenges brought by artificial intelligence (AI): "Legal Challenges of AI", "Regulation of IP, Smart Contracts and Platforms in the Era of AI", and "AI in Public and Private Governance Structures".

The first panel, "Legal Challenges of AI", was chaired by Ronald Montague Silley, European Executive Co-Dean of CESL.

Chen Jinghui, Professor at the School of Law of Renmin University of China, gave an engaging opening speech on the topic of "The Challenge of AI to Law". He pointed out that the discussion of AI in China's legal circles can be roughly divided into two parts: theory and practice. The theoretical side, such as the question of whether algorithms can constitute the law of AI, mostly involves legal questions.

The practical issues, in turn, fall into two main topics. The first, which is undisputed, is that AI assists judicial rulings as a tool and eases the work of legal professionals. The second, and key, issue is whether AI judges can match the attributes of human judges, whether AI can work autonomously and render legal judgments on its own, and thus whether trials by AI can truly be achieved.

He pointed out that the challenge AI poses to law is, above all, a challenge to human dignity. Advances in existing technology mean that AI no longer plays only the role of a tool: people themselves are stored within AI systems in the form of data. Personal information and privacy thereby lose the protection of traditional law and are exposed in public spaces where anyone can view them. The difference between the right to be forgotten, as developed by the European Court of Justice, and the traditional right to privacy lies in whether dignity is genuinely tied to the harm suffered. A further challenge is that AI technology makes people's needs so easy to satisfy that it can greatly weaken their capacity for self-modelling and self-creation.

The second speaker was Li Aijun, Professor at CUPL and Dean of the Institute of Internet Finance Law. She began with theory. First, AI is not a legal subject; it is an activity that makes use of existing knowledge and experience. Second, AI is an algorithm, not a science of intelligence and of the cognitive nerves of the human cerebral cortex; it is a computational model that calculates on the basis of data provided by people. Third, AI does not meet the three basic elements of a subject in the natural sciences: it has no self-awareness, owns no resources, and cannot control itself. From the perspective of natural science, Professor Li argued that the value goals, level of intelligence, nature, and methods of AI science are not those of a subject: the aim is not to create a new species but an act.

Secondly, from the perspective of legal value, she pointed out that artificial intelligence has no legal value of its own and that the legal risks of granting it legal subject status remain uncontrollable. AI cannot bear responsibility: its risks cannot be controlled through sanctions in a way that guarantees social order and human security. Rather, AI is an element of the legal behaviour that constitutes a subject, and that subject includes AI designers, manufacturers, and users. AI belongs to a kind of capacity, namely the category of behaviour.

Thirdly, she analysed AI from the perspective of a tool. AI is an extension of a subject's behaviour, or a method of behaviour. It falls within the category of science and technology, a means and tool by which humans change the objective world. AI itself is not human consciousness but a consciousness formed by humans through added data and algorithms. It is therefore a theory of the subject's behaviour rather than a subject itself, an extension of human behaviour.

The third speaker was Nathalie Smuha, Assistant Lecturer at the Faculty of Law of KU Leuven and researcher at the Center for Legal and Ethical Studies in Artificial Intelligence and Technology, who spoke on "The EU approach to regulating AI: from ethics to fundamental rights". She presented the European Union's strategy and plans regarding AI. First, taking a human-centred perspective, the EU established an expert working group combining industry, universities, and research institutions; scientists, computer experts, judges, and people from all walks of life participate in it, ensuring the diversity of its members. At the same time, an online platform was set up to allow the general public to express their views.

Second, she introduced the four ethical principles and seven core requirements formulated by the European Union, intended to be respected throughout the entire life cycle of AI systems and to make AI-related law binding. Third, she mentioned the assessment checklist for applying them to specific issues, which balances detailed regulation in specific areas against general legal norms in order to implement the seven core requirements and better apply the four ethical principles. She emphasized that countries around the world are working on AI laws and regulations, and that the EU's current law still has gaps. She hopes that diversified discussion will deepen understanding of these legal challenges, ensuring innovation while protecting individuals' privacy.

The fourth speaker was Xu Ke, Assistant Professor at the University of International Business and Economics, who spoke on the theme of "AI governance: law vs ethics". He pointed out that, whether in China or across the OECD, AI is raising more and more ethical issues, yet no country has issued comprehensive legislation on AI. Professor Xu analysed the logic of AI and the evolution of law from a historical perspective, and then discussed why, in a country ruled by law, ethics rather than law alone has become the first choice for constraining and thinking about AI.

He believes the essence of legal autonomy lies in the formalization of society as a whole, for four reasons. First, legal decisions have become formalized and no longer seek substantive justice and ethics in specific cases. Second, the law, rather than traditional custom and ethics, determines the rights and obligations of individuals. Third, law is universal, and logical analysis reveals the connection between law and facts. Fourth, law is controllable and rational.

In Professor Xu's view, AI has brought about an irrational world in which law fails, and ethics has returned to the field of regulation. It is therefore necessary to formulate ethical principles rather than legal rules to meet the challenges of AI; only by understanding the legal and ethical boundaries of AI can we better cope with the risks it brings.

Finally, He Dan, Associate Professor at the Law School of Beijing Normal University, reviewed the discussion so far. She summarized the contributions of the first four speakers, distilled their core points, and affirmed their treatment of the challenges AI poses to law, of whether AI can become a legal subject, and of the EU perspective, adding her own analysis of the issues and of the relationship between ethics and law.

At the same time, she added her own thoughts on the challenges of AI and law. First, the impact of AI on legal rules is reflected in the fact that more stakeholders now participate in the formulation of the rules and thereby influence them. Second, in the rule-making process, AI has begun to break through the space and framework of legal autonomy; if scientific and technical experts are to grasp the future development of AI, a new framework of cooperation is needed. Third, with regard to ethical and legal issues, she hopes that AI will remain within a framework that law can control and rely on: it should be a legal act, not a subject that controls us.

Finally, a full room of participants asked questions about the speeches in the first session. The audience discussed the role and status of AI, from legal norms to judicial rulings, and explored whether AI can serve as a model for legal reasoning, the principles for overseeing AI, and how the law should deal with AI in general. The participation of AI in trials was heatedly debated, raising issues related to human rights as well as moral challenges.


(Contributor: Qin Mengyuan)