Glenn Cohen’s Talk on AI & Machine Learning in Health Care at Renmin Law School
June 14, 2021
On June 2, 2021, Professor Glenn Cohen, James A. Attwood and Leslie Williams Professor of Law & Deputy Dean at Harvard Law School, joined the Lecture Series of the Global Honorary Chair at the Renmin University Institute of Law and Technology and delivered a fantastic Zoom talk on "The Legal and Ethical Issues of Artificial Intelligence and Machine Learning in Health Care."
Professor Yi Wang, Vice President of Renmin University of China & Dean of Renmin University Law School, delivered a welcoming speech before Professor Cohen's talk. A dozen distinguished experts in the fields of artificial intelligence, machine learning and health law attended the talk and had fruitful conversations with Professor Cohen. More than one hundred students and legal professionals attended the Zoom talk online.
Some Discussants and Students in the Onsite Meeting Room at Renmin Law School
About the Speaker
Professor I. Glenn Cohen is the James A. Attwood and Leslie Williams Professor of Law, Deputy Dean of Harvard Law School, and Faculty Director of the Harvard Law School Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics. Professor Cohen is one of the world's leading experts on the intersection of bioethics and the law, as well as health law.
Professor Cohen has authored more than 150 articles and chapters, and his award-winning work has appeared in leading legal (including the Stanford, Cornell, and Southern California Law Reviews), medical (including the New England Journal of Medicine), bioethics (including the American Journal of Bioethics), scientific (Science, Cell, Nature Reviews Genetics) and public health (the American Journal of Public Health) journals. He is also the author, co-author, editor, or co-editor of more than 15 books, published with leading university presses such as Oxford, Cambridge, Columbia, Johns Hopkins and MIT.
Professor Cohen is one of three editors-in-chief of the Journal of Law and the Biosciences, a peer-reviewed journal published by Oxford University Press, and serves on the editorial board of the American Journal of Bioethics. He served on the Steering Committee for Ethics for the Canadian Institutes of Health Research (CIHR), the Canadian counterpart to the NIH, and the Ethics Committee for the American College of Obstetricians and Gynecologists (ACOG). He currently serves on the Ethics Committee of the U.S. Organ Procurement and Transplantation Network (OPTN).
Discussants & Moderator
Chenguang Wang, Professor, Tsinghua University Law School; Executive Vice President, China Health Law Society; Director, Health Law Center, Tsinghua University Law School
Hongjie Man, Professor & Deputy Dean, Shandong University Law School; Member of the Standing Council, China Health Law Society
Chunyan Ding, Associate Professor & Associate Dean, City University of Hong Kong Law School
Su Jiang, Associate Professor, Peking University Law School; Deputy Director, Science & Technology Law Center at Peking University
Wen Xiang, Associate Professor, Copenhagen University Faculty of Law
Xinqi Gong, Professor, Renmin University School of Mathematics
Hongteng Xu, Tenure-track Associate Professor, Gaoling School of Artificial Intelligence, Renmin University
Jiyu Zhang, Associate Professor, Renmin University Law School; Executive Director, Renmin University Institute of Law and Technology
Xiaodong Ding, Associate Professor, Renmin University Law School; Deputy Director, Renmin University Institute of Law and Technology
Rui Guo, Associate Professor, Renmin University Law School; Senior Research Fellow, Renmin University Institute of Law and Technology
Yang Liu, Assistant Professor, Renmin University Law School; Senior Research Fellow, Renmin University Institute of Law and Technology
Moderator: Bingwan Xiong, Associate Professor, Renmin University Law School; Senior Research Fellow, Renmin University Institute of Law and Technology
Welcoming Speech
Vice President Yi Wang
Vice President Yi Wang first extended a warm welcome to Professor Cohen. He recalled the long-lasting friendship and extensive cooperation between Renmin Law School and Harvard Law School, especially the routine student exchange program, the faculty visiting program, joint academic projects on disability law, and the Geneva-Harvard-Renmin-Sydney Law Faculty Conference. Professor Wang spoke highly of Professor Cohen's academic achievements and contributions in the fields of health law and artificial intelligence in health care. He thanked Professor Cohen for taking time out of his busy schedule to deliver this talk and hoped the occasion would further promote academic exchange with Chinese scholars working on legal studies in this field. Professor Wang also thanked all the discussants for their participation and support.
Professor Cohen’s Talk
Professor Cohen Talking on Medical AI
Medical artificial intelligence ('medical AI') and machine learning have great potential to improve medical care, while the introduction of such novel technologies also raises legal and ethical concerns. Professor Cohen's presentation focused on the legal and ethical issues arising from the application of artificial intelligence and machine learning in health care. He started his talk with a thought-provoking question: "How should technology be used?" He then introduced a series of current and near-future medical AI and machine learning applications, including OrCam and Arterys. Professor Cohen then used two hypothetical cases to illustrate the potential technical, ethical and legal dilemmas faced by medical AI and further explained the differences between medical AI and AI in other fields (such as driverless cars).
Vertically, the application of artificial intelligence involves a complex process running from data acquisition to the deployment of the AI system. Each stage raises legal issues, such as data privacy, data representativeness, disclosure of the use of artificial intelligence to patients, patients' consent, and fair access. Horizontally, the use and monitoring of AI applications in health care involve many actors: AI manufacturers or developers, purchasers (mostly hospitals), pre-market review agencies, insurance companies, and physicians. To deal with medical malpractice involving AI, Professor Cohen believes it is necessary to establish a fair and effective liability system.
One of the most challenging aspects of medical AI regulation is transparency. Artificial intelligence developed through machine learning produces results in an opaque way: the process by which the AI reaches its conclusions is unclear and cannot be replicated by human beings. Though AI recommendations are generally more accurate than those of their human counterparts, they do make mistakes. As a result, it is difficult to assign liability when errors occur and cause serious adverse effects, because how and when the error arose is untraceable. Professor Cohen noted that current tort law encourages physicians to follow the standard of care rather than the recommendations of medical AI, and should be reformed to encourage physicians to make better use of medical AI.
At the end of the speech, Professor Cohen suggested that ambient intelligence may be the next stage of AI in medical care. Compared with current medical AI, ambient intelligence is more continuous and in-depth, helping to achieve full coverage of health monitoring that extends from the hospital into people's daily lives. However, legal and ethical issues such as privacy and consent still deserve special attention.
Discussion and Comments
Professor Chenguang Wang
Professor Chenguang Wang spoke very highly of Professor Cohen's talk. He pointed out that in recent years China has been transforming from an information society into an algorithmic society. Echoing Professor Cohen's talk, Professor Wang briefly discussed two categories of AI and their corresponding tort liability problems. (1) The first category is artificial intelligence based on big data, where the evaluation process rests mainly on human experience and the system helps analyze relevant cases by summarizing data and building new models. Although the data may be misunderstood or wrongly processed in this process, it is possible to determine where the error occurred and assign responsibility accordingly. (2) The second category is artificial intelligence developed through machine learning, which evolves on its own, with human beings knowing little about the process. In this case the black-box problem appears, which makes it hard to locate errors and assign responsibility, because the law at present is designed to regulate human behavior.
Professor Hongjie Man
Professor Hongjie Man first shared his personal experience of wearing a watch that monitors his heartbeat and blood pressure. He then raised two questions: (1) how to strike a balance between health care innovation and the protection of the privacy of patients and physicians, and (2) how should current tort law be adapted to the special features of medical AI and machine learning, considering that the output is not a human behavior or action?
Professor Chunyan Ding
Professor Chunyan Ding emphasized three issues and raised three questions. First, she noted that artificial intelligence and machine learning cannot avoid data bias, so more attention should be paid to data inequality. She then discussed opacity (the black box) and the balance between minimal protection of patients' rights and encouragement of innovation. Her three questions were: (1) whether artificial intelligence will replace human beings in the field of health care; (2) how to assess patients' consent to the use of their data when they provide information, especially consent to future uses of the data; and (3) what is the difference between AI used in other areas of medical care, such as health monitoring, and AI used in treatment?
Professor Su Jiang
Professor Su Jiang pointed out that the regulation of medical AI is more important than other issues, such as privacy or consent. He believes that while responses to such problems in other fields can be duplicated, the regulation of artificial intelligence in the medical field is a distinct problem. He also commented on Professor Cohen's publications, in which Professor Cohen introduced two approaches to regulating medical AI.
Professor Wen Xiang
Professor Wen Xiang asked two questions about Professor Cohen's research on medical AI in the EU: (1) will the development of AI contribute to the unity of the EU, or will it challenge the "shared values" of EU member states and lead to more conflicts? (2) Considering that EU member states have split attitudes towards AI, should we implement an exit mechanism for the agreements related to artificial intelligence?
Other discussants in the onsite meeting room at Renmin Law School and online also commented on Professor Cohen's talk and raised a good number of questions. Professor Xinqi Gong asked about physicians' liability when they reject a correct AI recommendation and adverse consequences follow. Professor Hongteng Xu shared his view that European countries tend to overestimate the risks of AI while ignoring its potential benefits. Professor Jiyu Zhang asked Professor Cohen to elaborate on US experience in improving the representativeness of data sets. Professor Xiaodong Ding asked whether there are commonly applicable rules for data privacy and patient information, as well as about fiduciary duties in the field of personal information protection. Professor Yang Liu asked whether the bill passed by the US Senate requiring the Director of National Intelligence to declassify data concerning the investigation of the origin of Covid-19 meets the requirements of international health law and US domestic law. Professor Rui Guo commented on the reform of insurance policy in the era of medical AI.
Professor Cohen’s Responses
Professor Cohen responded to the questions and comments patiently and informatively.
First, on the issue of data bias, Professor Cohen recognized it as a practical problem in medical AI and machine learning. However, he also pointed out that in many cases an AI trained on biased data may still perform better than human physicians, since individuals can also be racially biased. In addition, although the lack of data representativeness is certainly a big part of the bias problem, an even more challenging problem, which researchers call "marker bias", arises where the samples are representative enough but the processing of the data is biased.
On the issue of tort liability, especially the vicarious liability mentioned by Professor Xiaodong Ding, Professor Cohen clarified that following an incorrect AI recommendation that meets the standard of care does not necessarily lead to tort liability. With respect to vicarious liability, Professor Cohen suggested that a good analogy for institutional liability is negligent credentialing in employment. One way to think about alternative liability in medical AI is for hospitals not merely to buy AI but to employ it, which requires them to consider how to integrate the AI into their specific hospital environment.
In response to the questions related to the EU, Professor Cohen pointed out that the EU is working on a more comprehensive set of regulations and directives concerning medical AI and that the EU's approach is quite different from that of the US. The EU tries to regulate medical AI through an integrated text, like the General Data Protection Regulation (GDPR), while the US tends to impose a relatively fragmented set of specific regulations. Professor Cohen believes that, considering the diversity of EU Member States, the US regulatory approach to medical AI is more appropriate.
Finally, on the topic of how medical insurance should be reformed to adapt to medical AI, Professor Cohen said that the use of medical AI threatens a basic premise of medical insurance, because its results eliminate much of the uncertainty on which insurance rests. In a perfectly predictable world, each person's premium would be set at exactly his or her expected cost, and this differential pricing would lead to inequality. Professor Cohen also pointed to a possible model for future insurance regimes: life insurance in the UK, which allows insurers to take into account, in a limited way, the results of genetic analysis produced by medical AI, while guaranteeing everyone a basic level of coverage.
The Lecture Series of the Global Honorary Chair at Renmin Law School was launched in 2020 with the aim of inviting distinguished academics from around the world to share their insights on the frontiers of law and technology in a global context. To date, the Series has welcomed experts such as Professors Robert Post and Heather Gerken from Yale Law School, Professor Helen Nissenbaum from Cornell, and Professor Viktor Mayer-Schönberger from Oxford University, who have spoken on privacy, personal information, intellectual property and other subjects.
(By Xuan Yao, Xinping Hu and Chen Guan; Pictures by Xiaoshuo Wang and Meng Wei)