Artificial Intelligence and Artificial Moral Agents

2025-03-31 14:30 - 16:30

Add To Calendar

Speaker: Dr. Alvin Chen
(Associate Research Fellow, Research Center for Humanities and Social Sciences, Academia Sinica)

Moderator: Dr. Chih-hsing Ho
(Associate Research Fellow, Institute of European and American Studies, Academia Sinica)

Discussant: Dr. Hung-Ju Chen
(Assistant Research Fellow, Institute of European and American Studies, Academia Sinica)

Organizer: AI Governance Laboratory, Institute of European and American Studies, Academia Sinica

Website: https://www.ea.sinica.edu.tw/SeminarList.aspx?t=1  

Contact: Miss Lin, +886-2-3789-7222, pimei@gate.sinica.edu.tw

Abstract

With the recent development of generative artificial intelligence, questions such as "Can AI be responsible for its wrongs?" and "If so, how?" have become increasingly pressing, giving rise to the so-called "responsibility gap." These questions are mostly framed in terms familiar to machine ethics, a field of study emphasizing ethical reflection on the development of machines. But conceptions of artificial agents with moral capacity in legal and political theory predate the emergence of machine ethics and the development of generative AI. This paper argues that the so-called "responsibility gap" exists only because of the account of artificial agents in machine ethics. It shows that, once insights from legal and political philosophy are taken into account, there emerges a much clearer understanding of the character of artificial moral agents and a less troubling account of the responsibility of AI. Specifically, the paper addresses the following tripartite question: Is AI an artificial moral agent? If so, what kind of artificial moral agent is it? And why does it matter for AI to be an artificial moral agent? An answer to the third part of this question presents, in essence, a response to the "responsibility gap" by reassessing how AI can take responsibility. The paper concludes with reflections on its practical implications, notably for AI in medicine.
