  • Poster
  • Lecture or talk
  • Institute of European and American Studies
  • Location

    Conference Room, 1st Floor, Institute of European and American Studies

  • Speaker

    Prof. Ting-An Lin (林庭安), University of Connecticut

  • Event status

    Confirmed

  • Event URL

AI, normality, and oppressive things

2024-12-13 14:30 - 16:30



While it is well known that AI systems can be perniciously biased, much of the attention has been paid to instances where these biases are expressed blatantly. In this talk, I draw on the literature on the political impacts of artifacts to argue that many AI systems are not merely biased but materialize oppression. In other words, many AI systems should be recognized as oppressive things when they function to calcify oppressive normality, which treats dominant groups as normal and others as deviations. Adopting this framework emphasizes the crucial roles that physical components play in sustaining oppression and helps identify AI systems that are oppressive in subtler ways. Using generative AI systems as the central examples, I theorize three ways that AI systems might function to calcify oppressive normality: through their content, their performance, and their style. Since the oppressiveness of oppressive things is a matter of degree, I further analyze three contributing factors that make the oppressive impacts of AI systems especially concerning. I end by discussing the limitations of existing measures and urge the exploration of more transformative remedies.
