Feminist AI: Can We Expect Our AI Systems to Become Feminist?

Philosophy & Technology, 2019
Galit Wellner
Abstract
The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It has become evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted: in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender bias in AI technologies as a multi-faceted phenomenon and the linguistic explanation as too narrow. The next step moves from the linguistic aspects to the relational ones, with postphenomenology. One of the analytical tools of this theory is the "I-technology-world" formula, which models our relations with technologies and, through them, with the world. Realizing that AI technologies give rise to new types of relations in which the technology has an "enhanced technological intentionality," a new formula is suggested: "I-algorithm-dataset." In the third part of the article, four types of solutions to gender bias in AI are reviewed: ignoring any reference to gender, revealing the considerations that led the algorithm to its decision, designing algorithms that are not biased, or, lastly, involving humans in the process. In order to avoid gender bias, we can recall a basic feminist understanding: visibility matters. Users and developers should be aware of the possibility of gender and racial biases and try to avoid them, bypass them, or eliminate them altogether.
