2020, Nature Machine Intelligence
AI-generated Abstract
As artificial intelligence (AI) becomes crucial in society, establishing a framework to connect algorithm interpretability with public trust is essential. This paper discusses how recent regulatory trends have led to increased demands for transparency in AI systems, emphasizing the need for accessible explanations tailored to various stakeholders. By exploring the nature of interpretability, the paper raises critical questions about what explanations are necessary, to whom they should be directed, and how their effectiveness can be assessed, ultimately aiming to enhance trust and accountability in algorithm-assisted decision-making.
AI & SOCIETY, 2020
The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively made decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public come to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision...
This panel will explore algorithmic authority as it manifests and plays out across multiple domains. Algorithmic authority refers to the power of algorithms to manage human action and influence what information is accessible to users. Algorithms increasingly have the ability to affect everyday life, work practices, and economic systems through automated decision-making and interpretation of "big data". Cases of algorithmic authority include algorithmically curating news and social media feeds, evaluating job performance, matching dates, and hiring and firing employees. This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains.
Information Systems Frontiers
example the responsible design (Dennehy et al., 2021) and governance (Mäntymäki et al., 2022b) of AI systems. While organisations are increasingly investing in ethical AI and Responsible AI (RAI) (Zimmer et al., 2022), recent reports suggest that this comes at a cost and may lead to burnout in responsible-AI teams (Heikkilä, 2022). Thus, it is critical to consider how we educate about RAI (Grøder et al., 2022) and rethink our traditional learning designs (Pappas & Giannakos, 2021), as this can influence end-users' perceptions towards AI applications (Schmager et al., 2023) as well as how future employees approach the design and implementation of AI applications (Rakova et al., 2021; Vassilakopoulou et al., 2022). The use of algorithmic decision-making and decision-support processes, particularly AI, is becoming increasingly pervasive in the public sector, including in high-risk application areas such as healthcare, traffic, and finance (European Commission, 2020). Against this backdrop, there is growing concern over the ethical use and safety of AI, fuelled by reports of ungoverned military applications (Butcher and Beridze, 2019; Dignum, 2020), privacy violations attributed to facial recognition technologies used by the police (Rezende, 2022), unwanted biases exhibited by AI applications used by courts (Imai et al., 2020), and racial biases in clinical algorithms (Vyas et al., 2020). The opacity and lack of explainability frequently attributed to AI systems make evaluating the trustworthiness of algorithmic decisions challenging even for technical experts, let alone the public. Together with the algorithm-propelled proliferation of misinformation, hate speech and polarising content on social media platforms, there is a high risk of erosion of trust in algorithmic systems used by the public sector (Janssen et al., 2020). Ensuring that people can trust the algorithmic processes is essential not only for reaping the potential benefits from AI (Dignum, 2020) but also for fostering trust and resilience at a societal level. AI researchers and practitioners have expressed their fears about AI systems being developed that are
Social Science Computer Review, 2020
Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to the decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any support of algorithms, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decisions given various scenarios, while BR and ML algorithms could provide correct or incorrect suggestions to the decision maker. This enabled us to evaluate whether the participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision makers to make more correct decisions. The findings suggest that explainable AI combined with experience helps them detect incorrect suggestions made by algorithms. However, even experienc...
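To make the experimental logic above concrete, the following is a minimal, purely illustrative simulation of algorithm-assisted decisions: an algorithm suggests a decision that is correct at some rate, and the decision maker either follows it or, with some probability, detects an incorrect suggestion and falls back on their own judgement. The function name, parameters, and rates are hypothetical assumptions for illustration; the study itself used human participants, not simulated agents.

```python
# Illustrative sketch only: names, rates, and the detection model are assumptions,
# not materials from the study described above.
import random

random.seed(42)

def simulate(n_scenarios: int, suggestion_accuracy: float,
             detection_skill: float, baseline_accuracy: float) -> float:
    """Return the fraction of correct decisions made with algorithmic support.

    suggestion_accuracy: probability the BR/ML suggestion is correct.
    detection_skill:     probability the decision maker spots an incorrect suggestion
                         (a stand-in for 'explainability plus experience').
    baseline_accuracy:   accuracy of the decision maker's own unaided judgement.
    """
    correct = 0
    for _ in range(n_scenarios):
        if random.random() < suggestion_accuracy:
            correct += 1  # following a correct suggestion yields a correct decision
        else:
            # Incorrect suggestion: it only helps if the error is detected and
            # the decision maker's own judgement then happens to be right.
            if random.random() < detection_skill and random.random() < baseline_accuracy:
                correct += 1
    return correct / n_scenarios

# All-incorrect suggestions that are always overridden approximate "no support".
print("no support      :", simulate(10_000, 0.0, 1.0, 0.70))
print("support, novice :", simulate(10_000, 0.85, 0.20, 0.70))
print("support, expert :", simulate(10_000, 0.85, 0.75, 0.70))
```

Under these assumed rates, accurate suggestions raise overall accuracy, while the gap between the "novice" and "expert" runs illustrates the abstract's point that experience (and explanations) matter mainly for catching incorrect suggestions.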
Business Ethics Quarterly, 2021
Businesses increasingly rely on algorithms that are data-trained sets of decision rules in order to implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a "right to explanation." Our contention is that we can address much of the problem of algorithmic transparency by rethinking the right to informed consent in the age of artificial intelligence. It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction with individual autonomy as its moral foundation. Such a view is insufficient, especially when data is used in a secondary, non-contextual, and unpredictable manner, which is the inescapable nature of advanced AI systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
Business Ethics Quarterly
Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction. Such a view is insufficient, especially when data are used in a secondary, noncontextual, and unpredictable manner—which is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent—as an assurance of trust for incomplete transactions—allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
2023 ACM Conference on Fairness, Accountability, and Transparency
Public attention towards the explainability of artificial intelligence (AI) systems has been rising in recent years, in search of methodologies for human oversight. This has translated into a proliferation of research outputs, such as those from Explainable AI, aimed at enhancing transparency and control for system debugging and monitoring, and the intelligibility of system processes and outputs for user services. Yet such outputs are difficult to adopt on a practical level due to the lack of a common regulatory baseline and the contextual nature of explanations. Governmental policies are now attempting to address this need; however, it remains unclear to what extent published communications, regulations, and standards adopt an informed perspective to support research, industry, and civil interests. In this study, we perform the first thematic and gap analysis of this plethora of policies and standards on explainability in the EU, US, and UK. Through a rigorous survey of policy documents, we first contribute an overview of governmental regulatory trajectories within AI explainability and its sociotechnical impacts. We find that policies are often informed by coarse notions of and requirements for explanations. This might be due to the tendency to frame explanations foremost as a risk-management tool for AI oversight, but also due to the lack of consensus on what constitutes a valid algorithmic explanation and how feasible the implementation and deployment of such explanations are across the stakeholders of an organization. Informed by AI explainability research, we then conduct a gap analysis of existing policies, which leads us to formulate a set of recommendations on how to address explainability in regulations for AI systems, especially discussing the definition, feasibility, and usability of explanations, as well as allocating accountability to explanation providers.
ArXiv, 2021
Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs [14]. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders [2]. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations, within a particular industry such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
Media Theory , 2023
So-called artificial intelligence (AI) is infiltrating our public and communication structures. The Dutch childcare benefit scandal, revealed in 2019, demonstrates how disadvantageous the opacity of AI can be for already vulnerable groups. In its aftermath, many scholars called for more explainable AI so that decision-makers can intervene in discriminatory systems. Fostering the explainability of AI (XAI) is a good start to address the issue, but not enough to empower vulnerable groups to fully deal with its repercussions. As a canon in data and computer sciences, XAI aims to illustrate and explain complex AI via simpler models, making it more accessible and ethical. The issue is that, in doing so, XAI depoliticises transparency into a remedy for algorithmic opacity, treating transparency as artificially stripped of its ideological meanings. Transparency is presented as an antidote to ideology, though I will show how this is an ideological move with consequences. First, it makes us focus too much on algorithmic opacity, rather than explaining the wider power of AI. Second, it hinders us from having debates on who holds the power around AI's explanations, application or critique. The problem is that those affected by or discriminated against by AI, as in the Dutch case, have few tools to deal with the opacity of AI as a system, while those who focus on data opacity are shaping the literacy discussion. To address these concerns, I suggest moving beyond the focus on algorithmic transparency and towards a post-critical AI literacy to strengthen debates on access, empowerment, and resistance, while not dismissing XAI as a field, nor algorithmic transparency as an intention. What I challenge here is the hegemony of treating transparency as a depoliticised and algorithmic issue and viewing the explainability of AI as the sufficient path to citizen empowerment.
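For readers unfamiliar with the XAI canon the essay refers to, the passage above describes explaining complex AI "via simpler models". A minimal sketch of that surrogate-model idea, assuming scikit-learn and synthetic data, might look as follows; the dataset, model choices, and tree depth are illustrative assumptions, not part of the essay.

```python
# Minimal sketch of a global surrogate model: approximate an opaque model with a
# simpler, readable one. Assumes scikit-learn; the data and models are stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a high-stakes decision dataset.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box" whose individual decisions are hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained to reproduce the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box it is explaining.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

The printed rules are legible, but, as the essay argues, such algorithmic transparency answers only part of the question: it says nothing about who gets to produce, contest, or act on these explanations.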