2023
The spread of artificial intelligence across business, defense, and security sectors has the potential to improve the speed of operations, provide new capabilities, and increase efficiencies. Along with the integration of AI comes an upsurge in risk and potential harm from AI accidents, misuse, and unexpected behavior. The growing concern about AI having unforeseen negative impacts on U.S. commercial, social, infrastructure, and national security highlights the need for AI assessment that can help reduce potential harm from AI and ensure that AI applications and technologies are safe and trustworthy. The Center for Security and Emerging Technology has published studies related to AI safety, accidents, and testing. Building on this work, CSET has launched a new line of research titled "AI Assessment" to investigate the development and adequacy of current AI assessment approaches, along with the availability and sufficiency of tools and resources for implementing them. Specifically, the research will: 1. Understand and contribute to the development and adoption of AI standards, testing procedures, best practices, regulation, auditing, and certification. 2. Characterize the wide variety of AI products, tools, services, data, and resources that influence AI assessment. 3. Understand the need for additional infrastructure, academic research, tools, or budgetary resources to support demonstration and adoption. 4. Explore the global differences and similarities in AI assessment, standards, and testing practices among various sectors and government entities. There is no simple one-size-fits-all assessment approach that can be adequately applied to the diverse range of AI systems. AI systems have a wide variety of functionalities, capabilities, and outputs. They are also created using different tools, data types, and resources, adding to assessment diversity. A collection of approaches and processes is needed to cover the wide range of AI products, tools, services, and resources. Additionally, because AI systems will be created in greater numbers and with greater frequency, resources need to include techniques and tools for scaling assessment and handling the variety and quantity of AI systems. As AI innovation continues, assessment needs may change. This research will provide a foundation for assessment that can be adapted to future needs. It will also provide a better understanding of current U.S. needs and capabilities for AI assessment, and support decisions on AI policy, resourcing, research, and national security.
AI & SOCIETY
The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual driv...
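To make the integrated model concrete, the sketch below illustrates one way an IPCC-style risk decomposition could map onto the AIA's four risk categories. The decomposition into hazard, exposure, and vulnerability, the 0-1 scoring scale, and the category thresholds are illustrative assumptions, not the model actually proposed in the paper or defined in the AIA.

```python
# Illustrative sketch only: an IPCC-style risk estimate (hazard x exposure x
# vulnerability) mapped onto the AIA's four risk categories. The factors,
# 0-1 scores, and thresholds are assumptions for illustration.

def risk_magnitude(hazard: float, exposure: float, vulnerability: float) -> float:
    """Combine the three IPCC risk components, each scored in [0, 1]."""
    return hazard * exposure * vulnerability

def aia_category(magnitude: float) -> str:
    """Map a scenario-level magnitude onto the AIA risk tiers (assumed cut-offs)."""
    if magnitude >= 0.75:
        return "unacceptable"
    if magnitude >= 0.40:
        return "high"
    if magnitude >= 0.10:
        return "limited"
    return "minimal"

# Example: a general-purpose model embedded in a hiring workflow (assumed scores).
scenario = risk_magnitude(hazard=0.8, exposure=0.9, vulnerability=0.7)
print(aia_category(scenario))  # -> "high" under these assumed scores
```

The point of the sketch is that the same general-purpose model can land in different AIA tiers depending on the scenario-level scores, rather than being fixed by its broad field of application.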
arXiv (Cornell University), 2023
Given rapid progress toward advanced AI and risks from frontier AI systems-advanced AI systems pushing the boundaries of the AI capabilities frontier-the creation and implementation of AI governance and regulatory schemes deserves prioritization and substantial investment. However, the status quo is untenable and, frankly, dangerous. A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight. In response, frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. Yet, the budding AI risk evaluation ecosystem faces significant coordination challenges, such as limited diversity and independence of evaluators, suboptimal allocation of effort, and perverse incentives. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI, including in managing responsible scaling policies and coordinated evaluation-based risk response. In this paper, we discuss the current evaluation ecosystem and its shortcomings, propose an international consortium for advanced AI risk evaluations, discuss issues regarding its implementation, discuss lessons that can be learned from previous international institutions and existing proposals for international AI governance institutions, and finally, we recommend concrete steps to advance the establishment of the proposed consortium: solicit feedback from stakeholders, conduct additional research, conduct a workshop(s) for stakeholders, create a final proposal and solicit funding, and create a consortium.
The Next Wave of Sociotechnical Design, 2021
Notwithstanding its potential benefits, organizational AI use can lead to unintended consequences like opaque decision-making processes or biased decisions. Hence, a key challenge for organizations these days is to implement procedures that can be used to assess and mitigate the risks of organizational AI use. Although public awareness of AI-related risks is growing, the extant literature provides limited guidance to organizations on how to assess and manage AI risks. Against this background, we conducted an Action Design Research project in collaboration with a government agency with a pioneering AI practice to iteratively build, implement, and evaluate the Artificial Intelligence Risk Assessment (AIRA) tool. Besides the theory-ingrained and empirically evaluated AIRA tool, our key contribution is a set of five design principles for instantiating further instances of this class of artifacts. In comparison to existing AI risk assessment tools, our work emphasizes communication between stakeholders of diverse expertise, estimating the expected real-world positive and negative consequences of AI use, and incorporating performance metrics beyond predictive accuracy, including thus assessments of privacy, fairness, and interpretability.
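As a rough illustration of what "performance metrics beyond predictive accuracy" can look like in such a tool, the sketch below records accuracy alongside privacy, fairness, and interpretability assessments together with expected real-world consequences. The field names and structure are assumptions made for illustration, not the published AIRA design.

```python
# Hypothetical record structure for an AI risk assessment entry; the field
# names are illustrative and not taken from the published AIRA tool.
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessmentEntry:
    use_case: str
    expected_benefits: list[str]       # anticipated real-world positive consequences
    expected_harms: list[str]          # anticipated real-world negative consequences
    predictive_accuracy: float         # conventional model performance
    privacy_assessment: str            # e.g., data minimisation, re-identification risk
    fairness_assessment: str           # e.g., error-rate gaps across groups
    interpretability_assessment: str   # e.g., can decisions be explained to affected people?
    mitigations: list[str] = field(default_factory=list)

# Hypothetical government-agency use case, in the spirit of the ADR setting.
entry = AIRiskAssessmentEntry(
    use_case="Automated triage of citizen service requests",
    expected_benefits=["faster response times"],
    expected_harms=["misrouted urgent requests"],
    predictive_accuracy=0.91,
    privacy_assessment="only request text stored; no identifiers retained",
    fairness_assessment="routing error rates compared across language groups",
    interpretability_assessment="top keywords shown to case workers",
    mitigations=["human review of low-confidence routings"],
)
```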
ArXiv, 2019
Recent advances in artificial intelligence (AI) have led to an explosion of multimedia applications (e.g., computer vision (CV) and natural language processing (NLP)) across domains such as the commercial, industrial, and intelligence sectors. In particular, the use of AI applications in a national security environment is often problematic because the opaque nature of the systems prevents a human from understanding how the results came about. A reliance on 'black boxes' to generate predictions and inform decisions is potentially disastrous. This paper explores how the application of standards during each stage of the development of an AI system deployed and used in a national security environment would help enable trust. Specifically, we focus on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to subject machine outputs to the same rigorous standards as analysis performed by humans.
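The sketch below shows one way a handful of ICD 203-style analytic criteria could be turned into a checklist applied to a model's output. The selection of criteria, the field names, and the boolean checks are illustrative assumptions, not a complete or authoritative encoding of the directive.

```python
# Illustrative checklist applying a few ICD 203-style analytic criteria to a
# model output. The criteria selected and the checks used are assumptions for
# illustration, not a full encoding of Intelligence Community Directive 203.

def check_analytic_standards(output: dict) -> dict[str, bool]:
    return {
        # Does the output express and explain its uncertainty?
        "uncertainty_expressed": "confidence" in output and "uncertainty_rationale" in output,
        # Is the provenance and quality of the underlying data described?
        "sourcing_described": bool(output.get("data_provenance")),
        # Are assumptions separated from the judgment itself?
        "assumptions_distinguished": bool(output.get("assumptions")),
        # Were alternative explanations considered?
        "alternatives_considered": bool(output.get("alternatives")),
    }

# Hypothetical machine-generated judgment with the supporting fields attached.
prediction = {
    "judgment": "vessel X is likely conducting transshipment",
    "confidence": 0.72,
    "uncertainty_rationale": "sparse AIS coverage in the region",
    "data_provenance": ["satellite imagery batch 14", "AIS feed"],
    "assumptions": ["AIS transponder gaps are deliberate"],
    "alternatives": ["sensor outage"],
}
print(check_analytic_standards(prediction))
```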
AI Impact Assessment: A Policy Prototyping Experiment, 2021
This report presents the outcomes of the Open Loop policy prototyping program on Automated Decision Impact Assessment (ADIA) in Europe. Open Loop (www.openloop.org) is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies. In this particular case, Open Loop partnered with 10 European AI companies to co-create an ADIA framework (policy prototype) that those companies could test by applying it to their own AI applications. The policy prototype was structured into two parts: the prototype law, which was drafted as legal text, and the prototype guidance, which was drafted as a playbook. The latter provided participants with additional guidance on procedural and substantive aspects of performing the ADIA through: - A step-by-step risk assessment methodology; - An overview of values often associated with AI applications; - A taxonomy of harms; - Examples of mitigating measures. The prototype was tested against the following three criteria: 1) policy understanding; 2) policy effectiveness; 3) policy costs.The goal was to derive evidence-based recommendations relevant to ongoing policy debates around the future of AI regulation. Based on the results of the prototyping exercise and the feedback on the prototype law and playbook, the report advises lawmakers formulating requirements for AI risk assessments to take the following recommendations into account: - Focus on procedure instead of prescription as a way to determine high-risk AI applications; - Leverage a procedural risk assessment approach to determine what is the right set of regulatory requirements that apply to organisations deploying AI applications; - Provide specific and detailed guidance on how to implement an ADIA process, and release it alongside the law; - Be as specific as possible in the definition of risks within regulatory scope; - Improve documentation of risk assessment and decision-making processes by including justifications for mitigation choices; - Develop a sound taxonomy of the different AI actors involved in risk assessment; - Specify, as much as possible, the set of values that may be impacted by AI/ADM and provide guidance on how they may be in tension with one another; - Don’t reinvent the wheel; combine new processes with established ones, improving the overall approach.
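As a sketch of how the playbook's taxonomy of harms and its recommendation to document justifications for mitigation choices might look in practice, consider the hypothetical structure below. The harm categories, likelihood/severity scales, and field names are assumptions for illustration, not the Open Loop taxonomy or prototype guidance.

```python
# Hypothetical structure linking an identified risk to a harm category,
# likelihood/severity estimates, and a justified mitigation choice. The harm
# categories and fields are illustrative, not the Open Loop playbook taxonomy.
from dataclasses import dataclass

HARM_TAXONOMY = ["physical", "financial", "psychological", "reputational", "societal"]

@dataclass
class AssessedRisk:
    description: str
    harm_category: str     # one of HARM_TAXONOMY (assumed categories)
    likelihood: str        # e.g., "rare" / "possible" / "likely"
    severity: str          # e.g., "minor" / "significant" / "severe"
    mitigation: str
    justification: str     # why this mitigation was chosen over alternatives

risk = AssessedRisk(
    description="Credit-scoring model under-predicts repayment for thin-file applicants",
    harm_category="financial",
    likelihood="possible",
    severity="significant",
    mitigation="manual review for applicants with short credit histories",
    justification="keeps decisions reviewable while additional training data is collected",
)
```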
2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity but also cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of such activities, this report aims at assessing the High-Level Expert Group (HLEG) on AI Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector in operationalising Trustworthy AI. The list is composed of 131 items that are supposed to guide AI designers and developers throughout the process of design, development, and deployment of AI, although not intended as guidance to ensure compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report would like to contribute to this revision by addressing in particular the interplay between AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear/easy measurement, implementation by AI developers and SMEs; and overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall in that more than 450 stakeholders have signed in and are contributing to the process. The next sections of this report present the items listed by the HLEG followed by the analysis and suggestions raised by the Task Force (see list of the members of the Task Force in Annex 1). The Rapporteurs of the Task Force would like to thank all Task Force Members who kindly and actively participated in the drafting of this document by providing meaningful and insightful comments. In particular, the rapporteurs would like to thank Matti Aksela, Marcus Comiter, Eleftherios Chelioudakis and Marisa Monteiro.
The views presented in this report do not necessarily represent the opinions of all participants of the Task Force or their organizations nor do they explicitly represent the view of any individual participant. The views expressed in this report are those of the authors writing in a personal capacity and do not necessarily reflect those of CEPS or any other institutions with which they are associated.
International Journal of Applied Engineering & Technology, 2023
As artificial intelligence (AI) systems become increasingly ubiquitous and influential, ensuring their safe, secure, and trustworthy development and deployment is of paramount importance. This paper explores the multifaceted challenges and considerations involved in fostering a robust AI ecosystem in the United States. It delves into key aspects such as ethical considerations, technical robustness and security, data privacy and security, and strategies for building public trust. The paper presents a comprehensive analysis of these issues, supported by relevant research and best practices from various stakeholders. Additionally, it provides recommendations and highlights existing initiatives aimed at promoting responsible AI development and deployment. Furthermore, the paper includes three block diagrams to visually represent the technical robustness and security considerations, data privacy and security concerns, and the importance of stakeholder engagement and public trust in AI systems. By addressing these critical aspects, the United States can harness the transformative potential of AI while mitigating risks and upholding ethical principles, ultimately positioning itself as a global leader in responsible AI innovation.
Artificial intelligence is the ability of computer technology to handle tasks that, in the usual sense, require human intelligence, emotional response, decision-making capacity, and strategic technique. Amazon's Alexa and Apple's Siri are well-known examples that can recognise your voice and follow instructions on their own. Kyle Vogt founded Cruise Automation in 2013; the company introduced self-driving cars in the US market, but it had to compete with Google. Nevertheless, by 2017, Tesla, Ford, Audi and GM had improved considerably on the technology and are ready to launch their self-driving cars in the market. Hewlett Packard Enterprises has been quite ambitious in planning an artificially intelligent business, which will be capable of handling real-life customers and clients and solving problems. It has decided to build this branch as a start-up within the long-established, reliable HP brand name.
PAAKAT: Revista de Tecnología y Sociedad, 2022
Starting from exemplifying and recognizing the impacts, risks, and damages caused by some artificial intelligence systems, and under the argument that the ethics of artificial intelligence and its current legal framework are insufficient, the first objective of this paper is to analyze the models and evaluative practices of algorithmic impacts to estimate which are the most desirable. The second objective is to show what elements algorithmic impact assessments should have. The theoretical basis for the analysis of models, taken from Hacker (2018), starts from the discrimination that arises when there is no guarantee that the input data are representative, complete, and purged of biases, in particular historical bias coming from representations made by intermediaries. The design to discover the most desirable evaluation instrument establishes a screening among models and their respective inclusion of the elements present in the best practices at a global level. The analysis sought to review all algorithmic impact evaluations in the relevant literature from 2020 and 2021 to gather the most significant lessons of good evaluation practices. The results show the convenience of focusing on the risk model and on six essential elements in evaluations. The conclusions suggest proposals to move towards quantitative expressions of qualitative aspects, while warning of the difficulties in building a standardized evaluation formula. It is proposed to establish four levels: neutral impacts, risks, reversible damage, and irreversible damage, as well as four protection actions: risk prevention, mitigation, repair, and prohibition.
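A minimal sketch of the four-level/four-action scheme proposed in the conclusions is given below. Pairing each impact level with a single default protection action is an assumed reading made for illustration, not a mapping stated explicitly in the paper.

```python
# Minimal sketch of the proposed four impact levels and four protection
# actions. The one-to-one default pairing is an assumed reading for
# illustration, not a rule stated in the paper.
from enum import Enum

class ImpactLevel(Enum):
    NEUTRAL = "neutral impacts"
    RISK = "risks"
    REVERSIBLE_DAMAGE = "reversible damage"
    IRREVERSIBLE_DAMAGE = "irreversible damage"

class ProtectionAction(Enum):
    PREVENTION = "risk prevention"
    MITIGATION = "mitigation"
    REPAIR = "repair"
    PROHIBITION = "prohibition"

# Assumed default pairing for illustration.
DEFAULT_ACTION = {
    ImpactLevel.NEUTRAL: ProtectionAction.PREVENTION,
    ImpactLevel.RISK: ProtectionAction.MITIGATION,
    ImpactLevel.REVERSIBLE_DAMAGE: ProtectionAction.REPAIR,
    ImpactLevel.IRREVERSIBLE_DAMAGE: ProtectionAction.PROHIBITION,
}

print(DEFAULT_ACTION[ImpactLevel.REVERSIBLE_DAMAGE].value)  # -> "repair"
```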
Digital Society
Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights fo...