IEEE Intelligent Systems, 2009
https://doi.org/10.1109/MIS.2009.36
Problems that involve interacting with humans, such as natural language understanding, have not proven to be solvable by concise, neat formulas like F = ma. Instead, the best approach appears to be to embrace the complexity of the domain and address it by harnessing the power of ...
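The contrast this abstract draws can be made concrete with a small sketch. The following Python snippet is my own illustration, not from the paper; the tiny corpus and the counting scheme are invented. It sets a concise closed-form law next to a data-driven bigram model whose only "theory" is counts harvested from text, counts that improve as the corpus grows:

```python
# A contrast sketch, invented for illustration (tiny corpus, trivial task):
# a concise closed-form law versus a data-driven model that is nothing but
# counts harvested from text.
from collections import Counter, defaultdict

def newton_force(mass, acceleration):
    # The "neat formula" side: F = ma.
    return mass * acceleration

def train_bigram_counts(corpus):
    # The data-driven side: no compact equation, just co-occurrence counts
    # that get more useful as the corpus grows.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = ["the cat sat on the mat", "the dog slept on the rug"]
model = train_bigram_counts(corpus)
print(newton_force(2.0, 9.8))        # 19.6
print(model["the"].most_common(2))   # most frequent continuations of "the"
```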
arXiv (Cornell University), 2022
We survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to understand language, and the physical and social situations language encodes, in any humanlike sense. We describe arguments that have been made for and against such understanding, and key questions for the broader sciences of intelligence that have arisen in light of these arguments. We contend that an extended science of intelligence can be developed that will provide insight into distinct modes of understanding, their strengths and limitations, and the challenge of integrating diverse forms of cognition.

What does it mean to understand something? This question has long engaged philosophers, cognitive scientists, and educators, nearly always with reference to humans and other animals. However, with the recent rise of large-scale AI systems, especially so-called large language models, a heated debate has arisen in the AI community on whether machines can now be said to understand natural language, and thus understand the physical and social situations that language can describe. This debate is not just academic; the extent and manner in which machines understand our world have real stakes for how much we can trust them to drive cars, diagnose diseases, care for the elderly, educate children, and more generally act robustly and transparently in tasks that impact humans. Moreover, the current debate suggests a fascinating divergence in how to think about understanding in intelligent systems, in particular the contrast between mental models that rely on statistical correlations and those that rely on causal mechanisms.

Until quite recently there was general agreement in the AI research community about machine understanding: while AI systems exhibit seemingly intelligent behavior in many specific tasks, they do not understand the data they process in the way humans do. Facial recognition software does not understand that faces are parts of bodies, or the role of facial expressions in social interactions, or what it means to "face" an unpleasant situation, or any of the other uncountable ways in which humans conceptualize faces. Similarly, speech-to-text and machine translation programs do not understand the language they process, and autonomous driving systems do not understand the meaning of the subtle eye contact or body language drivers and pedestrians use to avoid accidents. Indeed, the oft-noted brittleness of these AI systems, their unpredictable errors and lack of robust generalization abilities, is a key indicator of their lack of understanding [59].

However, over the last several years, a new kind of AI system has soared in popularity and influence in the research community, one that has changed the views of some people about the prospects of machines that understand language. Variously called Large Language Models (LLMs), Large Pre…
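To make the "statistical correlations" side of the debate concrete, here is a toy sketch of the core operation such models perform: next-token sampling from learned scores. Everything in it is invented for illustration (the context, the scores, the temperature value); no real LLM is remotely this simple, but it shows how fluent continuations can emerge from correlations alone, with no causal world model:

```python
# A toy sketch of next-token prediction, the core operation of the models
# under debate. All values are invented for illustration.
import math
import random

# Hypothetical scores a trained model might assign to candidate tokens
# after the context "faces are parts of".
logits = {"bodies": 3.2, "masks": 1.1, "faces": -0.5, "the": -1.0}

def sample_next_token(logits, temperature=1.0):
    # Softmax over scores, then sample: fluent output can emerge from
    # statistical patterns alone, which is exactly what the debate is about.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
    r, cumulative = random.random(), 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok, probs
    return tok, probs  # numerical-edge fallback

token, probs = sample_next_token(logits, temperature=0.7)
print(token, {t: round(p, 3) for t, p in probs.items()})
```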
ArXiv, 2021
How can we design Natural Language Processing (NLP) systems that learn from human feedback? There is a growing body of research on Human-in-the-loop (HITL) NLP frameworks that continuously integrate human feedback to improve the model itself. HITL NLP research is nascent but multifarious: it addresses various NLP problems, collects diverse feedback from different people, and applies different methods to learn from that feedback. We present a survey of HITL NLP work from both the Machine Learning (ML) and Human-Computer Interaction (HCI) communities that highlights its short yet inspiring history, and thoroughly summarizes recent frameworks with a focus on their tasks, goals, human interactions, and feedback-learning methods. Finally, we discuss future directions for integrating human feedback into the NLP development loop.
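As a deliberately minimal illustration of such a feedback loop, the sketch below assumes a scikit-learn environment and incremental updates; the texts, labels, and the hitl_step helper are hypothetical, not an API from any surveyed framework. The model predicts, a human accepts or corrects the prediction, and the correction is folded back into the model:

```python
# A minimal human-in-the-loop sketch (assumes scikit-learn; the texts,
# labels, and hitl_step helper are hypothetical illustrations).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier()
classes = ["positive", "negative"]

# Seed the model with a little labeled data.
model.partial_fit(
    vectorizer.transform(["great product", "terrible service"]),
    ["positive", "negative"],
    classes=classes,
)

def hitl_step(text, get_human_label):
    # One loop iteration: predict, ask the human, learn from the answer.
    predicted = model.predict(vectorizer.transform([text]))[0]
    corrected = get_human_label(text, predicted)  # human accepts or fixes
    model.partial_fit(vectorizer.transform([text]), [corrected])
    return predicted, corrected

# Simulated human feedback; a real system would show a UI prompt here.
pred, gold = hitl_step("awful experience", lambda text, pred: "negative")
print(pred, "->", gold)
```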
Human Computer Confluence, 2015
A lot of what our brains process never enters our consciousness, even though it may be of potential value to us. So just what are we wasting by letting our brains process stimuli we don't even notice or attend to? This is one of the areas explored by the 16-partner CEEDs project (ceeds-project.eu). Funded by the European Commission's Future and Emerging Technologies programme, CEEDs (the Collective Experience of Empathic Data systems) has developed new sensors and technologies to unobtrusively measure people's implicit reactions to multimodal presentations of very large data sets. The idea is that monitoring these reactions may reveal when you are surprised, satisfied, interested, or engaged by a part of the data, even if you are not aware of being so. Applications of CEEDs technology are relevant to a broad range of disciplines, spanning science, education, design, and archaeology, all the way through to connected retail. This chapter provides a formalisation of the CEEDs approach and its applications and, in so doing, explains how the CEEDs project has broken new ground in the nascent domain of human-computer confluence.
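The measurement idea can be caricatured in a few lines. The sketch below is my own rough illustration, not the CEEDs implementation: it assumes a single scalar physiological stream and flags samples that deviate sharply from a per-user baseline, standing in for moments of implicit surprise or engagement:

```python
# A rough illustration of the measurement idea (my own sketch, not the
# CEEDs implementation): flag moments where a scalar physiological signal
# departs sharply from the user's baseline, without any explicit feedback.
from statistics import mean, stdev

def flag_implicit_reactions(signal, baseline_len=20, threshold=2.0):
    # Estimate the user's resting statistics from an initial window.
    baseline = signal[:baseline_len]
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = []
    for i, x in enumerate(signal[baseline_len:], start=baseline_len):
        z = (x - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:  # candidate moment of surprise or engagement
            flagged.append((i, round(z, 1)))
    return flagged

# Simulated skin-conductance-like stream with one sharp reaction at t=25.
stream = [0.50 + 0.01 * (i % 3) for i in range(30)]
stream[25] = 0.90
print(flag_implicit_reactions(stream))  # flags index 25
```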
…knowledge. That is, how we can collaborate with engineers, economists, physicians, social scientists, and people from many other fields in such a way that our undoubtedly substantial insights into the mechanisms of speech communication eventually have a positive impact on everyday life. Nass and Brave (2007) complain, not without reason, that "Interfaces that talk and listen are populating computers, cars, call centres, and even home appliances and toys, but voice interfaces invariably frustrate rather than help". Every corner of our life is filled with speech. Speaking with each other is the most important means of social interaction. This makes speech one of the most important research subjects of all.
International Journal of Qualitative Methods, 2020
Social scientists doing mixed-methods research have traditionally used human annotators to classify texts according to some predefined knowledge. The "big data" revolution, that is, the fast growth of digitized texts in recent years, brings new opportunities but also new challenges. In our research project, we aim to examine the potential of natural language processing (NLP) techniques to understand the individual framing of depression in online forums. In this paper, we introduce a part of this project that experiments with an NLP classification (supervised machine learning) method capable of classifying large digital corpora according to various discourses on depression. Our question was whether an automated method can be applied to sociological problems outside the scope of hermeneutically more trivial business applications. The present article introduces our learning path from the difficulties of human annotation to the hermeneutic limitations of algorithmic NLP methods. We faced our first failure when we experienced significant inter-annotator disagreement. In response to this failure, we moved to the strategy of intersubjective hermeneutics (interpretation through consensus). The second failure arose because we expected the machine to learn effectively from the human-annotated sample despite its hermeneutic limitations. The machine learning seemed to work appropriately in predicting biomedical and psychological framing, but it failed in the case of sociological framing. These results show that the sociological discourse about depression is not as well founded as the biomedical and psychological discourses, a conclusion which requires further empirical study in the future. An increasing share of machine learning solutions is based on human annotation of semantic interpretation tasks, and such human-machine interactions will probably define many more applications in the future. Our paper shows the hermeneutic limitations of "big data" text analytics in the social sciences, and highlights the need for a better understanding of the use of annotated textual data and of the annotation process itself.
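The two methodological steps described here, checking inter-annotator agreement and then training a supervised classifier on consensus labels, can be sketched as follows. This assumes scikit-learn; the example posts, framing labels, and annotator decisions are invented for illustration:

```python
# A sketch of the paper's two steps, with invented posts, framing labels,
# and annotator decisions (assumes scikit-learn is available).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

# Step 1: quantify inter-annotator agreement on framing labels.
annotator_a = ["biomedical", "psychological", "sociological", "biomedical"]
annotator_b = ["biomedical", "psychological", "biomedical", "biomedical"]
print("Cohen's kappa:", round(cohen_kappa_score(annotator_a, annotator_b), 2))

# Step 2: train a supervised classifier on consensus-labeled forum posts.
texts = [
    "my serotonin levels are off since they changed my medication",
    "I feel worthless and anxious all the time",
    "losing my job and my flat pushed me into this",
    "the new drug stopped working after two weeks",
]
labels = ["biomedical", "psychological", "sociological", "biomedical"]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(vectorizer.transform(["my antidepressant dose is wrong"])))
```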
Hybrid, 2021
According to the predictions of a team of AI experts for the Future of Humanity Institute, computers will outperform humans in translation by 2024 and in writing a New York Times bestseller by 2049.¹ These predictions rest on recent and remarkable advances in language-processing technologies, due to improved AI techniques, increased computing power, and huge datasets collected mostly on the net. Although a bot winning the Booker Prize is still science fiction, a growing proportion of the texts in our daily lives are now produced industrially, from manuals to weather forecasts. Machine translation is currently the best example of these language-processing techniques becoming commonplace through such major platforms as Amazon, Twitter, Facebook, or Google.
2022
As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the 'how' and 'why' of human-computer interaction (HCI) within these frameworks, both current and expected. Such a discussion is necessary for optimal system design, leveraging advanced data-processing capabilities to support decision-making involving humans, but it is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. Within this context, we focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms, especially during the stages of development, deployment, and maintenance? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, we project existing literature in HCI into the space of AutoML; this connection has, to date, largely been unexplored. In so doing, we review topics including user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, we contemplate how AutoML may manifest in effectively open-ended environments. This discussion necessarily reviews projected developmental pathways for AutoML, such as the incorporation of reasoning, although the focus remains on how and why HCI may occur in such a framework rather than on any implementational details. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
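One way to picture question (iii) is a search loop with an explicit human gatekeeper. The sketch below is a hypothetical design, not a system from the reviewed literature; the search space, the scoring stand-in, and the human_approves hook are all invented:

```python
# A hypothetical design sketch, not a system from the reviewed literature:
# a random-search AutoML loop with an explicit human gatekeeper.
import random

search_space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 8]}

def evaluate(config):
    # Stand-in for cross-validated model scoring.
    return 0.70 + 0.001 * config["n_estimators"] - 0.01 * config["max_depth"]

def human_approves(config, score):
    # Stand-in for a UI prompt; a real system would surface explanations,
    # fairness checks, and resource costs here to earn the user's trust.
    return config["max_depth"] <= 5  # e.g. the user insists on simpler models

best = None
for _ in range(10):
    config = {name: random.choice(vals) for name, vals in search_space.items()}
    score = evaluate(config)
    if human_approves(config, score) and (best is None or score > best[1]):
        best = (config, score)

print("accepted configuration:", best)
```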