Institut für Informatik
This article presents an extension of the well-known TPACK model to describe the professional digital competencies of mathematics teachers. We call the extended model the MPC model (Media–Pedagogy–Content) in the following. It additionally includes (1) the consideration of competencies instead of knowledge for a holistic description, (2) the integration of professional digital competencies into the broader context of professional media competencies (explicitly including analog and digital teaching media), (3) the description of concrete individual experiences with digital technology in context-bound subjective domains of experience, and (4) the cross-linking of these concrete individual experiences about (digital) technology across specific (subjective) domains of experience. We first present a motivating literature overview leading to the research question: How can the TPACK model be extended to enable a qualitative description of the professional digital competencies of mathematics teachers against the background of situated experiences? The extended framework is then developed and presented in a detailed theoretical background. In the empirical part of the article, the MPC model is applied in an explicative case study dealing with a mathematics teacher's reflections, elicited in a guided interview, on a planned lesson using virtual reality technology. The qualitative data are interpreted according to the systematic-extensional analysis method. The case study illustrates the importance of taking concrete situated experiences into account, which opens up a new reflective level for analyzing the development of professional mathematics-specific digital competencies.
The role of teaching, learning, and assessment with digital technology has become increasingly prominent in mathematics education. This survey paper provides an overview of how technology has been transforming teaching, learning, and assessment in mathematics education in the digital age and suggests how the field will evolve in the coming years. Based on several decades of research and educational practices, we discuss and anticipate the multifaceted impact of technology on mathematics education, thus laying the groundwork for the other papers in this issue. After a brief introduction discussing the motivations for this issue, we focus our attention on three lines of research: teaching mathematics with technology, learning mathematics with technology, and assessment with technology. We point to new research orientations that address the issue of teaching with technology, specifically describing attempts to conceptualise teachers’ mathematical and digital competencies, perspectives that view teachers as designers of digital resources, and the design and evaluation of long-term initiatives to support teachers as they develop innovative teaching practices enhanced by digital technologies. Our examination shows that learning with technology is still marked by new conceptualisations raised by researchers that can further our understanding of this complex issue. These conceptualisations support the recognition that multiple resources, ranging from paper and pencil to augmented reality, participate in the learning process. Finally, assessment with technology, especially in the formative sense, opens up new possibilities for offering individualised support to learners who can benefit from adaptive systems, though more tasks for conceptual understanding need to be developed.
The Valles Marineris Explorer (VaMEx) initiative of the German Aerospace Center (DLR) will develop and demonstrate, on Earth, the technology for a swarm of heterogeneous and autonomous explorers that will eventually explore the surface of Mars in search of liquid water and signs of life. The swarm consists of multiple unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), each designed for a specific task. When one explorer identifies a region that another is better equipped to explore, they autonomously exchange this information and adjust the exploration plan accordingly. This requires a robust absolute positioning system. To achieve it, multiple different sensors are fused, one of which is the radio-positioning system presented in this work. It uses ultra-wideband (UWB) localization to measure distances between the swarm members and stationary, deployable nodes. During the mission, the stationary nodes are distributed in regions of interest. After their deployment, they measure the distances among themselves and construct a coordinate system using multidimensional scaling (MDS). Afterwards, the swarm members can determine their own positions by measuring their distances to the stationary nodes. The foundation of the radio-localization system is distance measurement between the nodes following the double-sided two-way ranging (DS-TWR) scheme.
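The coordinate-system construction from pairwise distance measurements can be illustrated with classical multidimensional scaling. The sketch below is not taken from the paper: the anchor layout is hypothetical and measurement noise is ignored.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover node coordinates (up to rotation, reflection, and
    translation) from a matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]     # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale            # n x dim coordinates

# Hypothetical anchor layout: four stationary nodes on a 10 m square.
truth = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
D = np.linalg.norm(truth[:, None, :] - truth[None, :, :], axis=-1)
coords = classical_mds(D)

# The recovered geometry reproduces the measured distances.
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
assert np.allclose(D, D_rec, atol=1e-6)
```

In a deployed system, noisy DS-TWR ranges would replace the exact distances here, and a least-squares refinement step would typically follow the MDS initialization.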
Storytelling is a long-established tradition, and listening to stories is still a popular leisure activity. Driven by advancing technology, storytelling media are expanding, e.g., to social robots acting as multimodal storytellers that use behaviours such as facial expressions or body postures. With the overarching goal of automating robotic storytelling, we have been annotating stories with emotion labels which the robot can use to automatically adapt its behavior. In this paper, three different annotation approaches are compared in two studies: 1) manual labels by human annotators (MA), 2) software-based word-sensitive annotation using the Linguistic Inquiry and Word Count program (LIWC), and 3) a machine-learning-based approach (ML). In an online study showing videos of a storytelling robot, the annotations were validated, with LIWC and MA achieving the best and ML the worst results. In a laboratory user study, the three versions of the story were compared regarding transportation and cognitive absorption, revealing no significant differences but a positive trend towards MA. On this empirical basis, the Automated Robotic Storyteller was implemented using manual annotations. Future iterations should include other robots and modalities, fewer emotion labels, and their probabilities.
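The word-sensitive annotation idea can be sketched as a simple lexicon lookup per sentence; the mini-lexicon below is hypothetical and far cruder than the actual LIWC dictionaries.

```python
# Hypothetical mini-lexicon mapping words to emotion categories.
EMOTION_LEXICON = {
    "happy": "joy", "laughed": "joy", "smiled": "joy",
    "afraid": "fear", "trembled": "fear",
    "cried": "sadness", "alone": "sadness",
}

def annotate_sentence(sentence, lexicon=EMOTION_LEXICON, default="neutral"):
    """Label a sentence with the most frequent emotion among its lexicon hits."""
    counts = {}
    for word in sentence.lower().split():
        emotion = lexicon.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return max(counts, key=counts.get) if counts else default

story = ["The fox smiled and laughed.", "The crow was afraid.", "It rained."]
labels = [annotate_sentence(s) for s in story]
# labels -> ['joy', 'fear', 'neutral']
```

A robot could then map each emotion label to a multimodal behaviour (facial expression, posture) while reading the corresponding sentence aloud.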
In the realm of computer science, the acquisition of large-scale datasets has been greatly facilitated by advancements in sensor technologies and the ubiquity of the Internet. This influx of data has led to the widespread adoption of Deep Learning (DL) techniques for various data processing tasks. However, achieving high-quality predictions using DL approaches is challenging, primarily due to two factors: the substantial annotation requirements and the need for comprehensive coverage of data aspects, which are often impractical or impossible.
In practical scenarios, there is a demand for an approach that can classify data with minimal annotated samples for a defined number of categories while also identifying unknown and unlabeled categories. For instance, while a beekeeper can collect vast amounts of (temporal) high-resolution data from an apiary, creating a comprehensive dataset for honey bees is challenging due to their complexity. Similarly, in laboratory settings, inferring cell types from tissue samples can be challenging due to the presence of unknown or rare cell types, despite the availability of large and accurate reference datasets.
Recently, Semi-Unsupervised Learning (SuSL) has emerged as a promising paradigm that leverages both labeled and unlabeled data to enhance DL model performance. However, the current Multi-Layer Perceptron (MLP) implementation of SuSL has limitations when applied to real-world processes or organisms, such as honey bees.
This thesis addresses these limitations by proposing novel methods to enhance the practical applicability of SuSL: (1A) Establishing a generalized encoder-decoder framework for SuSL, enabling seamless integration with various neural network architectures without manual feature extraction. (1B) Investigating critical parameters and dataset characteristics to understand their impact on prediction tasks in a standardized setup. (2) Addressing dataset imbalance in both labeled and unlabeled subsets. (3) Modeling diverse tabular input data while maintaining a compact representation within the network for downstream tasks. (4) Efficiently processing raw sensor data, including time-dependent sensor relations, without manual feature extraction.
In addition to theoretical developments and benchmarking results, this thesis demonstrates the real-world applicability of the proposed SuSL implementation through two case studies in biology: (1) Analysis of a billion-tick dataset from ground-level observations of honey bees: from backend to explorative data analysis to SuSL usage. (2) Application of SuSL on newly recorded and unlabeled data using a large reference atlas for cell type annotation: from data integration to SuSL application.
Through these contributions, this research aims to advance the practical utility of SuSL in real-world scenarios, particularly in domains with complex and heterogeneous data.
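To illustrate the semi-unsupervised setting (a few labels for the known classes, plus unlabeled data that may contain entirely unknown classes), here is a toy constrained-k-means sketch. It is a stand-in for the actual SuSL models, with all data and parameters invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_unsupervised_kmeans(X, y, n_known, n_extra, iters=20):
    """Toy semi-unsupervised clustering: labeled points pin down the first
    n_known centroids; n_extra additional centroids can absorb categories
    that never appear in the labels (the 'unknown' classes)."""
    labeled = y >= 0
    centroids = [X[y == k].mean(axis=0) for k in range(n_known)]
    for _ in range(n_extra):   # farthest-point init for the unknown slots
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()].copy())
    centroids = np.array(centroids)
    assign = np.full(len(X), -1)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dist.argmin(axis=1)
        assign[labeled] = y[labeled]          # labeled points keep their class
        for c in range(len(centroids)):
            if np.any(assign == c):
                centroids[c] = X[assign == c].mean(axis=0)
    return assign

# Three well-separated Gaussian blobs; only blobs 0 and 1 are ever labeled.
blobs = [rng.normal(loc, 0.1, size=(50, 2)) for loc in ([0, 0], [5, 0], [0, 5])]
X = np.vstack(blobs)
y = np.full(len(X), -1)        # -1 = unlabeled
y[:5], y[50:55] = 0, 1         # five labels per known class
assign = semi_unsupervised_kmeans(X, y, n_known=2, n_extra=1)
# The third blob ends up in the extra cluster, i.e. is found as an
# unknown category despite never being labeled.
```

The actual SuSL models in the thesis are generative deep networks rather than k-means, but the division of labor is the same: labeled data anchors the known classes while surplus mixture components capture the unknown ones.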
As Internet of Things (IoT) devices become ubiquitous, they face increasing cybersecurity threats. Unlike standard 1-to-1 communication, the unique challenge posed by n-to-n communication in IoT is that messages must be encrypted not for a single recipient but for a group of recipients. For this reason, Secure Group Communication (SGC) schemes are necessary to encrypt n-to-n communication efficiently for large group sizes. To this end, the literature presents various SGC schemes with varying features, performance profiles, and architectures, making the selection process challenging. A selection from this multitude of SGC schemes is best made based on a benchmark that provides an overview of the schemes' performance. Such a benchmark would make it much easier for developers to select an SGC scheme, but no such benchmark exists yet. This paper aims to close this gap by presenting a benchmark for SGC schemes with a focus on IoT. Since the design of a benchmark first requires the definition of the underlying business problems, we defined, as a first step, suitable problems for the use of SGC schemes in the IoT sector. We identified a common problem for the centralized and decentralized/hybrid SGC schemes, whereas the distributed/contributory SGC schemes required the definition of an independent business problem. Based on these business problems, we first designed a specification-based benchmark, which we then extended to a hybrid benchmark through corresponding implementations. Finally, we deployed our hybrid benchmark in a typical IoT environment and measured and compared the performance of different SGC schemes. Our findings reveal that the absence of a trusted Central Instance (CI) in distributed/contributory SGC schemes has a notable impact on calculation times and storage requirements.
Network impact analysis on the performance of Secure Group Communication schemes with focus on IoT
(2024)
Secure and scalable group communication environments are essential for many IoT applications, as they are the cornerstone for different IoT devices to work together securely to realize smart applications such as smart cities or smart health. Such applications are often implemented in Wireless Sensor Networks, which poses additional challenges: sensors usually have low capacity and limited network bandwidth. Over time, a variety of Secure Group Communication (SGC) schemes have emerged, each with its advantages and disadvantages. This variety makes it difficult for users to determine the best protocol for their specific application. When selecting an SGC scheme, it is crucial to know how it performs under varying network conditions. Research has so far focused only on performance in terms of server and client runtimes. To the best of our knowledge, we are the first to perform a network-based performance analysis of SGC schemes. Specifically, we analyze the network impact on the two centralized SGC schemes SKDC and LKH and on the decentralized/contributory SGC scheme G-DH. To this end, we used the ComBench tool to simulate different network situations and then measured the times required for the following group operations: group creation, adding members, and removing members. The evaluation of our simulation results indicates that packet loss and delay influence the respective SGC schemes differently and that the execution time of the group operations depends more on the network situation than on the group size.
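A back-of-the-envelope sketch (not taken from the paper) of why the choice of scheme matters: after a member leaves, flat SKDC-style rekeying touches every remaining member, while a binary LKH key tree only renews the keys on one leaf-to-root path. The formulas below are the usual textbook approximations, not measured values.

```python
import math

def skdc_rekey_messages(n):
    """Flat SKDC-style rekeying: the new group key is sent to every
    remaining member individually after one member leaves."""
    return n - 1

def lkh_rekey_messages(n):
    """Binary LKH-style rekeying: every key on the leaving member's
    leaf-to-root path is renewed, and each renewed key is encrypted
    for the (at most two) subtrees directly below it."""
    return 2 * math.ceil(math.log2(n))

for n in (8, 1024, 1_000_000):
    print(f"n={n}: SKDC {skdc_rekey_messages(n)} msgs, "
          f"LKH {lkh_rekey_messages(n)} msgs")
```

The linear-versus-logarithmic gap is exactly why network conditions matter: each rekey message is exposed to the packet loss and delay that the simulations vary.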
Graph Drawing is a field of research with applications in any field of science that needs to visualize binary relations. This thesis covers various problems arising when drawing graphs, both in theoretical and applied settings.

In the first and more theory-based part, we start by discussing how and to which degree graph drawings can be used to visually prove graph properties (such as connectivity) in an effective manner. Both for these visual proofs and for graph drawings in general, the visual complexity determines how well humans are able to perceive and process them. We therefore find it paramount to minimize the visual complexity of drawings. For example, one measure for the visual complexity of a straight-line node-link diagram is the number of segments used. We prove lower bounds on the segment number of some planar graph classes; these bounds tell us how (visually) complex node-link diagrams (a traditional drawing style) of these graphs must be at the very least. Next, we consider obstacle representations, which can be far less (visually) complex in some cases, though usually at the expense of being harder to understand.
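To make the segment-number measure concrete, here is a small sketch (function names and inputs are our own, not from the thesis) that counts the segments of a given straight-line drawing by merging incident collinear edges with union-find:

```python
from collections import defaultdict

def segment_number(points, edges):
    """Count the segments of a straight-line drawing: maximal chains of
    edges that meet at shared vertices and lie on a common line."""
    parent = list(range(len(edges)))
    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    incident = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        incident[u].append(idx)
        incident[v].append(idx)
    def same_line_at(w, e1, e2):
        """Do edges e1 and e2 continue each other straight through w?"""
        (a, b), (c, d) = edges[e1], edges[e2]
        p = points[b] if a == w else points[a]   # far endpoint of e1
        q = points[d] if c == w else points[c]   # far endpoint of e2
        o = points[w]
        cross = (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
        dot = (p[0]-o[0])*(q[0]-o[0]) + (p[1]-o[1])*(q[1]-o[1])
        return cross == 0 and dot < 0            # collinear, opposite sides
    for w, inc in incident.items():
        for i in range(len(inc)):
            for j in range(i + 1, len(inc)):
                if same_line_at(w, inc[i], inc[j]):
                    parent[find(inc[i])] = find(inc[j])
    return len({find(i) for i in range(len(edges))})

# A path 0-1-2 with all three points on one line uses a single segment …
pts = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
assert segment_number(pts, [(0, 1), (1, 2)]) == 1
# … while a bend at vertex 1 forces two segments.
pts_bent = {0: (0, 0), 1: (1, 0), 2: (1, 1)}
assert segment_number(pts_bent, [(0, 1), (1, 2)]) == 2
```

Minimizing this quantity over all planar straight-line drawings of a graph gives its segment number, the measure for which the thesis proves lower bounds.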
Next, we investigate the coloring of mixed and directional interval graphs. While this in itself is not a drawing problem, it has applications in, among others, the Sugiyama framework, a widely used framework for layered orthogonal graph drawing. In the final chapter of the first part, we consider drawings of level graphs on few levels under a given set of precedence constraints.

The two problems considered in the second part are motivated by applications in biology. First, we propose a drawing style for visualizing multispecies coalescent trees, which are composed of a species tree and associated gene trees, and investigate various drawing algorithms. Second, we propose a model for visualizing geophylogenies, that is, species trees that label sites on maps, and analyze various variants and algorithms to draw them.
With increasing interest in extraterrestrial exploration, new systems need to be developed for absolute heading determination, since, e.g., the Moon and Mars lack a usable magnetic field. This makes the magnetic compass, the most widely used option on Earth, ineffective. In this paper, we present a novel design for a Sun sensor for absolute heading determination. It combines three low-cost commercial off-the-shelf 2D analog light-angle sensors, each with a 90° field of view, into a single Sun sensor with an almost 180° field of view that can continuously determine the absolute heading at a rate of 200 Hz. The whole system measures only 9 cm x 9 cm x 4.5 cm and weighs 45 g. To determine the heading, the measurements from all three sensors are combined and compared to the predicted Sun position. A sensitivity analysis is conducted on all inputs of the computation. In our tests, a root-mean-square error of 10.7° relative to a magnetometer was achieved under clear-sky conditions.
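In its simplest form, the heading computation described above reduces to comparing the Sun azimuth measured in the vehicle's body frame with the azimuth predicted from ephemeris, time, and position. The sketch below shows only this simplification; it ignores elevation, the fusion of the three detectors, and calibration, and its function name and inputs are our own.

```python
def heading_from_sun(measured_azimuth_body, predicted_azimuth_world):
    """Absolute heading (degrees, clockwise from north) obtained by
    comparing the Sun azimuth measured relative to the vehicle's forward
    axis with the Sun azimuth predicted in the world frame."""
    return (predicted_azimuth_world - measured_azimuth_body) % 360.0

# If the Sun is predicted at azimuth 120° and is measured 30° to the
# right of the vehicle's forward axis, the vehicle is heading 90°.
assert heading_from_sun(30.0, 120.0) == 90.0
```

The full system additionally uses the Sun's elevation and the overlap between the three 90° detectors to resolve the measured direction robustly across the near-180° field of view.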
This cumulative dissertation presents interdisciplinary research in the field of human-computer interaction that is located in both computer science and psychology. It investigates the implementation and subsequent evaluation of Intelligent Virtual Agents (human-like, interactive characters) with mixed-cultural backgrounds (multiple cultural backgrounds combined in one individual, e.g., as a result of migration).
On the one hand, methods from computer science were applied to develop two novel tools that allow for the automatic generation of synthetic non-native speech and for the efficient implementation of Wizard-of-Oz-based interactive scenarios with mixed-cultural Intelligent Virtual Agents (IVAs). On the other hand, methods from psychology were applied to investigate how users perceive cultural behaviours in IVAs and whether their responses are congruent with those seen in human-human interactions.
Additionally, to use IVAs in cultural interventions, effective intervention strategies from social psychology were adapted for interactive scenarios with IVAs.
The research contribution is divided into two levels:
On the first level, basic research on the design and subsequent perception of mixed-cultural IVAs is conducted. With non-native speech being the most reliable cultural marker of mixed-cultural individuals, this dissertation presents work on the design and perception of synthetically generated non-native speech for IVAs. Focusing on the two most salient markers of non-native speech, grammatical mistakes and non-native accents, it evaluates how users perceive IVAs exhibiting these patterns of non-native speech. Furthermore, it investigates in which cases non-native accented synthetically generated speech can be used as a cost-efficient and flexible alternative to non-native accented natural speech.
To simplify and standardise basic research on the perception of mixed-cultural IVAs with non-native speech, this thesis presents a novel tool for automatically generating mixed-cultural speech for IVAs. The tool enables fast and reliable generation of grammatically incorrect and non-native accented synthetic speech based on empirical research.
On the second level, this dissertation presents novel applied research that for the first time implements virtual interactive interventions using mixed-cultural IVAs to raise inter-cultural empathy and reduce implicit cultural bias.
Based on previous research from the social sciences and human-computer interaction, this dissertation also presents the implementation of a Wizard-of-Oz prototyping tool for fast and reliable implementations of complex interactive scenarios with mixed-cultural IVAs.
This dissertation makes a significant contribution to research on the implementation and perception of mixed-cultural IVAs and their application to reduce implicit cultural bias. Presenting the two newly developed tools for automatically generating non-native speech and designing interactive scenarios with convincing mixed-cultural IVAs, it also makes an important technical contribution to agent-based research.