Papers by Markus Stumptner
Journal of Information Technology in Construction, Apr 26, 2024
Substandard performance between information systems and applications remains a problem for the Architecture, Engineering, Construction, and Operations (AECO) sector, leading to significant economic, social and environmental costs. The sector suffers from poor interoperability because it lacks a holistic ecosystem for exchanging data and information. Through qualitative research involving AECO sector industry partners' views and opinions, this research extends understanding of issues which affect ecosystem interoperability in the AECO sector. Research questions guided a literature review, survey, semi-structured interview, focus group meeting, and interpretation of the results. The authors believe that incorporating AECO sector industry partners' views is essential for meaningful proposals to emerge. Open questions asked of industry partners received candid responses and confirmed key issues including: the need for the Industry Foundation Classes (IFC) to be fully interoperable; the side effects and impacts of vendor lock-in; integration problems caused by multiple Common Data Environments (CDEs); handover data and information challenges; and the impacts of poor interoperability on sustainable development. Through engagement with industry, this research offers a better understanding of interoperability challenges in the AECO sector and has generated more meaningful actions and solutions capable of improving the sector's data and information ecosystem.
ISTA, 2005
This paper examines the use of constraint-based configuration for the composition of Web Services. Web Services are widely assumed to represent the basis for the next generation of flexible distributed applications in B2B E-commerce, and the composition of complex applications from individual services has attracted much attention. We show how this composition problem can be addressed at increasing levels of semantic content embedded in the description of services, moving from purely manual composition to describing service matching as a configuration problem that can be solved using constraint-based methods. We examine the restrictions imposed by Semantic Web ontology languages and provide a succinct and high-level mechanism for imposing the different boundary conditions resulting from the multilayered application environment. We give an example showing the configuration process for a simple example problem, discuss the ramifications of the full composition problem, and describe the resulting system architecture.
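To make the constraint-based view of composition concrete, the following minimal sketch poses service matching as a tiny constraint satisfaction problem solved by backtracking. The roles, candidate services, message types and the input/output compatibility rule are invented for illustration and are not taken from the paper or its system.

```python
# Minimal sketch (not the paper's system): web service composition posed as a
# constraint satisfaction problem. Service names, message types, and the
# compatibility rule below are illustrative assumptions.

# Each candidate service is described by the message types it consumes/produces.
CANDIDATES = {
    "quote":    [{"name": "QuoteA", "in": {"Order"},   "out": {"Quote"}},
                 {"name": "QuoteB", "in": {"Order"},   "out": {"Quote", "Tax"}}],
    "payment":  [{"name": "PayX",   "in": {"Quote"},   "out": {"Receipt"}},
                 {"name": "PayY",   "in": {"Tax"},     "out": {"Receipt"}}],
    "shipping": [{"name": "ShipZ",  "in": {"Receipt"}, "out": {"Tracking"}}],
}
ROLES = ["quote", "payment", "shipping"]   # composition slots act as CSP variables

def compatible(prev, nxt):
    """Binary constraint: the next service's inputs must be covered by the
    previous service's outputs (a deliberately simplified matching rule)."""
    return nxt["in"] <= prev["out"]

def compose(i=0, partial=None):
    """Chronological backtracking over the roles; returns one consistent binding."""
    partial = partial or []
    if i == len(ROLES):
        return partial
    for svc in CANDIDATES[ROLES[i]]:
        if not partial or compatible(partial[-1], svc):
            result = compose(i + 1, partial + [svc])
            if result:
                return result
    return None   # no consistent composition exists

if __name__ == "__main__":
    solution = compose()
    print([s["name"] for s in solution] if solution else "no composition found")
```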
The state of the art in hardware design is the use of hardware description languages such as VHDL. The designs are tested by simulating them and comparing their output to that prescribed by the specification. A significant part of the design effort is spent on detecting unacceptable deviations from this specification and subsequently localizing the sources of such faults. In this paper, we describe an approach to employ model-based diagnosis for fault detection and localization in very large VHDL programs, by automatically generating the diagnosis model from the VHDL code and using observations about the program behavior to derive possible fault locations from the model. In order to achieve sufficient performance for practical applicability, we have developed a representation that provides a highly abstracted view of programs and faults, but is sufficiently detailed to yield substantial reductions in fault localization costs when compared to the current manpower-intensive approach. The implementation, in conjunction with the knowledge representation, is designed with openness in mind in order to facilitate use of the highly optimized simulation tools available.
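The core idea of consistency-based diagnosis that this approach builds on can be illustrated on a toy example. The sketch below diagnoses a two-gate circuit rather than a VHDL design; the components, observations and enumeration strategy are simplifying assumptions made for this illustration, not the paper's abstracted program model.

```python
# A minimal, illustrative sketch of consistency-based diagnosis, not the
# paper's VHDL-level implementation. The circuit and names are invented.
from itertools import combinations

# System description: a tiny gate network  in1 --[NOT g1]--> x --[NOT g2]--> out
def simulate(faulty, in1):
    """Propagate values, treating faulty components' outputs as unknown."""
    x = None if "g1" in faulty else (not in1)
    out = None if ("g2" in faulty or x is None) else (not x)
    return {"x": x, "out": out}

def consistent(faulty, inputs, observed):
    """A fault assumption is consistent if no non-faulty prediction contradicts an observation."""
    predicted = simulate(faulty, **inputs)
    return all(predicted[k] is None or predicted[k] == v for k, v in observed.items())

def diagnoses(components, inputs, observed):
    """Enumerate minimal sets of components whose failure explains the observation."""
    found = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            if any(set(d) <= set(cand) for d in found):
                continue                       # skip non-minimal candidates
            if consistent(set(cand), inputs, observed):
                found.append(cand)
    return found

if __name__ == "__main__":
    # With in1=True the correct output is True; observing False implicates g1 or g2.
    print(diagnoses(["g1", "g2"], {"in1": True}, {"out": False}))
```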
AI Communications, Apr 1, 1997
Journal of Integrated Security and Safety Science
Allocation of resources to improve security is crucial when we consider people's safety on transport systems. We show how a systems engineering methodology can be used to link business intelligence and railway specifics toward better value for money. A model is proposed to determine the probability of success in service management. The forecasting model is a basic Markov chain. A use case demonstrates a way to align statistical data (crime at stations) with the probability of investment into resources (people, security measures, time).
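As an illustration of the kind of basic Markov chain forecasting mentioned here, the sketch below propagates a state distribution through an assumed transition matrix. The states, probabilities and horizon are invented numbers, not values from the paper.

```python
# Hedged sketch only: a three-state Markov chain used for forecasting.
# States, transition probabilities, and the horizon are illustrative.
import numpy as np

states = ["secure", "incident", "resolved"]
# P[i][j] = probability of moving from state i to state j in one period.
P = np.array([
    [0.90, 0.10, 0.00],   # secure   -> stays secure or an incident occurs
    [0.00, 0.30, 0.70],   # incident -> lingers or is resolved
    [0.80, 0.05, 0.15],   # resolved -> mostly returns to secure operation
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

def forecast(start, steps):
    """Distribution over states after `steps` periods from a pure starting state."""
    dist = np.zeros(len(states))
    dist[states.index(start)] = 1.0
    for _ in range(steps):
        dist = dist @ P
    return dict(zip(states, dist.round(3)))

if __name__ == "__main__":
    # e.g. probability of being in the "secure" state 12 periods after an incident
    print(forecast("incident", 12))
```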
Traditionally the integration of data from multiple sources is done on an ad-hoc basis for each analysis scenario and application. This is an approach that is inflexible, incurs high costs, and leads to "silos" that prevent sharing data across different agencies or tasks. A standard approach to tackling this problem is to design a common ontology and to construct source descriptions which specify mappings between the sources and the ontology. Modeling the semantics of data manually requires enormous human effort and expertise, making an automatic method of semantic modeling desirable. Automatic semantic modeling has been gaining attention in data integration [5], federated data query [14] and knowledge graph construction [6]. This paper proposes a service-oriented architecture for creating a correct semantic model, covering the annotation of training data, the training of the machine learning model, and the prediction of an accurate semantic model for a new data source. Moreover, a holistic process for automatic semantic modeling is presented. Through the use of ASMaaS, the historical semantic annotations used to train the machine learning model in automatic semantic modeling can be shared, reducing the human effort required from users. By specifying a well-defined interface, users are able to access the automatic semantic modeling process at any time, from anywhere. In addition, users need not be concerned with the machine learning technologies and pipeline used in automatic semantic modeling and can focus mainly on the business itself.
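A minimal sketch of the kind of well-defined service interface described here is given below, using Flask purely for illustration. The endpoint names, payload shapes and the stubbed prediction are assumptions, not the actual ASMaaS API.

```python
# Illustrative sketch only: a REST-style interface of the kind an
# "automatic semantic modeling as a service" platform could expose.
from flask import Flask, request, jsonify

app = Flask(__name__)
ANNOTATIONS = []          # shared store of historical semantic annotations
MODEL = {"trained": False}

@app.post("/annotations")
def add_annotation():
    """Register a manually annotated source (columns mapped to ontology classes)."""
    ANNOTATIONS.append(request.get_json())
    return jsonify(count=len(ANNOTATIONS)), 201

@app.post("/model/train")
def train():
    """Train the shared ML model on all annotations collected so far (stubbed)."""
    MODEL["trained"] = len(ANNOTATIONS) > 0
    return jsonify(trained=MODEL["trained"], examples=len(ANNOTATIONS))

@app.post("/model/predict")
def predict():
    """Return a predicted semantic model for a new data source (stubbed)."""
    source = request.get_json()
    if not MODEL["trained"]:
        return jsonify(error="model not trained"), 409
    # Placeholder prediction: map every column to a dummy ontology term.
    mapping = {col: "owl:Thing" for col in source.get("columns", [])}
    return jsonify(semantic_model=mapping)

if __name__ == "__main__":
    app.run(port=8080)
```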
Springer eBooks, 2001
As ontologies become more prevalent for information management, the need to manage the ontologies increases. In the community services sector, multiple organisations often combine to tender for funding. When separate organisations come together to generate reports for funding bodies, an alignment of terminology and semantics is required. Ontology creation is private to these individual organisations, each representing its own view of the domain. This creates problems with alignment and integration, making it necessary to consider how much each ontology should influence the current decision to be made. To assist with determining influence, a trust-based approach over authors and their ontologies provides a mechanism for ranking reasoning results. A representation of authors and the individual resources they provide for the merged ontology becomes necessary. Authors are then weighted by trust, and trust for the resources each author provides to the ontology is calculated. This is then used to assist the integration process, allowing an evolutionary trust model to calculate the level of belief in the resources. Once the integration is complete, the semantic agreement between the ontologies allows for the recalculation of the authors' trust.
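The following small sketch illustrates one possible reading of the trust mechanism: resources inherit the trust of their contributing author for ranking, and author trust is revised from the semantic agreement observed after integration. The organisations, scores and the update rule are assumptions made for this example, not the paper's model.

```python
# Illustrative sketch (not the paper's model): trust-weighted ranking of
# merged-ontology resources and an evolutionary trust update after integration.
author_trust = {"orgA": 0.8, "orgB": 0.5}

# Resources contributed to the merged ontology, tagged with their author.
resources = [
    {"id": "Client",      "author": "orgA"},
    {"id": "ServiceUser", "author": "orgB"},
    {"id": "FundingBody", "author": "orgA"},
]

def rank_resources(resources, author_trust):
    """Rank reasoning results by the trust placed in the contributing author."""
    return sorted(resources, key=lambda r: author_trust[r["author"]], reverse=True)

def update_trust(author_trust, agreements, rate=0.2):
    """Evolutionary update: move each author's trust toward the observed semantic
    agreement of their resources with the merged ontology (a simple assumed rule)."""
    return {a: (1 - rate) * t + rate * agreements.get(a, t)
            for a, t in author_trust.items()}

if __name__ == "__main__":
    print([r["id"] for r in rank_resources(resources, author_trust)])
    # Suppose orgB's terms aligned well (0.9 agreement) and orgA's less well (0.6).
    print(update_trust(author_trust, {"orgA": 0.6, "orgB": 0.9}))
```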
Springer eBooks, Jan 19, 2006
Providing a means to organize the information used in making exploration decisions is an important task in petroleum exploration. In this paper, a machine learning method is put forward to collect experience and to estimate or predict the absent data. Well-logging experiments show that the method is efficient and accurate.
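The abstract does not name a specific learning method, so the sketch below only illustrates the general idea of predicting an absent well-log measurement from other logs with an off-the-shelf regressor on synthetic data; it is an assumption-laden stand-in, not the paper's method.

```python
# Hedged sketch: estimating a missing well-log value from other logs.
# All data below is synthetic and the regressor choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic logs: gamma ray and resistivity used to predict porosity.
gamma = rng.uniform(20, 150, 200)
resistivity = rng.uniform(0.5, 50, 200)
porosity = 0.3 - 0.001 * gamma + 0.002 * resistivity + rng.normal(0, 0.01, 200)

X, y = np.column_stack([gamma, resistivity]), porosity
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:150], y[:150])

# Estimate the "absent" porosity values for the remaining depth interval.
predicted = model.predict(X[150:])
print("mean absolute error:", np.abs(predicted - y[150:]).mean().round(4))
```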
The MIT Press eBooks, 2000
This chapter contains sections titled: Introduction, Behavior Diagrams, Behavior Modeling in UML, Consistent Extension, Consistent Refinement, Consistent Specialization, Conclusion, References.
Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Sep 1, 1998
This paper describes the technical principles and representation behind the constraint-based, automated configurator COCOS. Traditionally, representation methods for technical configuration have focused either on reasoning about the structure of systems or on the quantity of components, which is not satisfactory in many target areas that need both. Starting from general requirements on configuration systems, we have developed an extension of the standard CSP model. The constraint-based approach allows a simple system architecture and a declarative description of the different types of configuration knowledge. Knowledge bases are described in terms of a component-centered knowledge base written in an object-oriented representation language with semantics directly based on the underlying constraint model. The approach combines a simple, declarative representation with the ability to configure large-scale systems and is in use in actual production applications.
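The combination of structural and quantity reasoning can be hinted at with a deliberately simplified sketch: component types, a resource constraint relating provided and consumed slots, and a generator that extends the configuration until the constraint holds. The component types, attributes and demo requirement are invented for this illustration and do not reflect the COCOS knowledge representation.

```python
# A deliberately simplified sketch of component-centred configuration knowledge:
# component types, a quantity/resource constraint, and a generative step.
from dataclasses import dataclass

@dataclass
class Frame:            # provides mounting slots
    slots: int = 8

@dataclass
class Module:           # consumes one slot each
    kind: str = "line-card"

def constraints_satisfied(frames, modules):
    """Resource constraint: installed modules must fit in the provided slots."""
    return sum(f.slots for f in frames) >= len(modules)

def configure(required_modules):
    """Generative step: add components (frames) until the constraints hold."""
    modules = [Module() for _ in range(required_modules)]
    frames = []
    while not constraints_satisfied(frames, modules):
        frames.append(Frame())
    return frames, modules

if __name__ == "__main__":
    frames, modules = configure(required_modules=20)
    print(f"{len(frames)} frames configured for {len(modules)} modules")
```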
European Conference on Artificial Intelligence, Jun 27, 2008
Software Engineering and Knowledge Engineering, 2008
A common data format as provided by the STEP/EXPRESS initiative is an important step toward interoperability in heterogeneous design and manufacturing environments. Ontologies further support integration by providing an explicit formalism of process and design knowledge, thereby enabling semantic integration and re-use of process information. By formalizing the process model in EXPRESS, we gain access to the domain knowledge in the STEP application protocols. We present an approach to process modeling using different models for abstract process knowledge and implementation details. The abstract process model supports re-use and is independent of the implementation. As a result, we translate the process model in combination with the implementation model to an executable workflow.
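A toy sketch of the separation described here, an abstract process model bound to a separate implementation model and then emitted as an executable workflow, is shown below. The step names, service endpoints and workflow encoding are hypothetical and far simpler than a STEP/EXPRESS-based translation.

```python
# Illustrative sketch (not the paper's translator): combining an abstract
# process model with an implementation model to emit an executable workflow.
abstract_process = ["check_geometry", "select_tooling", "generate_nc_program"]

implementation_model = {   # binds each abstract step to a concrete service
    "check_geometry":      {"service": "http://cad.example/check",   "timeout": 30},
    "select_tooling":      {"service": "http://cam.example/tooling", "timeout": 60},
    "generate_nc_program": {"service": "http://cam.example/nc",      "timeout": 120},
}

def to_workflow(steps, bindings):
    """Produce an executable workflow: ordered tasks bound to concrete services."""
    return [{"task": s, **bindings[s], "order": i} for i, s in enumerate(steps, 1)]

if __name__ == "__main__":
    for task in to_workflow(abstract_process, implementation_model):
        print(task)
```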
arXiv (Cornell University), Aug 22, 2017
Traditionally the integration of data from multiple sources is done on an ad-hoc basis for each analysis scenario and application. This is a solution that is inflexible, incurs high costs, leads to "silos" that prevent sharing data across different agencies or tasks, and is unable to cope with the modern environment, where workflows, tasks, and priorities frequently change. Operating within the Data to Decision Cooperative Research Centre (D2D CRC, http://www.d2dcrc.com.au/), the authors are currently involved in the Integrated Law Enforcement Project, which has the goal of developing a federated data platform that will enable the execution of integrated analytics on data accessed from different external and internal sources, thereby providing effective support to an investigator or analyst working to evaluate evidence and manage lines of inquiry in the investigation. Technical solutions should also operate ethically, in compliance with the law and subject to good governance principles. Keywords: legal natural language processing of legal texts, law enforcement investigation management. (Pre-print: a previous version of this paper was presented at the Third Workshop on Legal Knowledge and the Semantic Web (LK&SW-2016), EKAW-2016, November 19th, Bologna, Italy; this version has been submitted for publication at AICOL-2017.) … etc. results in a flood of information in disparate formats and with widely varying content. In Australia, such data is often held by individual federal, state or territory agencies and inter-agency access to and sharing of data is generally subject to multiple laws and complicated rules and agreements [20][21]. Accessing relevant data as well as linking and integrating it in a correct and consistent way remains a pressing challenge, particularly when underlying data structures and access methods change over time. In addition to this challenge, a large volume of data needs to be handled; usually only a fraction of current volumes can be analyzed. The Big Data challenge is to extract maximum value from this flood of data through the use of smart analytics and machine enablement. Traditionally the integration of data from multiple sources is done on an ad-hoc basis for each analytical scenario and application. This is a solution that is inflexible, costly, entrenches "silos" that prevent sharing of results across different agencies or tasks, and is unable to cope with the modern environment, where workflows, tasks, and priorities frequently change. Working within the D2D CRC, one group of authors of this article is currently involved in the Integrated Law Enforcement Project, which has the goal of developing a federated data platform to enable the execution of integrated analytics on data accessed from different external and internal sources, in order to provide effective support to an investigator or analyst working to evaluate evidence and manage lines of inquiry in the investigation. This will be achieved by applying foundational semantic technologies based on the meta-modelling of data models and software systems that permit alignment and translation by use of model-driven transformations between the different APIs, services, process models and meta-data representation schemes that are relied upon by the various stakeholders.
It will also provide easily adapted interfaces to third party data sources currently outside of the stakeholders' reach, such as financial transactions. The other group of authors are involved in the D2D CRC's Law and Policy Program, which aims to identify and respond to the legal and policy issues that arise in relation to the use of Big Data solutions by Australian law enforcement and national security agencies. A 2015 systematic ACM review and mapping [1] of papers on online data mining technology intended for use by law enforcement agencies identified eight main problems being addressed in the literature: (i) financial crime, (ii) cybercrime, (iii) criminal threats or harassment, (iv) police intelligence, (v) crimes against children, (vi) criminal or otherwise links to extremism and terrorism, (vii) identification of online individuals in criminal contexts, and (viii) identification of individuals. The survey also included an array of technologies capable of application to Open Source Intelligence (OSINT), i.e. data collected from publicly available sources in the fight against organized crime and terrorism: Artificial Intelligence, Data Fusion, Data Mining, Information Fusion, Natural Language Processing, Machine Learning, Social Network Analysis, and Text Mining. Data integration in this context raises serious legal compliance and governance challenges. While the Onlife Manifesto considers the use of self-enforcing technologies as the exception, or a last resort option, for coping with the impact of the information revolution [2], nothing prevents the regulation of OSINT in accordance with existing legislation and case law, international customary law, policies, technical protocols, and best practices [3]. Indeed, compliance with existing laws and principles is a precondition for the whole process of integration, as information acquisition, sharing and analysis must occur within the framework of the rule of law.
Research in the area of process analytics has become useful in providing new insights into patient care and supporting decision making. In order to reach the full potential of process analytics, we must look into further details and address challenges when applying it to real-world scenarios, which are often represented by complex systems. One area that has not been explored thoroughly is the ability to identify how processes relate to each other in a network of processes. Different, often overlapping, information about someone or something will always be kept in different domains. There is rarely a chance to link all these pieces of information together and access them directly. Having access to the relations between processes and seeing an overall picture of the process network helps to better understand a complex system and to analyse it further in order to make better, informed decisions. However, attempts to link these processes into a network have led to challenges which have not yet been resolved. The contribution is a detailed use case that highlights existing challenges in discovering patient journey processes, together with two experiments with preliminary results addressing some of the identified challenges. The first experiment investigated compliance checking of clinical processes against guidelines, and the second investigated the matching of event labels with an existing processable collection of health terms. The results of both experiments showed that further research and tool development is required to increase the automation of compliance checking and improve the accuracy of event and term matching.
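As a rough illustration of the second experiment's task, matching event labels against an existing collection of health terms, the sketch below scores labels with simple string similarity. The labels, terms and threshold are invented; the study's actual matching approach and term collection are not reproduced here.

```python
# Minimal sketch, not the study's pipeline: matching event-log labels against
# a collection of standard health terms using simple string similarity.
from difflib import SequenceMatcher

health_terms = ["blood pressure measurement", "discharge summary",
                "medication administration", "triage assessment"]

def best_match(label, terms, threshold=0.6):
    """Return the most similar term, or None if nothing clears the threshold."""
    scored = [(SequenceMatcher(None, label.lower(), t).ratio(), t) for t in terms]
    score, term = max(scored)
    return (term, round(score, 2)) if score >= threshold else (None, round(score, 2))

if __name__ == "__main__":
    for event in ["BP measured", "meds given", "pt triaged"]:
        print(event, "->", best_match(event, health_terms))
```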
Springer eBooks, 2012
The University of South Australia is investigating the highly complex integration of information systems within the CRC for Integrated Engineering Asset Management (CIEAM) and is developing techniques and tools to simplify the data exchange between enterprise-critical systems. Recent research outcomes include a service-oriented integration architecture, an integration toolbox and a service interface for the integration of asset management information systems. In this paper we also propose an extension at the architecture level for enabling secure data exchange between multiple systems. It allows secure information sharing internally within an enterprise as well as externally with business partners, and supports the current trend towards an open service-oriented environment by protecting critical asset management data on a shared communication infrastructure.