Papers by Elke Rundensteiner
BDDB is a Behavioral Design Data Base that manages the design data produced and consumed by different behavioral synthesis tools. These different design tools retrieve design data from BDDB, manipulate the data, and then store the results back into the database. BDDB thus needs to address the following two issues: (1) a design data exchange approach and (2) customized design data interfaces. To address the first issue, we have developed a textual description format for describing design data objects and relationships. This language, referred to as the Behavioral Design Data Exchange Format (BDEF), is used as a common format for exchanging design data between BDDB and the design tools in the behavioral synthesis environment. To address the second issue, we have developed a behavioral object type description language (generally referred to as a schema definition language) for describing the global data structures required by design tools as well as the desired design subviews of this global BDDB design information. One design view class, namely BDEF, is the topic of this report. In this report we give a formal definition of the BDEF format. Then we describe a comprehensive example of applying BDEF to the behavioral synthesis domain. That is, we present the complete BDEF syntax for the Extended Control/Data Flow Graph Model (ECDFG), which is the design representation model used by most behavioral synthesis tools in the UCI CADLAB synthesis system. We also present several example descriptions of designs using this ECDFG model. A parser/graph compiler from BDEF into the generalized ECDFG design representation as well as a BDEF generator from the ECDFG data structures into the BDEF format have been implemented.
The use of VHDL as a hardware description language for automated synthesis has given rise to new problems. A behavioral description of components is given by using standard operators in the language. Therefore, a mismatch between the operators of the language and the functionalities provided by library components arises. In addition, the language does not guarantee uniqueness of descriptions, thus allowing possibly many different ways of describing the same design. In this paper we propose a solution to these problems. The Component Synthesis Algorithm (CSA) recognizes a possibly incomplete behavioral description and generates a minimal set of components from a given library. In particular, CSA maps a complex behavioral description of a unit to one hardware component whenever possible. This process, driven by a particular library, emphasizes resource sharing between mutually exclusive operations that are mergable, i.e., that can be performed by the same component.
Design tools share and exchange various types of information pertaining to the design. The identification of a uniform design representation to capture this information is essential for the development of a successful design environment. We have done an extensive study on the representation needs of existing database tools in the UCI CADLAB, examples of which are graph compilers for high-level hardware specifications, state schedulers, hardware allocators, and micro-architecture optimizers. The result of this study is the development of a design representation model that will serve as a common internal representation (DDM) for all system and behavioral synthesis tools. DDM thus builds the foundation for a CAD Framework in which design tools can communicate by operating on this common representation. The design information is composed of three separate graph models: the conceptual model, the behavioral model, and the structural model. The conceptual model (represented by a Design Entity Graph) captures the overall organization of the design information, such as versions and configurations. The behavioral model (represented by an Augmented Control/Data Flow Graph) describes the design behavior. The structural model (represented by an Annotated Component Graph) captures the hierarchical data path structure and its geometric information. In this paper, we define the last two graph models. They both capture the actual design data of the application domain. Since VHDL has gained increasing popularity as a hardware description language for synthesis, we give numerous examples throughout this report that show how the proposed design representation model can be used to represent VHDL specifications.
International Journal of Approximate Reasoning, Apr 1, 1988
ultimate moment); and productivity Ip = (Imat + Iman + It)/Iman. Practically, for the fuzzification process (Imat, Iman, It, and Ia are interpreted as fuzzy goals, and Isi and Ip as fuzzy constraints), an exponential membership function μk = e^(−k|x|) (where k is the importance coefficient) is proposed. Thus by this procedure, the optimal investment is alternative a3, and the hierarchization of the alternatives is a3, a4, a2, a1. A computer program was written to process the data and produce the results.
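The exponential membership function above can be sketched in a few lines. A minimal sketch, assuming a min-aggregation across criteria for ranking; the `rank` helper and that aggregation rule are illustrative assumptions, not the exact procedure of the reviewed work.

```python
import math

def membership(x, k):
    """Exponential membership function mu_k(x) = e^(-k*|x|),
    where k is the importance coefficient of the criterion
    and x is the deviation from the target value."""
    return math.exp(-k * abs(x))

def rank(alternatives, k=1.0):
    """Order alternatives by their worst (min) membership across
    criteria, i.e. a min-intersection of fuzzy goals and constraints.
    (Illustrative assumption, not the paper's exact aggregation.)"""
    score = {a: min(membership(x, k) for x in xs)
             for a, xs in alternatives.items()}
    return sorted(score, key=score.get, reverse=True)
```

For instance, `rank({'a1': [2.0], 'a2': [1.0], 'a3': [0.1]})` orders the alternatives by how close each criterion deviation is to zero.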
Semantic database models utilize several fundamental forms of groupings to increase their expressive power. In this paper we consider four of the most common of these constructs: basic set groupings, is-a related groupings, power set groupings, and Cartesian aggregation groupings. For each, we define a number of useful restrictions that control its structure and composition. This permits each grouping to capture more subtle distinctions of the concepts or situations in the application environment. The resulting set of restrictions forms a framework which increases the expressive power of semantic models and specifies various set-related integrity constraints.
Springer eBooks, Nov 13, 2005
Design Automation Conference, Jul 1, 1992
It is generally overlooked that designers use functional models more frequently than behavioral or gate-level models. In functional modeling, the functionality of one or more register-transfer level (RTL) components is described as separate concurrent blocks using a hardware description language such as VHDL. In this paper, we present two algorithms that synthesize a netlist of RTL components from a functional description while minimizing hardware costs and delay. Experimental results show that the proposed algorithms produce designs that are comparable to those produced by human designers.
International Journal of Approximate Reasoning, May 1, 1989
It has been widely recognized that the imprecision and incompleteness inherent in real-world data suggests a fuzzy extension for information management systems. Various attempts to enhance these systems by fuzzy extensions can be found in the literature. Varying approaches concerning the fuzzification of the concept of a relation are possible, two of which are referred to in this article as the generalized fuzzy approach and the fuzzy-set relation approach. In these enhanced models, items can no longer be retrieved by merely using equality-check operations between constants; instead, operations based on some kind of nearness measure have to be developed. In fact, these models require such a nearness measure to be established for each domain for the evaluation of queries made upon them. An investigation of proposed nearness measures, often fuzzy equivalences, is conducted. The unnaturalness and impracticality of these measures leads to the development of a new measure: the resemblance relation, which is defined to be a fuzzified version of a tolerance relation. Various aspects of this relation are analyzed and discussed. It is also shown how the resemblance relation can be used to reduce redundancy in fuzzy relational database systems.
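A minimal sketch of such a nearness measure on a numeric domain, assuming a linear decay with distance (the concrete shape is an assumption here, not the paper's definition): it is reflexive and symmetric but deliberately not transitive, the defining properties of a tolerance relation rather than a fuzzy equivalence.

```python
def resemblance(a, b, spread=1.0):
    """Degree in [0, 1] to which two numeric domain values resemble
    each other: 1 for equal values, falling off linearly with distance.
    Reflexive (resemblance(a, a) == 1) and symmetric, but not
    transitive -- the hallmark of a fuzzified tolerance relation."""
    return max(0.0, 1.0 - abs(a - b) / spread)
```

Non-transitivity is what distinguishes it from a fuzzy equivalence: `resemblance(0, 0.6)` and `resemblance(0.6, 1.2)` are both positive, yet `resemblance(0, 1.2)` is 0.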
IEEE Transactions on Knowledge and Data Engineering, 1992
Query languages designed for traditional database systems, such as the relational model, generally support set operations. However, the semantics of these set operations are not adequate for the richer data models of newly developed object-based database systems that include object-oriented and semantic data modeling concepts. The reason is that precise semantics of set operations on complex objects require a clear distinction between the dual notions of a set and a type, both of which are present in the class construct found in object-based data models. In fact, class creation by set operations has largely been ignored in the literature. Our paper fills this gap by presenting a framework for executing set-theoretic operations on the class construct. The proposed set operations, including set difference, union, intersection, and symmetric difference, determine both the type description of the derived class as well as its set membership. For the former, we develop inheritance rules for property characteristics such as single- versus multi-valued and required versus optional. For the latter, we borrow the object identity concept from data modeling research. Our framework allows for property inheritance among classes that are not necessarily is-a related.
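As a sketch of the idea for one operation: a union class draws its members from either operand by object identity, while its type keeps only the properties the operands share, marking each required only when both operands require it. The `ClassConstruct` shape and this specific inheritance rule are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class ClassConstruct:
    """A class plays a dual role: a type (its property descriptions)
    and a set (the object identities of its members)."""
    properties: dict   # property name -> required? (True) or optional (False)
    members: set       # object identities

def class_union(c1, c2):
    """Derive the union class: membership is the union of the operands'
    member identities; the type keeps the common properties, each
    required only if it is required in both operands."""
    common = c1.properties.keys() & c2.properties.keys()
    props = {p: c1.properties[p] and c2.properties[p] for p in common}
    return ClassConstruct(props, c1.members | c2.members)
```

Deciding membership by object identity (rather than by value) is what lets the same object appear once in the derived class even when it is reached through both operands.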
Journal of Korean Institute of Intelligent Systems (한국지능시스템학회 논문지), Mar 1, 1993
The need for extending information management systems to handle the imprecision of information found in the real world has been recognized. Fuzzy set theory together with possibility theory represents a uniform framework for extending the relational database model with these features. However, none of the existing proposals for handling imprecision in the literature has dealt with queries involving a functional evaluation of a set of items, traditionally referred to as aggregation. Two kinds of aggregate operators, namely, scalar aggregates and aggregate functions, exist. Both are important for most real-world applications, and are thus supported by traditional languages like SQL or QUEL. This paper presents a framework for handling these two types of aggregates in the context of imprecise information. We consider three cases, specifically, aggregates within vague queries on precise data, aggregates within precisely specified queries on possibilistic data, and aggregates within vague queries on imprecise data. These extensions are based on fuzzy set-theoretical concepts such as the extension principle, the sigma-count operation, and the possibilistic expected value. The consistency and completeness of the proposed operations are shown.
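A minimal sketch of one of the named building blocks, the sigma-count; the weighted-average aggregate built on top of it is an illustrative assumption, not the paper's operator.

```python
def sigma_count(memberships):
    """Sigma-count: the scalar cardinality of a fuzzy set,
    defined as the sum of its membership degrees."""
    return sum(memberships)

def fuzzy_avg(values, memberships):
    """Average over a fuzzy subset of tuples, weighting each value
    by its membership degree and normalizing by the sigma-count.
    (Illustrative aggregate, an assumption for this sketch.)"""
    return sum(v * m for v, m in zip(values, memberships)) / sigma_count(memberships)
```

For a vague query such as "how many well-paid employees are there?", tuples matching to degrees [1.0, 0.5, 0.5] yield a sigma-count of 2.0.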
Class creation by set operations has largely been ignored in the literature. Precise semantics of set operations on complex objects require a clear distinction between the dual notions of a set and a type, both of which are present in a class. Our paper fills this gap by presenting a framework for executing set-theoretic operations on the class construct. The proposed set operations determine both the type description of the derived class as well as its set membership. For the former, we develop inheritance rules for property characteristics such as single- versus multi-valued and required versus optional. For the latter, we borrow the object identity concept from data modeling research. Our framework allows for property inheritance among classes that are not necessarily is-a related.
An object-oriented data schema is a complex structure of classes interrelated via generalization and property decomposition relationships. We define an object-oriented view to be a virtual schema graph with possibly restructured generalization and decomposition hierarchies, rather than just one individual virtual class as proposed in the literature. In this paper, we propose a methodology, called MultiView, for supporting multiple such view schemata. MultiView is anchored on the following complementary ideas: (a) the view definer derives virtual classes and then integrates them into one consistent global schema graph and (b) the view definer specifies arbitrarily complex view schemata on this augmented global schema. The focus of this paper is, however, on the second, less explored, issue. This part of the view definition is performed using the following two steps: (1) view class selection and (2) view schema graph generation. For the first, we have developed a view definition language that can be used by the view definer to specify the selection of the desired view classes from the global schema. For the second, we have developed two algorithms that automatically augment the set of selected view classes to generate a complete, minimal and consistent view class generalization hierarchy. The first algorithm has linear complexity but it assumes that the global schema graph is a tree. The second algorithm overcomes this restricting assumption and thus allows for multiple inheritance, but it does so at the cost of higher complexity.
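The tree-case augmentation step can be sketched as closing the selected classes under nearest common ancestors. This quadratic pairwise sketch illustrates the idea only; the paper's tree algorithm achieves linear complexity, and the child-to-parent schema encoding here is an assumption.

```python
def augment_view_classes(parent, selected):
    """Given a tree-shaped global schema (child -> parent map; the
    root maps to None) and a set of selected view classes, add the
    nearest common ancestor of every pair so the result forms a
    complete, consistent generalization hierarchy."""
    def ancestors(c):
        chain = []
        while c is not None:
            chain.append(c)
            c = parent.get(c)
        return chain

    result = set(selected)
    sel = list(selected)
    for i, a in enumerate(sel):
        for b in sel[i + 1:]:
            line_a = ancestors(a)
            line_b = set(ancestors(b))
            # nearest common ancestor: first of a's ancestors on b's line
            result.add(next(x for x in line_a if x in line_b))
    return result
```

With a schema tree Vehicle > {Car > {Sedan, Coupe}, Truck}, selecting {Sedan, Coupe} pulls in Car, while selecting {Sedan, Truck} pulls in Vehicle, so every pair of view classes has a generalization inside the view.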
Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies, 2020
Clostridium difficile infection (CDI) is a common and often serious hospital-acquired infection. The CDI Risk Estimation System (CREST) was developed to apply machine learning methods to predict a patient's daily hospital-acquired CDI risk using information from the electronic health record (EHR). In recent years, several systems have been developed to predict patient health risks based on electronic medical record information. How to interpret the outputs of such systems and integrate them with healthcare work processes remains a challenge, however. In this paper, we explore the clinical interpretation of CDI Risk Scores assigned by the CREST framework for an L1-regularized Logistic Regression classifier trained using EHR data from the publicly available MIMIC-III Database. Predicted patient CDI risk is used to calculate classifier system output sensitivity, specificity, positive and negative predictive values, and diagnostic odds ratio using EHR data from five days and one day before diagnosis. We identify features which are strongly predictive of evolving infection by comparing coefficient weights for our trained models and consider system performance in the context of potential clinical applications.
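The reported evaluation metrics follow directly from confusion-matrix counts; a small sketch (the counts in the usage example are made up for illustration, not taken from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute the diagnostic metrics named above from the four
    confusion-matrix counts of a binary risk classifier."""
    return {
        'sensitivity': tp / (tp + fn),  # true positive rate
        'specificity': tn / (tn + fp),  # true negative rate
        'ppv': tp / (tp + fp),          # positive predictive value
        'npv': tn / (tn + fn),          # negative predictive value
        'dor': (tp * tn) / (fp * fn),   # diagnostic odds ratio
    }
```

With hypothetical counts tp=8, fp=2, tn=18, fn=2, this gives sensitivity 0.8, specificity 0.9, and a diagnostic odds ratio of 36.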