2010
Conceptual schemata, each representing some component of a system in the making, can be integrated in a variety of ways. Herein, we explore some fundamental notions of this. In particular, we examine some ways in which integration using correspondence assertions affects the interrelationship of two component schemata. Our analysis of the logic leads us to reject the commonly asserted requirement of constraining correspondence assertions to single predicates from a source schema. Much previous work has focussed on dominance with regard to preservation of information capacity as a primary integration criterion. However, even though it is desirable that the information capacity of a combined schema dominate one or both of its constituent schemata, here we discuss why domination based on information capacity alone is insufficient for the integration to be semantically satisfactory, and we provide a framework for detecting mappings that prevent schema domination.
1994
Two major problems in schema integration are to identify correspondences between different conceptual schemas and to verify that the proposed correspondences are consistent with the semantics of the schemas. This problem can only be effectively addressed if the conceptual schema is expressed in a semantically rich modelling formalism. We introduce such a modelling formalism, the distinguishing feature of which is the use of case grammar.
1992
In this paper we introduce some terminology for comparing the expressiveness of conceptual data modelling techniques, such as ER, NIAM, and PM, that are finitely bounded by their underlying domains. Next we consider schema equivalence and discuss the effects of the sizes of the underlying domains. This leads to the introduction of the concept of finite equivalence. We give some examples of finite equivalence and inequivalence in the context of PM.
1995
We present a formal framework for the combination of schemas. A main problem addressed is that of determining when two schemas can be meaningfully integrated. Another problem is how to merge two schemas into an integrated schema that has the same information capacity as the original ones, i.e., the resulting schema can represent as much information as the original schemas. We show that both these problems can be solved by placing a restriction on the schemas to be integrated.
Proceedings of the twenty-seventh ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems - PODS '08, 2008
A schema mapping is a high-level specification that describes the relationship between two database schemas. As schema mappings constitute the essential building blocks of data exchange and data integration, an extensive investigation of the foundations of schema mappings has been carried out in recent years. Even though several different aspects of schema mappings have been explored in considerable depth, the study of schema-mapping optimization remains largely uncharted territory to date.
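As an illustrative sketch (not taken from the paper), a single source-to-target mapping rule of the kind studied in this line of work can be applied to source data as follows; the relation names Emp, Person, and WorksIn are invented for the example:

```python
# Illustrative sketch: a schema mapping viewed as a source-to-target rule,
#   Emp(name, dept) -> Person(name) AND WorksIn(name, dept).
# The relation names are hypothetical, chosen only for this sketch.

def apply_mapping(emp_rows):
    """Materialize the target relations required by the rule above."""
    person = {(name,) for name, _dept in emp_rows}
    works_in = {(name, dept) for name, dept in emp_rows}
    return person, works_in

emp = [("ann", "sales"), ("bob", "it")]
person, works_in = apply_mapping(emp)
# person holds ("ann",) and ("bob",); works_in mirrors the Emp rows
```

Optimizing a schema mapping then means finding an equivalent (smaller or structurally simpler) set of such rules that produces the same target instances.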
Journal of Database Management, 2000
We begin with a number of applications and environments in which conceptual-relational mappings are used extensively and in which a solution to the mapping maintenance problem would greatly benefit those applications.
Information Systems, 1984
The specification of the conceptual schema for a data base application is divided into levels. It is argued that, at the highest level, a direct description of the characteristics of the information kept in a data base, and of the constraints governing the existence and transformation of its components, characterizes what a particular data base is in a more fundamental way (hence at a higher and more stable level) than do the operations that happen to be used for data base manipulation. At the next lower level a specification based on operations, using the encapsulation strategy of abstract data types, is introduced, followed by a specification based on representations used in semantic data models. The discussion includes constraints involving temporal aspects. Modularization is also discussed as another dimension in the specification process, orthogonal to the division into levels.
2018
Abstract: Various advances in Schemas Theory have occurred in the years since the Tutorials in 2014 (http://schematheory.net). Advances in Special Systems Theory have been noted already. A presentation on advances in both Schemas Theory and Special Systems Theory was given at the ISSS.org conference in 2018. We have already given an overview of the basis for Schemas Theory and a summary of Dagger Theory, which encompasses it. We have worked on the definition of the Schemata and given a hypothesis about the Core of Design rooted in N-categories over Groupoid syntheses. These recent papers attempt to give an up-to-date view of the state of the art in Schemas Theory. But we need to sketch out the path through the various advances that have occurred since the tutorials in 2014 in order to bring the theory up to date in preparation for further research. Key Words: Schemas Theory, Systems Theory, Form, Pattern, Meta-system, OpenScape, Domain, World, Spacetime, Phenomenology, Structure of a Pattern, Essence of a Form, Nucleus of a System, Locus of a Meta-system, Systems Science, Systemology, Schematology.
Data & Knowledge Engineering, 2004
Conflict detection and analysis are of high importance, e.g., when integrating conceptual schemata, such as UML-Specifications, or analysing goal-fulfilment of sets of autonomous agents. In general, models for this introduce unnecessarily complicated frameworks with several disadvantages regarding semantics as well as complexity. This paper demonstrates that an important set of static and dynamic conflicts between specifications can be diagnosed using ordinary first-order modal logic. Furthermore, we show how the framework can be extended for handling situations when there are convex sets of probability measures over a state-space. Thus, representing specifications as conceptual schemata and using standard Kripke models of modal logic, augmented with an interval-valued probability measure, we propose instrumental definitions and procedures for conflict detection.
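The core idea, detecting a static conflict when no possible state satisfies two specifications at once, can be sketched with a toy Kripke-style state space; the state and proposition names below are invented for illustration and are not from the paper:

```python
# Hedged sketch: two specifications are in static conflict if no state of
# a toy Kripke-style model satisfies both of them. All names are invented.

states = [
    {"door_open": True,  "alarm_armed": False},
    {"door_open": False, "alarm_armed": True},
    {"door_open": False, "alarm_armed": False},
]

def conflicting(p, q):
    """True if no state satisfies both atomic propositions p and q."""
    return not any(s[p] and s[q] for s in states)

print(conflicting("door_open", "alarm_armed"))  # True: the specs conflict
```

The paper's framework goes well beyond this, using full first-order modal logic and interval-valued probability measures over the state space, but the basic diagnosis is of this shape.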
Information Technology & Management, 2005
This paper addresses the problem of handling semantic heterogeneity during database schema integration. We focus on the semantics of terms used as identifiers in schema definitions. Our solution does not rely on the names of the schema elements or the structure of the schemas. Instead, we utilize formal ontologies consisting of intensional definitions of terms represented in a logical language. The approach is based on similarity relations between intensional definitions in different ontologies. We present the definitions of similarity relations based on intensional definitions in formal ontologies. The extensional consequences of intensional relations are addressed. The paper shows how similarity relations are discovered by a reasoning system using a higher-level ontology. These similarity relations are then used to derive an integrated schema in two steps. First, we show how to use similarity relations to generate the class hierarchy of the global schema. Second, we explain how to enhance the class definitions with attributes. This approach reduces the cost of generating or re-generating global schemas for tightly-coupled federated databases.
We present a brief comment on each of three models of schemas: concept maps, mind maps, and concept-relationship knowledge structures, comparing and assessing them without giving details of any of them. We suggest a way of combining the three models.
2001
Interoperability and integration of data sources are becoming ever more important issues as both the amount of data and the number of data producers grow. Interoperability not only has to resolve the differences in data structures, it also has to deal with semantic heterogeneity. Semantics refers to the meaning of data, in contrast to syntax, which only defines the structure of the schema items (e.g., classes and attributes). We focus on the part of semantics related to the meanings of the terms used as identifiers in schema definitions. This paper presents an approach to integrate schemas from different communities, where each community is using its own ontology. The approach merges the ontologies based on similarity relations among concepts of the different ontologies. We present formal definitions of similarity relations based on intensional definitions and derive their extensional consequences. The process of merging ontologies based on the detected similarity relations is discussed. The merged ontology is finally used to derive an integrated schema. The resulting schema can be used as the global schema in a federated database system.
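The link between intensional definitions and their extensional consequences can be sketched very simply: a stricter definition (more defining features) denotes a smaller extension. The following is only an illustration under that assumption, with feature names invented for the example:

```python
# Hedged sketch: comparing intensional definitions of two concepts as
# feature sets and reading off the extensional consequence. Assumption:
# more defining features => smaller extension. Feature names are invented.

def similarity(def_a, def_b):
    """Return a similarity relation between two intensional definitions."""
    if def_a == def_b:
        return "equivalent"           # same extension
    if def_a >= def_b:                # a is defined more strictly than b
        return "subconcept"           # ext(a) is a subset of ext(b)
    if def_b >= def_a:
        return "superconcept"         # ext(a) is a superset of ext(b)
    return "overlapping or disjoint"  # undecidable without more reasoning

car = {"vehicle", "wheeled", "motorized"}
vehicle = {"vehicle"}
print(similarity(car, vehicle))  # subconcept
```

A real reasoning system, as in the paper, would compare logical definitions against a higher-level ontology rather than flat feature sets, but the direction of the inference is the same.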
2008
* Supported by Bell Canada through the Bell University Labs, NSERC and Ontario Centres Of Excellence. [...] remarkable duality between them. The results of the paper can then be seen as an (important yet) particular case of this general duality theory.
1996
The quality of the results produced in the early phases of systems development is a major factor in determining the overall quality of an information system. Therefore, an important task for research in conceptual modelling and requirements engineering is to clarify the concept of quality and develop methods for improving the quality of conceptual schemas. In this paper, we propose an approach for improving schema quality, which is based on a systematic use of schema transformations to incrementally restructure schemas.
2013
Embracing Incompleteness in Schema Mappings. Patricia C. Rodriguez-Gianolli, Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto, 2013. Various forms of information integration have become ubiquitous in current Business Intelligence (BI) technologies. In many cases, the semantic relationship between heterogeneous data sources is specified using high-level declarative rules, called schema mappings. For decades, Skolem functions have been regarded as an important tool in schema mappings as they permit a precise representation of incomplete information. The powerful mapping language of second-order tuple generating dependencies (SO tgds) permits arbitrary Skolem functions and has been proven to be the right class for modeling many integration problems, such as composition and correlation of mappings. This language is strictly more powerful than the languages used in many integration systems, including source-to-target and nested tgds, which are both first-order...
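How Skolem functions represent incomplete information can be sketched as follows; this is an illustration only, and the relation and function names (Emp, WorksIn, f_dept) are invented for the example:

```python
# Illustrative sketch: a Skolem term as a "labeled null". Suppose a mapping
# requires, for each source Emp(name), a target fact WorksIn(name, D) for
# some unknown department D. The Skolem term f_dept(name) stands in for
# that unknown value, and the same arguments always yield the same null.

def skolem(fn_name, *args):
    """Build a Skolem term; equal arguments give an identical term."""
    return (fn_name, args)

def exchange(emp_names):
    """Produce the target WorksIn facts demanded by the mapping."""
    return {(name, skolem("f_dept", name)) for name in emp_names}

target = exchange(["ann", "bob"])
# "ann" is paired with the labeled null ("f_dept", ("ann",)):
# an unknown department, but one that can be referred to consistently.
```

SO tgds generalize this by allowing arbitrary such functions anywhere in the mapping rules, which is what makes them closed under composition.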
1992
Abstract The integration of views and schemas is an important part of database design and evolution and permits the sharing of data across complex applications. The view and schema integration methodologies used to date are driven purely by semantic considerations, and allow integration of objects only if that is valid from both semantic and structural viewpoints.
Data & Knowledge Engineering, 2010
In this article, we address the problem of changing the constraints of a mediated schema to accommodate the set of constraints of a new export schema. The relevance of this problem lies in that the constraints of a mediated schema capture the common semantics of the data sources and, as such, they must be maintained and made available to the users of the mediation environment. We first argue that such a problem can be solved by computing the greatest lower bound of two theories induced by sets of constraints, defined as the intersection of the theories. Then, for an expressive family of conceptual schemas, we show how to efficiently decide logical implication and how to compute the greatest lower bound of two theories induced by sets of constraints. The family of conceptual schemas we work with partly corresponds to OWL Lite and supports the equivalent of named classes, datatype and object properties, minCardinalities and maxCardinalities, InverseFunctionalProperties, subset constraints, and disjointness constraints. Such schemas are also sufficiently expressive to encode commonly used UML constructs, such as classes, attributes, binary associations without association classes, cardinality of binary associations, multiplicity of attributes, and ISA hierarchies with disjointness, but not with complete generalizations.
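The "greatest lower bound as theory intersection" idea can be illustrated in a drastically restricted setting: subset (ISA) constraints only, where logical implication reduces to transitive reachability. The class names below are invented, and this sketch does not reflect the paper's efficient decision procedure:

```python
# Hedged sketch: glb of two constraint theories as the intersection of
# their deductive closures, restricted to subset constraints A <= B, whose
# only inference rule is transitivity. Class names are invented.

def closure(constraints):
    """Deductive closure of subset constraints under transitivity."""
    closed = set(constraints)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

t1 = closure({("Student", "Person"), ("Person", "Agent")})
t2 = closure({("Student", "Person"), ("Person", "LegalEntity")})
glb = t1 & t2  # exactly the constraints implied by both theories
# glb contains ("Student", "Person") but not ("Person", "Agent")
```

In the paper's richer family of schemas (cardinalities, disjointness, inverse functionality) the implication test is more involved, but the glb construction plays the same role.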
ACM SIGMOD Record , 2009
A schema mapping is a specification that describes how data from a source schema is to be mapped to a target schema. Schema mappings have proved to be essential for data-interoperability tasks such as data exchange and data integration. The research in this area has mainly focused on performing these tasks. However, as Bernstein pointed out [7], many information-system problems involve not only the design and integration of complex application artifacts, but also their subsequent manipulation. Driven by this consideration, Bernstein proposed in [7] a general framework for managing schema mappings. In this framework, mappings are usually specified in a logical language, and high-level algebraic operators are used to manipulate them.
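One such algebraic operator, composition, can be sketched for the simplest possible case, mappings given as plain attribute correspondences; the attribute names are invented for this illustration and the real operators work on mappings in a logical language:

```python
# Illustrative sketch: composing two mappings given as simple attribute
# correspondences A->B and B->C into a single mapping A->C.
# Attribute names are invented for the example.

def compose(m_ab, m_bc):
    """Relational composition of two attribute-correspondence mappings."""
    return {a: m_bc[b] for a, b in m_ab.items() if b in m_bc}

m1 = {"emp_name": "name", "emp_dept": "dept"}      # schema A -> schema B
m2 = {"name": "full_name", "dept": "unit"}         # schema B -> schema C
print(compose(m1, m2))  # {'emp_name': 'full_name', 'emp_dept': 'unit'}
```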