"Ich spreche mit dir doch du nicht mit mir" (I speak with you, but you don't speak with me) - Semantic communication problems
1. Ich spreche mit dir, doch du nicht mit mir (I speak with you, but you don't speak with me). Semantic challenges of electronic data exchange. Johann Höchtl, Danube University Krems, Center for E-Government
2.
3.
4. Language problems. "Our wines leave you nothing to hope for" - intended: "Our wines leave nothing to be desired!"; what it says: "There is no hope for our wines!?" "We take your bags and send them in all directions" - intended: "We will forward your luggage anywhere!"; what it says: "We take your suitcases and scatter them in all directions!?" The Beatles, Magical Mystery Tour (1967): "I say hello / Hello, hello / I don't know why you say goodbye / I say hello"
18. Thank you for your attention! Our booth: N-A18, 1st floor. Questions?
19.
Editor's Notes
My name is Johann Höchtl, I am from Danube University Krems, Austria, and I will present some challenges of semantic interoperability and recent research to overcome them. Semantic interoperability is much about connecting concepts, hence the term semantic "bridging". Istanbul would not be a metropolis of its current importance without the two big bridges connecting Europe and Asia. When thinking about Europe and Asia, certain associations arise. Both have a characteristic food culture, traditional clothing and distinct medical traditions. Terms such as corn and rice, red wine and sake, Bach flower remedies and Reiki have something in common, a relationship which can be modeled on a higher level.
While the first four concepts fall into the food domain, with corn and rice being important staple foods, Lederhose and sari have in common that they are sub-concepts of the concept Clothing and share the property "natural material", and Bach flower remedies and Reiki are alternative medical treatments. To complicate things further, you can identify horizontal properties: they all have in common that they can be bought, which belongs to the finance domain. What we can identify here are relationships, properties and hierarchy attributes. In knowledge engineering these are termed super-concepts and sub-concepts, or upper ontology vs. lower ontology. As a knowledge worker you may ask yourself whether you are a generalist or a specialist.
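The super-/sub-concept relationships and inherited properties described here can be sketched as a tiny data structure. This is only an illustrative toy, not a real ontology; the concept names and properties are assumptions drawn from the examples above:

```python
# Toy concept hierarchy: each concept names its super-concept (None = root)
# and carries a set of properties; sub-concepts inherit the properties
# of all their super-concepts.
ONTOLOGY = {
    "Thing":     {"super": None,       "props": {"purchasable"}},
    "Food":      {"super": "Thing",    "props": {"edible"}},
    "Clothing":  {"super": "Thing",    "props": {"natural material"}},
    "Corn":      {"super": "Food",     "props": set()},
    "Rice":      {"super": "Food",     "props": set()},
    "Lederhose": {"super": "Clothing", "props": set()},
    "Sari":      {"super": "Clothing", "props": set()},
}

def all_properties(concept):
    """Collect the properties of a concept and all of its super-concepts."""
    props = set()
    while concept is not None:
        props |= ONTOLOGY[concept]["props"]
        concept = ONTOLOGY[concept]["super"]
    return props
```

Here the horizontal property "purchasable" sits on the root, so every concept inherits it, while "natural material" is shared only by the Clothing sub-concepts.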
After this small introduction to what semantic bridging is about, some more information about my workplace. I work for Danube University Krems, the only publicly owned university for continuing education in Austria. The research focus of the Center for E-Government is e-democracy and the impact of electronic participation on society. You will find out more about what we do when you browse and participate in our public blog. If you are interested, you may submit a paper to the eJournal of eDemocracy and Open Government.
So why are we, as a center for e-government, interested in semantic, ontology-driven data exchange? Because the current state of affairs in the semantic field does not permit unguided exchange on the semantic level. As long as only technical interoperability is concerned, for example when you can strictly follow an XML schema specification, things are fine, but not when it comes to semantic systems without enriched domain knowledge. In the research we conducted together with the CIO section of the Austrian Federal Chancellery, we found that the recall rate of semantic bridging systems which build on explicit domain knowledge is higher than that of systems which try to extract or reconstruct that domain knowledge through dictionary lookups, word frequency analysis or stemming approaches. Three months ago NetBase made a new service publicly available, a content intelligence platform for healthcare: based on user input, it returns treatment advice and possible causes of and cures for diseases. While some of the results may be funny, taken seriously such advice can do more harm than good. Here are some funny assertions by the system. Since its release the system has improved, and these assertions are no longer returned.
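The recall rate used to compare the matching systems above can be computed from a proposed concept mapping and a manually curated gold standard. A minimal sketch, with purely illustrative concept pairs:

```python
def precision_recall(proposed, gold):
    """Precision and recall of a proposed set of concept mappings
    against a manually curated gold-standard mapping."""
    proposed, gold = set(proposed), set(gold)
    correct = proposed & gold
    precision = len(correct) / len(proposed) if proposed else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    return precision, recall

gold  = {("Corn", "Rice"), ("RedWine", "Sake"), ("Lederhose", "Sari")}
found = {("Corn", "Rice"), ("RedWine", "Sake"), ("Corn", "Sake")}
p, r = precision_recall(found, gold)  # two of three proposed pairs are correct
```

A domain-knowledge-driven matcher improves recall by finding gold-standard pairs that purely lexical methods miss.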
Some fundamental properties of semantics. First and foremost, semantic bridging is much about detecting similarity in a computerized manner. When semantic information is, for example, in OWL-DL format, it first has to be converted into a machine-processable representation, which is usually a matrix: its rows and columns index the identified concepts of the two models, and each cell holds their similarity expressed as a value between 0 and 1, with 0 meaning no similarity and 1 meaning identical concepts or a full semantic match. For the human eye a matrix is not the most intuitive way to visualize semantic information; for human perception, directed acyclic graphs, or trees for pure inheritance relationships, are more sensible graphical representations. The naive approach to computing similarity is to completely enumerate all concepts and compare them pairwise. The theoretical amount of processing power required for tasks such as complete DNA analysis or Internet-scale data mining called for new comparison algorithms which avoid this exhaustive pairwise comparison. A prominent early example is ant colony optimization, a heuristic which finds good solutions to the NP-hard traveling salesman problem in reasonable time.
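The naive pairwise enumeration can be sketched as follows, with a simple string-similarity measure standing in for a real semantic one; the concept names are illustrative assumptions:

```python
from difflib import SequenceMatcher

def similarity_matrix(concepts_a, concepts_b):
    """Naive approach: compare every concept of model A with every concept
    of model B, yielding an |A| x |B| matrix of scores in [0, 1]."""
    return [[SequenceMatcher(None, a.lower(), b.lower()).ratio()
             for b in concepts_b]
            for a in concepts_a]

m = similarity_matrix(["Postcode", "Surname"], ["PostalCode", "LastName"])
```

The quadratic number of comparisons is exactly what the newer algorithms mentioned above try to avoid.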
Many of these semantic similarity problems have their origins in detecting structural similarity, for example comparing the similarity between graphs. Especially in the realm of graph similarity, semantic similarity research has produced new approaches and algorithms. While tree edit distance, the number of edit operations needed to transform a tree A into a structurally equivalent tree B, is a rather old idea, similarity flooding is a quite new methodology. It rests on the fundamental assumption that two concepts are similar if their neighbors are similar. While the algorithm iteratively traverses the graph several times and has poor worst-case runtime, sensible additional constraints improve performance, for example a maximum depth up to which the similarity of a node is propagated from its surrounding nodes, or pruning branches which are unlikely to match given a certain threshold. Besides the structural similarity of graphs, element names and their assigned data types also carry semantic information: dictionary-based algorithms calculate the relatedness of words, and similar-sounding or similarly spelled words can be identified with the Soundex or Levenshtein algorithms. Combining multiple similarity measures into one score, e.g. the structural similarity of two nodes and their Soundex similarity, is another challenge. Once the similarity matrix has been established, the most likely matching pairs have to be determined. Based on the indices in the matrix, concepts of A can be seen as feature vectors and compared to the feature vectors of concepts of B with the Euclidean distance, the well-known cosine similarity or the Jaccard coefficient. The Jaccard coefficient measures similarity between sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets.
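The Jaccard coefficient as defined above is straightforward to implement; here it is applied to two illustrative property sets (the convention for two empty sets is an assumption):

```python
def jaccard(a, b):
    """Jaccard coefficient: |intersection| / |union| of two sample sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets count as identical
    return len(a & b) / len(a | b)

score = jaccard({"edible", "purchasable"}, {"purchasable", "wearable"})  # 1/3
```

One shared element out of three distinct elements yields a similarity of 1/3.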
While the previous slide presented algorithms derived from schema matching which are also applicable to ontology matching, these algorithms do not account sufficiently for the semantics in an ontology. A frequent problem is to identify the most specific common ancestor of two concepts in an ontology. Edge-counting measures such as the Leacock-Chodorow measure, for example, base the relatedness of concepts entirely on the distance between them in the ontology represented as a directed graph. In 1995 Resnik proposed a similarity measure which accounts for the depth of the concepts in the graph: a node carries less information the higher it is found along the inheritance line. Dekang Lin refined this concept in 1998 with a very clever, universally applicable, domain- and resource-neutral measure: he defines similarity by the amount of information two concepts share, relative to their most specific common ancestor. To give you an idea of how complex this is: in 2005 a paper was presented at the WWW Conference in Chiba, Japan, in which the computer science department of Indiana University, US, compared a traditional tree-based approach to a graph-based analysis of similarity between all concepts on DMOZ.org, excluding the World and Regional branches. In 2005 DMOZ.org had 150,000 pages. Calculating the graph-based similarity over the hierarchical component and the two non-hierarchical components, symbolic and related cross-links, required a total of 5,000 CPU-hours on a massively parallel cluster of 416 Prestonia cores. Abbreviations or associated words, however, add a level of complexity that prevents automatic inference of concepts. In these cases either custom dictionary knowledge represented in SWRL predicate logic or simply a human-made mapping can solve the mapping problem.
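Lin's measure relates the information content (IC) of two concepts to that of their most specific common ancestor. A minimal sketch over a toy taxonomy; the hierarchy and the frequency counts are illustrative assumptions, not data from any real corpus:

```python
import math

# Toy taxonomy (child -> parent) with observed frequency counts per concept.
PARENT = {"entity": None, "food": "entity", "grain": "food",
          "corn": "grain", "rice": "grain", "drink": "food"}
COUNT = {"corn": 10, "rice": 10, "drink": 20}

def total_count(c):
    """Frequency of a concept = its own count plus that of all descendants."""
    return COUNT.get(c, 0) + sum(total_count(x) for x, p in PARENT.items() if p == c)

TOTAL = total_count("entity")

def ic(c):
    """Information content IC(c) = -log p(c): rarer concepts carry more info."""
    return -math.log(total_count(c) / TOTAL)

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def lin(a, b):
    """Lin (1998): sim(a, b) = 2 * IC(lcs) / (IC(a) + IC(b)),
    where lcs is the most specific common ancestor of a and b."""
    lcs = next(x for x in ancestors(a) if x in ancestors(b))
    return 2 * ic(lcs) / (ic(a) + ic(b))
```

With these counts, `lin("corn", "rice")` comes out at 0.5: the two concepts share the informative ancestor "grain", while a concept compared with itself scores 1.0.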