Wissensmanagement: Werkzeuge für Praktiker (German Edition)


The book is part of the Intelligent Data-Centric Systems series. All contributions showcase essential research results, concepts and innovative teaching methods to improve engineering education. Further, they focus on a variety of areas, including virtual and remote teaching and learning environments, student mobility, support throughout the student lifecycle, and the cultivation of interdisciplinary skills. Presenting the current state of the IIoT and the concept of cybermanufacturing, this book is at the nexus of research advances from the engineering and computer and information science domains.

It features contributions from leading experts in the field with years of experience in advancing manufacturing. Readers will acquire the core system science needed to make the transformation to cybermanufacturing, spanning the full spectrum from ideation to physical realization. The authors show that our digital age is characterized by comprehensive technological innovation, complex interconnectedness, and fast innovation cycles that elude classical descriptive models and traditional regulatory mechanisms.

Which awareness functions are needed to replicate reality in virtual worlds? Do virtual companies have organizational cultures of their own? Work is a systemically relevant element of the global economy, part of our everyday life, and a reflection of our socio-economic conditions and developments. This book outlines important trends, opportunities, and risks in the working environment of the 21st century. International researchers, economic actors, and politicians from twelve different countries are rethinking or 'prethinking' work, and have produced inspiring articles on the future of work and the future of our socio-economic reality.

Juni in Aachen, by Sabina Jeschke; 6 editions published between and in German, held by 61 WorldCat member libraries worldwide. Main description: Gallen hosted the KyWi for the first time, organized with the participation of Prof. Administrative innovation is not an end in itself and requires evaluation and design. Nomos edition sigma. External consultants in public administration. The trainee programme will, as explicitly stated in the Senate resolution, benefit PuMa graduates, but certainly also other Bachelor graduates, such as those of the HWR degree programme "Verwaltungsinformatik".

This is facilitated by the fact that the state has long been using IT to carry out its tasks, which touches classical principles of state organization in a decidedly sensitive way. Based on an extension of the PageRank matrix, eigenvectors representing the distribution of a term after propagating term weights between related data items are computed. The result is an index which takes the document structure into account and can be used with standard document retrieval techniques.

As the scheme takes the shape of an index transformation, all necessary calculations are performed at index time. Abstract We present a novel incremental algorithm to compute changes to materialized views in logic databases like those used by rule-based reasoners. The method presented in this article overcomes both drawbacks, arguably at an acceptable price. Abstract Creating descriptive labels for pictures is an important task with applications in image retrieval, Web accessibility and computer vision.
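The incremental view-maintenance idea mentioned above can be illustrated with a counting-based sketch, a generic textbook technique rather than the article's algorithm; the rule and all names are invented:

```python
# Sketch: counting-based incremental maintenance of a materialized view, a
# generic textbook technique (not the algorithm of the article). The rule
# path(X, Z) :- edge(X, Y), edge(Y, Z) and all names are invented; 'counts'
# tracks how many derivations support each derived fact, so deletions can be
# handled without recomputing the view from scratch.
from collections import Counter

edges = set()
counts = Counter()  # derived path fact -> number of supporting derivations

def delta(e, db):
    """Derivations of path facts that use edge e, with multiplicity."""
    x, y = e
    d = Counter()
    for s, t in db:
        if s == y:              # e in first position: (x,y) joins (y,t)
            d[(x, t)] += 1
        if t == x:              # e in second position: (s,x) joins (x,y)
            d[(s, y)] += 1
    if x == y:                  # the pair (e, e) was counted twice
        d[(x, y)] -= 1
    return d

def insert(e):
    edges.add(e)
    counts.update(delta(e, edges))

def delete(e):
    for f, n in delta(e, edges).items():
        counts[f] -= n
        if counts[f] <= 0:
            del counts[f]
    edges.discard(e)
```

A deleted edge only retracts the derived facts whose support count drops to zero, which is the point of maintaining counts instead of recomputing.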

Automatic creation of such labels is difficult, especially for pictures of artworks. Existing implementations are highly successful in terms of player count and number of collected labels, but hardly create comprehensive tag sets containing both general and specific labels. We propose Karido, a new image labeling game designed to collect more diverse tags.

This paper describes the design and implementation of the game, along with an evaluation based on data collected during a trial deployment. The game is compared to an existing image labeling game using the same database of images. Results of this evaluation indicate that Karido collects more diverse labels and is fun to play.

Abstract The humanities rely on both field research data and databases but rarely have the means necessary for employing them. Crowdsourcing on the Web, using social media specifically designed for the purpose, offers a promising alternative. This article reports on two endeavors of this kind. It motivates and describes the approach and further introduces the semantic analysis method, based on higher-order singular value decomposition, that was specially designed for the project.

Abstract This article reports on the conception of a novel digital backchannel, code-named Backstage, dedicated to large classes. It aims at empowering not only the audience but also the speaker, at promoting the awareness of both audience and speaker, and at promoting an active participation of students in the lecture. The backchannel supports different forms of inter-student communication via short microblog messages, social evaluation and ranking of messages by the students themselves, and aggregation of students' opinions. These features aim at increasing the students' community feeling, at strengthening the students' awareness of and co-responsibility for the class work, and thereby at promoting the students' participation in the lecture.

The backchannel further supports immediate, concise feedback to the lecturer based on selected and aggregated student opinions, aiming at strengthening the lecturer's awareness of students' difficulties. This integration enables a limited form of recursion for traversing RDF paths of unknown length at almost no additional cost over conjunctive triple patterns.

For these extended NREs (nested regular expressions), we have implemented an evaluation algorithm with polynomial data complexity. To the best of our knowledge, this demo is the first implementation of NREs or similarly expressive RDF path languages with this complexity. Abstract Complex Event Processing (CEP) denotes algorithmic methods for deriving higher-level knowledge, or complex events, from a stream of lower-level events in a continuous and timely fashion.

High-level Event Query Languages (EQLs) are designed for expressing complex events in a convenient, concise, effective and maintainable manner. CEP differs fundamentally from traditional database or Web querying: CEP continuously evaluates standing queries against a stream of incoming event data, whereas traditional querying evaluates incoming ad hoc queries against more or less standing data. However, EQLs and traditional query languages share a need for clear formal semantics, which typically consist of two parts: a declarative semantics specifying what the answer of a query should be, and an operational semantics telling how this answer is actually computed.
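The contrast between standing and ad hoc queries can be made concrete with a toy standing query over an event stream; the query, threshold, and window below are invented for illustration:

```python
# Sketch: a standing CEP query evaluated continuously over an event stream,
# in contrast to an ad hoc query over stored data. The query (emit a complex
# "overheat" event when two temperature readings above a threshold occur
# within a time window) and all parameters are invented for illustration.
def standing_query(stream, threshold=40, window=3):
    pending = []  # earlier high readings that are still temporally relevant
    for time, value in stream:
        # discard readings too old to contribute to any future match
        pending = [t for t in pending if time - t <= window]
        if value > threshold:
            if pending:
                yield ("overheat", pending[0], time)
            pending.append(time)
```

The query is installed once and produces complex events as simple events arrive, rather than being issued once against data at rest.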

The declarative semantics serves as reference for the operational semantics, which is the basis for query evaluation and optimization. While formal semantics is well understood for traditional query languages, it has been rather neglected for EQLs so far. The operational semantics, on the one hand, is based on CERA, a tailored variant of relational algebra, and on incremental evaluation of query plans.

Although the basic idea might sound familiar from previous approaches like [3, 12, 16], the way it is realized here is significantly different. The declarative semantics, on the other hand, is defined using a Tarski-style model theory with an accompanying fixpoint theory.

Abstract Complex Event Processing (CEP) denotes algorithmic methods for making sense of events by deriving higher-level knowledge, or complex events, from lower-level events in a timely and continuous fashion. At the core of CEP are queries continuously monitoring the incoming stream of "simple" events and recognizing "complex" events from these simple events. Event queries monitoring incoming streams of simple events serve as specifications of situations that manifest themselves as certain combinations of simple events occurring, or not occurring, over time and that cannot be detected solely from one of the single events involved or parts of them.

Special-purpose Event Query Languages (EQLs) have been developed for expressing complex events in a convenient, concise, effective and maintainable manner.


This chapter identifies five language styles for CEP, namely composition operators, data stream query languages, production rules, timed state machines, and logic languages; describes their main traits; illustrates them on a sensor network use case; and discusses suitable application areas of each language style. Abstract This manifesto explains and stresses the importance of "digital social media", "social software" and "social computing". In particular, it makes the claim that we need a better understanding of how this mix of enabling technology, social behaviour and market practices is challenging our socio-economic and political systems, and puts forward an action plan for the areas of education, fundamental research and applied research to address these challenges.

The goal of this manifesto is to raise awareness of digital social media and to stress the need for research, research funding, and education in a field so far under-represented in public research funding programmes and in education. This manifesto does not cover all aspects of digital social media, nor does it provide a comprehensive treatment of their socio-economic impact.

Such issues are beyond the scope of this manifesto. This manifesto is an outcome of a Perspective Workshop held from the 25th to 29th of January at the research centre Schloss Dagstuhl. The workshop brought together scientists and practitioners from academia and industry, across the fields of social sciences and computer science.

Abstract We present PEST, a novel approach to approximate querying of structured wiki data that exploits the structure of that data to propagate term weights between related wiki pages and tags. Based on the PEST matrix, eigenvectors representing the distribution of a term after propagation are computed. This article gives a detailed outline of the approach and presents first experimental results showing its viability.
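The propagation idea behind PEST can be sketched, in a generic form, as power iteration on a PageRank-like matrix; the two-page graph, damping factor, and initial term weights are assumptions for illustration, not the PEST matrix itself:

```python
# Sketch: propagating a term's weight between linked pages by power iteration
# on a PageRank-like matrix. The two-page graph, damping factor, and initial
# weights are illustrative assumptions, not the PEST matrix of the article.
def propagate(links, weights, damping=0.7, iterations=60):
    pages = sorted(weights)
    w = dict(weights)
    for _ in range(iterations):
        w = {
            p: (1 - damping) * weights[p]
               + damping * sum(w[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return w

links = {"A": {"B"}, "B": {"A"}}   # page A links to B and vice versa
weights = {"A": 1.0, "B": 0.0}     # the term occurs only on page A
```

After convergence, page B carries a nonzero weight for the term even though the term never occurs on it, which is exactly the effect an index transformation of this kind is after.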

Abstract Reasoning in wikis has focused so far mostly on expressiveness and tractability and neglected related issues of updates and explanation. In this demo, we show reasoning, explanation, and incremental updates in the KiWi wiki and argue that it is a perfect match for OWL 2 RL reasoning.


Explanation nicely complements the "work-in-progress" focus of wikis by explaining how and from which information content was derived, and thus helps users to easily discover and remove sources of inconsistencies. Incremental updates are necessary to minimize reasoning times in a frequently changing wiki environment. Abstract Web crawlers are increasingly used for focused tasks such as the extraction of data from Wikipedia or the analysis of social networks like last.fm. In these cases, pages are far more uniformly structured than in the general Web, and thus crawlers can use the structure of Web pages for more precise data extraction and more expressive analysis.

In this demonstration, we present a focused, structure-based crawler generator, the "Not so Creepy Crawler" (nc2). What sets nc2 apart is that all analysis and decision tasks of the crawling process are delegated to an arbitrary XML query engine such as XQuery or Xcerpt. Customizing crawlers just means writing declarative XML queries that can access the currently crawled document as well as the metadata of the crawl process.

We identify four types of queries that together suffice to realize a wide variety of focused crawlers. We demonstrate nc2 with two applications. The first extracts data about cities from Wikipedia with a customizable set of attributes for selecting and reporting these cities; it illustrates the power of nc2 where data extraction from Wiki-style, fairly homogeneous knowledge sites is required.

In contrast, the second use case demonstrates how easy nc2 makes even complex analysis tasks on social networking sites, here exemplified by last.fm. Abstract KiWi is a semantic Wiki that combines the Wiki philosophy of collaborative content creation with the methods of the Semantic Web in order to enable effective knowledge management. Querying a Wiki must be simple enough for beginning users, yet powerful enough to accommodate experienced users. To this end, the keyword-based KiWi query language KWQL supports queries ranging from simple lists of keywords to expressive rules for selecting and reshaping Wiki (meta)data.

The editor enables round-tripping between the twin languages KWQL and visKWQL, meaning that users can switch freely between the textual and visual form when constructing or editing a query. Abstract Knowledge representation and reasoning so far have focused on the ideal ultimate goal, thus stressing logical consistency and semantic homogeneity.

On the way to consistent and homogeneous knowledge representation and reasoning, inconsistencies and divergent opinions often have to be dealt with. In this article, a social vision of knowledge representation is proposed which accommodates conflicting views that may even result in logical inconsistencies; reasoning is used to track divergent, possibly incompatible viewpoints. This approach to knowledge representation and reasoning has been developed for a social software application, a social semantic wiki.

Abstract This article introduces KWQL, pronounced "quickel", a rule-based query language for a semantic wiki based on the label-keyword query paradigm. KWQL allows for rich combined queries of full text, document structure, and informal to formal semantic annotations. It offers support for continuous queries, that is, queries re-evaluated upon updates to the wiki. KWQL is not restricted to data selection but also offers database-like views, enabling "construction", the reshaping of the selected (meta)data into new (meta)data.

Such views amount to rules that provide a convenient basis for an admittedly simple, yet remarkably powerful form of reasoning. KWQL queries range from simple lists of keywords or label-keyword pairs to conjunctions, disjunctions, or negations of queries. Thus, queries range from elementary and relatively unspecific to complex and fully specified (meta)data selections.

Consequently, in keeping with the "wiki way", KWQL has a low entry barrier, allowing casual users to easily locate and retrieve relevant data, while letting advanced users make use of its full power. Abstract Good tree search algorithms are a key requirement for inference engines of rule languages. As Prolog exemplifies, inference engines based on traditional uninformed search methods with their well-known deficiencies are prone to compromise declarativity, the primary concern of rule languages.

The paper presents a new family of uninformed search algorithms that combines the advantages of the traditional ones while avoiding their shortcomings. Moreover, the paper introduces a formal framework based on partial orderings which allows a precise and elegant analysis of such algorithms. Abstract One significant effort towards combining the virtues of Web search, viz. its ease of use, with the virtues of Web queries is keyword-based querying.

Keyword-based query languages trade some of the precision of languages like XQuery, which allow users to formulate exactly what data to select and how to process it, for an easier interface accessible to untrained users. The yardstick for these languages is an easily accessible interface that does not sacrifice the essential premise of database-style Web queries: that selection and construction are precise enough to fully automate data processing tasks.

To ground the discussion of keyword-based query languages, we give a summary of what we perceive as the main contributions of research and development on Web query languages in the past decade. This summary focuses specifically on what sets Web query languages apart from their predecessors for databases. Further, this tutorial (1) gives an overview of keyword-based query languages for XML and RDF, (2) discusses where the existing approaches succeed and what, in our opinion, are the most glaring open issues, and (3) outlines where, beyond keyword-based query languages, we see the need, the challenges, and the opportunities for combining the ease of use of Web search with the virtues of Web queries.

Abstract Exposing not only human-centered information but machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format. This fracturing stifles innovation, as application builders have to cope with more than one Web stack.

With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity. Moreover, we give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs.

We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient data access in a "Web of Data". Abstract Path query languages have previously been shown to complement RDF rule languages in a natural way and have been used as a means to implement the RDFS derivation rules. RPL is a novel path query language specifically designed to be incorporated with RDF rules and comes in three flavors: node-, edge- and path-flavored expressions allow expressing conditional regular expressions over the nodes, edges, or nodes and edges appearing on paths within RDF graphs.


Providing regular string expressions and negation, RPL is more expressive than other RDF path languages that have been proposed. We give a compositional semantics for RPL and show that it can be evaluated efficiently, while several possible extensions of it cannot. Abstract An RDF graph is, at its core, just a set of statements consisting of subjects, predicates and objects.
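The general flavor of such RDF path querying can be sketched as follows; the tiny graph and the path (one or more `knows` edges followed by a `worksAt` edge) are invented, and this is not the RPL semantics:

```python
# Sketch: evaluating simple label paths over an RDF-like graph of triples.
# The graph and the path "one or more 'knows' edges, then one 'worksAt'
# edge" are invented; this shows the general flavor of RDF path querying,
# not the actual semantics of RPL.
def step(nodes, graph, label):
    """All objects reachable from `nodes` via one edge with `label`."""
    return {o for s, p, o in graph if p == label and s in nodes}

def closure(nodes, graph, label):
    """All nodes reachable via one or more `label` edges (label+)."""
    reached, frontier = set(), set(nodes)
    while frontier:
        frontier = step(frontier, graph, label) - reached
        reached |= frontier
    return reached

graph = {("a", "knows", "b"), ("b", "knows", "c"), ("c", "worksAt", "acme")}
```

Composing `closure` and `step` evaluates a path of unknown length, the case where plain conjunctive triple patterns fall short.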

Nevertheless, since its inception practitioners have asked for richer data structures, such as containers (for open lists, sets and bags), collections (for closed lists) and reification (for quoting and provenance). Though this desire has been addressed in the RDF Primer and the RDF Schema specification, these constructs are explicitly ignored in the RDF model theory.

In this way a characterization of the completeness of a search method and easier completeness checks become possible. Moreover, it simplifies the formulation and proof of further properties. The algorithm is complete and memory efficient, never re-expands nodes, and is analysed using the previously developed formalism. Especially the efficient processing of almost tail-recursive programs and the definition of declarative semantics are addressed.

Abstract Event-driven information systems demand systematic and automatic processing of events. Complex Event Processing (CEP) encompasses methods, techniques, and tools for processing events while they occur, i.e., in a timely fashion. CEP derives valuable higher-level knowledge from lower-level events; this knowledge takes the form of so-called complex events, that is, situations that can only be recognized as a combination of several events.

Abstract KiWi is a framework for semantic social software applications that combines the Wiki philosophy with Semantic Web technologies. Applications based on KiWi can therefore leverage both. For example, KiWi allows composition of content items, which poses a challenge to the versioning system. In this paper we discuss versioning of composed content items and challenges related to reasoning in collaborative social software, as both topics are concerned with updates on the application state.

Abstract This presentation outlines requirements for querying and reasoning in a social semantic software context. A unified approach which tightly connects the two technologies is sketched. Abstract Traditional wikis excel in collaborative work on emerging content and structure. Semantic Wikis go further by allowing users to expose knowledge in ways suitable for machine processing.


The combination of ease of use, support for work in progress and Semantic Web technologies makes Semantic Wikis particularly interesting for knowledge-intensive work areas such as project management and software development. While several Semantic Wikis have been put to practical use, the concepts their users interact with have been little discussed.

This position paper explores this issue, showing that the design of a conceptual model is not trivial and showing the repercussions of each design choice. The issue is explored stressing the social aspect of Semantic Wikis. Abstract In the domain of Supervisory Control and Data Acquisition (SCADA), significant challenges reside in the variety of proprietary interfaces and protocols which accompany the components and devices provided by different manufacturers.

Centralized supervision and control is hampered by incompatibility issues, and additional costs occur because a number of different systems have to be installed and maintained. The Facility Control Markup Language (FCML) is designed to provide standardized and uniform access to different devices which usually adhere only to proprietary interfaces and protocols.

This way, operators of SCADA systems can easily integrate and access additional devices, and manufacturers can offer their products to a greater number of customers due to increased interoperability. As an application of the Extensible Markup Language (XML), FCML is designed as an open and extensible standard which can easily be adapted and extended to include requirements of future applications and devices. Abstract This paper describes the state of the art in reason maintenance with a focus on its future usage in the KiWi project.

To give a bigger picture of the field, it also mentions closely related issues such as non-monotonic logic and paraconsistency. Abstract L-DSMS is a Java program which can read an XML file with a description of a network of processing nodes for streaming data. L-DSMS automatically combines all the processing nodes into a single Java program which then processes the data. It comes with a number of predefined nodes, together with an interface for implementing new processing nodes.
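An L-DSMS-style wiring of processing nodes from an XML description might look as follows; the element names and node types are invented, since the text does not give L-DSMS's actual vocabulary (and L-DSMS itself is a Java system):

```python
# Sketch: building a stream-processing pipeline from an XML description of
# processing nodes, in the spirit of L-DSMS. The element names "pipeline",
# "positive" and "double" are invented for illustration; L-DSMS's real node
# vocabulary is not given in the text.
import xml.etree.ElementTree as ET

NODE_TYPES = {
    "double": lambda xs: [2 * x for x in xs],    # doubles every item
    "positive": lambda xs: [x for x in xs if x > 0],  # filters positives
}

def build_pipeline(xml_text):
    """Wire the nodes listed in the XML description into one function."""
    nodes = [NODE_TYPES[element.tag] for element in ET.fromstring(xml_text)]
    def run(data):
        for node in nodes:
            data = node(data)
        return data
    return run

pipeline = build_pipeline("<pipeline><positive/><double/></pipeline>")
```

New node types would be added by registering further entries in `NODE_TYPES`, mirroring the idea of an interface for implementing new processing nodes.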

Initially built for the research community of Roche Penzberg, YASE proved to be superior to standard search engines in the company environment due to the introduction of some simple principles. While the benefits of the learning feature need more time to be fully realized, the other two principles have proved to be surprisingly powerful. Abstract Evaluation of complex event queries over time involves storing information about those events that are still relevant for answering the queries. We call the period of time for which an event or an intermediate result must at least be stored its temporal relevance.

This paper pioneers a precise definition of temporal relevance and develops a method for determining it statically, i.e., at query compile time. Temporal relevance is also important at compile time for cost-based query planning. Abstract Events play an essential role in business processes and some forms of business rules. Often they require detection of complex events, that is, events or situations that cannot be inferred from looking only at single events but that manifest themselves in certain combinations of several events. This entails a natural need for high-level query and reasoning languages for complex events.

This position paper explores issues related to the design of such languages. Abstract Various query languages for Web and Semantic Web data, both for practical use and as an area of research in the scientific community, have emerged in recent years. At the same time, the internet has been broadly adopted, and keyword search is used in many of its applications.

Unlike this easy-to-use style of querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming at enabling simple querying of semi-structured data. Abstract We propose XcerptRDF, an extension of the rule-based XML query language Xcerpt with language constructs explicitly geared at comfortable querying of RDF data, including convenient access to collections, containers, reified statements, and "concise bounded descriptions" for blank nodes.

Simulation unification, the formal basis for evaluating Xcerpt queries, is extended to cover the new language constructs and thus to give a formal semantics for XcerptRDF queries. XcerptRDF is thus a possible solution to the challenge of versatile data access on the Web which has emerged due to the plethora of data formats already online. We show how to define a sound and complete operational semantics that can be implemented using existing logic programming techniques. Using RDFLog, we classify previous approaches to RDF querying by their support for blank node construction and show equivalence between languages with full quantifier alternation and languages with only rules.
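Blank node construction in rule heads, as supported by languages in the spirit of RDFLog, is classically obtained by Skolemization; a minimal sketch with an invented rule ("every person has some parent") and invented identifiers:

```python
# Sketch: Skolemizing blank nodes in rule heads, the standard
# logic-programming trick behind languages in the spirit of RDFLog. The
# rule "every person has some parent" and all identifiers are invented.
def skolemize(universal_vars, existential_vars, bindings):
    """One Skolem blank node per existential variable and per distinct
    binding of the universal variables it depends on."""
    out = dict(bindings)
    for v in existential_vars:
        out[v] = "_:sk_" + v + "_" + "_".join(bindings[u] for u in universal_vars)
    return out

facts = set()
for person in ["alice", "bob"]:
    b = skolemize(["P"], ["B"], {"P": person})
    facts.add((b["P"], "hasParent", b["B"]))
```

Because the Skolem term is a function of the universal bindings, each person gets exactly one fresh blank node, mirroring the existential quantification in the rule head.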

Abstract Simulation unification is a special kind of unification adapted to retrieving semi-structured data on the Web. This article introduces simulation subsumption, or containment, that is, query subsumption under simulation unification. Simulation subsumption is crucial for query optimization in general, for optimizing pattern-based search engines in particular, and for the termination of recursive rule-based Web languages such as the XML and RDF query language Xcerpt.

This paper first motivates and formalizes simulation subsumption. Then, it establishes decidability of simulation subsumption for advanced query patterns featuring descendant constructs, regular expressions, negative subterms (or subterm exclusions), and multiple variable occurrences. Finally, we show that subsumption between two query terms can be decided in O(n!).
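The simulation notion that subsumption builds on can be sketched for ground terms; this omits variables, descendant constructs, and negation, and the term shapes are invented:

```python
# Sketch: the core of simulation between terms. A query term matches a data
# term if the labels agree and every query subterm matches some data subterm
# (unordered, incomplete matching). This only illustrates the notion that
# simulation unification and subsumption build on; variables, descendant
# constructs, and negation are omitted, and the terms are invented.
def simulates(query, data):
    qlabel, qchildren = query
    dlabel, dchildren = data
    return qlabel == dlabel and all(
        any(simulates(qc, dc) for dc in dchildren) for qc in qchildren
    )

q = ("book", [("title", [])])                   # partial query pattern
d = ("book", [("title", []), ("author", [])])   # more specific data term
```

Here `q` simulates into `d` but not vice versa, which is the asymmetry that containment of one query in another rests on.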


This article presents an overview of traditional query languages for XML and RDF, focused on emerging preeminent exemplars in each field, and contrasts these languages with the field of keyword querying for XML and RDF. Abstract In this paper we propose to apply hierarchical graphs to indoor navigation. The intended purpose is to guide humans in large public buildings and assist them in wayfinding. We start by formally defining hierarchical graphs and explaining the particular benefits of this approach. In the main part, we suggest an algorithm to automatically construct such a multi-level hierarchy from floor plans.
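Routing over such a multi-level hierarchy can be sketched with plain Dijkstra applied at each level; the floor plan below is invented, and the article's construction algorithm is not reproduced here:

```python
# Sketch: route search in a two-level hierarchical graph. Planning happens
# first across coarse regions (floors), then within a region; plain Dijkstra
# serves at both levels. The floor plan is invented for illustration.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over an adjacency dict: node -> [(neighbor, weight), ...]."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

floors = {"F1": [("F2", 1)], "F2": [("F1", 1), ("F3", 1)], "F3": [("F2", 1)]}
rooms_f1 = {"101": [("corridor", 1)],
            "corridor": [("101", 1), ("stairs", 2)],
            "stairs": [("corridor", 2)]}
```

The benefit of the hierarchy is that the coarse search over floors stays small, and only the regions on the coarse route need fine-grained search.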

The algorithm is guided by the idea of exploiting domain-specific characteristics of indoor environments. Besides this, two particular problems are addressed. Abstract This paper introduces a generic approach to integrating different kinds of geo-referenced sensor measurements along a physical infrastructure. The underlying core ontology is domain-independent and realized using Semantic Web technologies; it can be specialized for different domains.

In particular, railway infrastructures are presented as a case study. Using the physical infrastructure as a common spatial reference system constitutes a central point of the integration, which allows reasoning tasks to be performed, such as answering network-related queries involving measurements from both stationary and mobile sensors.

A classification of different query types is presented together with the corresponding algorithms. Abstract In this paper, we tackle the challenging problem of guiding pedestrians in buildings. We propose a conceptual model for indoor environments, based only on regions and their boundaries. It needs to be computed just once.

Our approach covers different phenomena, in particular irregular, nonconvex regions which are not trivial. Visibility is modelled implicitly and can be determined efficiently. We illustrate by examples how route descriptions can be derived from the model. Abstract This paper describes a project aiming at enhancing social tagging with reasoning and explanations. So as to keep with the ease of use characteristic of social media, simple explanations are required. A working hypothesis of the work reported in this paper is that simple explanations require simple reasoning.

The approach to reasoning presented in this paper is minimalist: first, it precludes involved forms of reasoning such as refutation or the excluded middle; second, it does not need structural induction. It is furthermore pragmatic: because reasoning is kept simple, a simple and intuitive approach to explanation based only on proof trees is possible. This paper outlines the approach to both reasoning and explanations. Finally, it discusses more sophisticated explanation concepts, based on a notion of proof factorization, that are deemed necessary in the application context considered.
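A proof-tree-based explanation in the spirit of this minimalist reasoning can be sketched with naive forward chaining over ground rules; the tiny rule set is invented:

```python
# Sketch: recording proof trees during naive forward chaining so that every
# derived fact can be explained, in the spirit of the minimalist reasoning
# described above. Rules are (premises, conclusion) pairs over ground atoms;
# the rule set is invented for illustration.
def chase(facts, rules):
    proofs = {f: (f, []) for f in facts}   # fact -> (fact, sub-proofs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in proofs and all(p in proofs for p in premises):
                proofs[conclusion] = (conclusion, [proofs[p] for p in premises])
                changed = True
    return proofs

rules = [(("rain",), "wet"), (("wet",), "slippery")]
proofs = chase({"rain"}, rules)
```

Each derived fact carries the tree of rule applications that produced it, which is exactly the structure a simple, intuitive explanation can be read off from.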

Abstract RDF data is set apart from relational or XML data by its support of rich existential information in the form of blank nodes. Where SQL null values are always scoped over a single statement, blank nodes in RDF can span any number of statements and thus can be seen as existentially quantified variables. Blank node querying is considered in most RDF query languages, but blank node construction is rarely supported. Abstract Reactivity, the ability to detect and react to events, is an essential functionality in many information systems.

In particular, Web systems such as online marketplaces depend on reactivity. This article investigates issues of relevance in designing high-level programming languages dedicated to reactivity on the Web. It presents twelve theses on features desirable for a language of reactive rules tuned to programming event-driven Web and Semantic Web applications. Abstract In recent years, the Semantic Web has significantly gained momentum, and the amount of RDF data on the Web has been increasing exponentially ever since the publication of the RDF recommendation.

These transformation programs are specifically written to extract RDF information from the document. Our system implements this use case in two different ways. The second implementation uses Xcerpt [4, 1], a versatile Web and Semantic Web query language, for both processing stages.



The implementation of these use cases uncovers difficulties and challenges in the authoring of GRDDL algorithms both in XSLT and in Xcerpt, and highlights advantages and disadvantages of each approach.

Abstract Even with all the progress in Semantic technology, accessing Web data remains a challenging issue, with new Web query languages and approaches appearing regularly. In this paper we propose a straightforward step toward improving this situation that is simple to realize and yet effective: advanced module systems that make it possible to partition both (a) the evaluation and (b) the conceptual design of complex Web queries.

They provide the query programmer with a powerful but easy-to-use high-level abstraction for packaging, encapsulating, and reusing conceptually related parts (in our case, rules) of a Web query. The proposed module system combines ease of use, thanks to a simple core concept, the partitioning of rules and their consequences into flexible "stores", with ease of deployment, thanks to a reduction semantics. We focus on extending the rule-based Semantic Web query language Xcerpt with such a module system, though the same approach can be applied to other rule-based languages as well.
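The reduction semantics mentioned above can be illustrated with a toy sketch (an assumption-laden simplification, not Xcerpt's actual mechanism): modules partition rules into named stores, and a modular program is reduced to an ordinary flat rule set by qualifying every non-exported predicate with its module's name, so that only explicitly exported predicates stay visible outside the module.

```python
# Sketch of a reduction semantics for modules: rewrite a modular rule
# program into a flat one by name-qualifying module-private predicates.

def reduce_modules(modules):
    """modules: {name: {"rules": [(head, [body atoms])], "exports": set}}.
    Returns a single flat rule list in which every predicate a module
    does not export is qualified with the module's name."""
    flat = []
    for name, mod in modules.items():
        def qual(atom):
            pred = atom.split("(")[0]            # predicate symbol
            return atom if pred in mod["exports"] else f"{name}.{atom}"
        for head, body in mod["rules"]:
            flat.append((qual(head), [qual(b) for b in body]))
    return flat

flat = reduce_modules({
    "trains": {
        "rules": [("reachable(X,Y)", ["link(X,Y)"]),
                  ("link(a,b)", [])],
        "exports": {"reachable"},
    },
})
# "reachable" stays globally visible; "link" becomes "trains.link(...)"
```

Because the output is an ordinary rule set, the unmodified evaluator of the underlying rule language can run it, which is the appeal of a reduction semantics for deployment.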


Publications - Teaching and Research Unit Programming and Modelling Languages - LMU Munich

Abstract Once upon a time, scientists were experts in their field. They knew not only the "hot questions" but also the scientists involved and the various approaches being investigated. More importantly, they were well informed about novel research results. Gone are these favorable times! Hot issues and active research teams emerge at a high pace, and being informed within days or even hours can be essential for success. Furthermore, no one can any longer keep an eye on all the research publications, patents, and other information that might be relevant to one's research.

As a consequence, scientists often feel - and in fact sometimes are - rather unaware of areas that are of prime importance for their research. High diversity, considerable amounts of information, and extremely fast communication are key characteristics of today's research - especially in medical biology. Automatic tracking of technical and scientific information is one way to cope with these aspects of today's research. Such a system is made possible by emerging techniques such as the Semantic Web. This article describes the cornerstones of such an "Intelligent Information Portal" currently being developed at Roche Diagnostics GmbH for scientists in Pharmaceutical Research.

Abstract This paper takes three important steps towards constraint-based school timetabling.

Abstract An essential feature of practically usable programming languages is the ability to encapsulate functionality in reusable modules. Modules make large-scale projects tractable for humans. For Web and Semantic Web programming, many rule-based languages have been proposed. Rules are easy to comprehend and specify, even for non-technical users. Unfortunately, their contributions are arguably doomed to exist in isolation, as most rule languages are conceived without modularity and hence without an easy mechanism for integration and reuse.

In this paper, a generic module system applicable to many rule languages is presented. We demonstrate and apply our generic module system to a Datalog-like rule language, close in spirit to RIF Core. The language is gently introduced along the EU-Rent use case. Using the Reuseware Composition Framework, the module system for a concrete language can be obtained almost for free, provided the language adheres to the formal notions introduced in this paper.

Abstract The field of complex event processing still lacks formal foundations. In particular, event queries require both declarative and operational semantics.
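A Datalog-like rule language of the kind the generic module system targets can be sketched as follows. The naive bottom-up fixpoint and the EU-Rent-flavoured predicates (`transfer`, `reachable`) are illustrative assumptions, not the paper's concrete language.

```python
# Sketch of a tiny Datalog evaluator: atoms are (predicate, args)
# tuples, variables are uppercase by convention, and rules are
# evaluated bottom-up to a fixpoint.

def substitute(atom, env):
    """Apply a variable binding to an atom's arguments."""
    pred, args = atom
    return (pred, tuple(env.get(a, a) for a in args))

def match(atom, fact, env):
    """Extend env so that atom matches fact, or return None."""
    if atom[0] != fact[0] or len(atom[1]) != len(fact[1]):
        return None
    env = dict(env)
    for a, f in zip(atom[1], fact[1]):
        if a.isupper():                 # variable
            if env.get(a, f) != f:
                return None
            env[a] = f
        elif a != f:                    # constant mismatch
            return None
    return env

def fixpoint(facts, rules):
    """Naive bottom-up evaluation: apply all rules until nothing new."""
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            envs = [{}]
            for atom in body:           # join body atoms left to right
                envs = [e2 for e in envs for f in facts
                        if (e2 := match(atom, f, e)) is not None]
            for e in envs:
                new.add(substitute(head, e))
        if new <= facts:
            return facts
        facts |= new

facts = {("transfer", ("munich", "rome")),
         ("transfer", ("rome", "pisa"))}
rules = [(("reachable", ("X", "Y")), [("transfer", ("X", "Y"))]),
         (("reachable", ("X", "Z")),
          [("reachable", ("X", "Y")), ("transfer", ("Y", "Z"))])]
```

Running `fixpoint(facts, rules)` derives `reachable` as the transitive closure of `transfer`, the classic Datalog example.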

We put forward for discussion a proposal towards formal foundations of event queries that aims at making well-known results from database queries applicable to event queries. Declarative semantics of event queries and rules are given as a model theory with accompanying fixpoint theory. Operational semantics are then obtained by translating the considered queries into relational algebra expressions.
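The incremental evaluation referred to here can be pictured with a small sketch (a simplification under assumed event shapes, not the paper's formal translation): a query "a(X) followed by b(X)" is maintained as a stored relation of partial matches, and each incoming event is joined against that relation, much like incrementally maintaining a relational-algebra join.

```python
# Sketch of incremental event-query evaluation: keep partial matches
# as a stored relation; each new event joins against it.

class FollowedBy:
    """Detects 'a(X) followed later by b(X)', joining on X."""

    def __init__(self):
        self.pending_a = set()   # relation of X values seen in a-events
        self.matches = []        # completed matches, in arrival order

    def push(self, event_type, x):
        if event_type == "a":
            self.pending_a.add(x)        # insert into stored relation
        elif event_type == "b" and x in self.pending_a:
            self.matches.append(x)       # join hit: emit a match

q = FollowedBy()
for ev in [("a", 1), ("b", 2), ("a", 2), ("b", 1), ("b", 2)]:
    q.push(*ev)
# b(1) matches the earlier a(1), and the final b(2) matches a(2)
```

Each event is processed against stored state only, rather than re-evaluating the query over the whole history, which is the incremental behaviour event queries require.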

We show the suitability of relational algebra for the kind of incremental evaluation usually required for event queries. With the aim of generating further discussion of formal foundations in the research community, we reflect openly upon both the strengths and weaknesses of the presented approach.

Abstract This paper presents a graph-based spatial model that can serve as a reference for guiding pedestrians inside buildings. We describe a systematic approach to constructing the model from geometric data. Beyond the well-known topological relations, the model accounts for two important aspects of pedestrian navigation. An algorithm is proposed that partitions spatial regions according to visibility criteria.

It can handle simple polygons as encountered in floor plans. The model is structured hierarchically - each of its elements corresponds to a certain domain concept ('room', 'door', 'floor', etc.). This is useful for applications in which such information has to be evaluated.

Styling has become a widespread technique with the advent of the Web and of the markup language XML. With XML, application data can be modeled after the application logic, regardless of the intended rendering.

Provided the styling language offers the necessary functionality, style sheets can similarly specify a visual rendering of modeling and programming languages. The advantages of the approach are manifold. This article first introduces rather limited extensions to the style sheet language CSS that make it amenable to rendering data modeling and programming languages as visual languages.