Browsing by Subject "004"
Now showing 1 - 20 of 101

Thesis Open Access
3D Reconstruction using Active Illumination (Philipps-Universität Marburg, 2017-01-17)
Grochulla, Martin; Thormählen, Thorsten (Prof. Dr.)
In this thesis, we present a pipeline for 3D model acquisition. Generating 3D models of real-world objects is an important task in computer vision, with many applications such as 3D design, archaeology, entertainment, and virtual or augmented reality. The contribution of this thesis is threefold: we propose a calibration procedure for the cameras, we describe an approach for capturing and processing photometric normals using gradient illuminations in the hardware set-up, and we present a multi-view photometric stereo 3D reconstruction method. In order to obtain accurate results from multi-view and photometric stereo reconstruction, the cameras are calibrated both geometrically and photometrically. Data is acquired in a light stage, a hardware set-up that makes it possible to control the illumination during acquisition. We describe the procedure used to generate appropriate illuminations and to process the acquired data into accurate photometric normals. The core of the pipeline is a multi-view photometric stereo reconstruction method: we first generate a sparse reconstruction from the acquired images and computed normals, then use the information in the normal maps to obtain a dense reconstruction of the object's surface, and finally filter the reconstructed surface to remove artifacts introduced by the dense reconstruction step.
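
To make the gradient-illumination step above concrete: under spherical gradient illumination, a common formulation (which this set-up resembles; the thesis's exact processing may differ) recovers a per-pixel normal of a Lambertian surface from three gradient-lit images plus one fully lit image. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def normals_from_gradients(ix, iy, iz, full, eps=1e-8):
    """Estimate per-pixel surface normals from images captured under
    linear gradient illuminations along x, y, z plus a constant
    (fully lit) condition. For a Lambertian point, each gradient/full
    intensity ratio encodes one normal component mapped into [0, 1]."""
    n = np.stack([2.0 * ix / (full + eps) - 1.0,   # x component
                  2.0 * iy / (full + eps) - 1.0,   # y component
                  2.0 * iz / (full + eps) - 1.0],  # z component
                 axis=-1)
    # Normalize each pixel's vector to unit length.
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
```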

Thesis Open Access
Accelerating Event Stream Processing in On- and Offline Systems (Philipps-Universität Marburg, 2022-02-21)
Körber, Michael (0000-0003-2079-6264); Seeger, Bernhard (Prof. Dr.)
Due to a growing number of data producers and their ever-increasing data volume, the ability to ingest, analyze, and store potentially never-ending streams of data is a mission-critical task in today's data processing landscape. A widespread form of data streams are event streams, which consist of continuously arriving notifications about some real-world phenomena. For example, a temperature sensor naturally generates an event stream by periodically measuring the temperature and reporting it, together with the measurement time, whenever it changes substantially from the previous measurement. In this thesis, we consider two kinds of event stream processing: online and offline. Online refers to processing events solely in main memory as soon as they arrive, while offline means processing event data previously persisted to non-volatile storage. Both modes are supported by widely used scale-out general-purpose stream processing engines (SPEs) like Apache Flink or Spark Streaming. However, such engines suffer from two significant deficiencies that severely limit their processing performance. First, for offline processing, they load the entire stream from non-volatile secondary storage and replay all data items into the associated online engine in their original arrival order. While this naturally ensures unified query semantics for on- and offline processing, the cost of reading the entire stream from non-volatile storage quickly dominates the overall processing cost. Second, modern SPEs focus on scaling out computations across the nodes of a cluster, but use only a fraction of the available resources of individual nodes. This thesis tackles these problems with three approaches. First, we present novel techniques for the offline processing of two important query types: windowed aggregation and sequential pattern matching. Our methods utilize well-understood indexing techniques to reduce the total amount of data read from non-volatile storage, and we show that this significantly improves overall query runtime. In particular, this thesis develops the first index-based algorithms for pattern queries expressed with the Match_Recognize clause, a new and powerful SQL language feature that has received little attention so far. Second, we show how to maximize the resource utilization of single nodes by exploiting the capabilities of modern hardware. To this end, we develop a prototypical shared-memory CPU-GPU-enabled event processing system that provides implementations of all major event processing operators (filtering, windowed aggregation, windowed join, and sequential pattern matching). Our experiments reveal that, regarding resource utilization and processing throughput, such a hardware-enabled system is superior to hardware-agnostic general-purpose engines. Finally, we present TPStream, a new operator for pattern matching over temporal intervals. TPStream achieves low processing latency and, in contrast to sequential pattern matching, is easily parallelizable even for unpartitioned input streams. This results in maximized resource utilization, especially on modern multi-core CPUs.
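
Since the Match_Recognize clause highlighted above may be unfamiliar, the classic SQL:2016 row-pattern example searches a ticker stream for a V-shape (a falling run followed by a rising run). The query below is the textbook illustration of the clause, not a query from the thesis; the table and column names are invented:

```python
# Illustrative SQL:2016 MATCH_RECOGNIZE query, kept as a string so it
# can be handed to any engine that supports row pattern matching.
V_SHAPE_QUERY = """
SELECT *
FROM Ticker MATCH_RECOGNIZE (
    PARTITION BY symbol
    ORDER BY ts
    MEASURES FIRST(DOWN.price) AS top_price,
             LAST(UP.price)    AS end_price
    ONE ROW PER MATCH
    PATTERN (STRT DOWN+ UP+)
    DEFINE DOWN AS DOWN.price < PREV(DOWN.price),
           UP   AS UP.price   > PREV(UP.price)
)
"""
```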

Thesis Open Access
Advanced Indexing and Query Processing for Multidimensional Databases (Philipps-Universität Marburg, 2007-10-29)
Dellis, Evangelos; Seeger, Bernhard (Prof. Dr.)
Many new applications, such as multimedia databases, employ the so-called feature transformation, which transforms important features or properties of data objects into high-dimensional points. Searching for 'similar' or 'non-dominated' objects based on these features is thus a search over points in this feature space. To support efficient query processing in such high-dimensional databases, high-dimensional indexes are required to prune the search space, and efficient query processing strategies employing these indexes have to be designed. Based on an analysis of typical advanced database systems, such as multimedia databases, electronic marketplaces, and decision support systems, four challenging characteristics of complexity are identified: the high-dimensional nature of the data, the re-usability of existing index structures, novel (more expressive) query operators for advanced database systems, and the efficient analysis of complex high-dimensional data. The general goal of this thesis is therefore to improve the efficiency of index-based query processing in high-dimensional data spaces and to develop novel query operators. The first part of this thesis deals with similarity query processing techniques. We introduce a new approach to indexing multidimensional data that is particularly suitable for the efficient incremental processing of nearest neighbor (NN) queries. The basic idea is to split the data space vertically into multiple low- and medium-dimensional data spaces, each organized by a standard multidimensional index structure. In order to perform incremental NN queries efficiently on top of this index striping, we first develop an algorithm for merging the results received from the underlying indexes. Then, an accurate cost model relying on a power law is presented that determines an appropriate number of indexes. Moreover, we consider the problem of dimension assignment, where each dimension is assigned to a lower-dimensional subspace such that the cost of nearest neighbor queries is minimized. Furthermore, a generalization of the iDistance technique, called Multidimensional iDistance (MiD), for k-nearest-neighbor query processing is presented. Three main steps are performed to build MiD. In agreement with iDistance, data points are first partitioned into clusters and, second, a reference point is determined for every cluster. The third step, however, differs substantially from iDistance: a data object is mapped to an m-dimensional distance vector, where m > 1 generally holds. The m dimensions are generated by splitting the original data space into m subspaces and computing the partial distance between the object and the reference point for every subspace. The resulting m-dimensional points can be indexed by an arbitrary point access method like an R-tree. The crucial parameter m is derived from a cost model based on a power law. We present range and k-NN query processing algorithms for MiD. The second part of this thesis deals with skyline query processing techniques. We first introduce the problem of Constrained Subspace Skyline Queries (CSSQ) and present a query processing algorithm that builds on multiple low-dimensional index structures. Due to the use of well-performing low-dimensional indexes, constrained subspace skyline queries for arbitrarily large subspaces are supported efficiently. Effective pruning strategies are applied to discard points from dominated regions. An important ingredient of our approach is the workload-adaptive strategy for determining the number of indexes and the assignment of dimensions to the indexes. Furthermore, we introduce the concept of Reverse Skyline Queries (RSQ). Given a set of data points P and a query point q, an RSQ returns the data objects that have the query object in their 'dynamic' skyline. Such a dynamic skyline corresponds to the skyline of a transformed data space in which point q becomes the origin and all points are represented by their distance to q. In order to compute the reverse skyline of an arbitrary query point, we first propose a branch-and-bound algorithm (called BBRS), which is an improved customization of the original BBS algorithm. To further reduce the computational cost of determining whether a point belongs to the reverse skyline, we propose an enhanced algorithm (called RSSA) that is based on accurate pre-computed approximations of the skylines; these approximations are used to identify whether a point belongs to the reverse skyline or not. The effectiveness and efficiency of all proposed techniques are discussed and verified by comparison with conventional approaches in extensive experimental evaluations on real-world datasets.
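
The MiD mapping just described fits in a few lines: given cluster labels and per-cluster reference points (assumed here to come from any clustering such as k-means, which the entry does not prescribe), each point's key holds its partial distance to its cluster's reference point in each of the m subspaces. A sketch with illustrative names:

```python
import numpy as np

def mid_keys(points, labels, refs, m):
    """Map each point (row of `points`) to an m-dimensional MiD key.
    The d dimensions are split into m subspaces (evenly here); entry j
    of a key is the partial distance, within subspace j, between the
    point and the reference point refs[labels[i]] of its cluster."""
    d = points.shape[1]
    subspaces = np.array_split(np.arange(d), m)
    keys = np.empty((len(points), m))
    for j, dims in enumerate(subspaces):
        diff = points[:, dims] - refs[labels][:, dims]
        keys[:, j] = np.linalg.norm(diff, axis=1)  # partial distance
    # The m-D keys can be indexed by any point access method, e.g. an
    # R-tree, as the entry above notes.
    return keys
```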

Thesis Open Access
Advancing Operating Systems via Aspect-Oriented Programming (Philipps-Universität Marburg, 2006-08-16)
Engel, Michael; Freisleben, Bernd (Prof. Dr.)
Operating system kernels are among the most complex pieces of software in existence today. Maintaining the kernel code and developing new functionality is increasingly complicated, since the amount of required features has risen significantly, leading to side effects that can be introduced inadvertently by changing a piece of code that belongs to a completely different context. Software developers try to modularize their code base into separate functional units. Some of the functionality or “concerns” required in a kernel, however, does not fit into the given modularization structure; this code may then be spread over the code base and its implementation tangled with code implementing different concerns. These so-called “crosscutting concerns” are especially difficult to handle, since a change in a crosscutting concern implies that all relevant locations spread throughout the code base have to be modified. Aspect-Oriented Software Development (AOSD) is an approach to handle crosscutting concerns by factoring them out into separate modules. The “advice” code contained in these modules is woven into the original code base according to a pointcut description, a set of interaction points (joinpoints) with the code base. To be used in operating systems, AOSD requires tool support for the prevalent procedural programming style as well as support for weaving aspects. Many interactions in kernel code are dynamic, so in order to implement non-static behavior and improve performance, a dynamic weaver that deploys and undeploys aspects at system runtime is required. This thesis presents an extension of the C programming language to support AOSD. Based on this, two dynamic weaving toolkits, TOSKANA and TOSKANA-VM, are presented to permit dynamic aspect weaving in the monolithic NetBSD kernel as well as in a virtual-machine- and microkernel-based Linux kernel running on top of L4. Based on TOSKANA, applications for this dynamic aspect technology are discussed and evaluated. The thesis closes with a view on an aspect-oriented kernel structure that maintains coherency and handles crosscutting concerns using dynamic aspects while enhancing development methods through the use of domain-specific programming languages.
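
To illustrate the deploy/undeploy idea in a high-level setting, the toy weaver below wraps all functions of a module whose names match a pointcut pattern with before/after advice and can remove the aspects again at runtime. This is a conceptual Python illustration only; TOSKANA itself weaves advice into a running C kernel:

```python
import fnmatch

class ToyWeaver:
    """Minimal dynamic weaver: deploys advice around module functions
    selected by a pointcut pattern, and undeploys them at runtime."""

    def __init__(self):
        self._woven = []  # remembers (module, name, original function)

    def deploy(self, module, pointcut, before=None, after=None):
        for name in dir(module):
            fn = getattr(module, name)
            if callable(fn) and fnmatch.fnmatch(name, pointcut):
                def advised(*args, _fn=fn, **kwargs):  # joinpoint wrapper
                    if before:
                        before(_fn.__name__, args, kwargs)
                    result = _fn(*args, **kwargs)
                    if after:
                        after(_fn.__name__, result)
                    return result
                self._woven.append((module, name, fn))
                setattr(module, name, advised)

    def undeploy_all(self):
        while self._woven:
            module, name, fn = self._woven.pop()
            setattr(module, name, fn)

# Example: trace all os.path.is* calls until undeploy_all() restores them:
#   w = ToyWeaver()
#   w.deploy(os.path, "is*", before=lambda name, a, k: print("call", name))
```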

Thesis Open Access
Applying Model-Driven Engineering to Development Scenarios for Web Content Management System Extensions (Philipps-Universität Marburg, 2021-08-19)
Priefer, Dennis; Taentzer, Gabriele (Prof. Dr.)
Web content management systems (WCMSs) such as WordPress, Joomla, or Drupal have established themselves as popular platforms for instantiating dynamic web applications. Using a WCMS instance allows developers to add functionality by implementing installable extension packages. However, extension developers are challenged by boilerplate code, dependencies between extensions, and frequent architectural changes to the underlying WCMS platform. These challenges occur in common development scenarios, including the initial development and maintenance of extensions as well as the migration of existing extension code to new platforms. Model-driven engineering (MDE) represents a promising approach to overcome these challenges. Adopting MDE as a development practice allows developers to define software features in reusable models that abstract from the technical details of the targeted system. Using these models as input for platform-specific code generators enables a rapid transformation into standardized software of high quality. However, MDE has not been adopted for extension development in the WCMS domain, due to missing tool support. The results of empirical studies in other domains demonstrate the benefits of MDE, but empirical evidence of these benefits in the WCMS domain is currently lacking. In this work, we present the concepts and design of an MDE infrastructure for the development and maintenance of WCMS extensions. This infrastructure provides a domain-specific modelling language (DSL) for WCMS extensions, as well as corresponding model editors. In addition, the MDE infrastructure provides a set of transformation tools for forward and reverse engineering steps: a code generator that uses model instances of the introduced DSL, an extension extractor for code extraction of already deployed WCMS extensions, and a model extraction tool for creating model instances from an existing extension package. To ensure the adequacy of the provided MDE infrastructure, we follow a structured research methodology. First, we investigate the representativeness of common development scenarios by conducting interviews with industrial practitioners from the WCMS domain. Second, we propose a general solution concept for these scenarios, including the involved roles, process steps, and MDE infrastructure facilities. Third, we specify functional and non-functional requirements for an adequate MDE infrastructure, including the expectations of domain experts. To show the applicability of these concepts, we introduce JooMDD as an infrastructure instantiation for the Joomla WCMS, which provides the most sophisticated extension mechanism in the domain. To gather empirical evidence of the positive impact of MDE on WCMS extension development, we present a mixed-methods empirical investigation with extension developers from the Joomla community. First, we share the method, results, and conclusions of a controlled experiment conducted with extension developers from academia and industry. The experiment compares conventional extension development with MDE using the JooMDD infrastructure, focusing on the development of dependent and independent extensions. The results show a clear gain in productivity and quality when using the JooMDD infrastructure. Second, we share the design and observations of a semi-controlled tutorial with four experienced developers who applied the JooMDD infrastructure in three scenarios: developing new independent extensions, developing new dependent extensions, and migrating existing ones to a new major platform version. The aim of this study was to obtain direct qualitative feedback on the acceptance, usefulness, and open challenges of our MDE approach. Finally, we share lessons learned and discuss the threats to the validity of the conducted studies.

Presentation Open Access
Aufgeräumt! Ordner und Dateichaos effizient beseitigen. [Tidied up! Efficiently eliminating folder and file chaos.] (Philipps-Universität Marburg, 2024-08-19)
Neumann, Marcel; Prautzsch, Hanna; Ernst, Hannah; Universitätsbibliothek
The slides for this talk provide information on the structured handling of data, on designing a directory structure, on naming files, and on creating versions: in short, on file organization. 'File organization' denotes all strategies for structuring data, storing it, and keeping it readable.

Thesis Open Access
An automatic system providing constructive feedback for early programming courses (Philipps-Universität Marburg, 2025-04-23)
Dick, Steffen (M.Sc.); Bockisch, Christoph (Prof. Dr.)
A system for constructive feedback for beginner programmers was developed which, based on an evaluation of various feedback criteria, analyzes code submitted by learners and provides them with ungraded feedback containing suggestions for improvement. Several components were developed for this purpose, each providing feedback in a different area. For example, the quality of tests is analyzed, as well as syntactic and semantic correctness. Various existing tools were used for this purpose, adapted to the use case and provided with predefined configurations. These configurations generally offer good default values with which very good feedback can already be generated. In many cases, however, it is possible to deviate from the predefined configuration and customize it for specific tasks. Furthermore, a server and a plugin for IntelliJ were developed, through which learners can easily access the functionality from their development environment. The system was tested and analyzed in several different scenarios; its use led to a statistically significant increase in learners' quality awareness.

Thesis Open Access
An Autonomic Cross-Platform Operating Environment for On-Demand Internet Computing (Philipps-Universität Marburg, 2010-08-02)
Paal, Stefan (141929049); Freisleben, Bernd (Prof. Dr.)
The Internet has evolved into a global and ubiquitous communication medium interconnecting powerful application servers, diverse desktop computers, and mobile notebooks. Along with recent developments in computer technology, such as the convergence of computing and communication devices, the way people use computers and the Internet has changed, altering working habits and leading to new application scenarios. On the one hand, pervasive computing, ubiquitous computing, and nomadic computing become more and more important, since different computing devices like PDAs and notebooks may be used concurrently and alternately, e.g. while the user is on the move. On the other hand, the ubiquitous availability and pervasive interconnection of computing systems have fostered various trends towards the dynamic utilization and spontaneous collaboration of available remote computing resources, which are addressed by approaches like utility computing, grid computing, cloud computing, and public computing. From a general point of view, the common objective of this development is the use of Internet applications on demand, i.e. applications that are not installed in advance by a platform administrator but are dynamically deployed and run as they are requested by the application user. The heterogeneous and unmanaged nature of the Internet represents a major challenge for the on-demand use of custom Internet applications across heterogeneous hardware platforms, operating systems, and network environments. Promising remedies are autonomic computing systems that are supposed to maintain themselves without particular user or application intervention. In this thesis, an Autonomic Cross-Platform Operating Environment (ACOE) is presented that supports On Demand Internet Computing (ODIC), such as dynamic application composition and ad hoc execution migration. The approach is based on an integration middleware called crossware that does not replace existing middleware but operates as a self-managing mediator between diverse application requirements and heterogeneous platform configurations. A Java implementation of the Crossware Development Kit (XDK) is presented, followed by a description of the On Demand Internet Computing System (ODIX). The feasibility of the approach is shown by the implementation of an Internet Application Workbench, an Internet Application Factory, and an Internet Peer Federation, which illustrate the use of ODIX to support local, remote, and distributed ODIC, respectively. Finally, the suitability of the approach is discussed with respect to the support of ODIC.

Thesis Open Access
BASE - ein begriffsbasiertes Analyseverfahren für die Software-Entwicklung [BASE: a concept-based analysis method for software development] (Philipps-Universität Marburg, 2002-01-04)
Düwel, Stephan (123037980); Hesse, Wolfgang (Prof. Dr.)

Thesis Open Access
Basiskomponenten von XML Datenbanksystemen [Basic components of XML database systems] (Philipps-Universität Marburg, 2005-06-08)
Schneider, Martin (106084402); Seeger, Bernhard (Prof.)
For the development of many small and large software systems, conventional (object-)relational database systems are no longer sufficient. In practice, many interesting data are not fully structured and thus cannot be managed effectively with a standard database system. Novel standardized systems for unstructured or semi-structured data are therefore needed. This gap is now being closed by native XML database systems, which use the W3C-standardized XML as their data format. XML database systems also support many further XML standards, such as XML Schema for grammars, XPath and XQuery for query processing, XSLT for transformations, and DOM and SAX for application integration. This thesis examines the foundations of native XML database systems, proposes new structures, and optimizes existing ones. Emphasis is placed on a solid basis for testing algorithms. To this end, a test framework was implemented within the Java library XXL and subsequently used. Prior to this work, the XXL library already contained several components that can be employed for implementing database systems, for example generic query processing and index structures. In addition to the existing components, new ones were added, e.g. a component for direct hard disk access, a freely configurable record manager, and a database framework. The central concern of this thesis is the optimization of the storage layer of native XML database systems. It is important that the tree structure is preserved when XML documents are mapped to external storage, enabling efficient query processing with few external storage accesses. Similar to R-trees, various split algorithms following certain heuristics can be specified for XML storage structures. Here, the newly developed so-called OneCutSplit with Scaffold proved clearly superior to the split algorithms previously known from the literature. For the insertion of documents, a bulk-loading mechanism was implemented as well. It was shown that the storage structure for documents created this way is significantly better than when using split algorithms, which is clearly reflected in query response times. To accelerate query processing, index structures are indispensable in native XML database systems. For this purpose, a novel signature index was developed and integrated into the XML storage structure using aggregates. The evaluation of the index showed a clear advantage in the evaluation of XPath expressions. Furthermore, by using the database framework of XXL, native XML storage methods could for the first time be compared with methods built on top of relational database systems. This showed that native XML storage performs well even for simple XPath queries. For navigation and update operations, native XML storage is clearly superior to the relational approaches. Query processing on XML data, however, involves more than XPath and XQuery. For processing large collections of XML documents, operators are useful that work by mapping XML documents to new XML documents. This is analogous to relational algebra, which, however, uses the tuple as its basic data type. In contrast to the relational model, XML requires many different operators that cannot be reduced to a few basic operations. This thesis presents several new operators that are suitable not only for query processing within XML database systems but also for queries on the Internet. The developed framework is intended to enable users to easily incorporate Internet sources into their own queries.
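
The signature index described above can be illustrated with aggregated bitmask signatures: every node stores the union of the tag bits occurring in its subtree, so a whole subtree can be skipped when a queried tag cannot occur inside it. A toy in-memory sketch (the thesis integrates such aggregates into the page-based storage structure; hash collisions cause only false positives, never missed results):

```python
from xml.etree import ElementTree as ET

BITS = 64

def tag_bit(tag):
    # hash() is stable within one process; a real index would use a
    # stable hash function instead.
    return 1 << (hash(tag) % BITS)

def build_signatures(node, sig):
    """Bottom-up aggregation: a node's signature is its own tag bit
    united with the signatures of all of its children."""
    mask = tag_bit(node.tag)
    for child in node:
        mask |= build_signatures(child, sig)
    sig[id(node)] = mask
    return mask

def find_descendants(node, sig, tag):
    """All nodes with the given tag at or below `node`, pruning every
    subtree whose signature shows the tag cannot occur inside it."""
    if not sig[id(node)] & tag_bit(tag):
        return  # prune: tag surely absent in this subtree
    if node.tag == tag:
        yield node
    for child in node:
        yield from find_descendants(child, sig, tag)

root = ET.fromstring("<a><b><c/></b><d/></a>")
sig = {}
build_signatures(root, sig)
print([n.tag for n in find_descendants(root, sig, "c")])  # ['c']
```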

Thesis Open Access
Coalgebraische Similarität [Coalgebraic similarity] (Philipps-Universität Marburg, 2017-08-31)
Zarrad, Mehdi; Gumm, H. Peter (Prof. Dr.)
As is already known, a functor weakly preserves pullbacks if and only if every congruence is a difunctional bisimulation. In Chapter 3, we found equivalent characterizations of weak kernel-pair preservation and of preimage preservation. Moreover, we defined a modification of functors, which we called preimage refinement (Urbildbereinigung); the resulting functor preserves preimages. The idea was inspired by the transformation that turns a functor into a sound ('gesund') one. The preimage-preserving functor also has the advantage that its subfunctors are exactly the preimage-preserving subfunctors of the original functor. In Chapter 4, we showed that the monotone separable boxes yield a sound and complete modal logic. Interestingly, the preimage refinement of the general neighborhood functor yields a functor that weakly preserves pullbacks.

Thesis Open Access
Coalgebren und Funktoren [Coalgebras and functors] (Philipps-Universität Marburg, 2002-01-04)
Schröder, Tobias (115012826); Gumm, H. Peter (Prof. Dr.)

Thesis Open Access
Composite Modeling based on Distributed Graph Transformation and the Eclipse Modeling Framework (Philipps-Universität Marburg, 2013-02-22)
Jurack, Stefan; Taentzer, Gabriele (Prof. Dr.)
Model-driven development (MDD) has become a promising trend in software engineering for a number of reasons. Models as the key artifacts help developers to abstract from irrelevant details, focus on important aspects of the underlying domain, and thus master complexity. As software systems grow, models may grow as well and finally become too large to be developed and maintained in a comprehensible way. In traditional software development, the complexity of software systems is tackled by dividing the system into smaller cohesive parts, so-called components, and letting distributed teams work on each of them concurrently. The question arises how this strategy can be applied to model-driven development. The overall aim of this thesis is to develop a formalized modularization concept enabling the structured and largely independent development of interrelated models in larger teams. To this end, this thesis proposes component models with explicit export and import interfaces, where exports declare what is provided while imports declare what is needed. Component models can then be connected via compatible export and import interfaces, yielding so-called composite models. Suitable for composite models, a transformation approach is developed that makes it possible to describe changes over the whole composition structure. From the practical point of view, this concept especially targets models based on the Eclipse Modeling Framework (EMF). In the modeling community, EMF has evolved into a very popular framework which provides modeling and code generation facilities for Java applications based on structured data models. Since graphs are a natural way to represent the underlying structure of visual models, the formalization is based on graph transformation. The incorporated distribution concepts rely heavily on distributed graph transformation as introduced by Taentzer. Typed graphs with inheritance and containment structures are well suited to describe the essentials of EMF models. However, they also induce a number of constraints, like acyclic inheritance and containment, which have to be taken into account. The category-theoretical foundation in this thesis allows for the precise definition of consistent composite graph transformations satisfying all inheritance and containment conditions. The composite modeling approach is shown to be coherent with the development of tool support for composite EMF models and composite EMF model transformation.
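
A minimal structural sketch of the export/import idea above, assuming interfaces can be reduced to sets of provided and required names (the thesis formalizes them as graphs instead; all names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ComponentModel:
    """A component model with explicit interfaces: exports name what
    the component provides, imports name what it needs."""
    name: str
    exports: set = field(default_factory=set)
    imports: set = field(default_factory=set)

def connect(importer, exporter):
    """Connect two component models by matching the importer's required
    names against the exporter's provided names; report what stays
    unresolved and return the satisfied import names."""
    missing = importer.imports - exporter.exports
    if missing:
        print(f"{importer.name}: unresolved imports {sorted(missing)}")
    return importer.imports & exporter.exports

app = ComponentModel("app", imports={"Person"})
core = ComponentModel("core", exports={"Person", "Address"})
print(connect(app, core))  # {'Person'}
```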

Thesis Open Access
Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects (Philipps-Universität Marburg, 2012-09-05)
Menzel, Nicolas (1025400739); Guthe, Michael (Prof. Dr.)
The realism of a scene essentially depends on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world, an enormous diversity of materials with very different properties can be found. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult, as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features, so that acquisition results can be re-used to easily and quickly create variations of the original material. These variations may be subtle but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. Since some of the original information is inevitably lost in the decomposition, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric comprises features of the human visual system, for example trichromatic color perception or reduced resolution, and allows for a more aggressive simplification in regions where geometric metrics would not simplify.
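
As a sketch of how a reflectance function can live in "an ordinary two-dimensional texture" as described above: for an isotropic material, two half-vector angles suffice to index a 2-D table. The (theta_h, theta_d) parametrization below is one common choice, not necessarily the one used in the thesis; all names are illustrative:

```python
import numpy as np

def halfway_angles(n, l, v):
    """Half-vector angles for unit vectors n (normal), l (light) and
    v (view): theta_h between n and the half vector, theta_d between
    l and the half vector. For isotropic materials these two angles
    can index a 2-D reflectance table."""
    h = (l + v) / np.linalg.norm(l + v)
    theta_h = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))
    theta_d = np.arccos(np.clip(np.dot(l, h), -1.0, 1.0))
    return theta_h, theta_d

def lookup_reflectance(table, theta_h, theta_d):
    """Nearest-neighbor lookup in a table covering [0, pi/2) squared;
    a real renderer would interpolate (e.g., bilinearly)."""
    rows, cols = table.shape
    i = min(int(theta_h / (np.pi / 2) * rows), rows - 1)
    j = min(int(theta_d / (np.pi / 2) * cols), cols - 1)
    return table[i, j]
```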

Thesis Open Access
Consistency-by-Construction Techniques for Software Models and Model Transformations (Philipps-Universität Marburg, 2020-10-14)
Nassar, Nebras (0000-0002-0838-6513); Taentzer, Gabriele (Prof. Dr.)
A model is consistent with given specifications (specs) if and only if all of the specifications hold on the model, i.e., all specs are true for the model. Constructing consistent models (e.g., programs or artifacts) is vital during software development, especially in Model-Driven Engineering (MDE), where models are employed throughout the life cycle of software development (analysis, design, implementation, and testing). Models are usually written in domain-specific modeling languages (DSMLs) and describe a domain problem or a system from different perspectives and at several levels of abstraction. If a model conforms to the definition of its DSML (usually given by a meta-model and integrity constraints), the model is consistent. Model transformations are an essential technology for manipulating models in a (semi-)automated way, including, e.g., refactoring and code generation. They are often supposed to have a well-defined behavior in the sense that their resulting models are consistent with regard to a set of constraints; inconsistent models may affect their applicability, making the automation untrustworthy and error-prone. The consistency of models and model transformation results contributes to the quality of the overall modeled system. Although MDE has significantly progressed and become an accepted best practice in many application domains such as automotive and aerospace, several significant challenges still have to be tackled to realize the MDE vision in industry: handling and resolving inconsistent models (e.g., incomplete models), enabling and enforcing model consistency during construction, fostering trust in and use of model transformations (e.g., by ensuring that the resulting models are consistent), developing efficient (automated, standardized, and reliable) domain-specific modeling tools, and dealing with large models. In this thesis, we contribute four automated interactive techniques for ensuring the consistency of models and model transformation results during the construction process. The first two contributions construct consistent models of a given DSML in an automated and interactive way; the construction can start from a seed model that is potentially inconsistent. Since enhancing a set of transformations to satisfy a set of constraints is a tedious and error-prone task requiring considerable skill in the theoretical foundation, the other two contributions ensure model consistency by enhancing the behavior of model transformations through automatically constructed application conditions. The resulting application conditions control the applicability of the transformations so that a set of constraints is respected. Moreover, we provide several optimizing strategies. Specifically, we present the following: First, a model repair technique for repairing models in an automated and interactive way. Our approach guides the modeler to repair the whole model by resolving all cardinality violations and thereby yields a desired, consistent model. Second, a model generation technique to efficiently generate large, consistent, and diverse models. Both techniques are DSML-agnostic, i.e., they can deal with any meta-model. We present meta-techniques to instantiate both approaches to a given DSML; namely, we develop meta-tools that automatically generate the corresponding DSML tools (model repair and generation) for a given meta-model. We show the soundness of our techniques and evaluate and discuss their features, such as scalability. Third, a tool based on a correct-by-construction technique for translating OCL constraints into semantically equivalent graph constraints and integrating them, fully automatically, as guaranteeing application conditions into a transformation rule. A constraint-guaranteeing application condition ensures that a rule applies successfully to a model if and only if the resulting model after the rule application satisfies the constraint. Fourth, an optimizing-by-construction technique for application conditions of transformation rules that need to be constraint-preserving. A constraint-preserving application condition ensures that a rule applies successfully to a consistent model (w.r.t. the constraint) if and only if the resulting model after the rule application still satisfies the constraint. We show the soundness of our techniques, develop them as ready-to-use tools, evaluate their efficiency (complexity and performance), and assess the overall approach in general. All four techniques are compliant with the Eclipse Modeling Framework (EMF), which is the realization of the OMG standard specification in practice; thus, the interoperability and interchangeability of the techniques are ensured. Our techniques not only improve the quality of the modeled system but also increase software productivity by providing meta-tools for generating DSML tool support and automating the tasks.

Thesis Open Access
Continuous Queries over Data Streams - Semantics and Implementation (Philipps-Universität Marburg, 2007-10-29)
Krämer, Jürgen; Seeger, Bernhard (Prof. Dr.)
Recent technological advances have pushed the emergence of a new class of data-intensive applications that require continuous processing over sequences of transient data, called data streams, in near real time. Examples of such applications range from online monitoring and analysis of sensor data for traffic management and factory automation to financial applications tracking stock ticker data. Traditional database systems are deemed inadequate to support high-volume, low-latency stream processing, because queries are expected to run continuously and return new answers as new data arrives, without the need to store the data persistently. The goal of this thesis is to develop a solid and powerful foundation for processing continuous queries over data streams. Resource requirements are kept within bounds by restricting the evaluation of continuous queries to sliding windows over the potentially unbounded data streams. This technique has the advantage of emphasizing new data, which in the majority of real-world applications is considered more important than older data. Although the presence of continuous queries dictates rethinking the fundamental architecture of database systems, this thesis pursues an approach that adapts well-established database technology to the data stream computation model, with the aim of facilitating the development and maintenance of stream-oriented applications. Based on a declarative query language inheriting its basic syntax from the prevalent SQL standard, users are able to express and modify complex application logic in an easy and comprehensible manner, without requiring the use of custom code. The underlying semantics assigns an exact meaning to a continuous query at any point in time and is defined by temporal extensions of the relational algebra. By carrying over the well-known algebraic equivalences from relational databases to stream processing, this thesis prepares the ground for powerful query optimizations. A unique time-interval-based stream algebra implemented with efficient online algorithms allows data to be processed in a push-based fashion. A performance analysis, along with experimental studies, confirms the superiority of the time-interval approach over comparable approaches for the predominant set of continuous queries. Building on this stream algebra, the thesis addresses architectural issues of an adaptive and scalable runtime environment that can cope with the varying query workloads and fluctuating data stream characteristics arising from the highly dynamic and long-running nature of streaming applications. In order to control the resource allocation of continuous queries, novel adaptation techniques are investigated that trade answer quality for lower resource requirements. Moreover, a general migration strategy is developed that enables the query processing engine to re-optimize continuous queries at runtime. Overall, this thesis outlines the salient features and operational functionality of the stream processing infrastructure PIPES (Public Infrastructure for Processing and Exploring Streams), which has already been applied successfully in a variety of stream-oriented applications.
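
The time-interval model mentioned above attaches a validity interval to every stream element; two elements join exactly when their intervals overlap, and the result is valid on the intersection. A nested-loop sketch of these semantics over materialized lists (the thesis's operators instead work incrementally over ordered streams, purging expired elements):

```python
def interval_join(r, s, theta):
    """Join two collections of (payload, start, end) tuples under
    time-interval semantics: each tuple is valid during [start, end),
    a pair qualifies iff the intervals overlap and the join predicate
    theta holds, and the output inherits the intersected interval."""
    for a, a_start, a_end in r:
        for b, b_start, b_end in s:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end and theta(a, b):
                yield (a, b), start, end

# Example: ("x", 0, 5) and ("y", 3, 9) overlap during [3, 5):
# list(interval_join([("x", 0, 5)], [("y", 3, 9)], lambda a, b: True))
# -> [(('x', 'y'), 3, 5)]
```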

Thesis Open Access
Coupled Transformations of Graph Structures applied to Model Migration (Philipps-Universität Marburg, 2014-12-16)
Mantz, Florian (1063925886); Taentzer, Gabriele (Prof. Dr.)
Model-Driven Engineering (MDE) is a relatively new paradigm in software engineering that pursues the goal of mastering the increased complexity of modern software products. While software applications used to be developed for a specific platform, today they target various platforms and devices, from classical desktop PCs to smartphones, and they interact with other applications. To cope more easily with these new requirements, software applications in MDE are specified at a high abstraction level in so-called models prior to their implementation. Afterwards, model transformations are used to automate recurring development tasks as well as to generate software artifacts for different runtime environments. Such software artifacts are not necessarily files containing program code; they can also be configuration files or machine-readable input for model checking tools. However, MDE does not only address software engineering problems, it also raises new challenges. One of these challenges is connected to the specification of modeling languages, which are used to create models. The creation of a modeling language is a creative process that requires several iterations, similar to the creation of models. New requirements as well as a better understanding of the application domain cause modeling languages to evolve over time. Models developed in an earlier version of a modeling language often need to be co-adapted (migrated) to language changes. This migration should be automated, as migrating models manually is time-consuming and error-prone. While application modelers use ad-hoc solutions to migrate their models, there is still a lack of theory to ensure well-defined migration results. This work contributes to a formalization of modeling language evolution with corresponding model migration on the basis of algebraic graph transformations, which have successfully served as a theoretical foundation of model transformation before. The goal of this research is to develop a theory that considers the problem of modeling language evolution with corresponding model migration on a conceptual level, independent of a specific modeling framework.

Thesis Open Access
Cross-Layer Cloud Performance Monitoring, Analysis and Recovery (Philipps-Universität Marburg, 2015-01-05)
Mdhaffar, Afef (106428390X); Freisleben, Bernd (Prof. Dr.)
The basic idea of Cloud computing is to offer software and hardware resources as services. These services are provided at different layers: software (Software as a Service: SaaS), platform (Platform as a Service: PaaS), and infrastructure (Infrastructure as a Service: IaaS). In such a complex environment, performance issues are the norm rather than the exception, and performance-related problems may frequently occur at all layers. Thus, it is necessary to monitor all Cloud layers and analyze their performance parameters to detect and rectify related problems. This thesis presents a novel cross-layer reactive performance monitoring approach for Cloud computing environments, based on the methodology of Complex Event Processing (CEP). The proposed approach, called CEP4Cloud, analyzes monitored events to detect performance-related problems and performs actions to fix them. The proposal is based on the use of (1) a novel multi-layer monitoring approach, (2) a new cross-layer analysis approach, and (3) a novel recovery approach. The proposed monitoring approach operates at all Cloud layers while collecting related parameters. It makes use of existing monitoring tools and a new monitoring approach for Cloud services at the SaaS layer. The proposed SaaS monitoring approach, called AOP4CSM, is based on aspect-oriented programming and monitors quality-of-service parameters of the SaaS layer in a non-invasive manner: AOP4CSM modifies neither the server implementation nor the client implementation. The defined cross-layer analysis approach, called D-CEP4CMA, is based on the methodology of Complex Event Processing. Instead of having to manually specify continuous queries on the monitored event streams, CEP queries are derived from analyzing the correlations between monitored metrics across multiple Cloud layers. The results of the correlation analysis allow us to reduce the number of monitored parameters and enable us to perform a root cause analysis to identify the causes of performance-related problems. The derived analysis rules are implemented as queries in a CEP engine. D-CEP4CMA is designed to dynamically switch between different centralized and distributed CEP architectures, depending on the load and memory of the CEP machine and the network traffic conditions in the observed Cloud environment. The proposed recovery approach is based on a novel action manager framework that applies recovery actions at all Cloud layers: it assigns a set of repair actions to each performance-related problem and checks the success of the applied action. The results of several experiments illustrate the merits of the reactive performance monitoring approach and its main components (i.e., monitoring, analysis, and recovery). First, experimental results show the efficiency of AOP4CSM (very low overhead). Second, the obtained results demonstrate the benefits of the analysis approach in terms of precision and recall compared to threshold-based methods; they also show the accuracy of the analysis approach in identifying the causes of performance-related problems. Furthermore, experiments illustrate the efficiency of D-CEP4CMA and its performance in terms of precision and recall compared to centralized and distributed CEP architectures. Moreover, experimental results indicate that the time needed to fix a performance-related problem is reasonably short and that the CPU overhead of using CEP4Cloud is negligible. Finally, experimental results demonstrate the merits of CEP4Cloud in terms of speeding up repairs and reducing the number of triggered alarms compared to baseline methods.

Article Open Access
Data recovery methods for DNA storage based on fountain codes (Philipps-Universität Marburg, 2024-12-02)
Schwarz, Peter Michael (0000-0001-8763-1507); Freisleben, Bernd
Today's digital data storage systems typically offer advanced data recovery solutions to address the problem of catastrophic data loss, such as software-based disk sector analysis or physical-level data retrieval methods for conventional hard disk drives. However, DNA-based data storage currently relies solely on the inherent error correction properties of the methods used to encode digital data into strands of DNA. Any error that cannot be corrected using the redundancy added by DNA encoding methods results in permanent data loss. To provide data recovery for DNA storage systems, we present a method to automatically reconstruct corrupted or missing data stored in DNA using fountain codes. Our method exploits the relationships between packets encoded with fountain codes to identify and rectify corrupted or lost data. Furthermore, we present file-type-specific and content-based data recovery methods for three file types, illustrating how a fusion of fountain-encoding-specific redundancy and knowledge about the data can effectively recover information in a corrupted DNA storage system, both automatically and in a guided manual manner. To demonstrate our approach, we introduce DR4DNA, a software toolkit that contains all presented methods. We evaluate DR4DNA using both in-silico and in-vitro experiments.

Research Data Open Access
data_UMR - das institutionelle Repositorium der Philipps-Universität Marburg für Forschungsdaten [data_UMR: the institutional repository of Philipps-Universität Marburg for research data] (Universitätsbibliothek Marburg)
Cordes, Birte; Müller, Diana; Münch, Paul; Nicklas, Bernd; Vielhauer, Alexander
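
To make the packet relationships that the DNA-storage article above exploits concrete: in a fountain (LT) code, each packet is the XOR of a known set of source blocks, and decoding "peels" degree-one packets until the remaining blocks resolve. The sketch below shows standard peeling decoding under these assumptions; it is not the DR4DNA toolkit's API:

```python
def peel_decode(packets, num_blocks):
    """Peeling decoder for an LT-style fountain code. Each packet is a
    pair (indices, data): the set of source-block indices whose XOR the
    packet carries. Degree-one packets reveal a block, which is then
    XORed out of every packet referencing it. Blocks that never reach
    degree one stay missing; that residue is where content-based
    recovery, as described in the article, would take over."""
    packets = [(set(indices), bytes(data)) for indices, data in packets]
    known = {}  # source index -> recovered block
    progress = True
    while progress and len(known) < num_blocks:
        progress = False
        for indices, data in packets:
            unknown = indices - known.keys()
            if len(unknown) != 1:
                continue
            for j in indices & known.keys():  # strip known blocks
                data = bytes(x ^ y for x, y in zip(data, known[j]))
            known[unknown.pop()] = data
            progress = True
    return known  # possibly partial if too few packets survived
```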