Streaming model transformations are a novel class of transformations that deal with models whose elements are continuously produced or modified by a background process. Executing streaming transformations requires efficient techniques for recognizing activated transformation rules on a potentially infinite input stream. Detecting series of events triggered by compound structural changes is especially challenging under a high volume of rapid modifications, which is characteristic of an emerging class of applications built on runtime models.
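To make the challenge concrete, here is a minimal Java/EMF sketch (the particular compound change and the rule callback are our own illustrative choices, not the paper's technique) showing how a notification stream can be monitored for a two-step structural change: an element being added and subsequently having an attribute set.

```java
import java.util.HashSet;
import java.util.Set;

import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EContentAdapter;

/**
 * Illustrative detector for one compound structural change: an element is
 * added to the model and later has an attribute set. Attach it to a model
 * root via root.eAdapters().add(new CompoundChangeDetector()).
 */
public class CompoundChangeDetector extends EContentAdapter {

    private final Set<EObject> recentlyAdded = new HashSet<>();

    @Override
    public void notifyChanged(Notification n) {
        super.notifyChanged(n); // keeps the adapter attached to newly added content
        switch (n.getEventType()) {
            case Notification.ADD:
                if (n.getNewValue() instanceof EObject) {
                    // first half of the compound event observed
                    recentlyAdded.add((EObject) n.getNewValue());
                }
                break;
            case Notification.SET:
                if (n.getNotifier() instanceof EObject
                        && recentlyAdded.remove(n.getNotifier())) {
                    // second half observed on the same element: fire the rule
                    onRuleActivated((EObject) n.getNotifier());
                }
                break;
            default:
                break;
        }
    }

    /** Hypothetical placeholder for the transformation rule being triggered. */
    protected void onRuleActivated(EObject element) {
        System.out.println("Rule activated for " + element);
    }
}
```

A real streaming engine must track many such partial matches for many rules simultaneously and discard stale ones, which is precisely what becomes hard under a high volume of rapid changes.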
Three papers by our research group have been accepted at MODELS 2014 (ACM/IEEE International Conference on Model Driven Engineering Languages and Systems), the top international forum for model-driven software engineering.
Model-driven tools use model queries for many purposes, including the validation of well-formedness rules and the specification of derived features. The majority of the declarative model queries used in industry appear to be written in OCL. Graph-pattern-based queries, however, offer a number of advantages thanks to their more abstract specification, such as performance improvements through advanced query evaluation techniques. As query performance can be a key issue with large models, evaluating graph patterns instead of OCL queries could be useful in practice. The measurements presented here justify automatically mapping OCL to equivalent graph patterns by showing that such a mapping can deliver efficient, incremental query evaluation for a subset of OCL expressions.
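As an illustration of the correspondence, consider a well-formedness rule over the Ecore metamodel. The OCL invariant and its graph-pattern reading are given in the comments; the hand-coded search below only approximates what a dedicated (incremental) pattern matcher would do, and the class and method names are ours.

```java
import java.util.List;
import java.util.Objects;

import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EStructuralFeature;

public final class UniqueFeatureNames {

    // OCL formulation of the well-formedness rule:
    //   context EClass inv uniqueFeatureNames:
    //     self.eStructuralFeatures->isUnique(name)
    //
    // Equivalent *violation* graph pattern: match (c, f1, f2) such that
    //   EClass.eStructuralFeatures(c, f1), EClass.eStructuralFeatures(c, f2),
    //   f1 != f2, and f1.name == f2.name.
    // The invariant holds iff the violation pattern has no match.

    /** Hand-coded search for matches of the violation pattern. */
    public static boolean holdsFor(EClass c) {
        List<EStructuralFeature> features = c.getEStructuralFeatures();
        for (int i = 0; i < features.size(); i++) {
            for (int j = i + 1; j < features.size(); j++) {
                if (Objects.equals(features.get(i).getName(),
                                   features.get(j).getName())) {
                    return false; // violation match found: (c, f_i, f_j)
                }
            }
        }
        return true;
    }
}
```

The point of the mapping is that, once expressed as a pattern, the violation matches can be maintained incrementally instead of being recomputed on each evaluation.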
Model-driven development systems exploit the benefits of instance model validation and model transformation. The ever-growing model sizes used, for example, in critical embedded systems development demand increasingly efficient tools. The most time-consuming step during model validation and model transformation is query evaluation. This benchmark therefore measures the batch-style and incremental-style query performance of existing EMF-based tools.
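The difference between the two styles can be sketched in plain Java/EMF terms (the example query, "collect all abstract EClasses", is ours): a batch query re-traverses the model on every evaluation, while an incremental one maintains its result set as change notifications arrive.

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.util.EContentAdapter;

/** Batch style: re-traverses the whole model on every evaluation. */
final class BatchQuery {
    static Set<EClass> abstractClasses(Resource model) {
        Set<EClass> result = new HashSet<>();
        for (Iterator<EObject> it = model.getAllContents(); it.hasNext(); ) {
            EObject obj = it.next();
            if (obj instanceof EClass && ((EClass) obj).isAbstract()) {
                result.add((EClass) obj);
            }
        }
        return result;
    }
}

/** Incremental style: keeps the result set up to date on each notification.
 *  Attach via model.eAdapters().add(new IncrementalQuery()). */
final class IncrementalQuery extends EContentAdapter {
    final Set<EClass> abstractClasses = new HashSet<>();

    @Override
    public void notifyChanged(Notification n) {
        super.notifyChanged(n);
        if (n.getNotifier() instanceof EClass) {
            EClass c = (EClass) n.getNotifier();
            if (c.isAbstract()) {
                abstractClasses.add(c);
            } else {
                abstractClasses.remove(c);
            }
        }
    }
}
```

The incremental sketch is deliberately simplified: it only reacts to attribute changes on existing EClasses, whereas a complete implementation would also update the result set when classes are added to or removed from the model.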
Queries are the foundation of data-intensive applications. In model-driven software engineering (MDSE), model queries are core technologies of tools and transformations. As software models rapidly increase in size and complexity, many MDSE tools exhibit scalability issues that decrease developer productivity and increase costs. As a result, choosing the right model representation and query evaluation approach is a significant challenge for tool engineers. In the current paper, we aim to provide a benchmarking framework for the systematic investigation of query evaluation performance.
More specifically, we experimentally evaluate existing and novel query and instance model metrics to identify which of them provide reliable performance estimates for different MDSE scenarios in various model query tools. For this purpose, we also present a comparative benchmark designed to differentiate model representation and graph query evaluation approaches according to their performance on large models and complex queries.
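As a sketch of the kind of harness such a benchmark needs (the QueryTool interface and its method names are purely illustrative, not the API of any measured tool), each phase is timed separately so that one-time costs, such as loading and first evaluation, are not conflated with the cost of re-evaluation after model changes.

```java
/**
 * Hypothetical interface a benchmarked tool would be wrapped in;
 * the method names are illustrative, not part of any real tool's API.
 */
interface QueryTool {
    void loadAndIndex();   // read the instance model, build any caches/indexes
    int evaluateQuery();   // run the query, return result-set size as a sanity check
    void applyChanges();   // perform a fixed set of model modifications
}

public final class QueryBenchmark {

    /** Wall-clock time of a single phase, in milliseconds. */
    static long millis(Runnable phase) {
        long start = System.nanoTime();
        phase.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    /** Runs the four phases in order and reports each separately. */
    public static void run(QueryTool tool) {
        System.out.println("load+index : " + millis(tool::loadAndIndex) + " ms");
        System.out.println("first eval : " + millis(tool::evaluateQuery) + " ms");
        System.out.println("modify     : " + millis(tool::applyChanges) + " ms");
        System.out.println("re-eval    : " + millis(tool::evaluateQuery) + " ms");
    }
}
```

Separating the phases is what makes batch and incremental tools comparable: an incremental tool typically pays more in the load and first-evaluation phases and far less in re-evaluation, while a batch tool shows the opposite profile.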