Slicing of Model Transformations

Author: Zoltán Ujhelyi, Ákos Horváth, Dániel Varró

In our ASE 2011 paper Dynamic Backward Slicing of Model Transformations we proposed a combined dynamic backward slicing approach for transformation programs and their input models. On this page we present detailed measurements that illustrate the model slicing approach.

The Petri net simulator transformation

Overview of the transformation

The Petri net simulator case study transformation highlights a typical domain-specific language simulation with the following characteristics: (i) a mostly static graph structure, (ii) relatively small and local model manipulations defined by declarative GT rules, and (iii) a typical as-long-as-possible (ALAP) execution mode.

The transformation consists of 12 graph patterns, 2 GT rules and 3 ASM rules, all of them simple. During execution, the fireTransition rule is applied cyclically; as a consequence, the slices of the transformation program are essentially identical in every run: the slice inside the fireTransition rule together with its caller.
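The transformation itself is written as declarative GT and ASM rules; as a rough illustration of the simulated firing semantics, the following Python sketch mimics what the cyclic application of fireTransition (and the token creation performed by addToken) does to the model. The PetriNet class and all identifiers are illustrative only and do not correspond to the actual rule code.

```python
import random
from collections import defaultdict

class PetriNet:
    """Minimal token-game view of a Petri net (illustrative only)."""
    def __init__(self):
        self.pre = defaultdict(list)    # transition -> list of input places
        self.post = defaultdict(list)   # transition -> list of output places
        self.tokens = defaultdict(int)  # place -> number of tokens

    def is_enabled(self, t):
        # A transition is enabled if every input place holds at least one token.
        return all(self.tokens[p] > 0 for p in self.pre[t])

    def fire_transition(self, t):
        # Remove a token from every input place ...
        for p in self.pre[t]:
            self.tokens[p] -= 1
        # ... and create a token on every output place (the addToken step).
        for p in self.post[t]:
            self.tokens[p] += 1

def simulate(net, iterations):
    # ALAP-style loop: keep firing a randomly chosen enabled transition,
    # mirroring the cyclic application of the fireTransition rule.
    for _ in range(iterations):
        enabled = [t for t in net.pre if net.is_enabled(t)]
        if not enabled:
            break
        net.fire_transition(random.choice(enabled))
```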

In the following, we will evaluate the model slices wrt. different nets and executions, and try to identify the covered parts.

Test cases

To generate Petri nets of various sizes for the test cases, we used six reduction operations (applied in the inverse direction to increase the size of the net) that are described by Murata [Murata89] as means of preserving the safety and liveness properties of the net. The size of the generated nets was controlled by the number of operator applications: we generated five different Petri nets by executing the operators 10, 100, 1000, 10000 and 100000 times, respectively.
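As a hypothetical illustration of how such a generator can work, the sketch below applies the inverse of one of Murata's reduction rules (fusion of series places) to a net represented with the PetriNet structure of the previous sketch; the operation set and the generator actually used may differ in detail.

```python
def split_series_place(net, p, new_place, new_transition):
    # Inverse of the "fusion of series places" reduction: place p is expanded
    # into the sequence p -> new_transition -> new_place, and every arc that
    # previously left p now leaves new_place instead, so the safety and
    # liveness properties of the net are preserved. All names are illustrative.
    for t in net.pre:
        net.pre[t] = [new_place if q == p else q for q in net.pre[t]]
    net.pre[new_transition] = [p]
    net.post[new_transition] = [new_place]
    net.tokens.setdefault(new_place, 0)  # register the new, initially empty place
```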

The following table describes the size of the generated Petri net models:

Size (operator applications) | Places | Transitions | Tokens
10                           | 7      | 11          | 10
100                          | 80     | 83          | 18
1000                         | 744    | 770         | 15
10000                        | 7532   | 7525        | 17
100000                       | 75098  | 75177       | 93

For each Petri net we created execution traces of 10, 100, 1000 and 10000 firings. As the slicing criterion we selected the token model element created by the addToken GT rule in the last firing, together with the GT rule itself.

As the firing of Petri nets is non-deterministic, we executed each transformation several times, but ignored traces in which the last firing did not depend on any previous firing (i.e., it could already have fired when the simulation started). In each case, we either averaged the slices of 10 different execution traces, or marked the case as non-representative (and removed it) if more than 50% of the traces had to be ignored. We also recorded the number of ignored traces and present it together with the results.
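A minimal sketch of this filtering step, assuming each trace entry records which earlier firing produced every token it consumed (None for tokens of the initial marking); all field and function names are hypothetical:

```python
def last_firing_is_independent(trace):
    # The last firing is independent of all previous firings iff every token
    # it consumed was already present when the simulation started.
    last = trace[-1]
    return all(producer is None for producer in last.consumed_token_producers)

def representative_traces(traces):
    # Ignore traces whose last firing is independent; drop the whole test case
    # (non-representative) if more than 50% of the traces had to be ignored.
    kept = [t for t in traces if not last_firing_is_independent(t)]
    return kept if 2 * len(kept) >= len(traces) else None
```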

For each test case we measured the number of transformation program statements, variables and model elements present in the slice. Additionally, we compared the slice to the size of the trace (effective size) and to the size of the transformation program and the transformed model (total size), thus calculating statement, variable and model coverage, respectively. We expect these coverage metrics to be good indicators of how the different slices correspond to the structure of the MT program and the input model.
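As a sketch, the coverage figures reported below can be computed from the slice, trace and total sizes as follows (the function names are illustrative):

```python
def coverage(part, whole):
    # Percentage of 'whole' covered by 'part'.
    return 100.0 * part / whole if whole else 0.0

def coverage_metrics(slice_size, trace_size, total_size):
    # Effective coverage relates the slice to the executed trace,
    # total coverage relates it to the whole program or model.
    return {
        'effective': coverage(slice_size, trace_size),
        'total': coverage(slice_size, total_size),
    }

# Example with the statement counts reported below: a 14-statement slice of a
# 26-statement trace yields about 53.8% effective statement coverage.
print(coverage_metrics(14, 26, 26))
```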

Measurements

Our measurements showed that the program slices of the execution were independent of the input models: in each case, the slice consisted of the same 14 statements out of the 26 statements present in the trace, and the trace covered all statements of the transformation program. Similarly, these 14 statements always used 5 of the 14 variables present in the trace (and in the transformation program).

For the model slices, the following table lists the average model coverage for each net size and execution length:

Net Size | Iterations | Ignored | Model Elements in Slice (Count) | Model Coverage % (Total) | Model Coverage % (Effective)
10     | 10    | 0%  | 11   | 25,5%  | 41,9%
10     | 100   | 0%  | 44   | 98,9%  | 98,9%
10     | 1000  | 0%  | 44   | 100,0% | 100,0%
10     | 10000 | 0%  | 44   | 100,0% | 100,0%
100    | 10    | 70% | –    | –      | –
100    | 100   | 0%  | 73   | 18,0%  | 26,2%
100    | 1000  | 0%  | 393  | 96,8%  | 97,9%
100    | 10000 | 0%  | 406  | 100,0% | 100,0%
1000   | 10    | 75% | –    | –      | –
1000   | 100   | 0%  | 27   | 0,7%   | 10,3%
1000   | 1000  | 0%  | 254  | 6,5%   | 22,8%
1000   | 10000 | 0%  | 3093 | 79,5%  | 88,7%
10000  | 10    | 55% | –    | –      | –
10000  | 100   | 0%  | 58,2 | 0,1%   | 16,3%
10000  | 1000  | 0%  | 478  | 1,2%   | 19,4%
10000  | 10000 | 0%  | 1212 | 3,1%   | 16,8%
100000 | 10    | 80% | –    | –      | –
100000 | 100   | 60% | –    | –      | –
100000 | 1000  | 0%  | 113  | 0,1%   | 3,6%
100000 | 10000 | 0%  | 578  | 0,1%   | 5,3%

The results can be categorized into three different types:

  1. If the number of fired transitions is much larger than the net size (e.g., executing 1000 firing iterations on a net consisting of 10 elements), typically all places and transitions are covered both by the trace and by the slice. Such cases show high model coverage (over 90%).
  2. If the number of fired transitions is much lower than the net size (e.g., executing 10 iterations on a net with 10000 elements), the firing operations are typically independent of each other, making the case non-representative.
  3. Otherwise, the coverage may vary widely because of the non-deterministic nature of the simulation.

To understand the results better, we also compared the average model slice and trace sizes to the corresponding minimum and maximum sizes:

Net Size | Iterations | Model Slice Size (Min) | Model Slice Size (Avg) | Model Slice Size (Max) | Trace Size (Min) | Trace Size (Avg) | Trace Size (Max)
10     | 10    | 4   | 11   | 21   | 22   | 26,7  | 33
10     | 100   | 39  | 44   | 44   | 44   | 44    | 44
10     | 1000  | 44  | 44   | 44   | 44   | 44    | 44
10     | 10000 | 44  | 44   | 44   | 44   | 44    | 44
100    | 100   | 34  | 73   | 133  | 234  | 280   | 316
100    | 1000  | 375 | 393  | 406  | 389  | 402   | 406
100    | 10000 | 406 | 406  | 406  | 406  | 406   | 406
1000   | 100   | 12  | 27   | 98   | 202  | 258   | 302
1000   | 1000  | 15  | 253  | 742  | 538  | 1108  | 1598
1000   | 10000 | 716 | 3093 | 3500 | 3027 | 3486  | 3600
10000  | 100   | 28  | 58   | 105  | 297  | 357   | 479
10000  | 1000  | 255 | 478  | 636  | 1806 | 2467  | 3149
10000  | 10000 | 781 | 1212 | 1670 | 6313 | 7227  | 8208
100000 | 1000  | 61  | 113  | 162  | 2850 | 3099  | 3312
100000 | 10000 | 50  | 578  | 982  | 9754 | 10799 | 12476

It can be seen that for each model the minimum and maximum trace sizes are close to each other. However, the model slice size varies greatly with the execution (more specifically, with the slicing criterion): although most slices were close to the average, we always found slices that were considerably smaller or larger than the calculated average.

Lessons Learned

Our results showed that, for declarative model transformations, program slices and model slices behave very differently. This was in line with our expectations for slicing simulation MTs: (1) program slices are largely independent of the model, while (2) model slices depend heavily on both the number of firings and the overall size of the net.