TY - GEN
T1 - Syntactic and Semantic Differencing for Combinatorial Models of Test Designs
AU - Tzoref-Brill, Rachel
AU - Maoz, Shahar
N1 - Publisher Copyright: © 2017 IEEE.
PY - 2017/7/19
Y1 - 2017/7/19
N2 - Combinatorial test design (CTD) is an effective test design technique, considered to be a testing best practice. CTD provides automatic test plan generation, but it requires a manual definition of the test space in the form of a combinatorial model. As the system under test evolves, e.g., due to iterative development processes and bug fixing, so does the test space, and thus, in the context of CTD, evolution translates into frequent manual model definition updates. Manually reasoning about the differences between versions of real-world models following such updates is infeasible due to their complexity and size. Moreover, representing the differences is challenging. In this work, we propose a first syntactic and semantic differencing technique for combinatorial models of test designs. We define a concise and canonical representation for differences between two models, and suggest a scalable algorithm for automatically computing and presenting it. We use our differencing technique to analyze the evolution of 42 real-world industrial models, demonstrating its applicability and scalability. Further, a user study with 16 CTD practitioners shows that comprehension of differences between real-world combinatorial model versions is challenging and that our differencing tool significantly improves the performance of less experienced practitioners. The analysis and user study provide evidence for the potential usefulness of our differencing approach. Our work advances the state of the art in CTD with better capabilities for change comprehension and management.
UR - http://www.scopus.com/inward/record.url?scp=85027725508&partnerID=8YFLogxK
U2 - 10.1109/ICSE.2017.63
DO - 10.1109/ICSE.2017.63
M3 - Conference contribution
T3 - Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering, ICSE 2017
SP - 621
EP - 631
BT - Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering, ICSE 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 39th IEEE/ACM International Conference on Software Engineering, ICSE 2017
Y2 - 20 May 2017 through 28 May 2017
ER -