Sufficiently Large Number for Testing C++ Standard Template Library Modules


Pornrudee Netisopakul

Abstract

One of the most crucial questions in software testing is when to stop testing. In general, this problem has been proven undecidable. However, for the collection classes of the C++ Standard Template Library (STL), we show that there exists a sufficiently large number N, associated with the test data set size of each module, such that testing with any data set of size larger than N reveals no additional faults in the module under test. This paper presents the underlying concepts, the theorem, and the experimental results that confirm the existence of this sufficiently large number.
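To illustrate the idea, the following is a minimal C++ sketch, not the paper's actual experimental harness. It exercises one STL operation (std::vector::insert, chosen here purely as an example) at every position of containers of increasing size, checking each result against a simple oracle. The bound kSufficientlyLargeN is a hypothetical placeholder for the value the paper's theorem associates with the module under test; it is not a value taken from the paper.

#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Hypothetical bound: by the paper's thesis, testing this module with
// data sets larger than kSufficientlyLargeN reveals no additional faults.
// The value 8 is an illustrative placeholder, not a result from the paper.
constexpr std::size_t kSufficientlyLargeN = 8;

// Exercise std::vector::insert at every position of a vector of size n,
// checking each result against a simple reference oracle.
bool testInsertAtSize(std::size_t n) {
    for (std::size_t pos = 0; pos <= n; ++pos) {
        std::vector<int> v(n);
        std::iota(v.begin(), v.end(), 0);  // contents 0, 1, ..., n-1
        v.insert(v.begin() + pos, -1);     // operation under test

        // Oracle: the result must be 0..pos-1, then -1, then pos..n-1.
        for (std::size_t i = 0; i < v.size(); ++i) {
            int expected = (i < pos)  ? static_cast<int>(i)
                         : (i == pos) ? -1
                                      : static_cast<int>(i) - 1;
            if (v[i] != expected) return false;
        }
    }
    return true;
}

int main() {
    // Test every data set size from 0 up to the bound N; sizes beyond N
    // are assumed to exercise no behaviour not already covered.
    for (std::size_t n = 0; n <= kSufficientlyLargeN; ++n) {
        std::cout << "size " << n << ": "
                  << (testInsertAtSize(n) ? "pass" : "FAIL") << '\n';
    }
    return 0;
}

The design point the sketch captures is that the test budget is bounded by the data set size N rather than by an arbitrary stopping rule: once every size up to N has been exercised, testing can stop.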


Keywords: Sufficiently Large Number, Data Coverage Testing, C++ Standard Template Library, Automated Test Generation, Testing of Collection Programs


E-mail: [email protected]
