
Is Software Testing Production or Service?

Chang Liu writes: "Most of us would probably agree that any software package, open-source or not, can gain high quality only after rigorous testing. One source of the credibility of open-source software is the fact that it is tested by a large number of knowledgeable testers who have access to the source code and know what's going on. Yet we seldom discuss what contributes to good testing practices. Sure, everybody tests in a different way, just as everybody codes in a different fashion. But in the end, there are good practices and bad practices. It benefits the community to spread the word about good testing practices."

So what exactly is software testing? The traditional academic view holds that software testing takes programs and specifications as input and produces a bug list as output. In other words, software testing produces bug lists for development teams. Others, especially those in the commercial world, have different expectations. They view testing as a service to development teams: testers are expected to provide nearly instant feedback at all times, even as programs and specifications keep evolving.

This article discusses the "production" view and the "service" view of software testing and explores their impact on software testing techniques.

The "production" view of software testing

Traditionally, the problem of software testing has been stated as follows: given a program and a description or specification of what the program does, find out under which conditions the program does not behave as expected. Two general types of testing techniques are used to solve this problem. One is program-based techniques, also known as white-box testing. The other is specification-based techniques, also known as black-box testing.

Program-based techniques develop test cases according to program structures. The central idea is that program control structures and data structures determine program behavior. If test cases sufficiently cover all control structures and/or data structures, we can be reasonably confident that most program behaviors have been examined. Statement coverage, branch coverage, and path coverage are examples of criteria used in white-box testing.
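
As a minimal sketch of the idea, consider a tiny hypothetical function with a single branch. Two test cases are enough to exercise both outcomes of the branch, and a tool such as coverage.py can report which statements and branches a run actually reached:

    # Hypothetical function-under-test with one branch.
    def classify(temperature):
        if temperature > 100:
            return "boiling"
        return "not boiling"

    def test_branch_taken():
        assert classify(150) == "boiling"      # exercises the True branch

    def test_branch_not_taken():
        assert classify(20) == "not boiling"   # exercises the False branch

    if __name__ == "__main__":
        test_branch_taken()
        test_branch_not_taken()
        print("both branches of classify() covered")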

Specification-based techniques do not assume knowledge of internal program structures. Instead, they depend on the problem specifications or descriptions to determine which test cases should be used. The central idea is that if a program is supposed to solve a problem, as long as the problem is solved, it doesn't matter how the program is constructed.
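
As a sketch of the black-box approach, suppose the specification for a sorting routine says only that the output is the input rearranged into non-decreasing order. The test below checks exactly that property, with no knowledge of how the sort is implemented; the cases are chosen by partitioning the input space and are invented for illustration:

    def is_non_decreasing(xs):
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def test_against_spec(sort_fn):
        # Partition-style cases: empty, singleton, unsorted, duplicates, reversed.
        cases = [[], [1], [3, 1, 2], [2, 2, 2], list(range(10, 0, -1))]
        for case in cases:
            out = sort_fn(list(case))
            assert is_non_decreasing(out)         # ordering, per the spec
            assert sorted(out) == sorted(case)    # same elements, per the spec

    test_against_spec(sorted)   # any implementation meeting the spec passes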

Both types of traditional testing techniques assume that there is a static program or specification to work on and that a list of bugs is all a development team needs.

To support "bug-list production", techniques have been developed to cover the program-under-test more thoroughly under a pre-selected coverage criterion, to achieve higher coverage with fewer test cases, and to execute test cases more quickly. New coverage criteria are also invented to cover different aspects of the program-under-test.
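
One illustration of the second goal is greedy test-suite reduction: given a record of which coverage items each test case hits, repeatedly pick the test that covers the most items not yet covered. The sketch below is minimal, and the test names and coverage data are made up:

    def reduce_suite(coverage_map):
        """Greedily select a small subset of tests that preserves coverage."""
        remaining = set().union(*coverage_map.values())
        selected = []
        while remaining:
            # Pick the test covering the most not-yet-covered items.
            best = max(coverage_map, key=lambda t: len(coverage_map[t] & remaining))
            if not coverage_map[best] & remaining:
                break
            selected.append(best)
            remaining -= coverage_map[best]
        return selected

    suite = {
        "test_login":  {"b1", "b2", "b3"},
        "test_logout": {"b3"},
        "test_signup": {"b4", "b5"},
    }
    print(reduce_suite(suite))   # ['test_login', 'test_signup']

On this toy data, the greedy pass keeps two of the three tests and drops test_logout, whose only branch is already covered by test_login.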

The focus here is on producing more thorough lists of bugs, in other words, better products, even if providing such lists takes a longer turnaround time.

The "service" view of software testing

In practice, testers' jobs are sometimes more subtle than simply producing bug lists. I once asked a test lead from a large software company what his most important responsibility was. The answer was quite surprising to me at the time: the most important thing was to know the status of the software product at all times. After I thought about it, the idea became quite reasonable. Clearly, when both the program-under-test and the description of the problem are changing every day, it is not feasible to produce a comprehensive bug list for each daily build. Nor is it necessary. It is more useful to the development team if testers can provide constant and rapid feedback on the status of the current builds. Overview information is as important as individual bug reports. This "service" view of software testing focuses on the need for rapid feedback and the evolving nature of the program-under-test. Just as with many other services, such as phone service, the need for a rapid response is paramount: when a person picks up a phone, she expects to talk right away; when a development team gets a build done, they expect feedback right away.

To perform software testing as a service, testers must be able to find out the status of a new build quickly. Automated test execution and result verification seem like the logical way to go. However, most current test automation tools and techniques are closely tied to implementation details such as user interfaces, which makes them extremely sensitive to changes in the program-under-test. This creates a dilemma. On one hand, testers have to automate tests to provide rapid feedback. On the other hand, automated tests don't work very well with updated programs, and thus sometimes slow down software testing. There is no perfect solution to this yet. More abstract test descriptions may be able to decouple test cases from implementation details in the future.
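
As a rough sketch of what such decoupling might look like, a test can be written against an abstract interface, with a thin adapter as the only code that knows the current user interface; when the UI changes, only the adapter needs updating, not every test. All class and method names below are hypothetical:

    class LoginInterface:
        """Abstract test vocabulary, independent of any concrete UI."""
        def log_in(self, user, password):
            raise NotImplementedError
        def is_logged_in(self):
            raise NotImplementedError

    class FakeGuiAdapter(LoginInterface):
        """Stands in for a real GUI driver; only this class changes with the UI."""
        def __init__(self):
            self._ok = False
        def log_in(self, user, password):
            self._ok = (password == "secret")   # pretend UI interaction
        def is_logged_in(self):
            return self._ok

    def test_login(ui):
        ui.log_in("alice", "secret")
        assert ui.is_logged_in()

    test_login(FakeGuiAdapter())   # the same test runs against any adapter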

A key question here is: how do testers perform a small number of test cases on each build and still gain a good overall knowledge of the status of the entire program? In other words, how do they determine which test cases should be used on which builds? How do they combine the results of different test cases executed on different builds and make sense of them? I'm sure many testers are experienced enough to do this, but until we can clearly state how we do it, we cannot claim that we know how to engineer it and that we can do it successfully in the next project.
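
One plausible approach, sketched below with invented data structures, is to rotate the suite across builds by running the most-stale tests first on each build, then stitch together the latest known verdict for each test to approximate the overall status:

    from collections import defaultdict

    last_run = defaultdict(int)   # test name -> build number of its last run
    results = defaultdict(dict)   # build number -> {test name: passed?}

    def pick_tests(all_tests, budget):
        """Choose the `budget` tests that have gone longest without running."""
        return sorted(all_tests, key=lambda t: last_run[t])[:budget]

    def record(build, test, passed):
        last_run[test] = build
        results[build][test] = passed

    def status():
        """Latest known verdict per test, stitched together across builds."""
        latest = {}
        for build in sorted(results):
            latest.update(results[build])
        return latest

    tests = ["t_login", "t_logout", "t_search", "t_billing"]
    for build in (1, 2):
        for t in pick_tests(tests, budget=2):
            record(build, t, passed=True)
    print(status())   # verdicts for all four tests, gathered from two builds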

In the case of open-source software development, there are usually no deadlines. Still, in many projects, builds are updated daily or weekly. It is likely that when someone declares, "Hey, I just achieved 80% test coverage for project X based on test criterion Y" (if anyone ever would), the build she used is probably already out of date. I wonder how people in successful projects such as Emacs, Linux, and Apache put feedback together and determine stable builds. Or do they declare a build stable before user feedback arrives? Is there a systematic way to separate stable builds from other builds?

What do you think?

The production view and the service view of software testing are certainly not entirely incompatible. Many testers who provide testing services do a good job using techniques developed for bug production, making ad hoc adjustments to them to cope with evolving environments. However, I think it is in the best interest of the software community to contemplate what we expect from software testing and how best to provide it. I can't wait to hear what freshmeat users have to say.


Chang Liu is a member of the Rosatea group (Research Organization for Specification- and Architectural-based Testing & Analysis) at UC Irvine. His research interests center on software testing automation, software quality assurance, and software engineering in general. He is currently working on TestTalk, a comprehensive testing language.


T-Shirts and Fame!

We're eager to find people interested in writing editorials on software-related topics. We're flexible on length, style, and topic, so long as you know what you're talking about and back up your opinions with facts. Anyone who writes an editorial gets a freshmeat t-shirt from ThinkGeek in addition to 15 minutes of fame. If you think you'd like to try your hand at it, let jeff.covey@freshmeat.net know what you'd like to write about.

Recent comments

02 Dec 2009 11:51 A1QA

Look at these:
"until we can clearly state how we do it, we cannot claim that we know how to engineer it and that we can do it successfully in the next project" - inconsistency of quality.
"the need for rapid feedback and the evolving nature of the program-under-test" - intangibility.
"there are usually no deadlines" - an extended timeline, with different results in each period.
These are all distinguishing characteristics of services, not products.
Therefore, I'm more inclined toward the "service" view of software testing.

Dmitry Plavinsky, QA Manager
www.a1qa.com

30 Nov 2007 08:46 byeaw24

Re: Missing the most fundamental aspects of software development.
What you said is true; however, you missed the whole point. He didn't miss the fact that "testing does not begin with the code." That is a well-established tenet of software engineering. Notice how you titled your post: "...of software development." You are ranting about improper software development, not improper testing. The fact is, even with proper software engineering practices, there must be a testing process before new software is released. That is the focus of the discussion. Also, it seems somewhat rude to post such a large amount of text which looks like it has been copied and pasted from a website on software engineering. You could put in a link instead.

25 Nov 2003 02:16 robdavis

Re: testing
I agree... it is a good idea to start testing the software as early as possible.


Rob


Rob Davis, PE

Software QA/Verification/Validation/Test Engineer

http://www.robdavispe.com

20 Feb 2000 13:25 jimcox

Stability verification amidst rapid releases
I must say that it's pretty strange to see rational discussion of software testing; it's a part of the industry that has long been overlooked.

Rapid release cycles can deliver stable builds. The process is inherently difficult, but it can be done with extensive communication between the development and the testing groups.

First, the development team must meticulously maintain a change log, and before the release, communicate these changes to the test team.

The test team can then either determine with white-box methods which areas to test, or obtain the information from a less-partial developer.

This allows the test team to proceed with focused testing on areas that have been changed, in addition to the broad and shallow 'smoke-test' that is applied to the entire product before general release.
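
A minimal sketch of this kind of selection, with invented module and test names, might map each changed module to the tests that exercise it and append the smoke suite:

    TESTS_FOR_MODULE = {
        "auth":    ["test_login", "test_logout"],
        "billing": ["test_invoice", "test_refund"],
        "search":  ["test_query"],
    }
    SMOKE_SUITE = ["test_startup", "test_basic_flow"]

    def tests_for_build(changed_modules):
        """Focused tests for what changed, plus the broad, shallow smoke suite."""
        focused = []
        for module in changed_modules:
            focused.extend(TESTS_FOR_MODULE.get(module, []))
        return focused + SMOKE_SUITE

    print(tests_for_build(["auth", "search"]))
    # ['test_login', 'test_logout', 'test_query', 'test_startup', 'test_basic_flow']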

This strategy is entirely dependent, of course, on sound design and testing principles as Frank outlined earlier.

19 Feb 2000 23:34 markbullock

open source test cases
I recently tested a custom FTP server for the driveway.com service. I was wishing for open source FTP test cases, although the FTP RFCs provided a decent spec.

If anyone is interested in contributing test plans or test cases, I can probably supply storage and organization on www.sasqag.org.

Also, how does Linux get tested? Is there a public web site with test related information?

Thanks, Mark
