[plt-scheme] [maybe off-topic] The theory of testing

From: Ken Dickey (Ken.Dickey at whidbey.com)
Date: Sun Aug 24 23:37:08 EDT 2008

On Sunday 24 August 2008 13:49:03 Grant Rettke wrote:
> Did you guys expand your unit testing philosophy to differentiate unit
> and integration testing?

There was a separate set of system acceptance tests, as well as a stress 
tester which had pre- and post-conditions (database setup/teardown stuff) to 
simulate various configurations and use cases, driven by a generative model 
(simulated customers with a bunch of parametrized randomness).
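
To make that concrete, here is a minimal sketch of the shape of such a 
stress tester in PLT Scheme. The names setup-database!, teardown-database!, 
and run-scenario! are hypothetical stubs standing in for the real 
setup/teardown work, not our actual code:

  #lang scheme

  ;; Hypothetical stubs for the real database pre/post conditions.
  (define (setup-database!)    (printf "setup~n"))
  (define (teardown-database!) (printf "teardown~n"))
  (define (run-scenario! customer) (printf "simulating ~a~n" customer))

  ;; Generative model: each simulated customer is a bundle of
  ;; parametrized randomness (record count, client site, etc.).
  (define (random-customer)
    (list (add1 (random 100))                            ; record count
          (list-ref '(london san-francisco sydney) (random 3))))

  ;; Each run is wrapped in pre/post conditions via dynamic-wind,
  ;; so teardown happens even if a scenario errors out.
  (define (stress-test n)
    (for ((i (in-range n)))
      (dynamic-wind
        setup-database!
        (lambda () (run-scenario! (random-customer)))
        teardown-database!)))

  (stress-test 3)

The dynamic-wind wrapper is the point: teardown runs even when a scenario 
fails, so one bad configuration cannot poison the next run.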

> Did you do any test-generation based on contracts, or did you revise
> your unit tests as the product definition changed?

We did not use software contracts, but we did have a database of "user 
stories" (use cases), which changed over time based on changing business 
requirements.

If you don't know eXtreme Programming, the scenario is that the "voice of the 
customer" defines "user stories" with engineering.  Engineering supplies time 
estimates for the stories.  The customer decides the order in which user 
stories are implemented, and the customer can change the order, or add or 
remove stories, at any time.
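
The data behind this is simple.  A purely illustrative sketch in PLT Scheme 
of the kind of record involved--the field names are invented, not our 
schema:

  #lang scheme

  ;; Illustrative only: a story carries an engineering estimate and a
  ;; customer-assigned priority.
  (define-struct story (title estimate-days priority) #:transparent)

  (define backlog
    (list (make-story "client can export reports" 2 1)
          (make-story "sales can tag feature requests" 3 2)))

  ;; The customer controls the order; engineering just takes the
  ;; highest-priority story off the top.
  (define (next-story stories)
    (car (sort stories
               (lambda (a b) (< (story-priority a) (story-priority b))))))

  (next-story backlog)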

These started as 8 x 5 cards, then grew to a spreadsheet, then to a web 
interface to a database.  The shared stories, as the "voice of the 
customer", gave a unifying language in which sales/marketing, management, 
and engineering could talk about features.  Especially as sales could see 
what engineering was doing (which stories were being implemented this week 
and how fast we were moving) and could request features via the 
SSL-encrypted web interface from anywhere they were.  As this was happening 
between, e.g., London, San Francisco, and Sydney, a unifying language was 
important.

We actually had some non-actionable stories to capture constraints.  For 
example, multiple records for the same user could be in a database under 
different clients, but the different clients could not cross-access data; 
all data was encrypted; and so on.
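
Constraints like that still turned into executable checks.  A hypothetical 
sketch of the cross-client constraint as a test--the real schema and access 
layer were of course nothing this simple:

  #lang scheme

  ;; Toy access layer: records are keyed by (client . user-id), so a
  ;; client can only fetch records stored under its own key.
  (define db (make-hash))

  (define (store! client user-id record)
    (hash-set! db (cons client user-id) record))

  (define (fetch client user-id)
    (hash-ref db (cons client user-id)
              (lambda () (error 'fetch "~a may not access that user" client))))

  ;; Same user exists under two clients...
  (store! 'client-a "user1" "a's record")
  (store! 'client-b "user1" "b's record")

  ;; ...each client sees only its own copy, never the other's.
  (unless (equal? (fetch 'client-a "user1") "a's record")
    (error 'test "client-a should see its own record"))

  (define denied?
    (with-handlers ((exn:fail? (lambda (e) #t)))
      (fetch 'client-c "user1")
      #f))
  (unless denied?
    (error 'test "client-c must not cross-access user1's data"))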

We were fairly strict.  All non-trivial functions had unit tests for every 
failure case as well as the success case(s).  As we ran the test suite 
quite a few times each day--sometimes every few minutes--we did not feel the 
need for a separate "test generator" for "contracts".  Again, we did have 
separate system tests and a stress test.  The stress test was generative.
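
In practice the discipline looked like this toy example (safe-divide is 
invented for illustration, not product code):

  #lang scheme

  ;; One success path, one failure path; both get tested.
  (define (safe-divide n d)
    (when (zero? d)
      (error 'safe-divide "division by zero"))
    (/ n d))

  ;; Success case
  (unless (= (safe-divide 10 2) 5)
    (error 'test "10/2 should be 5"))

  ;; Failure case: a zero divisor must signal an error
  (define failed?
    (with-handlers ((exn:fail? (lambda (e) #t)))
      (safe-divide 1 0)
      #f))
  (unless failed?
    (error 'test "safe-divide must reject a zero divisor"))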

One has to trust the process.  I was worried initially.  When we first started 
we seemed to be spending half to two-thirds of our time writing test cases, 
but this inverted as time went on and we got better at it.  We were usually 
within a day on estimates of time to implement individual user stories.  We 
also spent time re-writing working code to make it clearer what was going 
on.  We knew we were winning when the total lines of code started going down 
as we continued to add features.  [That was close to a year into the 
project, and, boy, did it feel good!]

> Based on the fact you "live on a small island" now I suspect it all
> turned out pretty well! :)

It is actually a pretty big island, but I share it.  ;^)

No project is without its problems, not all of them small.  But in 20 years 
of "pushing bits" (building software products) for companies large and small, 
it was the best-run project I have seen.  It was quite rare in that the 
management actually trusted engineering to make engineering decisions.  
Again, I think the process of developing a unified language to talk about 
"user stories" helped out here.


Cheers,
-KenD



