Testing reports
Stuart McGraw
smcg4191 at frii.com
Fri Apr 13 15:12:43 EDT 2012
On 04/13/2012 09:20 AM, John Layman wrote:
>> Chasing down a single test failure, particularly in system tests (as
>> opposed to unit tests), takes far more time than writing a good test
>> in the first place.
>
> If there were actual data to support such a claim, who could disagree? As
> intuition, however, the claim sounds suspiciously like rationalization. I'm
> reminded of the old saw about the hazards of asking an engineer the time.
> At what probability does it make no sense to provide against an imaginable
> eventuality? Good grief!
>
> An important point the developers of GnuCash need to bear in mind is that
> the economics of the software do not begin and end with them. The time
> required to investigate why a test failed may be dear to the developer,
> personally, but the cost of poor quality multiplied across the user
> community is far more consequential. The point which I'd hope we could all
> agree upon is that a delay in putting some degree of effective testing in
> place cannot be justified on grounds of sparing developers menial work.
I'd like to jump in with two quick comments....
First, I've been using GnuCash for a number of years, both
for my personal finances and for other things (helping out
a non-profit and as executor of an estate), so I think I have
a claim at least equal to that of any other GnuCash user
regarding its future (which in reality is not much :-).
I certainly would like to see tests that provide some protection
against unplanned changes in reports. I would not want to see
such tests if the work involved had a serious negative effect
on other aspects of GnuCash development.
GnuCash's reporting system has always been recognized as
one of its weaker parts. A major redesign of the reporting
system would, I think, be useful. Spending a *lot* of time
developing tests for the current system would be
counter-productive to that [*1]. (And note that I am not
advocating spending *no* time on such tests.)
Second, my own programming experience started in the days
when programmers did not write tests for their own code
(other groups did that). When I did start writing tests
I found exactly what John Ralls has claimed -- that naive
comparisons with known-good output require a lot of extra
work for little benefit. Minor code changes that did not
violate the spec would still alter the output in
spec-compliant ways, and the tests had to be adapted to
each such change. I recall spending a lot of time updating
tests that way, and then more time rewriting them so they
tested only *significant* differences. (Of course with
GnuCash there are no formal specs, but the principle is the same.)
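One lightweight way to make such comparisons less brittle is to
normalize report text before diffing it, so that cosmetic changes
(an extra blank line, re-wrapped whitespace) do not register as
test failures while real content changes still do. A minimal
sketch in Python -- the helper names are my own illustration, not
anything in GnuCash's test suite:

```python
import re

def normalize(report_text):
    # Collapse runs of whitespace within each line and drop blank
    # lines entirely, so only the "significant" content remains.
    lines = (re.sub(r"\s+", " ", line).strip()
             for line in report_text.splitlines())
    return "\n".join(line for line in lines if line)

def reports_match(expected, actual):
    # Compare normalized forms: an extra blank line or different
    # column padding does not count as a difference, but a changed
    # number or label does.
    return normalize(expected) == normalize(actual)
```

What counts as "significant" is of course a judgment call per
report; the point is only that the comparison logic, not the
developer, should absorb the insignificant churn.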
An "any difference should be investigated" philosophy is
appropriate for safety-critical and high-reliability systems
but I think it is too expensive for Gnucash where its cost
will be to slow the development of other needed improvements
and demotivate developers. However, *some* testing, to
avoid serious boo-boos and regressions, would be good.
Like all real-world decisions it involves trade-offs and
subjective judgments.
Bottom line is that I personally would find a "categories"
feature far more valuable than an ironclad guarantee that
there will never be an extra blank line in a report.
My 2 cents.
----
[*1] "counter-productive" in that I would expect a lot of
the details of tests to change with a major report system
redesign (ie, reports would look a lot different), requiring
a *lot* of work to update/rewrite a very comprehensive set
of tests. Of course there may be plusses too *if* the
testing framework is flexible enough to adapt easily to a
new reporting system.
More information about the gnucash-user mailing list