Any tips to improve performance?
Anthony W. Youngman
wol at thewolery.demon.co.uk
Tue Dec 29 12:39:25 EST 2009
In message <1262104001.3035.33.camel at phil-laptop>, Phil Longstaff
<plongstaff at rogers.com> writes
>On Tue, 2009-12-29 at 10:42 -0500, Derek Atkins wrote:
>> Phil,
>>
>> Phil Longstaff <plongstaff at rogers.com> writes:
>>
>> > Many of the reports work by querying the engine for a set of splits
>> > matching a specific filter, then looping and adding the values (all
>> > in scheme). Once the database backend is released in 2.4, this can
>> > be changed to a single SQL query.
>> >
>> > Some preparation for this could begin by adding an appropriate API
>> > to the engine (Account?) which takes a QofQuery for splits and
>> > returns the sum of the splits. This at least creates the abstraction
>> > needed by the scheme code, which can then be replaced.
>>
>> Note that in general the scheme functions are not returning a single
>> gnc_numeric total, but rather a list of <gnc_commodity,gnc_numeric>
>> tuples that sum up the amounts in each commodity. Also note that
>> different reports are going to want to perform different types of
>> currency conversions based on different requirements. For example, for
>> Income and Expense accounts you probably want to compute an exchange
>> rate based on the date of the transaction instead of the date of the
>> report, whereas Asset and Liabilities most likely want to compute
>> exchange rates based on the report date.
>>
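Derek's per-commodity totals can be sketched roughly like this (plain Python with invented split tuples, not the actual GnuCash structures; since gnc_numeric is an exact rational, Fraction is a fair stand-in):

```python
from collections import defaultdict
from fractions import Fraction

# Invented stand-in for engine splits: (commodity mnemonic, exact amount).
splits = [
    ("USD", Fraction(1050, 100)),
    ("EUR", Fraction(200, 1)),
    ("USD", Fraction(-300, 100)),
]

def sum_by_commodity(splits):
    """Return <commodity, total> tuples, one per commodity in the split list."""
    totals = defaultdict(Fraction)
    for commodity, amount in splits:
        totals[commodity] += amount
    return sorted(totals.items())

print(sum_by_commodity(splits))
# [('EUR', Fraction(200, 1)), ('USD', Fraction(15, 2))]
```

Any currency conversion (at transaction date or report date, as Derek notes) would then be a separate pass over those per-commodity totals.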
>
>Yes, exchange rates do throw a kink into it.
>
>> But yes, implementing some of this in C could possibly help improve
>> performance.
>>
>> I'm not sure a SQL statement would work, because it sounds like 2.4 will
>> still default to XML, so the reports cannot (and should not) depend on
>> the use of SQL in the backend.
>
>The scheme code wouldn't want to know about the backend. However,
>somewhere (Account? backend?) there should be a "get me the amount on
>a certain date" or "get me the amount according to a query" function
>which, with an XML backend, would filter and sum, and, with an SQL
>backend, would issue an SQL query so that the SQL engine does the
>filtering and summing.
>
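Phil's abstraction might look something like the following sketch (hypothetical names, plain Python instead of the engine's C, with SQLite standing in for the real SQL backend):

```python
import sqlite3
from fractions import Fraction

class XmlBackend:
    """Filter and sum in application code, as with the XML file backend."""
    def __init__(self, splits):
        self.splits = splits  # list of (account, iso_date, amount) tuples

    def sum_splits(self, account, before):
        return sum((amt for acct, date, amt in self.splits
                    if acct == account and date <= before),
                   Fraction(0))

class SqlBackend:
    """Push the same filter and sum down into the database engine."""
    def __init__(self, splits):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE splits (account TEXT, date TEXT, amount REAL)")
        self.db.executemany("INSERT INTO splits VALUES (?, ?, ?)",
                            [(a, d, float(v)) for a, d, v in splits])

    def sum_splits(self, account, before):
        row = self.db.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM splits "
            "WHERE account = ? AND date <= ?", (account, before)).fetchone()
        return row[0]
```

Report code would call sum_splits() without knowing whether the filtering and summing happened in application code or inside the database engine.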
Trying to say as little as I can without getting on a hobby-horse ...
The basic principle behind XML is almost identical to that of the Pick
family of databases - indeed, if you treat a Pick datafile as an XML
data file and the Pick dictionary as a DTD, they're as near as dammit
the same thing!
Pick is NFNF (non-first-normal-form) but it IS normalised (when done
properly!) and the transformation from a Pick dataset to a relational
dataset that can be queried by SQL is both trivial and mechanically
easy.
Using sensibly-defined dictionaries and indices, a Pick back-end could
be queried very quickly. To quote the Pick FAQ - "SQL optimises the easy
task of finding data in memory; Pick optimises the difficult task of
getting it into memory in the first place".
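That NFNF-to-relational transformation can be illustrated with a toy record (invented data, plain Python; real Pick dictionaries are considerably richer than this):

```python
# A Pick-style record: single-valued fields plus a multivalued field,
# roughly what one invoice with its item list looks like in an NFNF store.
invoice = {
    "id": "INV001",
    "customer": "ACME",
    "lines": [  # multivalued attribute: many lines inside one record
        {"item": "widget", "qty": 3},
        {"item": "gadget", "qty": 1},
    ],
}

def to_relational(record):
    """Flatten one NFNF record into first-normal-form rows for two tables."""
    header = {"id": record["id"], "customer": record["customer"]}
    lines = [{"invoice_id": record["id"], **line}
             for line in record["lines"]]
    return header, lines
```

Each multivalued attribute becomes its own first-normal-form table keyed back to the parent record - which is why the mapping is mechanical.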
My favourite war-story concerns a system ported from UniVerse (Pick) to
Oracle. After six months the consultants doing the port went to
management and said "our wonderful system is now 10% faster than the old
dinosaur". At which point the "dinosaur" looking after the old system
said "and you're *proud* that your twin-Xeon 800 can only outperform a
Pentium 90 by ten percent?"
(btw, it was a complicated query they were discussing, but that six
months was spent optimising the SQL, and I'm guessing the UniVerse query
was thrown together in thirty seconds with no attempt at optimisation.)
Cheers,
Wol
--
Anthony W. Youngman - anthony at thewolery.demon.co.uk