DB design document
Rob Browning
rlb@cs.utexas.edu
16 Dec 2000 09:52:14 -0600
David Merrill <dmerrill@lupercalia.net> writes:
> Please take a look at the latest revision of my design document and
> give me some feedback.
Finally got a chance to glance at it and to catch up on the current
discussion. Most of the "first-pass" points I was going to mention
have already been brought up by others, but I had a few other bits:
* Though we've discussed authorization/encryption, I suspect that it
might end up being a very installation-dependent thing, and something
that we might want to consider abstracting/modularizing so that
gnucash can be customized to accommodate local policy (see the sketch
below).
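For instance (and this is purely hypothetical -- the table and group
names here are made up, not from the design doc), one site might want
to express its policy directly at the DB layer:

  -- hypothetical: per-site access policy handled by the DB itself
  CREATE GROUP gnc_readers;
  GRANT SELECT ON accounts, splits, transactions TO GROUP gnc_readers;
  CREATE GROUP gnc_writers;
  GRANT SELECT, INSERT, UPDATE, DELETE
    ON accounts, splits, transactions TO GROUP gnc_writers;

while another site might handle all of that in the client or in a
proxy -- which is why I'd like the auth bits kept behind an
abstraction rather than baked in.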
* You've made some comments that make it sound like you're focusing on
just requiring PostgreSQL, and relying on some of its particular
features -- though I may have just misunderstood. In any case, the
issue is still relevant. What do we think about that?
We have a few options I can see:
* try to be DB agnostic and only use very standard SQLisms. This
means that there will be limitations in how much work we can
have the DB handle. More of the computation will *have* to be
done in the client.
* choose one DB and go whole hog - use all its clever tricks and
features to tailor the DB very closely to what we need. (I've
sketched a rough contrast of these first two options just after
this list.)
* design an API that's exactly what we need, regardless of the DB
involved, and plan to just have a server-side gnucash daemon
that manages the DB interface, dealing with any limitations of
the DB on the server side before forwarding data back and forth
-- for example, this means that you can pretend like you have a
"rational" implementation. It also means that you can have more
flexibility in other areas because you can perform actions on
the server side. This approach does, however, have plenty of
drawbacks...
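To make the first two options a bit more concrete, here's a rough
contrast, assuming some hypothetical splits(account_guid, value)
table (none of these names come from the design doc):

  -- option 1: stick to standard SQL any backend should accept; the
  -- client composes the queries and does any remaining math itself
  SELECT SUM(value) FROM splits WHERE account_guid = 'xyz';

  -- option 2: lean on PostgreSQL; e.g. push the computation into a
  -- server-side function that every client just calls
  CREATE FUNCTION account_balance(varchar) RETURNS numeric
    AS 'SELECT SUM(value) FROM splits WHERE account_guid = $1'
    LANGUAGE 'sql';
  SELECT account_balance('xyz');

The second is tidier for the clients, but it's also exactly the sort
of thing that ties us to one backend.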
* For now, we'd be fine with just a really good simple data store.
Even an embedded PostgreSQL that could just read/write
accounts/splits/transactions to a local sandbox (something like the
schema sketched below) would be a big win over XML as the primary
storage medium for single users. It looks like you're thinking much
longer-term, and I think that's actually preferable, but I just
wanted to remind people that we could also benefit from a
shorter-term, intermediate solution.
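For what it's worth, "simple" really could be simple here -- a first
cut at a sandbox schema might be no more than something like this
(names and types strictly illustrative, not from the design doc):

  CREATE TABLE accounts (
    guid         CHAR(32) PRIMARY KEY,
    name         VARCHAR(256) NOT NULL,
    type         VARCHAR(32)
  );

  CREATE TABLE transactions (
    guid         CHAR(32) PRIMARY KEY,
    date_posted  TIMESTAMP,
    description  VARCHAR(256)
  );

  CREATE TABLE splits (
    guid          CHAR(32) PRIMARY KEY,
    account_guid  CHAR(32) NOT NULL REFERENCES accounts,
    txn_guid      CHAR(32) NOT NULL REFERENCES transactions,
    value         NUMERIC(21,6) NOT NULL
  );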
* We haven't talked a lot about the overall communication setup, and
that's probably because no one really knows what we want/need yet,
but my mention of the proxy above is a part of that discussion. I
think Linas had always expected that the local engine would
eventually become something more like an active cache of the remote
data so that local operations aren't too slow. Of course that
raises all kinds of synchronization issues, but always fetching
everything remotely doesn't make those completely go away, just
narrows the point of failure. Anyway, overall, we need to be
thinking about which of these setups should be our goal (very rough
sketch):
[ parens group things in a single process, double bars indicate
physical machine boundaries, and dotted lines indicate remote
connections. ]
(server) <-> (gnc_proxy) || <- - - - - -> || (gnc_engine <-> ui)
(server) || <- - - - - -> || (gnc_engine <-> ui)
and we need to be thinking about what we *require* be done where.
I.e. does the DB just serve up raw records or do we require it to be
able to "do the math" in some cases?
Thanks for your work. It's much appreciated.
--
Rob Browning <rlb@cs.utexas.edu> PGP=E80E0D04F521A094 532B97F5D64E3930