sqlite file format, anyone?

Linas Vepstas linas at linas.org
Mon Jun 23 17:03:32 CDT 2003


On Mon, Jun 23, 2003 at 03:24:27PM -0400, Derek Atkins was heard to remark:
> linas at linas.org (Linas Vepstas) writes:
> 
> > > "Setters" as you keep calling them are completely irrelevant.  What
> > 
> > OK. I thought it would make it easier to add the bus. obj.
> 
> Nope.

Ugh, this is certainly an exhausting conversation.  Why would you say
that?

> > > re-write the Query* -> SQL conversion.  I already added comments
> > OK.
> 
> I think this is a MAJOR piece of work!
OK.

> Again, this is a concept, not an actual object name.  Take a look at
> src/backend/file/io-gncxml-v2.h for an example of a plug-in API.  Take
> a look at src/business/business-core/file/*.c for examples of how this
> API is used.

wc *.c tells me that there are 5 KLOC of code here.  That's a lot of
code.  That's more than half the size of the current pg backend.

Maybe I'm missing something, but why couldn't one implement this 
with a 'setter' concept?  Naively, it would then cost 20-50 LOC
per object, not 500 LOC.  Right?  

I'm looking at gnc-job-xml-v2.c.  Seems like job_dom_tree_create()
could have been table-driven. set_string() could have been a generic
macro.  I don't understand why job_name_handler() and job_id_handler()
are even needed. gnc_job_end_handler() seems to be 100% generic.

I'm not trying to say that you should re-write the code.
I'm trying to say that with a 'setter' concept, you would provide
a table specifying the object name, the object fields, the field types,
and the matching XML name for each.  This table would be 5-10 lines of
code.  Then I claim that one could write 100% generic code that would
do the XML I/O based on this table.
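To make the idea concrete, here is a minimal sketch of what such a
per-object table and generic setter might look like.  All the names here
(FieldDesc, GncJobStub, generic_set, the "job:*" element names) are
invented for illustration; this is not actual GnuCash code, just the
shape of the table-driven approach being argued for.

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef enum { FIELD_STRING, FIELD_INT64 } FieldType;

/* One row per field: XML element name, type, and offset into the object. */
typedef struct {
    const char *xml_name;
    FieldType   type;
    size_t      offset;
} FieldDesc;

/* Stand-in for the real job object; fixed buffers keep the sketch simple. */
typedef struct {
    char      name[64];
    char      id[64];
    long long active;
} GncJobStub;

/* The whole per-object cost: one table, a few lines. */
static const FieldDesc job_fields[] = {
    { "job:name",   FIELD_STRING, offsetof(GncJobStub, name)   },
    { "job:id",     FIELD_STRING, offsetof(GncJobStub, id)     },
    { "job:active", FIELD_INT64,  offsetof(GncJobStub, active) },
    { NULL, FIELD_STRING, 0 }
};

/* One generic routine replaces job_name_handler(), job_id_handler(), etc.
 * Returns 1 if the element was recognized and set, 0 otherwise. */
static int generic_set(void *obj, const FieldDesc *table,
                       const char *xml_name, const char *value)
{
    const FieldDesc *f;
    for (f = table; f->xml_name; f++) {
        if (strcmp(f->xml_name, xml_name) == 0) {
            char *base = (char *)obj + f->offset;
            if (f->type == FIELD_STRING)
                strncpy(base, value, 63);
            else
                *(long long *)base = atoll(value);
            return 1;
        }
    }
    return 0;
}
```

The generic XML parser would then just walk the DOM and call
generic_set() for each element; the only per-object code left is the
table itself.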

I claim you can do the 'setter' concept with SQL as well.  In fact,
it's already done with the m4 code for some things (not everything).
I wish it weren't done with m4.
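The same table could drive the SQL side.  As a sketch (again with made-up
names, and with value escaping waved away for brevity), a generic routine
could build an INSERT from a column/value table instead of the m4 macros:

```c
#include <stdio.h>
#include <string.h>

/* One row per column; by convention the column name matches the
 * parameter name, as discussed below. */
typedef struct {
    const char *col_name;
    const char *value;   /* assumed pre-escaped, for brevity */
} SqlField;

/* Build "INSERT INTO tab (a, b) VALUES ('x', 'y');" from the table. */
static void build_insert(char *out, size_t n, const char *table,
                         const SqlField *fields, int nfields)
{
    int i;
    size_t len = snprintf(out, n, "INSERT INTO %s (", table);
    for (i = 0; i < nfields; i++)
        len += snprintf(out + len, n - len, "%s%s",
                        i ? ", " : "", fields[i].col_name);
    len += snprintf(out + len, n - len, ") VALUES (");
    for (i = 0; i < nfields; i++)
        len += snprintf(out + len, n - len, "%s'%s'",
                        i ? ", " : "", fields[i].value);
    snprintf(out + len, n - len, ");");
}
```

UPDATE and SELECT statements would come from the same table, so each new
business object costs one table, not a file of hand-written SQL glue.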

> Each Backend needs to create its own plugin API.  I don't know what it

Ugh.  That's exactly what the m4 macros started out being.  They *were*
the plugin API.  They didn't go far enough, and they shouldn't have
been in m4.  As I said before, what made the postgres backend large
was the multi-user serialization issues.  If you go back and remove
the multi-user code, it'll be smaller than the bus-obj-xml backend.
If you also remove the 'generic' code to handle, e.g., kvp, it's smaller
still.

I'm thinking I don't understand what you want.  I don't understand
what this plug-in API is, or how it can possibly make it easier to
add new objects.  (Where, by 'easier', I mean 'fewer per-object LOCs'.)

> I dont think it's that little work.  Maybe it is, but I really dont
> think so.

Bull; don't call my bluff.  I could get it working on libdbi tonight,
starting after dinner and finishing before midnight.  It would take one
more day to debug on mysql.  There are *four* different backends in
there.  The pg 'single-user' backend doesn't use events, so events don't
even need to be stubbed out.  Checkpoints need to be stubbed out; this
can be done effectively by setting a very, very high checkpoint value
(e.g. 100000).  I anticipate the mysql debugging would mostly involve
date and time handling, and this could be ugly.

I'm starting to get irritated just enough to go ahead and do this.
What shall it be?  libdbi and MySQL?

> I would also say that the column names should map directly to the
> parameter names.  However there are always special cases.

yeah, and the special cases are what's interesting ...

--linas

-- 
pub  1024D/01045933 2001-02-01 Linas Vepstas (Labas!) <linas at linas.org>
PGP Key fingerprint = 8305 2521 6000 0B5E 8984  3F54 64A9 9A82 0104 5933
