Query Integration: access to Query Internals?
Derek Atkins
warlord@MIT.EDU
01 Jun 2002 00:46:17 -0400
I'm finishing up the integration of the new Query Subsystem. I've
got everything done and working except for two things:
* query->scm and scm->query: the options code wants to use these
  functions to "store" queries in options across loads of gnucash.
  This way you can save a query for a report and, when you re-load
  gnucash, the report can be restored (by restoring the query). A
  sketch of what such an API might look like follows this list.
* SQL (Postgres) Backend: this backend specifically parses the
  internals of the query and converts them into the appropriate SQL
  query.
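For concreteness, here is roughly the shape of the conversion API for
the first item (a minimal sketch in C; the names and signatures are
illustrative, not final):

    /* Sketch only -- illustrative names, not a final API.
     * Serialize a Query into a scheme expression that the options
     * code can store, and rebuild an equivalent Query from it. */
    #include <libguile.h>   /* SCM */
    #include "Query.h"      /* Query */

    SCM gnc_query2scm (Query *q);           /* query->scm */
    Query * gnc_scm2query (SCM query_scm);  /* scm->query */

Whatever the final names, both functions have to serialize every
predicate's data -- which is exactly the internal-format knowledge at
issue below.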
What both of these pieces have in common is that they currently need
to understand the internal structure and formats of the Query. The
problem is that this seems to run counter to the new model of a
"registry of data types". Currently the Query processing code does
not even know the internal format of all the data types (where a core
data type is something like "String", "Timespec", "GUID", etc.).
Indeed, a new gnc-module could theoretically define a new data-type
and add it to the query engine. It just registers the type and it is
automatically plugged into the system. If that happens, how can we
easily get the various backends to work if they need to know about
the internals of that type?
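To make the tension concrete, type registration looks roughly like
this (a sketch with made-up names; the real interface differs in
detail):

    /* Sketch only -- names are illustrative.  A gnc-module adds a
     * new core data type by handing the query engine the operations
     * needed to evaluate predicates against objects. */
    #include <glib.h>        /* gpointer */
    #include "QueryCore.h"   /* PredData_t; hypothetical header */

    typedef int (*QueryPredicateFn) (gpointer object_data, PredData_t pd);

    void gnc_query_register_core_type (const char *type_name,  /* "string", ... */
                                       QueryPredicateFn predicate);

    /* Note what's missing: nothing here tells the Postgres or scm
     * backends what a PredData_t for the new type looks like inside. */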
I see a few potential approaches to solving this problem.
1) Assume that the set of core data types is fixed and just
break the (new) abstraction. This would basically mean that you
could not add new core-types to the query engine. This may or may
not be an issue.
2) Provide a new registry, where each data type defines the
appropriate conversions for each backend (a rough sketch of this
appears after the list). This would basically mean
that for each backend you would need to define an interface and
implement the code that converts a "PredData_t" to whatever the
backend requires. If you add a new backend, you need to provide a
new set of converters for each existing data-type; if you add a new
data-type you need to provide a new set of converters for each
backend.
3) Provide some "common export format" that the Query can be
translated into, and require all backends to convert between this
"common export format" and their own internal formats. For example, we
could define a standard "scheme" format for a query and require all
backends to be able to convert from this common scheme format to
their internal format. Alternatively we could define this format
to be "SQL" and require all backends to be able to parse (or
re-parse) the SQL.
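For what it's worth, the converter registry of approach #2 could look
roughly like this (again, illustrative names only):

    /* Sketch of approach #2 -- illustrative only.  Each
     * (backend, core-type) pair registers a converter from a
     * PredData_t to whatever the backend wants; the Postgres
     * backend's converters would return SQL predicate fragments. */
    #include <glib.h>
    #include "QueryCore.h"   /* PredData_t; hypothetical header */

    typedef gpointer (*QueryPredConverterFn) (PredData_t pd);

    void gnc_query_register_converter (const char *backend_name, /* "postgres" */
                                       const char *type_name,    /* "string"   */
                                       QueryPredConverterFn convert);

    /* Cost: (backends x types) converters -- every new backend or
     * core-type fills in another row or column of that matrix. */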
While I'm sort of leaning towards approach #3, I'm not sure whether
it's the right thing. That's what this email is about -- to begin a
discussion about the right approach.
At some level, the backends need to be all-knowing -- they do sort of
need to know how to handle all the basic data types. On another
level, however, I'd like even the backends to be "pluggable". As it
stands now, a gnc-module can add objects to the XML backend. Indeed,
that's how the business objects are implemented! I'd like to add this
same pluggable functionality to, say, the Postgres backend.
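To give a sense of what "pluggable" means here, the XML backend's
object hook is conceptually something like the following (a loose
sketch, not the actual interface):

    /* Sketch only -- the real XML-backend plug-in interface differs.
     * A gnc-module registers read/write handlers for its object
     * type; the backend invokes them when it meets that object. */
    #include <glib.h>

    typedef gpointer (*XmlReadFn)  (gpointer xml_node);
    typedef gpointer (*XmlWriteFn) (gpointer object);

    void gnc_xml_register_object (const char *object_name,  /* e.g. "gnc:Invoice" */
                                  XmlReadFn   read_fn,
                                  XmlWriteFn  write_fn);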
I guess the question is whether it's reasonable to assume that, say,
the backends can convert a standard query format into their own
internal representation. For example, presume something like:
'(<gnc:query>
  Split   ; <- what we're searching for
  ;; List of "OR" lists of "AND" terms:
  ((((trans description) #f
     (string compare-equal string-match-normal #f "foo"))
    ((account code) #t
     (string compare-neq string-match-caseinsensitive #f "300")))
   (((value) #f (numeric compare-gt amt-sign-match-credit (10000 . 100)))))
  ; Sort order...
  ; max results
  ...)
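To give a feel for what approach #3 asks of a backend, the Postgres
backend would walk the terms of that common format and emit its own
representation. A toy converter for one string term might boil down
to this (purely illustrative C; real code would walk the scheme list
via guile):

    /* Sketch only -- illustrative names, not actual backend code.
     * Turn one common-format string term into a SQL predicate
     * fragment; only the backend knows this mapping, the query
     * engine never sees any SQL. */
    #include <string.h>
    #include <glib.h>

    static char *
    pgend_string_term_to_sql (const char *column,  /* e.g. "description"   */
                              const char *how,     /* e.g. "compare-equal" */
                              const char *value)   /* e.g. "foo"           */
    {
      const char *op = (strcmp (how, "compare-equal") == 0) ? "=" : "<>";
      /* NB: real code must escape 'value' before splicing it in. */
      return g_strdup_printf ("%s %s '%s'", column, op, value);
    }

So the (trans description) term above would come out as something
like "description = 'foo'".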
This issue is pretty much the last thing holding up the Query Integration.
(My other alternative is to ignore the problem and break the SQL
Backend, but that would be rude ;)
Comments? Suggestions?
-derek
--
Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
Member, MIT Student Information Processing Board (SIPB)
URL: http://web.mit.edu/warlord/ PP-ASEL-IA N1NWH
warlord@MIT.EDU PGP key available