Query Integration: access to Query Internals?

Linas Vepstas linas@linas.org
Mon, 3 Jun 2002 23:32:05 -0500


Hi,


On Sat, Jun 01, 2002 at 12:46:17AM -0400, Derek Atkins was heard to remark:
> Indeed, a new gnc-module could theoretically define a new data-type
> and add it to the query engine.  It just registers the type and it is
> automatically plugged into the system.  If that happens, how can we
> easily get the various backends to work if they need to know about
> the internals of that type?
>
> I see a few potential approaches to solve this problem.
>
> 2) Provide a new registry, where each data type defines the
>    appropriate conversions of each backend.  This would basically mean
>    that for each backend you would need to define an interface and
>    implement the code that converts a "PredData_t" to whatever the
>    backend requires.  If you add a new backend, you need to provide a
>    new set of converters for each existing data-type; if you add a new
>    data-type you need to provide a new set of converters for each
>    backend.
>
> 3) Provide some "common export format" that the Query can be
>    transposed into, and require all backends to convert between this
>    "common export format" and its internal format.  For example, we
>    could define a standard "scheme" format for a query and require all
>    backends to be able to convert from this common scheme format to
>    their internal format.  Alternatively we could define this format
>    to be "SQL" and require all backends to be able to parse (or
>    re-parse) the SQL.

I'm not sure I understand the difference between 2 and 3.  After all,
with proposal 3, a backend still needs to know how to store and load
the newly defined type (and, for SQL, how to build queries with it).
So I don't see how 3 is better, or how it avoids any of the work
needed for 2.
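
To make the cost of option 2 concrete, here is a rough sketch of what
such a registry could look like in C.  None of these names exist in
the engine today (PredData_t is the only thing borrowed from it); it's
just meant to show the shape: each (data type, backend) pair registers
a converter that turns the predicate data into whatever that backend
wants.

    #include <string.h>
    #include <glib.h>

    /* Stand-in for the engine's predicate data; the real type lives
     * in the query engine.  Everything below is hypothetical. */
    typedef struct _preddata PredData_t;

    /* A converter renders predicate data into whatever one backend
     * needs: a SQL fragment, an XML node, whatever. */
    typedef gpointer (*PredConverter) (const PredData_t *pd);

    typedef struct
    {
      const char    *data_type;   /* "string", "numeric", "punt" ... */
      const char    *backend;     /* "postgres", "xml", ...          */
      PredConverter  convert;
    } ConverterEntry;

    static GList *converter_registry = NULL;

    /* A gnc-module (or a backend) calls this to plug in one
     * conversion. */
    void
    gnc_register_pred_converter (const char *data_type,
                                 const char *backend,
                                 PredConverter convert)
    {
      ConverterEntry *e = g_new0 (ConverterEntry, 1);
      e->data_type = data_type;
      e->backend = backend;
      e->convert = convert;
      converter_registry = g_list_prepend (converter_registry, e);
    }

    /* Backends call this when they hit a type they don't know. */
    PredConverter
    gnc_lookup_pred_converter (const char *data_type,
                               const char *backend)
    {
      GList *node;
      for (node = converter_registry; node; node = node->next)
      {
        ConverterEntry *e = node->data;
        if (!strcmp (e->data_type, data_type) &&
            !strcmp (e->backend, backend))
          return e->convert;
      }
      return NULL;
    }

The N-backends-times-M-data-types cost you describe is right there in
the registration call: a new backend means one more converter per
existing data type, and vice versa.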


> At some level, the backends need to be all-knowing -- they do sort of
> need to know how to handle all the basic data types.  On another
> level, however, I'd like even the backends to be "pluggable".  As it
> stands now a gnc-module can add objects to the XML backend.  Indeed,
> that's how the business objects are implemented!  I'd like to add this
> same pluggable functionality to, say, the Postgres backend.

Sure.
(I'm not sure why I'm replying to this email right now; I'm tired and my
head is foggy... but you said you wanted my reply so...)
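
On the pluggable-backend point, I'd guess it ends up looking much like
the registry above: the module hands each backend a little vtable for
its new object type.  Again, purely hypothetical names, just to show
the shape:

    #include <glib.h>

    /* Hypothetical per-object handlers a gnc-module could hand a
     * backend.  The XML backend already does something in this
     * spirit for the business objects; Postgres does not (yet). */
    typedef struct
    {
      const char *obj_name;                      /* e.g. an invoice */
      void (*store) (gpointer be, gpointer obj); /* write one       */
      void (*load)  (gpointer be, gpointer obj); /* read one        */
    } BackendObjectHandler;

    static GHashTable *pgend_handlers = NULL;

    /* Called by the module at load time, so the Postgres backend
     * can deal with objects it was never compiled to know about. */
    void
    pgend_register_object_handler (const BackendObjectHandler *handler)
    {
      if (!pgend_handlers)
        pgend_handlers = g_hash_table_new (g_str_hash, g_str_equal);
      g_hash_table_insert (pgend_handlers,
                           (gpointer) handler->obj_name,
                           (gpointer) handler);
    }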

> I guess the question is whether it's reasonable to assume that, say,
> the backends can convert a standard query framework to an internal
> version.  For example, presume something like:
>
> '(<gnc:query>
>   Split ; <- what we're searching for
>   ;; List of "OR" Lists of AND terms:
>   ((((trans description) #f (string compare-equal string-match-normal #f
>                              "foo"))
>     ((account code) #t (string compare-neq string-match-caseinsensitive #f
>                         "300")))
>    (((value) #f (numeric compare-gt amt-sign-match-credit (10000 . 100)))))
>   ; Sort order...
>   ; max results
>   ...)

Yes, the above can be converted to SQL.  It would be somewhat tedious,
and I wouldn't volunteer to write that code.  But this example also
doesn't illustrate a new data type, or the plugin that would handle
it.  Suppose the query also contained a term like

    (((punt) #f (ball compare-is-goal check-yardage ("20" . "down")))))

The parser needs to know that 'punt' is a newly defined type and call
the plugin for it, which in turn needs to return the string

gncFootball.yardage>=20 AND gncDown=4

so that it can be inserted into the xxx part of a standard query:

SELECT * FROM gncSplit WHERE gncTransaction.description='foo' AND
gncAccount.code='300' AND gncSplit.value=10000 AND xxxxxx

I mean I guess this is obvious, but...
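
Spelled out anyway, the splice could look something like this
(punt_to_sql() is a made-up plugin hook, and I'm leaning on glib for
the string handling):

    #include <glib.h>

    /* Made-up plugin hook for the 'punt' type: render the term as
     * the SQL fragment quoted above. */
    static char *
    punt_to_sql (const char *yardage, int down)
    {
      return g_strdup_printf (
          "gncFootball.yardage >= %s AND gncDown = %d",
          yardage, down);
    }

    /* The SQL backend builds the part of the WHERE clause it
     * understands, then splices in whatever the plugin hands back. */
    static char *
    build_split_query (void)
    {
      char *punt_clause = punt_to_sql ("20", 4);
      char *sql = g_strdup_printf (
          "SELECT * FROM gncSplit WHERE "
          "gncTransaction.description = 'foo' AND "
          "gncAccount.code = '300' AND "
          "gncSplit.value = 10000 AND %s",
          punt_clause);
      g_free (punt_clause);
      return sql;
    }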

> (My other alternative is to ignore the problem and break the SQL
> Backend, but that would be rude ;)

Acckk!
Surely not!  But I guess you can't convert the new query into the
old-style query objects that the backend currently uses?

--linas


-- 
pub  1024D/01045933 2001-02-01 Linas Vepstas (Labas!) <linas@linas.org>
PGP Key fingerprint = 8305 2521 6000 0B5E 8984  3F54 64A9 9A82 0104 5933
