Access Controls

cbbrowne@ntlug.org
Wed, 27 Dec 2000 17:16:55 -0600 (CST)


> It is unclear that the security server and the engine need to be on
> the same machine; so long as the security server is associated with
> the datastore, it should suffice.  E.g., if you don't have access to
> data, you won't be able to retrieve it from the datastore in the first
> place.  From a security standpoint, you want the access control checks
> to be as close to the "object" as possible.
> 
> I'm not convinced you want the "engine" to be across the network from
> the UI.  I've always seen the engine to be a data cache and
> manipulation tool, talking across the network to the datastore.
> Perhaps I have a slanted view of the engine's role in a distributed
> system?

I would consider it a "Good Thing" for the security server and the
transaction engine to stay close, as well as for the engine and the DB
server to stay close.

It should be clear why the transaction engine and the DB engine should
stay close; they will need to communicate quite a lot of data in order
to negotiate the parts that are to be submitted to the GUI.  A "for
instance" would be that if the DB does not support computing
sums/balances internally, the transaction engine will need to do so,
and will have to talk heavily with the DB.

The argument for the transaction engine and security server staying
close may be framed similarly: if they need to communicate every time
a data request is put together, then for them _not_ to be local means
a lot of remote communication, with costs to efficiency as well as to
security.  If both are local, and they can communicate using domain
sockets or some such thing, there is no network channel to exploit.
If the security server is remote, then securing the communication
channel is substantially more challenging and slower.

I guess this still leaves some question of precisely _what_ the security
checks are; it might be a reasonable thing to have multiple security
servers on different hosts...

But certainly the above view is consistent with the way multi-tier
client/server apps such as SAP R/3 work:  You have some set of critical
server processes that sit on the same server as the database, while
application server processes are readily distributable to multiple
servers; the users run a GUI that sets up a secure connection to the
application server, but things are basically pretty open otherwise.
The connection to the outside is through a firewall; nothing else
gets in.

Note that the "app servers" sit on the same local ethernet segment
[or similar] as the DB; talking to the DB is a critical resource...

There are two quite different directions here:

a) My view is of the system being more of a "transaction processor":
GnuCash servers would mediate access to data, with clients requesting
the transactional information they want.

b) Your view seems more oriented towards the notion of the UI requesting
data and being aware of what is in the database.  

I don't reject that, but would suggest the design start by constraining
access: define the sorts of operations/APIs we _know_ to be useful, and
when those prove inadequate, that is the time to "punt" and use generic
database access schemes.