Access Controls

Derek Atkins warlord@MIT.EDU
28 Dec 2000 09:37:14 -0500


cbbrowne@ntlug.org writes:

> I would consider it a "Good Thing" for the security server and the
> transaction engine to stay close, as well as for the engine and the  DB
> server to stay close.

It is not the case that the security server per se need be close to
the UI (I'm still unclear on the transaction engine).  I'll explain
why in a moment.

> It should be clear why Transaction engine and DB engine should stay close;
> they will need to communicate quite a lot of data in order to negotiate
> the parts that are to be submitted to the GUI.  A "for instance" would
> be that if the DB does not support computing sums/balances internally,
> the transaction engine will need to do so, and will have to talk heavily
> with DB.

I think that the DB will at least provide some amount of checkpointing
so that the transaction engine would not require ALL transactions from
the beginning of time in order to keep a running balance.  Also, I'd
like to build a design where the transaction engine and UI can cache
data from the DB and work on it "locally".  That way you're not
falling back to the DB for every request.  Based on that, the DB need
not be as close to the transaction engine, because you only need to
talk to the DB at "startup" time and when changes (including
additions) are committed; not necessarily at every UI request.
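The startup-then-cache pattern above can be sketched roughly as follows. This is an illustrative sketch, not GnuCash code; `FakeDB`, `load_account`, and the checkpoint layout are hypothetical names standing in for whatever the real datastore provides.

```python
# Sketch: a transaction engine that loads a checkpointed balance plus
# the transactions since that checkpoint in ONE round trip at startup,
# then answers balance queries from its local cache, touching the DB
# again only when a change is committed.

class FakeDB:
    """Stand-in for the remote datastore; all names are hypothetical."""
    def __init__(self):
        self.checkpoint_balance = 1000          # balance as of the checkpoint
        self.txns_since_checkpoint = [-50, 200] # deltas since the checkpoint
        self.committed = []

    def load_account(self):
        # One round trip at "startup": checkpoint + deltas since it,
        # NOT all transactions from the beginning of time.
        return self.checkpoint_balance, list(self.txns_since_checkpoint)

    def commit(self, txn):
        # One round trip per committed change.
        self.committed.append(txn)


class TransactionEngine:
    def __init__(self, db):
        self.db = db
        balance, txns = db.load_account()   # talk to the DB at startup
        self.balance = balance + sum(txns)  # running balance, kept locally

    def current_balance(self):
        return self.balance                 # answered from cache: no round trip

    def add_transaction(self, amount):
        self.balance += amount              # update the local cache
        self.db.commit(amount)              # push only the change back


engine = TransactionEngine(FakeDB())
print(engine.current_balance())   # 1150 = 1000 checkpoint + (-50 + 200)
engine.add_transaction(-100)
print(engine.current_balance())   # 1050; the DB saw one commit, no re-reads
```

The point of the sketch is the traffic pattern: one read at startup, one write per commit, and zero round trips for ordinary UI queries in between.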

> The argument for transaction engine and security server staying close may
> be framed similarly; if they need to communicate every time a data request
> is put together, then for them _not_ to be local means both that there
> is a lot of remote communications, with the costs to efficiency as well
> as to security.  If both are local, and they can communicate using domain
> sockets or some such thing, there is not a security exploit.  If the
> security server is remote, then securing the communication channel is
> substantially more challenging and slower.

We definitely cannot assume local communication.  The whole point of
the exercise was to define a distributed system.  If you want to
assume locality, then you might as well assume single user and embed
the database within GnuCash.

If, however, we're looking for a multi-user system, then we have to
assume that the DB and the UI will be on different hosts.  Most likely
the transaction engine will live with the UI.  In that situation, you
want the security server to live with the DB, but allow the security
server to deliver access control 'hints' to the UI to let a conforming
UI maintain those controls.

Here's an example to explain what I mean.  Let's assume that you have
read access to some data, but you do not have change or write access.
Now, let's assume that the security server is "with the UI".
Obviously you could read the data because you are authorized to do so.
But remember that GnuCash is OpenSource... what happens if you
recompile your UI/Engine to ignore the security controls (it _IS_
sitting on your own machine, even though you don't control the
database)?  Then you could go and modify the data and write it back to
the database.  Unless the security server is sitting at the database,
there is no protection against this attack.

In other words, you are trying to protect the data.  That implies you
need to secure the data store, which is where the security server
needs to live.  Here's an example of how this would work.  You read
some data and the security server lets you have it (because you are
authorized).  It also returns ACL information which informs you that
your data is read-only.  Your "conforming" UI would not let you make
changes to the data.  However, if you tried to hack your UI and make
changes, the data store would not let you do so, at least not back to
the original place.
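A minimal sketch of that read-hint/write-check flow, with the security check living at the datastore (all class and method names here are hypothetical, chosen for the example):

```python
# Sketch: the security server lives with the datastore.  Reads return
# the data PLUS an ACL "hint" for the UI; writes are checked again at
# the store, so a recompiled client that ignores the hint still fails.

class Datastore:
    def __init__(self):
        self.records = {"acct-1": "opening balance 500"}
        # ACLs keyed by (user, record): the set of permitted operations.
        self.acls = {("alice", "acct-1"): {"read"}}

    def read(self, user, key):
        if "read" not in self.acls.get((user, key), set()):
            raise PermissionError("read denied")
        # Hand back the data together with the ACL hint, so a
        # conforming UI can grey out editing up front.
        return self.records[key], self.acls[(user, key)]

    def write(self, user, key, value):
        # Enforced HERE, not in the client: a hacked UI that skips its
        # local checks still cannot get past this test.
        if "write" not in self.acls.get((user, key), set()):
            raise PermissionError("write denied")
        self.records[key] = value


store = Datastore()
data, acl = store.read("alice", "acct-1")   # read succeeds, hint says read-only
assert "write" not in acl                   # conforming UI disables editing
try:
    store.write("alice", "acct-1", "tampered")   # hacked UI tries anyway
except PermissionError:
    print("write refused at the datastore")
```

The ACL hint is a courtesy for well-behaved clients; the `write` check at the store is the actual security boundary.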

Obviously there is nothing to stop someone with read-only access from
copying that data elsewhere and then making changes there.  But I
believe that to be out of scope.

> I guess this still leaves some question of precisely _what_ the security
> checks are; it might be a reasonable thing to have multiple security
> servers on different hosts...

I agree that we need to come up with the security checks.  In
particular, we need to come up with a matrix where each "controlled
object" is on one axis, and the list of all operations is on the
other.  I don't yet have a full grasp of all the operations that need
to happen.
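Such a matrix might look like the sketch below. The objects and operations listed are placeholders, not a settled design; the point is only the shape of the structure.

```python
# Sketch of the access-control matrix: controlled objects on one axis,
# operations on the other.  A role is then a filled-in matrix.

operations = ["read", "create", "modify", "delete"]
objects = ["Account", "Transaction", "Split", "Price"]

# Example role: a read-only auditor may read everything, change nothing.
auditor = {obj: {op: (op == "read") for op in operations}
           for obj in objects}

def check(role, obj, op):
    """Look up whether this role permits this operation on this object."""
    return role.get(obj, {}).get(op, False)

assert check(auditor, "Transaction", "read")
assert not check(auditor, "Transaction", "modify")
```

Filling in the real object and operation lists is exactly the open design work described above.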

> open otherwise.  The connection to the outside is through firewall;
> nothing else gets in.

As I stated before, never trust just a firewall.  It's never secure
enough.

> a) My view is of the system being more of a "transaction processor;"
> GnuCash servers would mediate access to data and request What Transactional
> Information They Want.
> 
> b) Your view seems more oriented towards the notion of the UI requesting
> data and being aware of what is in the database.  

I don't see these two as being any different.  I view the Database as
a datastore, and the transaction engine requests data out of the store
when it wants it (just like it could request data out of the on-disk
file, assuming it didn't cache everything in memory).

I think we're actually in agreement here, but we're just talking past
each other.  I also view the distributed system as a transaction
processor, where the UI would request what transactions it wants from
the database, and get those transactions back.  The UI need never know
anything about the database (we're not sending SQL over the network).
Instead, it uses the Query API, for example, or other well-defined
APIs to obtain data from the datastore.
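To illustrate the distinction, here is a rough sketch of a structured query crossing the wire instead of SQL. This is not the actual GnuCash Query API; `Query` and `run_query` are hypothetical names for the example.

```python
# Sketch: the UI builds a constrained query OBJECT; only the server
# side translates it into whatever the datastore speaks.  No SQL (and
# no knowledge of the schema) ever leaves the server.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Query:
    account: str
    date_from: Optional[str] = None   # ISO dates, compared lexically here
    date_to: Optional[str] = None

def run_query(datastore, q):
    # Server side: validate the constrained query and do the lookup;
    # the client never sees database internals.
    txns = datastore.get(q.account, [])
    if q.date_from:
        txns = [t for t in txns if t["date"] >= q.date_from]
    if q.date_to:
        txns = [t for t in txns if t["date"] <= q.date_to]
    return txns

store = {"Expenses:Food": [
    {"date": "2000-12-01", "amount": -30},
    {"date": "2000-12-20", "amount": -45},
]}
result = run_query(store, Query("Expenses:Food", date_from="2000-12-10"))
print(result)   # only the Dec 20 transaction matches
```

Because the query is a fixed data structure rather than free-form SQL, the server can check it against the security matrix before touching the datastore at all.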

> I don't reject that, but would suggest we try to have the design start by
> constraining access by defining the sorts of operations/APIs we _know_
> to be useful; when those prove inadequate, that is the time to "punt"
> and use generic database access schemes.

We still have a well-defined interface/API (indeed, that's why I'm
pushing for an RPC- or CORBA-based system: it constrains access to a
known, well-defined set of APIs).

As I said, I think we're in agreement here.

-derek

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord@MIT.EDU                        PGP key available