client-server

Derek Atkins warlord@MIT.EDU
27 Dec 2000 12:26:24 -0500


David Merrill <dmerrill@lupercalia.net> writes:

> > OK.  So the question I have is "So what?"  HTTP is slower than a UDP
> > protocol because it's TCP (that's the basic gist of this right?).  So
> > what?  Is it too slow for our uses?  No one knows because it hasn't
> > been tested.  My bet: nope not anywhere near too slow.
> > 
> > Now.  What's simpler?  What is more easily understandable?  What's
> > more universal?  Those are the questions that seem more important to
> > me.
> 
> I think James is right here. While there may be a slight difference in
> speed, it is unlikely to be very significant. Robustness and
> simplicity seem more important than raw speed to me.
> 
> Of course, I know little of what you are talking about, so I can't
> debate the fine points of one approach over another as you are. It
> just seems you are so focused on the speed issue you aren't
> considering other factors.

Actually, I was looking at RPC/Corba/etc. over TCP, not over UDP.  The
benefits of using RPC or Corba over, e.g., HTTP are manifold:

	1) Programmatic interfaces.  Using RPC or Corba you can write
           an IDL that defines the data structures and function calls
           (methods) that are invoked across the network.  You are,
           in essence, building your own protocol based upon the
           programmatic needs of your system.  You use a pre-defined
           transport mechanism and infrastructure that is well used
           and tested, so you only need to worry about your own IDL
           and handler functions.

	2) HTTP was designed for document transfers, not RPCs.  What
           we really want to do is more closely related to RPCs than
           to document requests.  HTTP was also designed around a
           single request-response per session.  Granted, HTTP/1.1
           allows persistent connections (multiple requests and
           responses), but it is unclear how a CGI would interface
           with that.  I certainly have never written a long-lived
           CGI.  Also, HTTP was not designed for arbitrary function
           calls; you not only have to design your own "IDL" (not
           that HTTP supports IDLs) but you also have to design your
           own transport and marshalling system (for both requests
           and responses).
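To make (1) concrete, here is the sort of Sun RPC IDL one might
write for a small piece of our service.  The program name, numbers,
and fields are all invented for illustration; rpcgen would generate
the client stubs, server skeleton, and XDR marshalling from a file
like this:

```
/* account.x -- hypothetical IDL sketch; names and numbers invented */

struct account_req {
    int account_id;
};

struct account_resp {
    int    status;
    double balance;
};

program ACCOUNT_PROG {
    version ACCOUNT_VERS {
        account_resp ACCOUNT_LOOKUP(account_req) = 1;
    } = 1;
} = 0x20000001;
```

The point is that the wire format and the dispatch machinery fall out
of this one file; we only write the handler for ACCOUNT_LOOKUP.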
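And to illustrate the flip side in (2): with HTTP you end up
hand-rolling the marshalling that an IDL compiler would give you for
free.  A minimal C sketch, assuming a made-up "lookup" call and a CGI
at /cgi-bin/rpc (both names invented), and ignoring the response
unmarshalling you would also have to write:

```c
#include <stdio.h>

/* Hand-rolled request marshalling over HTTP: every field name, the
 * encoding, and the HTTP framing itself are ours to define and keep
 * in sync with the server -- exactly the work rpcgen or a Corba IDL
 * compiler does for us.  Returns the number of bytes written. */
int marshal_lookup_request(char *buf, size_t len,
                           const char *host, int account_id)
{
    char body[64];
    int blen = snprintf(body, sizeof body,
                        "method=lookup&account_id=%d", account_id);

    return snprintf(buf, len,
                    "POST /cgi-bin/rpc HTTP/1.0\r\n"
                    "Host: %s\r\n"
                    "Content-Type: application/x-www-form-urlencoded\r\n"
                    "Content-Length: %d\r\n"
                    "\r\n"
                    "%s", host, blen, body);
}
```

That is just one call with one integer argument; multiply by every
method and data structure in the system and the appeal of a generated
marshalling layer is obvious.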

Just to play devil's advocate, let me throw another log on the fire:
what we really want here is a distributed file system.  We want
locking and access-control semantics that allow multiple clients to
access the data store.  Why not just split the data store into
multiple "files" and require a network file system that lets us lock
the files (and directories) when we need to modify data?  If the
file system can encrypt data across the network, doesn't that solve
our problem, too?
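As a sketch (not a proposal): the locking half of that is already in
libc via POSIX advisory record locks, and a network file system would
just have to honor them across clients.  The helper name here is
invented for illustration:

```c
#include <fcntl.h>
#include <unistd.h>

/* Lock or unlock an entire file with a POSIX advisory lock.
 * type is F_WRLCK (exclusive), F_RDLCK (shared), or F_UNLCK.
 * Returns 0 on success, -1 on error (like fcntl itself). */
int lock_whole_file(int fd, short type)
{
    struct flock fl;

    fl.l_type   = type;
    fl.l_whence = SEEK_SET;   /* lock starting at offset 0 ...      */
    fl.l_start  = 0;
    fl.l_len    = 0;          /* ... through EOF (0 == whole file)  */

    return fcntl(fd, F_SETLKW, &fl);   /* block until granted */
}
```

A client would take the F_WRLCK, modify its piece of the data store,
and F_UNLCK; everything else (caching, encryption, access control)
gets pushed down onto the file system.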

Seriously, I think there are multiple approaches to do what we want to
do.  Many of them are "valid", and each approach certainly has its
advantages and disadvantages.  I'm not at all convinced that we can
objectively choose any one approach over any other approach, and quite
honestly I don't know what the "right" way is.

However, I think objectively I'd like to require as few external
dependencies as possible.  If the subsystems we require are supplied
by libc, so much the better :)

-derek

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord@MIT.EDU                        PGP key available