client-server

cbbrowne@ntlug.org
Thu, 28 Dec 2000 17:09:13 -0600 (CST)


> > Actually, I was looking at RPC/Corba/etc. over TCP, not over UDP.  The
> 
> I got the impression that you were advocating RPC over UDP.

Possibly a not-unreasonable impression, but that seems to me too
low-level a perspective to focus on.  Better to have a prototype that
works badly, and at _that_ point come to the conclusion that the "fix"
to put in is to improve performance using UDP.  [I'm skeptical that this
would be a big deal; if performance is too bad with TCP, the
"improvement" of going to UDP likely provides only a constant-factor
speedup at the cost of added complexity, which probably is still
inadequate...]

> > benefits of using RPC or Corba over, e.g. HTTP is multi-fold:
> > 	1) programmatic interfaces.  Using RPC or Corba you can define
> >            an IDL that defines the data structures and function calls
> >            (methods) that are being called across the network.  You
> >            are, in essence, building your own protocol based upon the
> >            programmatic needs of your system.  You use a pre-defined
> >            transport mechanism and infrastructure that is well used
> >            and tested, so you only need to worry about your own IDL
> >            and handler functions.
> 
> SOAP/XML provides this too, using HTTP as the transport and XML to
> define the call site and pass data.

WHAT?!?!

Where is the API definition?

SOAP/XML provides a way of transferring XML messages around.  Mapping
this onto an API requires that you design a marshalling scheme and a 
set of conventions to correspond to a CORBA language mapping.
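To make the point concrete, here is a sketch in Python of the kind of marshalling scheme you would have to design yourself before XML messages start looking like an API.  The method name, parameter encoding, and type annotations below are an entirely made-up convention for illustration; SOAP itself specifies none of this mapping:

```python
import xml.etree.ElementTree as ET

# Hypothetical convention: method name becomes an element inside Body,
# each parameter a child element carrying an ad-hoc "type" attribute.
# Nothing in SOAP mandates this; both endpoints must agree on it.
def marshal_call(method, **params):
    env = ET.Element("Envelope")
    body = ET.SubElement(env, "Body")
    call = ET.SubElement(body, method)
    for name, value in params.items():
        arg = ET.SubElement(call, name)
        arg.text = str(value)
        arg.set("type", type(value).__name__)  # ad-hoc type tagging
    return ET.tostring(env, encoding="unicode")

def unmarshal_call(xml_text):
    env = ET.fromstring(xml_text)
    call = env.find("Body")[0]
    # Reverse the ad-hoc convention: recover method name and typed args.
    casts = {"int": int, "float": float, "str": str}
    params = {arg.tag: casts[arg.get("type")](arg.text) for arg in call}
    return call.tag, params

msg = marshal_call("getBalance", account=42)
method, params = unmarshal_call(msg)
```

Every choice in there (element names, how types are encoded, where the method name lives) is a convention the two endpoints must agree on out of band; that agreement is exactly the job an IDL plus a language mapping does in CORBA.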

> > 	2) HTTP was designed for document transfers, not RPCs.  What
> >            we really want to do is more closely related to RPCs than
> >            document requests.  HTTP was also designed with a single
> >            request-response per session.  Granted, HTTP/1.1 allows
> >            extended connections (multiple requests/responses), however
> >            it is unclear how a CGI would interface with that.  I
> >            certainly have never written a long-lived CGI.  Also, HTTP
> >            was not designed for arbitrary function calls; you not
> >            only have to design your own "IDL" (not that HTTP supports
> >            IDLs) but you also have to design your own transport and
> >            marshalling system as well (for both requests and
> >            responses).
> 
> If you send
> 	Connection: keep-alive
> in the headers of the HTTP requests and responses carrying SOAP
> messages, you leave the TCP connection open.
> 
> The CGI could either be long-running, or it might just be a simple
> handler whose job is to keep the TCP connection open, and which forks
> off a "real" CGI to do the work.

This presumably means writing a CGI handler that is fully cognizant of
the security/encryption requirements?  Methinks this is a task of similar
complexity to writing Yet Another Web Server from scratch...
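For what it's worth, HTTP/1.1 persistent connections themselves are easy to demonstrate; the hard part is everything around them.  A minimal Python sketch (the throwaway local server and the paths are placeholders, not a real deployment) showing two requests riding a single TCP connection:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal HTTP/1.1 server so connection reuse can be observed locally.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first = conn.getresponse()
first.read()  # must drain the body before the connection can be reused
local_port = conn.sock.getsockname()[1]  # client-side port of the socket

conn.request("GET", "/")  # same HTTPConnection, same TCP connection
second = conn.getresponse()
second.read()
reused = conn.sock.getsockname()[1] == local_port

conn.close()
server.shutdown()
```

Note that the CGI model fits this badly: each request forks a fresh process, so nothing on the application side naturally survives across the requests that share the connection, which is the crux of the complexity above.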

> HTTP does not require designing your own IDL; you can use XML for that.
> If you really need the descriptive abilities of IDL, you can use web
> services -- but this is only really going to come into play when you 
> are working with "unknown" interfaces.  But this is only really
> necessary in situations where you would be using DII or DSI in CORBA.

XML is merely a structured tagging scheme.  There is no well-recognized
convention for mapping it onto APIs.  I seem to recall the designers of
XML _specifically disclaiming_ the notion of having such APIs.

In effect, _ALL_ use of XML represents a rough equivalent to using DII
in CORBA; it all needs to be interpreted on the fly.  And as language
mapping conventions do not exist, you have to make up the interpretation
yourself.
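A rough Python sketch of what "interpreted on the fly" means in practice.  The method name, dispatch table, and the assume-everything-is-an-int argument handling are inventions for illustration; nothing in XML itself supplies any of them:

```python
import xml.etree.ElementTree as ET

# With no agreed language mapping, the server must interpret each
# message dynamically -- the rough analogue of CORBA's DII/DSI.
def add(a, b):
    return a + b

dispatch = {"add": add}  # our private, made-up "interface"

def handle(xml_text):
    call = ET.fromstring(xml_text)
    method = dispatch[call.tag]             # looked up at runtime
    args = [int(arg.text) for arg in call]  # ad-hoc: assume int args
    return method(*args)

result = handle("<add><a>2</a><b>3</b></add>")
```

A CORBA language mapping would have generated the stub and skeleton for `add` from an IDL at compile time; here both sides have to hand-roll and keep in sync the interpretation themselves.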

> > Just to play devil's advocate let me throw another log on the fire:
> > what we really want here is a distributed file system.  We want data
> > locking and access control semantics to allow multiple clients to
> > access the data store.  Why not just split the data store into
> > multiple "files" and require a network file system that allows us to
> > lock the files (and directories) when we need to modify data.  If the
> > File System can encrypt data across the network, doesn't that solve
> > our problem, too?
> 
> Isn't this just a database + encryption?

Arguably, yes.

> > Seriously, I think there are multiple approaches to do what we want to
> > do.  Many of them are "valid", and each approach certainly has its
> > advantages and disadvantages.  I'm not at all convinced that we can
> > objectively choose any one approach over any other approach, and quite
> > honestly I don't know what the "right" way is.
> 
> There is no right way.  There are only tradeoffs.  It's certainly not
> something you can expect to be even moderately sure of until you
> have a working prototype, or better still a full implementation.

Well, there _are_ things that may be inferred analytically before
implementation.  Including that XML-RPC is only half-baked at this
point.