client-server

linas@linas.org
Fri, 22 Dec 2000 12:19:41 -0600 (CST)


It's been rumoured that Derek Atkins said:
> 
> <linas@linas.org> writes:
> 
> > It's been rumoured that Derek Atkins said:
> > > 
> > > This is an interesting approach, but HTTP is _SLOW_.  You have to
>
> For example, it took me about 3 seconds to load www.gnucash.org.  And

I suspect that 2.7 of those seconds were the DNS lookup of
gnucash.org; once you've looked it up, it stays cached for a while.

Some fraction of that was also the routers from MIT to Austin, TX
figuring out where to route packets.   If the server were on the same
local net as the client, these would be non-issues.  Alternatively,
if you used *any* other method (CORBA, RPC), these same overheads
would apply.
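For what it's worth, the DNS share of that delay is easy to measure.
Here's a rough Python sketch; the hostname is just an example, and
the timings depend entirely on your resolver cache:

```python
import socket
import time

def dns_lookup_ms(host):
    """Time one name resolution.  Repeat lookups are usually much
    faster, because the resolver caches the answer for a while."""
    start = time.perf_counter()
    socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
    return (time.perf_counter() - start) * 1000.0

# cold = dns_lookup_ms("www.gnucash.org")   # first hit: can be seconds
# warm = dns_lookup_ms("www.gnucash.org")   # cached: usually well under 1 ms
```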

> If gnucash supported SSL, you
> would see that it was much slower.

IBM's SKIT SSL toolkit takes about 1.1 seconds to set up an SSL
session on a 300 MHz machine.   OpenSSL takes about 0.1 seconds for
the same config.  Long live open source.

(Subsequent requests reuse the session; I think it's tens of
milliseconds at most.)

Besides, this has nothing to do with HTTP.  If you used SSL with
anything (RPCs, CORBA, raw sockets), you'd get the same figures.
If you used IPsec, S/WAN, or VPNs, you'd get essentially the same
performance characteristics.

> > http/1.1 uses socket keepalive, aka persistent connections, by
> > default.  Netscape 3.0 and IE 2.0 and newer ask for keepalive by
> > default.
> 
> Yes, but it's unclear how easily that would fit into a CGI model.

Should work fine with CGI, as far as I know.  With FastCGI, it would
not work.
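To illustrate the keepalive point, here's a loopback sketch of
HTTP/1.1 persistent connections using only the Python standard
library; the handler is made up for the demo, and it simply checks
that two requests ride the same TCP socket:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"    # enables persistent connections

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Content-Length lets the client know where the body ends,
        # so the connection can stay open for the next request.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):    # keep the demo quiet
        pass

def two_requests_one_connection(port):
    """Issue two GETs over a single HTTP/1.1 connection and report
    whether the same TCP socket carried both."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    conn.getresponse().read()
    first_sock = conn.sock
    conn.request("GET", "/")
    conn.getresponse().read()
    reused = conn.sock is first_sock
    conn.close()
    return reused

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(two_requests_one_connection(server.server_address[1]))  # True
server.shutdown()
```

Drop protocol_version back to HTTP/1.0 (or omit Content-Length) and
the server closes the socket after each response instead.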

> But what about cache coherency issues?  How do you do a callback via
> HTTP when data changes?  Or are you expecting the client will poll the
> server for coherency every time it wants to use a piece of data?

Now that's a good question.  There are several possible answers.
1) The user is always viewing 'a recent snapshot'.  This should
   usually be good enough: if you have two users simultaneously
   editing the same financial information, then you've got a
   people problem, not a technology problem.

   Think of how CVS works.  'cvs update' gives you a snapshot.
   Before you commit your changes, you should 'cvs update' first.
   If there's a race and you 'cvs commit', it'll yell at you.

   I'd like gnucash to work just like CVS does with respect to
   locking and updates.  Why?  Because the same social paradigms
   apply: multiple accountants hacking the same financial info is
   like multiple coders hacking the same code, and all the same
   social conventions about who has what privileges apply.

2) HTTP *does* allow 'slow' servers.  So, e.g., the cgi-bin pushes
   out a bunch of data, then sends some no-op bytes every minute
   to make sure the connection stays alive.  And if the financial
   info changes, the still-connected cgi-bin sends the updated info.
   It's a bit cheesy, but, hey ...

3) Polling.  <meta http-equiv="refresh" content="300"> (i.e. every
   5 minutes) works.  How 'live' do you expect the data to be?  If
   you expect live streaming stock quotes, then I think we need to
   investigate live streaming in the web setting.
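The CVS-like scheme in option 1 amounts to optimistic locking.  A
minimal Python sketch; the Ledger class and its method names are
invented here for illustration, not taken from gnucash:

```python
class StaleSnapshotError(Exception):
    """Raised when a commit is based on an out-of-date snapshot,
    like CVS rejecting a commit until you 'cvs update'."""

class Ledger:
    """Sketch of CVS-style optimistic concurrency: clients edit a
    snapshot, and commits against a stale snapshot are rejected."""

    def __init__(self):
        self.version = 0
        self.entries = []

    def checkout(self):
        # Like 'cvs update': a snapshot plus the version it reflects.
        return self.version, list(self.entries)

    def commit(self, base_version, new_entries):
        # Like 'cvs commit': only allowed against the latest version.
        if base_version != self.version:
            raise StaleSnapshotError("update first, then re-commit")
        self.entries = new_entries
        self.version += 1
        return self.version
```

A client that loses the race simply re-runs checkout() (the 'cvs
update') and retries its commit against the fresh snapshot.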
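And option 2 (keep the connection open, pad it with no-op bytes, push
changes as they happen) might look roughly like this.  The
poll_update() hook and the one-poll-per-'minute' convention are
assumptions of the sketch; a real cgi-bin would block on its data
source and a real timer instead:

```python
def push_stream(snapshot, poll_update, max_idle_polls=60):
    """Sketch of a long-lived cgi-bin response: emit the initial
    data, then alternate no-op keep-alive bytes with any updates.
    poll_update() returns fresh data or None; each None stands in
    for one 'minute' of idle wall-clock time."""
    yield snapshot
    idle = 0
    while idle < max_idle_polls:
        data = poll_update()
        if data is None:
            yield b" "          # harmless no-op byte keeps it alive
            idle += 1
        else:
            yield data          # the financial info changed: push it
            idle = 0

# Fake data source: quiet, quiet, one update, then quiet forever.
updates = iter([None, None, b"new-balance"])
stream = list(push_stream(b"initial", lambda: next(updates, None),
                          max_idle_polls=3))
# stream == [b"initial", b" ", b" ", b"new-balance", b" ", b" ", b" "]
```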


> > Furthermore, if you are running your gnucash server/cgi-bins on
> > a local, Ethernet-connected web server,  the perceived
> > responsiveness (as perceived by the user) should be around a
> > tenth of a second, about the same as it would be for a raw
> > socket, CORBA, SQL, or other client-server design.    The most
> > likely slowest part of the system will be the display and
> > graphical layout on the client side.
> 
> This is a red herring.  How fast it would be is something that can
> only be measured after you're doing it.  It's very hard to model this.
> It also depends on the congestion on the LAN, the load on the server,
> the load on the client, how much data is being sent, etc.

Philosophically, I think it's laziness or a disservice not to attempt
to figure out what the reasonable, expected server load and network
congestion will be.  Failing to even try will lead to inappropriate
design points, and a good chance that they don't solve the user's
needs.

> I should also point out that the Applications Area Directors of the
> Internet Engineering Task Force believe that no new protocols should
> be built on top of HTTP.  They're the ones that are in charge of
> protocol standardization, so they know their stuff.  Ask yourself
> why you think they don't want protocols based on HTTP?

Huh?
I think there's some conflict over the word 'protocol'.  Most people
would not call HTML a protocol, and by extension, XML.  But what the
hell is SOAP, if not an XML-based protocol layered on top of HTTP?
What do you think OFX is?  Or the zillion other XML-based protocols
that are growing like mushrooms?   Either the IETF means something
else when they say this, or they're totally insane.   HTTP was
designed to provide a reasonable message-passing framework, and it
does that reasonably well, thank you very much.   But to discourage
building higher-level protocols on a workable message-passing system
is, well, nuts.   At least it's nuts until someone, e.g. the IETF,
gives us something better.  And I know of no public initiatives to
even try to do better.  (What, IBM's proprietary MQSeries?
Microsoft's message queues?  Even Microsoft isn't stupid enough to
push MQ for its .NET initiative; .NET assumes it's all HTTP.)
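Case in point: XML-RPC, SOAP's smaller cousin, is exactly an XML
protocol layered on HTTP, and a toy server/client pair fits in a
dozen lines of Python; the add function here is just a stand-in:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Each call is an XML document POSTed over HTTP; the reply is
# another XML document.  That's a protocol on top of HTTP.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
client = ServerProxy("http://127.0.0.1:%d" % port)
print(client.add(2, 3))   # 5
server.shutdown()
```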


--linas