client-server
Derek Atkins
warlord@MIT.EDU
22 Dec 2000 14:59:05 -0500
<linas@linas.org> writes:
> I suspect that 2.7 of those seconds were the DNS lookup of
> gnucash.org; once you've looked it up, it stays cached for a while.
Nope. I ran the test a couple of times (clearing my Netscape cache
each time), and the time didn't change significantly. I will grant
that I timed it by eye, nothing spectacular.
> Some fraction of that was also the routers from MIT to Austin, TX
> figuring out where to route packets. If the server were on the same
> local net as the client, these would be non-issues. Alternately,
> if you used *any* other method (corba, RPC), these same overheads
> would apply.
No, the number of round trips would be different. Consider how TCP
works:
    Client                          Server
    SYN        ----------------->
               <-----------------   SYN-ACK
    DATA+ACK   ----------------->
               <-----------------   ACK
This process needs to be done for every HTTP connection. Indeed, I
just repeated the test and did a tcpdump to actually trace the
packets. Lo and behold, there were two TCP connections (one for the
HTML, and one for the GIF). They were not concatenated together,
even though supposedly both the client and server support it.
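To make that per-connection cost concrete, here is a rough sketch
(mine, not from the trace; the host and loop count are just
illustrative). connect() returns once the SYN/SYN-ACK exchange
completes, so its wall-clock time is about one network round trip, and
an HTTP/1.0-style client pays it again for every object it fetches:

    import socket
    import time

    HOST, PORT = "gnucash.org", 80    # the server discussed in this thread

    for attempt in range(2):          # e.g. one connection per object
        start = time.monotonic()
        sock = socket.create_connection((HOST, PORT))
        elapsed = time.monotonic() - start
        print("connection %d: handshake took %.1f ms"
              % (attempt + 1, elapsed * 1000))
        sock.close()                  # an HTTP/1.0 fetch stops here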
I will concede that my original timing was off. The first SYN was
sent at 14:17:35.769461, and the last ACK was received at
14:17:36.904204, so it was just over one second of network traffic.
There were 46 packets sent across two TCP streams. My DNS was cached,
so that isn't an issue. However, I think it's safe to say that the
greatest amount of time here was spent "waiting" -- the number of
actual round-trips is the killer.
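Purely as a back-of-envelope sketch (the RTT figure below is an
assumption, not something I measured), you can see how the round trips
stack up:

    RTT_MS = 50                     # assumed Boston <-> Austin round trip

    # Per HTTP connection: one RTT for SYN/SYN-ACK, at least one more
    # for the request/response, ignoring slow start and FIN teardown.
    round_trips_per_connection = 2
    connections = 2                 # one for the HTML, one for the GIF

    total_ms = RTT_MS * round_trips_per_connection * connections
    print("lower bound: %d ms" % total_ms)   # 200 ms before any real work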
The benefit of Corba or RPC would have been that a single TCP
connection could have been used, saving the SYN/SYN-ACK round trip
for the second (and all subsequent) connections. In this particular
example it wouldn't have saved _much_, I'll admit, but it can add up
if you have lots of recursive data to download. Sure, HTTP 1.1
supposedly allows multiple requests and responses over a single
connection, but in practice I have no idea whether anyone supports it.
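For what it's worth, here is a sketch of what reusing one connection
would look like (the image path is made up, and whether the server
really keeps the connection open is exactly the open question):

    import http.client

    conn = http.client.HTTPConnection("gnucash.org", 80)

    conn.request("GET", "/")            # first request pays the handshake
    conn.getresponse().read()           # drain before the next request

    conn.request("GET", "/image.gif")   # hypothetical path for the GIF;
    conn.getresponse().read()           # no new SYN/SYN-ACK needed

    conn.close()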
> > I should also point out that the Applications Area Directors of the
> > Internet Engineering Task Force believe that no new protocols should
> > be built on top of HTTP. They're the ones that are in charge of
> > protocol standardization, so they know their stuff. Ask yourself
> > why you think they don't want protocols based on HTTP?
>
> Huh?
> I think there's some conflict with the word 'protocol'. most people
> would not call html a protocol, and by extension, xml. But what the
No, HTML is a document markup language. HTTP is a protocol.
> hell is SOAP, if not an xml-based protocol layer on top of http?
> What do you think OFX is? or zillion other xml-based protocols that
XML is also a document markup language. OFX is a particular XML
instantiation. It, too, is a document markup language, with specific
tags denoting certain types of data. It is not a protocol.
> are growing like mushrooms? Either the ietf means something else
> when they say this, or they're totally insane. http was designed to
> provide a reasonable message-passing framework, and it does that
> reasonably well, thank you very much. But to discourage building
> higher-level protocols on a workable message-passing system is,
> well, nuts. At least it's nuts until someone, e.g. the ietf, gives
HTTP was made to pass particular types of messages, namely HTML. XML
is really just a generalization of HTML. However, HTTP is _NOT_ a
general "message passing protocol" any more than SMTP is a general
"message passing protocol". And what we're trying to do is more than
just passing around messages. I suppose if you want to look at it
that way, TCP/IP is a general message-passing protocol: you set up your
endpoints and it can pass any message you want.
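To illustrate: a minimal sketch of TCP as "message passing" -- set up
the endpoints and any bytes at all can flow, but everything above the
byte stream (framing, meaning) is left for you to define, which is
exactly what a real protocol has to do:

    import socket
    import threading

    srv = socket.create_server(("127.0.0.1", 9999))   # bind + listen

    def echo_once():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))             # echo what arrives

    threading.Thread(target=echo_once, daemon=True).start()

    with socket.create_connection(("127.0.0.1", 9999)) as sock:
        sock.sendall(b"any message you want")
        print(sock.recv(1024))                 # b'any message you want'
    srv.close()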
> --linas
-derek
--
Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
Member, MIT Student Information Processing Board (SIPB)
URL: http://web.mit.edu/warlord/ PP-ASEL-IA N1NWH
warlord@MIT.EDU PGP key available