IETF HTTP-NG BOF Minutes
See also the HTTP-NG BOF Agenda.
Scribe
Welcome and Discussion of Agenda
-
What W3C has been doing for the last year, and why it's coming to the IETF. First half:
history,
Henrik Frystyk Nielsen: Presentation of HTTP-NG Overview
Online slides are available
Purpose
-
New design for HTTP; two groups:
-
1. Web Characterisation ("what does the web look like?" - so we know what
problems need to be solved, how to solve them, real-life scenarios, e.g. dial-up).
-
2. Prototype replacement, with evolutionary facilities from day 1. Recent
problems: more and more a transport for extensions, people using HTTP for
various reasons. Many extensions not supported very well in 1.x; want a better
framework which allows the core protocol to remain solid five years from
now.
-
Public mailing list:
www-http-ng-comments@w3.org
Current Status
-
Here to propose that the PDG (Protocol Design Group) becomes an IETF task, to solidify
the rough prototype protocol suite.
-
WCG likely to continue as a W3C Activity (becoming an open service to the web community
as a whole).
Not Doing
-
Replacing TCP
-
Yet another IDL
-
Solving non-web problems (use scenarios from WCG to show that the web benefits)
Overview of Goals
-
Reliable extensibility in local and global environments (reduce the number
of cross-checks required when extending the core protocol)
-
Efficiency high priority (HTTP/1.x much improved, but there's still a long
way to go)
-
Simple now, and simple in 5 years' time too.
-
Must be better than HTTP/1.1 is today.
HTTP/1.x experience
-
POST is HTTP's IOCTL, but without even the protocol signature
-
Extensions vary in four dimensions: identification, mandatory/optional, scope (e2e
vs. h2h), ordering and nesting
-
It's hard to retrofit things
Gettys: people think we should build services on top of services; in today's
world it's fragile, and difficult to build on this fragile base.
Why not HTTP/1.0 over WebMUX with POST and XML?
-
WebMux replaces HTTP/1.1 TCP interactions
-
XML provides well-defined messages
Web Characterisation Group
-
Summary of what WCG is doing: the idea is to become a meta-group for web
characterisation, with a formal way of dealing with log files and of
generating characteristics from them. Its output is currently used as input to the
HTTP-NG testbed.
Bill Janssen: Presentation on NG Architecture
Online slides are available
NG Architecture
-
Three-layer architecture coming from examination of the 1.1 spec: there's an
application within 1.1 expressed as methods; a secondary layer describes the data
format in standard headers (talks about how messages are formed); the bottom is a
transport layer, talking about chunking, content length, etc. -
independent of message type or method, and more concerned with optimising
the protocol for a specific transport.
-
It makes sense to minimise the coupling between the layers so one can be
changed without affecting the others
Top: Application Layer
-
The Classic Web Application (TCWA), as well as the extensions of HTTP (SWAP, WebDAV,
IPP, ...). TCWA must be runnable unnoticeably on top of the NG substrate. Also
want to be able to put a new application up without impacting other applications,
or the core protocol. We want extensibility which allows applications
to be extended without breaking old applications.
-
Want to be able to support anarchic, non-centralised evolution.
-
In an IDL: assign a URI to each operation set so that operation sets can be
distinguished.
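As a rough illustration of that idea (not the actual HTTP-NG IDL; the URIs and operation names below are invented), an operation set keyed by a URI might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class OperationSet:
        uri: str                                         # globally unique name for this set of operations
        operations: dict = field(default_factory=dict)   # operation name -> implementation

        def register(self, name, func):
            self.operations[name] = func

    # The Classic Web Application could be one operation set among many.
    tcwa = OperationSet("http://example.org/ng/classic-web")        # hypothetical URI
    tcwa.register("GetRendering", lambda resource: b"...entity...")

    # A later extension gets its own URI, so peers can tell exactly which
    # semantics are being invoked instead of guessing from header names.
    authoring = OperationSet("http://example.org/ng/authoring")     # hypothetical URI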
Type System
-
Coupling is as important as the layers themselves. An application must
be defined by a small set of types and operations (operations take the types
as parameters and results). Need a small set of built-in operations.
-
At the messaging layer, all applications appear as a small set of messages
and types.
-
HTTP-NG type system obtained by looking at other type systems and
simplifying/unifying what's out there, without throwing away
anything important (a rough sketch follows this list):
-
Fixed and floating numerics,
-
boolean, string, enumerations,
-
sequences, arrays, unions,
-
reference.
-
Pickle,
-
local object
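A rough sketch of that vocabulary in Python (type names and fields here are illustrative, not taken from the HTTP-NG drafts):

    from dataclasses import dataclass
    from enum import Enum
    from typing import List, Union

    class Status(Enum):                      # enumeration
        OK = 0
        NOT_FOUND = 1

    @dataclass
    class ObjectReference:                   # reference to a (possibly remote) object
        object_group: str                    # e.g. a URI naming the group it belongs to
        instance: int

    @dataclass
    class Pickle:                            # self-describing value: a type tag plus data
        type_uri: str
        data: bytes

    Scalar = Union[int, float, bool, str]    # fixed/floating numerics, boolean, string
    Headers = List[Scalar]                   # sequences/arrays built from the basic types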
Messaging Layer
-
Inherits from X11, XNS Courier, ONC RPC, CORBA IIOP, DCOM, Java RMI, DCE
RPC, TCP
-
Wanted to minimise the number of bytes on the wire, ensure operation on small
devices
-
Added pipelining (multiple invocations outstanding at a time), batching, caching
of method and resource identifiers, caching of results.
-
Very efficient wire protocol (it's quite small, but we've not worked hard
enough yet)
-
4-byte requests, 4-byte responses on some fairly typical messages (see the sketch after this list)
-
Binary protocol for efficiency
-
Uses ONC RPC XDR for marshalling (since it's an IETF protocol)
-
Wire protocol is designed as a hop-by-hop protocol.
-
Proxying currently handled at the application level, but still open to looking
at designs where some of those concerns are pushed down to the messaging
layer.
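A toy illustration of the idea that cached method and object identifiers let a call shrink to a few big-endian, XDR-style units (this is not the drafts' wire format; the field sizes are invented):

    import struct

    def marshal_request(serial: int, object_index: int, method_index: int) -> bytes:
        # one 32-bit word for the call serial number, plus one word packing the
        # cached object and method indices -- eight bytes for the whole request
        return struct.pack(">IHH", serial, object_index, method_index)

    def unmarshal_request(frame: bytes):
        return struct.unpack(">IHH", frame)

    req = marshal_request(serial=1, object_index=3, method_index=0)
    assert len(req) == 8
    assert unmarshal_request(req) == (1, 3, 0)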
Messaging to transport API
-
Messages at this point just blocks of data. (Doesn't discriminate between
request/reply.)
-
Chunking, batching, security and proxying
-
Identities and contexts important in the transport layer
Transport Layers
-
Stack of transports, with components which can be freely combined according
to individual requirements (e.g. compression, GSS, Mux, etc.); possible even
to have client/server negotiate stacks at runtime
-
Each layer is a byte-transforming filter.
-
Also run-time extensible (applications can define new transports at run-time).
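A minimal sketch of the filter idea, assuming a hypothetical compression layer stacked over a loopback stand-in for the base transport (the class names are invented; a real stack would also include GSS, MUX, and so on):

    import zlib

    class LoopbackTransport:                  # stand-in for the reliable base transport
        def __init__(self):
            self.wire = b""
        def send(self, data: bytes):
            self.wire += data
        def receive(self) -> bytes:
            data, self.wire = self.wire, b""
            return data

    class CompressionFilter:                  # one byte-transforming layer in the stack
        def __init__(self, below):
            self.below = below
        def send(self, data: bytes):
            self.below.send(zlib.compress(data))
        def receive(self) -> bytes:
            return zlib.decompress(self.below.receive())

    # Layers compose freely; the application just sees send/receive.
    stack = CompressionFilter(LoopbackTransport())
    stack.send(b"request bytes ...")
    assert stack.receive() == b"request bytes ..."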
WebMUX Transport
Provides four things in one layer:
-
Record marking
-
Chunking (send messages in chunks rather than message-at-a-time)
-
Bi-directionality (allowing browser to identify end-point for server to call
back to)
-
Multiplexing of virtual connections
These functions are bundled together into one layer to provide more efficiency
in terms of header space.
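As a toy illustration of that bundling (the header layout below is invented, not the WebMUX framing from the draft), one small header word can carry a session identifier, an end-of-message flag, and a chunk length at once:

    import struct

    END_OF_MESSAGE = 0x80000000                   # hypothetical flag bit

    def frame(session_id: int, chunk: bytes, last: bool) -> bytes:
        word = (session_id << 18) | (len(chunk) & 0x3FFFF)   # invented packing
        if last:
            word |= END_OF_MESSAGE
        return struct.pack(">I", word) + chunk

    def parse(data: bytes):
        (word,) = struct.unpack(">I", data[:4])
        last = bool(word & END_OF_MESSAGE)
        session_id = (word & 0x7FFFFFFF) >> 18
        length = word & 0x3FFFF
        return session_id, last, data[4:4 + length]

    # Two virtual sessions interleaved on one connection:
    wire = frame(1, b"first chunk of a big image", last=False) + \
           frame(2, b"<html>...</html>", last=True)
    assert parse(wire)[:2] == (1, False)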
Jim Gettys: Presentation on WebMux
Problems
What pushed us in the direction of WebMUX?
-
Deal with call-back (e.g. notifications, data delivery without a second TCP
stream)
-
Must be able to abort a session (often throwing away TCP connections)
-
If you move off a page before it's been completely loaded, we'd like it to
continue loading (e.g. by
-
Avoid round trips
Is callback any different to passive FTP?
WebMUX Functions
-
Should work over any reliable transport (no particular presumption on TCP)
-
Can establish sessions without round trips
-
Can also be used as a record-marking system (but the transport's record-marking
can also be used if it's more efficient)
-
Multiplexing helps reduce latency (browser needs to retrieve image metadata
early to lay out pages early)
-
Can be used to throttle sessions
-
Can be aligned on a 32- or 64-byte boundary (thanks to a no-op message)
-
Sessions can be established by either end
-
Derived from Simon Spero's work, but that required a round trip for session
initiation, which WebMUX does not.
Deadlock avoidance
-
Credits aren't there for flow-control: it's for deadlock avoidance instead.
-
Receiver making a promise to the transmitter that the receiver can consume
x bytes of data (either by buffering it, or perhaps by throwing it away).
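A minimal sketch of that promise, assuming per-session byte credits (names and numbers below are illustrative):

    class SessionSender:
        def __init__(self, initial_credit: int):
            self.credit = initial_credit          # bytes the receiver has promised to absorb

        def try_send(self, payload: bytes) -> bool:
            if len(payload) > self.credit:
                return False                      # would risk stalling the shared connection
            self.credit -= len(payload)
            # ... write payload onto the multiplexed connection here ...
            return True

        def add_credit(self, grant: int):
            # receiver grants more credit once it has consumed (or discarded) data
            self.credit += grant

    s = SessionSender(initial_credit=4096)
    assert s.try_send(b"x" * 4096)                # fits within the promise
    assert not s.try_send(b"y")                   # blocked until a new grant arrives
    s.add_credit(1024)
    assert s.try_send(b"y")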
Open Issues
-
SSH has a mux; intend to look at that.
-
When can you use WebMUX? How do you transition?
-
Number of implementations, and beginning to grovel over TCP dumps to ensure
efficiency on the wire
-
Many people don't need the credit system; is it necessary to be able to switch
between infinite and finite credits?
-
Atom creation, # atoms
Testbed
-
Specs reflect a pool of running code. Beginning to look at TCP dumps; very
first results are out comparing with 1.1 or IIOP. Eager to design protocols
in an engineering arena rather than by committee. Starting to receive fresh
data from proxy traces, etc. to ensure accuracy of scenarios.
-
Implemented protocols from the drafts, and defined 16 tests. Run same tests
against 1.1, NG. Models of various sites (Microscape, AOL model, Microsoft
model); big/little endian; different network topologies.
-
Intending to increase the number of tests.
-
Plan to have a framework which allows individual features to be tested (e.g.
sorting of parameters in wire protocols). New release of PARC's ILU system
(within the next couple of weeks) will include implementation of the WebMUX
and W3NG wire protocol.
Interfaces
-
Dan Larner has created 5 inter-related interfaces describing the HTTP Resource
object, renderings, web documents, property sets, etc. The interfaces are also
available as an I-D. These are not considered to be a final application;
much work is expected.
-
Question of how the interfaces should evolve over time also being investigated,
but more input is requested.
-
Take the interfaces with a grain of salt!
Layering
-
Layers are intended to be a conceptual rather than necessarily an implementational
concept.
-
Modified TCP Dump available from PARC's FTP site which displays WebMUX and
HTTP Wire protocol information.
Questions
-
How does this relate to CORBA IDL?
-
We're not tossing another distributed system into the mix: NG could be a
substrate which allows CORBA, DCOM, RMI to be layered on top. Could today
run Java code on top of NG.
-
Have any of the POST-based applications been ported?
-
No, but related: one test is to look at the implementation of ILU's HTTP marshalling
system (messaging layer for systems like HTTP which allow);
-
Are there any plans to support transports other than TCP? What about UDP?
Multicast IP?
-
Interested in running MUX over one of the new wireless transports. TCP is
"just another transport layer" at this point. The specs specify what's required
of the transport.
-
Would be nice to be able to put performance monitoring in there (at what
level should that be done?)
-
Could put it directly into the application interface design, or in the wire
protocol's extension headers, or even as an additional transport layer stack.
-
What I really want is the ability to interleave req/resp, but also to separate
data/metadata in a response. Is it possible to tie multiple data streams to
a single response?
-
Why not? Specifics are in the details of the application design.
-
Muxing on muxing, flow control on flow control - you're setting up a lot
of potentially bad interactions here.
-
JG: Yes, I worry about that. I'd love to see something which solved these
problems instead of TCP; this is our reaction to that, and we want to hear from
everyone who understands those problems in TCP to help us avoid making those
mistakes.
-
Do we have an immediate need for this?
-
The perfect is the enemy of the good. I don't have an alternative right now,
but I have problems to solve. Want to have real running code, so we can test
and perform these experiments to see what the effect is.
-
If a WG comes out, the thinking is it'll be in the transport area - because it
takes careful thought about how these problems interact.
-
Could avoid the deadlock by dropping the data if too much is coming in.
-
One of strengths of HTTP is visibility of semantics over the wire. How visible
are the semantics of HTTP-NG over the wire?
-
In 1.1 if an impl doesn't know the semantics of the headers, it can only
deal with them opaquely. In NG, at least there's a pointer to the semantics.
-
BJ: More visible: what protocol, what method set, etc. are all up-front. You
can see what the message sets are without
-
Not happy with the approach these documents encourage: it's like solving
traffic congestion by raising the speed limit. Moving things faster between
host and consumer helps get things faster, but is tied to the model that
there's only one authority (source) for this data, and that the consumer
has to go there to get it. This is broken.
-
BJ: NG does deal with that, but not as clearly as some of the other issues
-
Want to support the idea of replication. Do this by separating the object
group ID (the resources you want to talk to) from the connection information.
This allows a separate binding step to take place (choosing the fastest,
cheapest, closest).
-
What deployment scenario are you looking at? What's the advantage of being
in the first 2% of adopters?
-
Experience from the 1.1 performance data was that people are interested from
a user point of view if it's faster, and from an ISP point of view if it imposes
less load on routers, networks, etc. Need to show real benefit to both groups of users.
Also intend to show benefit to developers (you can stop your HTTP hacks).
Focussing on performance because it's more saleable.
-
Strategy: have proxies.
http://foo.com must still work. But really, we rely on upgrading (perhaps through
the use of the Upgrade HTTP/1.x header, or by hacking POST!).
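A hedged sketch of what the Upgrade path could look like on the wire (the upgrade token "HTTP-NG" is a placeholder, not a registered protocol name):

    # A client offers to switch the existing HTTP/1.1 connection:
    request = (
        b"GET / HTTP/1.1\r\n"
        b"Host: foo.com\r\n"
        b"Upgrade: HTTP-NG\r\n"                   # placeholder upgrade token
        b"Connection: Upgrade\r\n"
        b"\r\n"
    )

    # A server or proxy that does not understand the token simply answers as
    # ordinary HTTP/1.1, so http://foo.com keeps working; one that does can
    # reply and then speak the new protocol (e.g. WebMUX) on the same connection:
    response_start = (
        b"HTTP/1.1 101 Switching Protocols\r\n"
        b"Upgrade: HTTP-NG\r\n"
        b"Connection: Upgrade\r\n"
        b"\r\n"
    )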
-
HTTP was a steam roller, XML is a steam roller, where does XML play in this
model? Application/message/interface layer?
-
XML could be a nice way to express interfaces; we can ship XML documents
just as anything else; could even use it as a wire protocol (perhaps as a
transition strategy), although it doesn't have the same efficiency as the
wire protocol. Could also define an API to extract parts of an XML tree.
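A minimal sketch of that idea, assuming an invented XML message shape (element and attribute names are not from any draft):

    import xml.etree.ElementTree as ET

    message = """
    <request method="GetRendering" serial="1">
      <target>http://foo.com/index.html</target>
      <accept>text/html</accept>
    </request>
    """

    # "An API to extract parts of an XML tree": pull out only what this hop needs.
    root = ET.fromstring(message)
    print(root.get("method"))          # GetRendering
    print(root.findtext("target"))     # http://foo.com/index.html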
-
Deployment: the answer is based on "look at HTTP/1.1 and similar reasoning
implies"; but the bar will be a lot higher since it's a bigger step. A little
concerned that the performance gain won't be enough.
-
HN: It's clear a lot more is achievable with NG. NG will make it easier to
play games with the content which will allow us to enhance performance (through
compression, etc.).
-
Can see where this is quicker on the client side, but don't believe it'll
be quicker on the server side as it's scaled up. Should probably take a look
at the history of TCP multiplexing: Larry Peterson's paper "Multiplexing
considered harmful". Noticed a couple of trends: 1. The idea that everything
is a piece of HTTP data: FTP, video, etc. are all over HTTP now; also service
differentiation - surprised it's all over a single TCP
-
There's no requirement for it to be over a single TCP - TCWA makes sense,
but there's plenty of sense in opening multiple TCPs. The mechanism is fully
general: you could put them all down one.
-
Observation: there should be considerable MUX experience from the terminal server
community from '94 or so.
-
What is the security service?
-
Bill: no particular proposal in the strawman, but in the implementation there's
a model based on GSS, capable of providing identities in both directions.
Also possible as a plug-in, for example SSL.
-
The concern is about the ability to have different security associations
for different sessions over a single MUX. Must be able to differentiate security
contexts.
-
Current model allows multiple object groups to appear on a single server,
with different security policies; it also allows a single object group to
expose different security contexts to different clients, but we need feedback
to know if this is enough.
-
Firewall admins need hooks to be able to set appropriate policies: what interface
is being called? What MIME type is being used? Need to be able to detect spoofing.
JG has had to deal with some of the firewall issues, but we need help!
-
BJ: Secure transport layer will be available in next ILU
-
"HTTP data is central now", but it's not: it's MIME data.
Part Two
Continued questions
-
WCG will continue as a source of sanity-checking data for future HTTP protocols,
to stop us developing solutions which don't fix the relevant problems. The
question: does this solve HTTP's problems of today?
-
We need to see some numbers before we can decide whether it solves HTTP's
problems.
-
There is quite a bit of public information coming out of WCG, in particular
Jim Pitkow's paper from WWW7 (there's a recent update - this week?) available.
-
Do we at this point know how HTTP/1.1 is broken? And if it's not broken,
or we don't know how it's broken, why fix it at this stage?
-
It took a team of geniuses 2 years to put together 1.1; I dread to think
how long 1.2 will take.
-
We have a number of apps which use HTTP as nothing more than a reliable datagram
service. We see a number of proprietary schemes on top of HTTP, which will
soon end with no-one being able to talk, even though they're all talking
"HTTP". Third problem: congestion, which WebMUX is designed to solve, although
this also adds a number of good new features too (by adding multiple virtual
circuits on a single TCP stream). People are replicating the web as well:
e.g. document management systems on top of DCOM or CORBA. We hope to reduce
the number of systems in use by making HTTP rich enough to allow document
management to be put back on top of HTTP.
-
We've tried to get around the data model of "ignore what you don't know"
- it's not very good at evolving applications because they all have to downgrade
to the dumbest application in the middle. This is extremely difficult to
retrofit into HTTP itself (Mandatory is trying - succeeding - but there are
still tough interactions between features, e.g. byte ranges in a SUBSCRIBE).
-
Echoing
-
There are a lot of people using HTTP for what they shouldn't be using it
for, so we'll change HTTP to accommodate them rather than giving them more
protocols.
-
The extension mechanism in HTTP/1.1 is broken.
-
Extensions aren't broken, they're just not sufficient. Want to get to the
situation where we can say, "extensions are OK". We need to separate extensions,
which we don't get in HTTP/1.1.
-
JG: The world will continue to build the kludge tower yet higher - whether
this project succeeds or not - there's no way to prevent it, but if we never
have an alternative, it leaves us in an uncomfortable position. We'd like
to think about new and interesting applications rather than keeping the tower
stable.
-
You have a compelling argument to the IETF audience about why we should take
this in for people who want to develop HTTP-based protocols. What's not
clear is the argument about why this is better for the person running the
classic web application; those are the people you are going to have to convince.
I see a whole bunch of things which make me want to say "yay, go for it",
but I'm not hearing enough to convince the person who runs a proxy or a webserver
- that information needs to be worked into the documents.
-
One of the compelling arguments for the ISP is performance and efficiency;
we're just beginning to get that information, and it looks good, but we've
not yet tuned it. We know that case needs to be made, and are working towards
it. We believe it's relatively easy to make it more efficient.
-
Speaking as a prospective user of NG: I designed a protocol layering over
GET; it's widely implemented across universities, and it'd have been very
handy to have this rather than having to kludge HTTP. Also have some preliminary
experiments as to how the upgrade might be done; it was very straightforward
to use HTTP Upgrade, and led to a factor of 3 improvement. You guys should
build this thing and start using it real soon.
-
JG is very sensitive to building on top of something that's not yet proven
-
The question of why it is that so many things layer on or clone HTTP needs to be
looked at: what was it which made them use HTTP? Suggested answer: we're missing some
pieces above TCP which apps need.
-
JG: relatively early emergence of proxying/firewalls in HTTP.
-
BJ: buzzword compliance; people are also willing to degrade the security
of their site by punching HTTP-sized holes in their firewalls, but aren't
willing to have other holes in the firewalls
-
HTTP is new => sexy: this is even newer, so that effect should
be heightened. The other maybe doesn't help so much: there's so much stuff
that people want on HTTP servers. IETF needs to ask "are we missing some
important services below app and above transport that need to be done";
the W3C activity seems to have teased some of those answers out. The name
HTTP is not necessarily appropriate.
-
JG: We call it HTTP-NG because we want to drive it from a web pov; HTTP is
a misnomer for this.
-
HN: We are asking, "how has this system evolved?", and want to make such evolution
more possible.
-
Firewalls: you are exposing the operations each application is doing at
the firewall? Perhaps that should be part of the justification.
-
Agreed. The firewall will have to know something about the interface to determine
whether or not to allow the application to connect. In terms of detail, we've
not been working on this dimension too much, but the potential is there.
-
BJ: you should be able to write better firewalls,
-
We also have asynchronous methods - messages - which might form a reasonable
substrate for messaging/notification systems.
-
Some of the comments are slightly out of context: this is advanced work which
isn't guaranteed to be successful, and may be proved not to be. The question should
be "should we allow these people to proceed", not "should we deploy this".
-
I believe you should continue, but call it something else
Should we continue?
-
A large number of people would like to join the mailing list and get involved.
-
How many think this should move forward: many.
-
How many would like to see it move forward, if more of the questions were
answered: a few.
-
Transport AD?: Somewhat risky, potentially big payoff, whole lot of work
involved
-
Whether we should have a working group or not is different to whether we
should have a mailing list; creating a WG too early can slow things down.
-
It's a judgement call, but we've been working on it for a year.
-
We are here because we want the community to be involved, rather than "deep
dark W3C plot n".
-
Larry Masinter: think the work's gone on long enough, and it's time for this
to become a WG, especially to gather transport area people; also it has every
right to be called HTTP-NG - particularly since there should be a centre
of locus within IETF
-
Is a WG about standardisation or collecting a community?
-
Patrik: The IETF does not do rubber-stamping; have we had enough experience to
know this is approximately the right way to go? Do we have a focussed
enough scope? Will we be able to produce documents? Larry seems to be saying
we are.
Paul Bennett,
$Id: HTTP-NG-BOF-Minutes.html,v 1.8 1998/11/13 21:06:53 frystyk Exp $