Monday, December 1, 2014

Firefox gecko API for HTTP/2 Push

HTTP/2 provides a mechanism for a server to push both requests and responses to connected clients. Up to this point we've used that as a browser cache seeding mechanism. That's pretty neat: it gives you the performance benefits of inlining with better cache granularity and, more importantly, improved priority handling - and it does it all transparently.

However, as part of gecko 36 we added a new gecko (i.e. internal firefox and add-on) API called nsIHttpPushListener that allows direct consumption of pushes without waiting for a cache hit. This opens up programming models other than browsing.

A single HTTP/2 stream, likely formed as a long-lasting transaction from an XHR, can receive multiple pushed events correlated to it without having to form individual hanging polls for each event. Each event is both an HTTP request and an HTTP response and is as arbitrarily expressive as those things can be.
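
As a rough sketch of the consumption model - and only a sketch, since the channel and myStreamListener variables are assumed to exist and the wiring through notificationCallbacks follows the usual Gecko pattern rather than a verified recipe - a chrome-privileged consumer implements onPush() and receives a new channel for each pushed event:

const { interfaces: Ci, results: Cr } = Components;

var pushConsumer = {
  // Called once per event pushed on the associated stream.
  onPush: function (associatedChannel, pushedChannel) {
    // pushedChannel carries the pushed request/response pair;
    // consume its body with an ordinary nsIStreamListener.
    pushedChannel.asyncOpen(myStreamListener, null);
  },

  // nsIInterfaceRequestor - hand ourselves out as the push listener.
  getInterface: function (iid) {
    if (iid.equals(Ci.nsIHttpPushListener))
      return this;
    throw Cr.NS_ERROR_NO_INTERFACE;
  }
};

// Attach to the long-lived channel (e.g. the one behind the XHR).
channel.notificationCallbacks = pushConsumer;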

It seems likely that any implementation of a new Web-based push notification protocol would be built around HTTP/2 push, and this interface would provide the basis for subscribing to and consuming those events.

nsIHttpPushListener is only implemented for HTTP/2. Spdy has a compatible feature set, but we've begun transitioning to the open standard and will likely not evolve the feature set of spdy any further at this point.

There is no WebIDL DOM access to the feature set yet - that is something that should be standardized across browsers before being made available to web content.

Friday, November 21, 2014

Proxy Connections over TLS - Firefox 33

There have been a bunch of interesting developments over the past few months in Mozilla Platform Networking that will be news to some folks. I've been remiss in not noting them here. I'll start with the proxying over TLS feature. It landed as part of Firefox 33, which is the current release.

This feature is from bug 378637 and is sometimes known as HTTPS proxying. I find that naming a bit ambiguous - the feature is about connecting to your proxy server over HTTPS, but it supports proxying for both http:// and https:// resources (as well as ftp://, ws://, and wss:// for that matter). https:// transactions are tunneled via end-to-end TLS through the proxy using the CONNECT method, in addition to the connection to the proxy being made over a separate TLS session. For https:// and wss:// that means you actually have end-to-end TLS wrapped inside a second TLS connection between the client and the proxy.
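
To make the layering concrete, here is a rough sketch of what flows for a single https:// resource (hostnames invented, and the CONNECT shown in its HTTP/1 form for readability; over spdy/h2 the same request travels as a stream):

[TLS session: client <-> proxy.mydomain.net:443]
  CONNECT origin.example.com:443 HTTP/1.1        <- ask the proxy for a tunnel
  Host: origin.example.com:443

  [TLS session: client <-> origin.example.com]   <- end to end, inside the tunnel
    GET /resource HTTP/1.1
    Host: origin.example.com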

There are some obvious and non-obvious advantages here - but proxying over TLS is strictly better than traditional plaintext proxying. One obvious reason is that it provides authentication of your proxy choice - if you have defined a proxy then you're placing an extreme amount of trust in that intermediary. It's nice to know via TLS authentication that you're really talking to the right device.

The communication between you and the proxy is also kept confidential, which helps your privacy with respect to observers of the link between client and proxy, though this is not end to end if you're not accessing an https:// resource. Proxying over TLS also keeps any proxy-specific credentials strictly confidential. There is an advantage even when accessing https:// resources through a proxy tunnel - encrypting the client-to-proxy hop conceals some information (at least for that hop) that https:// normally leaks, such as the hostname through SNI and the server IP address.

Somewhat less obviously, HTTPS proxying is a prerequisite to proxying via SPDY or HTTP/2. These multiplexed protocols are extremely well suited to connecting to a proxy because a large fraction (often 100%) of a client's transactions are funneled through the same proxy, so only one TCP session is required when using a prioritized multiplexing protocol. When using HTTP/1, a large number of connections are required to avoid head-of-line blocking, and it is difficult to meaningfully manage them to reflect prioritization. When connecting to remote proxies (i.e. those with high latency, such as those in the cloud) this becomes an even more important advantage, as the handshakes that are avoided are especially slow in that environment.

This multiplexing can really warp the old noodle to think about after a while - especially if you have multiple spdy/h2 sessions tunneled inside a spdy/h2 connection to the proxy. That can result in the top-level session multiplexing several streams carrying http:// transactions served by the proxy, as well as CONNECT streams to multiple origins that each contain their own end-to-end spdy session carrying multiple https:// transactions.

To utilize HTTPS proxying, just return the HTTPS proxy type from your FindProxyForURL() PAC function (instead of the traditional HTTP type). This is compatible with Google Chrome, which has a similar feature.

function FindProxyForURL(url, host) {
  // The HTTPS keyword tells the browser to reach this proxy over TLS.
  if (url.substring(0, 7) == "http://") {
    return "HTTPS proxy.mydomain.net:443;";
  }
  // Everything else goes direct in this example.
  return "DIRECT;";
}
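
PAC's normal failover semantics should still apply, so you can list a second choice after the TLS proxy - whether a DIRECT fallback is acceptable when the proxy is unreachable is, of course, a policy call:

function FindProxyForURL(url, host) {
  // Prefer the TLS proxy; fall back to a direct connection
  // only if the proxy cannot be reached.
  return "HTTPS proxy.mydomain.net:443; DIRECT;";
}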


Squid supports HTTP/1 HTTPS proxying. Spdy proxying can be done via Ilya's node.js-based spdy-proxy. nghttp can be used for building HTTP/2 proxying solutions (H2 is not yet enabled by default on firefox release channels - see the about:config preferences network.http.spdy.enabled.http2 and network.http.spdy.enabled.http2draft to enable some version of it early). There are no doubt other proxies with appropriate support too.

If you need to add a TOFU (trust on first use) certificate exception for your proxy, it cannot be done in proxy mode. Disable proxying, connect to the proxy host and port directly from the location bar, and add the exception. Then re-enable proxying and the certificate exception will be honored. Obviously, your authentication guarantee will be better if you use a normal WebPKI-validated certificate.

Tuesday, March 4, 2014

On the Application of STRINT to HTTP/2

I participated for two days last week in the joint W3C/IETF (IAB) workshop on Strengthening the Internet against Pervasive Monitoring (aka STRINT). Now that the IETF has declared pervasive monitoring of the Internet to be a technical attack, the goal of the workshop was to establish the next steps to take in reaction to the problem. There were ~100 members of the Internet engineering and policy communities participating - HTTP/2 standardization is an important test case to see if we're serious about following through.

I'm pleased that we were able to come to some rough conclusions and actions. First, a word of caution: there is no official report yet, I'm certainly not the workshop secretary, and this post only reflects transport security, which was a subset of the areas discussed - but I still promise I'm being faithful in reporting the events as I experienced them.

Internet protocols need to make better use of communications security and more encryption - even imperfect, unauthenticated crypto is better than trivially snoopable cleartext. It isn't perfect, but it raises the bar for the attacker. New protocol designs should use strongly authenticated mechanisms, falling back to weaker measures only as absolutely necessary, and updates to older protocols should be expected to add encryption, potentially with disabling switches if compatibility strictly requires it. A logical outcome of that discussion is the addition of these properties (probably by reference, not directly through replacement) to BCP 72 - which provides guidance for writing RFC security considerations.

At a bare minimum, I am acutely concerned with making sure HTTP/2 brings more encryption to the Web. There are certainly many exposures beyond the transport (data storage, data aggregation, federated services, etc.), but in 2014 transport-level encryption is a well understood and easily achievable technique that should be as ubiquitously available as clean water and public infrastructure. In the face of known attacks it is a best engineering practice, and we shouldn't accept less while still demanding stronger privacy protections too. When you step back from the details and ask yourself whether it is really reasonable that a human's interaction with the Web is observable to many silent and undetectable observers, the current situation really seems absurd.

The immediate offered solution space is complicated and incomplete. Potential mitigations are fraught with tradeoffs and unintended consequences. The focus here is on what happens to http:// schemed traffic; https:// is comparatively well taken care of. The common solution offered in this space carries http:// over an unauthenticated TLS channel for HTTP/2. The result is a very simple plug-and-play TLS-capable HTTP server that is not dependent on the PKI. This provides protection against passive eavesdroppers, but not against active attacks. Still, the cost of attacking is raised in terms of CPU, monetary cost, political implications, and risk of being discovered. In my opinion, that's a win. Encryption simply becomes the new equivalent of cleartext - it doesn't promote http:// to https://, it does not produce a lock icon, and it does not grant you any new guarantees that cleartext http:// would not have. I support that approach.

The IETF HTTPbis working group will test this commitment to encryption on Wednesday at the London #IETF89 meeting, when http:// schemed URIs over TLS are on the agenda (again). In the past, it has not been able to garner consensus. If the group is unable to form consensus around a stronger privacy approach than was taken with HTTP/1.1's use of cleartext, I would hope the IESG would block the proposed RFC during last call for having insufficiently addressed the security implications of HTTP/2 on the Internet as we now know it.

#ietf89 #strint