1. Taking a Quantum Leap with HTML5 WebSocket: Taking bi-directional communication on the web to the next level. Comet Never More! (HTML5 WebSockets in Theory and Practice) Shahriar Hyder, Kaz Software Ltd. WebSockets == "TCP for the Web"
7. Polling, Long-Polling, Streaming… Comet = Headache 2.0. Use Comet to spell COMplExiTy… Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
8. AJAX: Polling Attempting to simulate bi-directional communication with AJAX requires polling schemes, which blindly check for updates regardless of state changes in the app. The result is poor resource utilization on both the client and the server, since CPU cycles and memory are needlessly spent detecting updates on the server too early or too late. Consequently, depending on the rate at which events are published on the server, traditional AJAX apps must constantly strike a balance between shorter and longer polling intervals in an effort to improve the accuracy of individual requests.
9. AJAX: Polling High polling frequencies result in increased network traffic and server demands, while low polling frequencies result in missed updates and the delivery of stale information. In either case, some added latency is incurred. In low-message-rate situations, many connections are opened and closed needlessly. There are two types of polling: short polling and long polling.
10. Short polling Short polling is implemented by making a request to the web server every few seconds or so to see if data has changed. If it has, the web server responds with the new data; otherwise, it responds with a blank message. The drawback to this technique is both a surplus of server requests and CPU overhead on the web server, which must constantly check whether the data has been updated.
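As a sketch, the short-polling loop described above might look like this in JavaScript. `fetchUpdate` is a placeholder for whatever request function the app actually uses (XHR or fetch), not a real API:

```javascript
// Short-polling sketch: blindly ask the server every few seconds,
// regardless of whether anything changed on it. `fetchUpdate` is a
// placeholder for an XHR/fetch call; a blank response means "no change".
function startShortPolling(fetchUpdate, onData, intervalMs) {
  const timer = setInterval(async () => {
    const data = await fetchUpdate();  // blind check, irrespective of state
    if (data) onData(data);            // blank/empty response is discarded
  }, intervalMs);
  return () => clearInterval(timer);   // caller can stop polling
}
```

Every tick costs a full HTTP request/response pair, which is exactly the overhead the slide describes.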
11. Long Polling Also known as asynchronous polling. The browser sends a request to the server, and the server keeps the request open for a set period. If a notification is received within that period, a response containing the message is sent to the client; if not, the server sends a response to terminate the open request. HTTP headers, present in both long-polling and polling, often account for most of the network traffic. In high-message-rate situations, long-polling results in a continuous loop of immediate polls. The drawback to this technique, like short polling, is that the web server still has to check every few seconds or so whether the data has changed, creating CPU overhead.
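The long-polling loop can be sketched with an injectable `requestUpdate`, a stand-in for an HTTP request the server holds open until it has data (or until it times out and answers with null):

```javascript
// Long-polling sketch: each request is held open by the server until a
// message is ready (or a timeout elapses), then immediately re-issued.
// `requestUpdate` is a placeholder for the actual HTTP call; it resolves
// with a message, or with null when the server timed the request out.
async function longPoll(requestUpdate, onMessage, isRunning) {
  while (isRunning()) {
    const msg = await requestUpdate(); // server keeps this open
    if (msg !== null) onMessage(msg);  // null = timeout, just poll again
  }
}
```

Note how each delivered message immediately triggers the next request, which is the "continuous loop of immediate polls" the slide mentions for high message rates.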
12. Streaming With streaming, the browser sends a complete request, but the server sends and maintains an open response that is continuously updated and kept open indefinitely (or for a set period of time). The response is then updated whenever a message is ready to be sent, but the server never signals to complete the response, thus keeping the connection open to deliver future messages. However, since streaming is still encapsulated in HTTP, intervening firewalls and proxy servers may choose to buffer the response, increasing the latency of the message delivery. Therefore, many streaming Comet solutions fall back to long-polling in case a buffering proxy server is detected. Alternatively, TLS (SSL) connections can be used to shield the response from being buffered, but in that case the setup and tear down of each connection taxes the available server resources more heavily.
13. Streaming More efficient, but sometimes problematic. Possible complications: o Proxies and firewalls o Response builds up and must be flushed periodically o Cross-domain issues related to browser connection limits
14. Streaming One benefit of streaming is reduced network traffic, which is the result of sending packets that only contain data rather than packets that contain both data and HTTP headers. The downside of streaming is that it is still encapsulated in HTTP, so intervening HTTP proxies may choose to buffer the response, increasing the latency of the message delivery.
15. Callback-Polling or JSONP-Polling Long-polling, but works cross-domain. Relies on the JSONP technique for establishing trust: <script> blocks instead of XHR.
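The trick can be sketched as follows. `loadScript` is injectable here so the flow is visible without a DOM, but in a browser it would append a <script> tag whose src points at the cross-domain service (the URL below is illustrative):

```javascript
// JSONP-polling sketch: the response arrives as a script that calls a
// globally visible callback, sidestepping the XHR same-origin policy.
// The endpoint URL is made up; real services document their own.
function jsonpPoll(loadScript, onData) {
  const callbackName = "cb_" + Date.now();
  globalThis[callbackName] = (data) => {  // the script body invokes this
    delete globalThis[callbackName];      // one-shot: clean up after use
    onData(data);
  };
  loadScript("https://example.com/poll?callback=" + callbackName);
}
```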
16. Comet Comet is known by several other names, including Ajax Push, Reverse Ajax, Two-way-web, HTTP Streaming, and HTTP server push among others. The Comet model for communications was a departure from that found in the classical web model, in which events are client initiated. The most obvious benefit of Comet's model is the server's ability to send information to the browser without prompting from a client. However, this "push" style of communications has limited uses.
17. Comet: Two Connections, Bi-directional Comet attempted to deliver bi-directional communications by maintaining a persistent connection and a long-lived HTTP request on which server-side events could be sent to the browser, and making upstream requests to the server on a newly opened connection. The maintenance of these two connections introduces significant overhead in terms of resource consumption, which translates into added latency for sites under peak load. In addition, Comet solutions that employ a long-polling technique send undue HTTP request/response headers. Each time an event is sent by the server, the server severs its connection with the client browser, forcing the browser to reestablish its connection with the server. This action causes another client request and server response to be sent across the wire. Neither HTTP streaming nor Web Socket incur this network overhead.
18. Comet: Two Connections, Bi-directional Most Comet implementations rely on the Bayeux protocol. The use of this protocol requires messages from the origin services to be transformed from the messages' initial format to conform to the Bayeux protocol. This transformation introduces unnecessary complexity in your system, requiring developers to manipulate one message format on the server (e.g., JMS, IMAP, XMPP, etc.) and a second message format (e.g., Bayeux and JSON) on the client. Moreover, the transformation code used to bridge your origin protocol to Bayeux introduces an unnecessary performance overhead into your system by forcing a message to be interpreted and processed prior to being sent over the wire. With Web Sockets, the message sent by the server is the same message delivered to the browser, eliminating the complexity and performance concerns introduced by transformation code.
19. Solutions or Hacks? But if you think about it, these techniques are just hacks, tricks used to simulate a technology that doesn't exist: server-sent events. If the server could actually start the communication, none of these ugly tricks would be needed.
25. Yawn… so why do we need WebSockets? Source: http://www.slideshare.net/goberoi/intro-to-websockets
26. 2 good reasons Source: http://www.slideshare.net/goberoi/intro-to-websockets
27. Desire for real-time Want low-latency, two-way communication for: o Multiplayer online games (Pong) o Collaboration (live wikis) o Dashboards (financial apps) o Tracking (watch user actions) o Presence (chat with customer support) o Live sports tickers o Updating social streams / social networking (Twitter feed) o Smart power grid o More! Source: http://www.slideshare.net/goberoi/intro-to-websockets
28. HTTP doesn't deliver People hack around this (see "Comet"): polling, long-polling, streaming via hidden iframe. BUT these are slow, complex, and bulky. Or they rely on plugins: Flash, Silverlight, Java applets. BUT these don't work everywhere (phones). Source: http://www.slideshare.net/goberoi/intro-to-websockets
29. Damn, this is hairy: Source: http://www.slideshare.net/ismasan/websockets-and-ruby-eventmachine
30. Vs. HTTP hacks, WebSockets provide: o Lower latency: no new TCP connection for each HTTP request o Lower overhead: 2 bytes per message vs. lines of HTTP header junk o Less traffic: clients don't need to poll; messages are only sent when there is data Source: http://www.slideshare.net/goberoi/intro-to-websockets
31. What are WebSockets? Source: http://www.slideshare.net/goberoi/intro-to-websockets
32. Definition The WebSocket specification, developed as part of the HTML5 initiative, introduced the WebSocket JavaScript interface, which defines a full-duplex, bi-directional communication channel over a single TCP socket, over which messages can be sent between client and server. The WebSocket standard simplifies much of the complexity around bi-directional web communication and connection management. It allows web developers to establish real-time, two-way communication with a server using simple JavaScript, without resorting to Flash, Java, AJAX long-polling, Comet, forever-iframe, or other current workarounds.
44. Once upgraded, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode. Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
50. Contains UTF-8 data in between. Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
52. There is no defined maximum size o If the user agent has content that is too large to be handled, it must fail the WebSocket connection o JavaScript does not allow >4GB of data, so that is a practical maximum Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
54. No latency involved in establishing new TCP connections for each HTTP message
58. Overheard… "Reducing kilobytes of data to 2 bytes… and reducing latency from 150ms to 50ms is far more than marginal. In fact, these two factors alone are enough to make WebSocket seriously interesting to Google." - Ian Hickson (Google, HTML5 spec lead) Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
60. JavaScript
// Checking for browser support
if (window.WebSocket) {
  document.getElementById("support").innerHTML = "HTML5 WebSocket is supported";
} else {
  document.getElementById("support").innerHTML = "HTML5 WebSocket is not supported";
}
Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
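Once support is confirmed, the rest of the API is similarly small. A minimal usage sketch (the echo-server URL and the "hello" payload are illustrative):

```javascript
// Minimal WebSocket client sketch: open a connection, send a message as
// soon as the socket opens, and hand incoming messages to a callback.
function connect(url, onMessage) {
  const ws = new WebSocket(url);
  ws.onopen = () => ws.send("hello");            // full-duplex: send any time
  ws.onmessage = (event) => onMessage(event.data);
  ws.onclose = () => console.log("connection closed");
  return ws;
}
```

Unlike the polling sketches earlier, there is no request/response cycle here: either side can write to the socket whenever it has data.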
88. Proxy server traversal decision tree Source: http://www.slideshare.net/peterlubbers/html5-web-workersunleashed
89. Summary Low latency is the mother of interactivity, and nowhere is this more apparent than on the Web. Every slip of a millisecond equates to a slower end-user experience, which in turn translates into elevated risk that a user's eyes will avert elsewhere. Both AJAX and Comet attempt to obscure latency problems, and they certainly address user-perceived latency. However, WebSocket removes the need to obscure such problems and introduces a real solution, one that does not play tricks on the perception of end users, but delivers content in real time with real results. HTML5 WebSocket provides an enormous step forward in the scalability of the real-time web. As you have seen here, HTML5 WebSockets can provide a 500:1, or, depending on the size of the HTTP headers, even a 1000:1 reduction in unnecessary HTTP header traffic, and a 3:1 reduction in latency. That is not just an incremental improvement; that is a revolutionary jump, a quantum leap.
90. Concluding statement If HTTP did not restrict your creativity, what Web application would YOU create?
Editor's Notes
WebSocket is text-only
HTTP used for handshake only. Operates over a single socket. Traverses firewalls and routers seamlessly. Allows authorized cross-site communication. Cookie-based authentication. Works with existing HTTP load balancers. Navigates proxies using HTTP CONNECT, the same technique as HTTPS, but without the encryption.
Text frames have the high-order bit of the frame-type byte not set; binary (length-prefixed) frames have the high-order bit set. There is no defined maximum size. However, the protocol allows either side (browser or server) to terminate the connection if it cannot receive a large frame. So far, the definition of "too large" is left up to the implementation. If the user agent is faced with content that is too large to be handled appropriately, then it must fail the WebSocket connection. There is probably a practical maximum, but we have not discovered it as far as I know. You can't have four gigabytes of data in JavaScript, so the practical max is <4GB for the JavaScript implementation.
150 ms (TCP round trip to set up the connection, plus a packet for the message) vs. 50 ms (just the packet for the message)