
Deploying a WebSocket Application on Beanstalk

So, you are about to build a new app and you need at least part of it to update in real time. We live in a modern world, so it’s pretty common for at least part of an app to update without requiring user interaction or a page (re)load.

Let’s assume for the sake of this article that you have already investigated polling, long-polling and WebSocket, and you’ve decided to use WebSocket. WebSocket is a communication protocol that provides full-duplex communication over a single TCP connection, which lets us easily receive live updates in either direction. Common use cases include:

  • Multiplayer games
  • Messaging platforms
  • Stock market tickers
  • Sports scores

[By the way, we’re not saying WebSocket is always the best choice. There are plenty of great resources comparing these approaches, so we won’t dig into that part here.]
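For reference, the browser side of a raw WebSocket connection takes only a few lines. This is a minimal sketch; wss://example.com/updates and the message contents are placeholders, not part of any app described here:

const socket = new WebSocket('wss://example.com/updates');

// Fires once the HTTP Upgrade handshake has completed
socket.addEventListener('open', () => socket.send('hello'));

// Messages can arrive at any time, pushed by the server
socket.addEventListener('message', (event) => console.log(event.data));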

Let’s say you’ve also decided to deploy your Socket.IO application on AWS Elastic Beanstalk. There are common challenges and questions that come up when deploying to Beanstalk in this scenario, which we’ll try to address in this article.
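The snippets in the rest of this article assume a plain Node.js Socket.IO server along these lines. This is a minimal sketch; the port fallback and event name are arbitrary placeholders:

const http = require('http');
const server = http.createServer();
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // Push an event to the newly connected client
  socket.emit('welcome', { message: 'connected' });
});

// Beanstalk's Node.js platform passes the port in via the PORT environment variable
server.listen(process.env.PORT || 8080);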

First, when deploying an app that uses WebSockets, it’s good to keep the following things in mind:

  • use an Application or Network Load Balancer
  • pass events between nodes
  • enable sticky sessions on the Load Balancer (maybe)
  • set the Upgrade and Connection headers in Nginx
  • be mindful of potential timeouts (ping/pong)

The Classic Load Balancer does not support WebSockets, so you have to make sure you are using either an Application Load Balancer or a Network Load Balancer; most of the time you’ll end up with an Application Load Balancer. That is, if you are going to use a load balancer at all: things will work in a single-instance environment as well.
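On Beanstalk, the load balancer type is normally picked when the environment is created, but it can also be set through .ebextensions. A sketch, assuming the standard aws:elasticbeanstalk:environment namespace:

option_settings:
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application  # or "network"; "classic" won't work for WebSockets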

If you have multiple instances running in a load-balanced environment and want to broadcast messages to everyone, or even to groups, you’ll need some way of passing messages between instances. With Socket.IO, the interface in charge of routing messages is called the Adapter. The most commonly used implementation is built on top of Redis (socket.io-redis), but you can write your own by inheriting from socket.io-adapter. In this case, Redis makes sure events are passed between all nodes.
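Wiring up the Redis adapter is a one-liner on top of the server sketch above. Assumptions here: the socket.io-redis package is installed and the Redis endpoint comes in via a REDIS_HOST environment variable, which is just a placeholder for your own ElastiCache or Redis host:

const redisAdapter = require('socket.io-redis');

// Route broadcasts through Redis pub/sub so every instance sees every event
io.adapter(redisAdapter({ host: process.env.REDIS_HOST, port: 6379 }));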

A "sticky session" means a load balancer will make sure to route all requests from the same client to the same server instance.

You might be wondering if this is required for WebSockets to work. If you are only using the WebSocket protocol, you don’t need it. WebSocket is sticky by nature: once the connection is established (the server accepts the initial HTTP Upgrade request), the same TCP connection keeps carrying traffic between the client and that particular server instance. The load balancer becomes just a network device in between and no longer makes per-request routing decisions.

However, if you are using one of the popular libraries that offer long polling as a fallback for WebSocket, like Socket.IO in this case, you do need sticky sessions enabled. Socket.IO maintains the state of the connection in memory, so all requests from a given client have to land on the same server. Even when using Redis as mentioned above, we still need sticky sessions enabled because some context and messages are buffered on the instance while the client is long-polling.
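On Beanstalk with an Application Load Balancer, stickiness can be turned on per process via .ebextensions. A sketch, assuming the default process and the aws:elasticbeanstalk:environment:process:default namespace; the cookie duration is an arbitrary placeholder:

option_settings:
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: true
    StickinessLBCookieDuration: 86400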

Additionally, Nginx is used by default as a reverse proxy on Beanstalk, and it needs to be configured to allow a tunnel to be set up between the client and the backend server. To do this, the `Upgrade` and `Connection` headers must be set explicitly, which can be done with an .ebextensions file as follows:

container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
              proxy_set_header Upgrade $http_upgrade;\
              proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
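The command above rewrites the staged Nginx config before it is applied, so after deployment the proxy configuration should contain the standard WebSocket upgrade headers:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";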

That is it! Your environment should now be ready to use WebSockets, or better yet Socket.IO with its fallback to long polling if you choose to do so.

Check your timeouts

One more thing to keep in mind is WebSocket ping/pong messages. These are used to tell the other side that everything is still OK and to keep the connection open. Most popular libraries implement them out of the box, but mismatched timeouts here can be the reason connections get dropped.

Elastic Load Balancer and Nginx both have default timeouts of 60 seconds, and Socket.IO has pingInterval set to 25 seconds by default, so in this case the defaults should work. It’s still good to keep in mind, though, as you might be using another implementation that doesn’t send pings out of the box.
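If you ever need to tune this, Socket.IO exposes the interval as a server option. A sketch; pingInterval mirrors the default mentioned above, and pingTimeout is just an illustrative value:

const io = require('socket.io')(server, {
  pingInterval: 25000, // how often the server sends a ping, in ms
  pingTimeout: 60000   // how long to wait for a pong before closing the connection, in ms
});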

That’s it! Your deployment should now work like a charm.