Websockets/Redis Setup in AWS Elastic Beanstalk


For context, please read our post on How to Create Scalable Real-Time Applications in AWS first.

This document will serve as instructions for setting up your auto-scaling Elastic Beanstalk environment to use websockets with Express, Socket.IO and Redis. If you are using something other than Socket.IO for your websocket implementation, you’ll still need to follow these steps to get your environment set up.

Please note: This is the simplest setup that you can have, using the bare minimum for ElastiCache, Elastic Beanstalk, and the websocket implementation. It is meant to be a foundation that you can build upon, not a production-ready service.

Create Elastic Beanstalk Environment


Before we create our Elastic Beanstalk environment, write a basic version of your server with a simple implementation of websockets. For example, here’s what a bare-bones Node server would look like.
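The post shows that server as a screenshot; here is a minimal sketch of what it might look like, assuming Express and Socket.IO (the port, route, and event names are illustrative):

```js
// app.js -- minimal Express + Socket.IO server (illustrative sketch)
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

// Simple route so the load balancer health check gets a 200
app.get('/', (req, res) => res.send('ok'));

io.on('connection', (socket) => {
  console.log(`client connected: ${socket.id}`);

  // Rebroadcast any "message" event to every connected client
  socket.on('message', (data) => io.emit('message', data));

  socket.on('disconnect', () => console.log(`client disconnected: ${socket.id}`));
});

// The Elastic Beanstalk Node.js platform proxies to port 8080 by default
const PORT = process.env.PORT || 8080;
server.listen(PORT, () => console.log(`listening on ${PORT}`));
```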

Now that you have your server written up, you’ll also need to create a few other files to ensure that your environment is set up correctly.

We’ve only been able to make the following work by having these configs set up when creating the environment. We were unable to change these settings later and actually have them applied.

That being said, other users seem to have had success using a .platform directory to issue an nginx restart command to apply settings post-creation.

.ebextensions/websocket.config

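The config itself appears as a screenshot in the original post, so the exact contents may differ from what is shown here. A commonly used version of this file, assuming the older (pre-Amazon Linux 2) Node.js platform, rewrites the instance's nginx proxy config so the Upgrade and Connection headers are forwarded to your app; treat the following as a sketch rather than the post's exact file:

```yaml
# .ebextensions/websocket.config -- sketch; assumes the pre-Amazon Linux 2 Node.js platform
container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
        proxy_set_header Upgrade $http_upgrade;\
        proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
```

On Amazon Linux 2 platforms, the equivalent nginx changes usually go in a .platform/nginx/conf.d/ file instead, as mentioned above.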

Procfile


(In the Procfile, make sure the command after "web:" is whatever command you use to run your server in production, as in the example below.)
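The Procfile is a single line; a typical example, assuming your production start command is npm start, would be:

```
web: npm start
```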

Once you have all of the necessary files created in your project, compress them into a zip file. Then follow these steps to deploy your code.

  1. Log into your AWS Management Console
  2. Go to the Elastic Beanstalk Console
  3. Click on Environments in the navigation menu on the left side of the page
  4. Click the Create a new environment button on the table that appears
  5. Choose a web server environment and click Select
  6. Give your application a name; the environment name will be populated automatically
  7. Choose whatever platform your server should be deployed on
  8. For Application Code, select Upload your code and then supply the zip file of your project
  9. Click the Configure more options button if you would like to access advanced settings about the project
  10. When finished, click the Create environment button. Deploying your new server will take around 5-10 minutes. (If you prefer the command line, the same steps are sketched with the EB CLI below.)
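For those who prefer the command line, the EB CLI can handle the zip-and-upload steps for you. A rough sketch (the application name, environment name, and region are placeholders, and this assumes the EB CLI is installed and configured with credentials):

```bash
# Run from the project root; the EB CLI packages and uploads the code itself
eb init my-websocket-app --platform node.js --region us-east-1
eb create my-websocket-env

# After subsequent code changes
eb deploy
```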

Create Redis Instance with Amazon ElastiCache

If your service has no need for auto-scaling or multiple servers, and it can be a single node, then you can skip this step.

Websockets work by establishing and maintaining a connection with a given server on the backend, allowing duplex communication between the client and server. In a horizontally scaled environment, or one with multiple servers, this becomes a complicated problem. What if two clients that need to share events are connected to two different servers? How does one server know what happened on the other server and communicate it to their client?

This is where Redis comes in. Redis is an in-memory data structure store, but more importantly it gives us a mechanism for implementing a publish-subscribe pattern. Using it, all of our servers can publish and subscribe to events through Redis, so every server receives the same events and can react accordingly.
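To make the pattern concrete, here is a small pub/sub sketch using the node-redis (v4) client; the channel name and payload are made up, and in practice the Socket.IO adapter shown later handles this plumbing for you:

```js
// pubsub.js -- illustrative Redis publish/subscribe with the "redis" v4 client
const { createClient } = require('redis');

async function main() {
  const publisher = createClient({ url: 'redis://localhost:6379' });
  const subscriber = publisher.duplicate(); // subscriptions need their own connection
  await Promise.all([publisher.connect(), subscriber.connect()]);

  // Every server subscribes to the same channel...
  await subscriber.subscribe('chat-events', (message) => {
    console.log('received from another server:', message);
  });

  // ...and publishes the events its own clients generate
  await publisher.publish('chat-events', JSON.stringify({ user: 'a', text: 'hi' }));
}

main().catch(console.error);
```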

1. Create a Security Group for your cache

To set up a Redis cache in AWS, we’ll use a service called Amazon ElastiCache. Before we can do that though, we need to create a Security Group that we’ll apply to the cache when we create it.

  1. Go to the EC2 Dashboard
  2. Select Security Groups from the Resources table in the dashboard or from the navigation menu on the left side of the page
  3. Click the Create security group button on the table that appears
  4. Give it a name related to your cache
  5. Add an Inbound Rule like the one shown in the original post's screenshot (typically a Custom TCP rule on the Redis port, 6379). For the Source, search for the security group containing your Elastic Beanstalk EC2 instances; this will allow your servers to talk to your Redis cache once it exists.
     1. To find which security group to search for, it may help to go back to the Security Groups table. Look for one whose Name matches your EB environment and whose Security group name contains "AWSEBSecurityGroup". Copy that security group's ID. (The same rule can also be created with the AWS CLI, as sketched below.)
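For reference, here is the CLI version once you have both security group IDs; the IDs below are placeholders, and the rule assumes Redis is on its default port, 6379:

```bash
# Allow the Elastic Beanstalk instances' security group to reach Redis on 6379
aws ec2 authorize-security-group-ingress \
  --group-id <cache-security-group-id> \
  --protocol tcp \
  --port 6379 \
  --source-group <eb-instances-security-group-id>
```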

2. Create your Redis cache

  1. Go to the ElastiCache Dashboard
  2. Click on Redis in the navigation panel on the left side of the page
  3. Click the Create button on the table that appears
  4. Leave the Cluster Engine as Redis and the Location as Amazon Cloud
  5. Add a name and optionally a description
  6. For a non-production service, I would change the Node type to be something smaller and uncheck the Multi-AZ box.
  7. For a production service, I recommend reviewing the 5 workload characteristics to consider when sizing Redis clusters. Read more here.
  8. Change the security group to your cache security group that you created in the previous section
  9. Set everything else to your preference and click the Create button in the bottom right corner of the page
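The console steps above can also be approximated with the AWS CLI; the following is a sketch for a single, small, non-production node, with the cluster name, node type, and security group ID as placeholders:

```bash
# Minimal single-node Redis cluster (non-production sizing)
aws elasticache create-cache-cluster \
  --cache-cluster-id my-websocket-cache \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1 \
  --security-group-ids <cache-security-group-id>
# In a VPC you will usually also pass --cache-subnet-group-name
```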

Once your cache is created, go back to the Redis table and inspect your cache. There you can find connection information to include in your server, like the Primary Endpoint field.

3. Connect your server to your Redis cache

This will vary depending on your project, but more than likely you'll use environment variables to take in the Redis connection details and then implement the Redis integration yourself. If you're using Socket.IO, this is pretty straightforward, since they've already created a Redis adapter.

All I had to do was add the following to my server setup:

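The change itself is shown as a screenshot in the post; here is a sketch of what it typically looks like with the @socket.io/redis-adapter package (the package choice and variable names are assumptions, and io is the Socket.IO server instance from the earlier sketch):

```js
// Attach the Redis adapter so events fan out across every server
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const pubClient = createClient({
  url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
});
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
  console.log('Socket.IO Redis adapter attached');
});
```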

And then add the REDIS_HOST and REDIS_PORT environment variables to my Elastic Beanstalk environment’s configuration. Again, you can find these values by inspecting your cache in the ElastiCache console and looking for the Primary Endpoint property.
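Those environment properties live under Configuration → Software in the EB console, or they can be set with the EB CLI; the values below are placeholders:

```bash
eb setenv REDIS_HOST=<your-primary-endpoint-hostname> REDIS_PORT=6379
```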

Configure Load Balancer for Sticky Sessions

If your service has no need for auto-scaling or multiple servers, and it can be a single node, then you can skip this step.

The last step that we need is to configure the load balancer to enable sticky sessions. This is important in multi-server deployments so that websocket connections made with a given server will continue to engage with that server for the remainder of the connection.

1. Configure Target Group in Elastic Beanstalk

  1. Go to the Elastic Beanstalk Console
  2. Click on Environments in the navigation menu on the left side of the page
  3. Select the environment containing your websocket application
  4. Select Configuration in the navigation bar on the left side of the page and scroll down to Load Balancer. Click the Edit button.
  5. Check the box next to the default process in the Processes section. Click Actions and then Edit.
  6. Under the Sessions section, check the box next to Stickiness Policy Enabled.
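Given the earlier caveat about settings only reliably applying at environment creation, it may be worth expressing the same stickiness setting as an .ebextensions file included in your original zip. Here is a sketch using Elastic Beanstalk's process namespace, assuming an application load balancer and the default process (the duration is just an example):

```yaml
# .ebextensions/stickiness.config -- sketch
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    StickinessEnabled: 'true'
    StickinessLBCookieDuration: '86400'   # seconds
```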

2. Configure Load Balancer Stickiness

  1. Go to the EC2 Dashboard
  2. Click on Load Balancers in the navigation bar on the left side of the page
  3. Find the load balancer related to your Elastic Beanstalk environment and check the box next to it
  4. In the Listeners section, select the main listener that forwards requests to your target group and click Edit
  5. In the main action, check the box next to Enable group-level stickiness
  6. Set the duration for whatever you feel is best for your project
  7. Click the Save Changes button at the bottom of the page

And that’s it! Your service should be completely set up to handle websocket connections and pass along events to anyone that should receive them.


 

Resources

 

How to Create Scalable Real-Time Applications in AWS

With the exponential rise in online communication and collaboration demand, many entrepreneurs and business teams are seeking to develop software applications that have some sort of real-time component.

Those same teams are increasingly moving to the cloud.

You can't discuss the cloud without discussing AWS, the worldwide leader among cloud service providers, and you can't discuss real-time applications that need to scale without considering Websockets (a preferred communication protocol).

But when you consider AWS scaled systems and Websockets together, things get complicated.

We’ll dig in, but first let’s get some context out of the way:  What are some practical uses for real-time applications? What factors should I consider when thinking about the scalability of my application idea? Why host my application in AWS?  What are Websockets and why should I care?

And for the more technical-minded among us:  When should I use Websockets vs. HTTP? How do I get websockets to work in a scaled system in AWS?

Practical uses for Real-Time Applications

If your application or software idea relies on instantaneous communication, visualization, and/or the need to sense, analyze, and act on data as it happens, then you need real-time.

  • Chat Functionality
  • Calendar, Appointment Scheduling
  • Location-based tools
  • Collaboration
  • Real-time Feeds –  likes, shares, notifications, etc.
  • Data Visualization
  • Multiplayer Games
  • User Data – i.e., click stream, error reporting
  • Sports Updates
  • Multimedia Chat
  • Online Education

What factors should I consider when thinking about the scalability of my application idea?

  • Usage
  • Amount of Stored Data
  • Required Speed
  • Anticipated Demand Load Spikes
  • User Experience
  • Developer Hand-off

Why host in AWS? 

While cloud-hosting provider choice depends on the project, for the sake of this conversation we'll focus on AWS. Given AWS's size, power, flexibility, extensive customization options, and support for many third-party integrations, it is often one of the main choices. It's also pretty cost-competitive: you pay for what you use. And, importantly, auto-scaling comes "out-of-the-box" in a way that requires less custom configuration, saving you time in the development phase of your project.

What are Websockets – and why should I care?

As discussed, the demand for real-time information – the millisecond it becomes available – has hit critical mass. If you have to refresh the page to get new information or the connection is slow, it's already too late: your user bails. In layman's terms, websockets are a web-based communication protocol that allows a two-way, interactive communication session between the user's browser or application and the server where the data is stored. Basic communication protocols such as HTTP only allow client-initiated, request-and-response communication.
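For the more technical-minded, here is roughly what that two-way session looks like from the browser using the native WebSocket API; the URL and message shapes are made up for illustration:

```js
// Browser-side sketch: one persistent connection, messages flow both ways at any time
const socket = new WebSocket('wss://example.com/realtime');

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'subscribe', topic: 'table-availability' }));
});

// The server can push updates whenever it wants -- no polling or page refresh
socket.addEventListener('message', (event) => {
  console.log('update from server:', JSON.parse(event.data));
});
```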

 

Example:

You are trying to build an app for restaurant reservations. Your developer uses HTTP, not Websockets.  Both User A and User B are searching availability for a table at 7:00pm on Saturday, and there is just one table left.

Using an http communication protocol the flow would look something like this:

User A –> table available > books the table at 6:00:01 > success > connects to server for confirmation (pause to wait for response from server) > success

User B –> table available > books the table at 6:00:02 > success > connects to server for confirmation (pause to wait for response from server) > failure

Both users experience “success” booking the table because there is a lag between User A booking the table and User B finding out the table isn’t available. User B thought they had secured the reservation, only to find out they actually didn’t after the fact, leading to poor user experience with the app.

Using websockets it would look like this:

User A –> table available > books the table at 6:00:01 > success

User B –> table available > attempts to book the table at 6:00:02 > Sorry, table no longer available

User B finds out instantaneously that the table is no longer available, and can proceed with searching for other available tables versus thinking they have the reservation and waiting to find out they actually don’t. This is a more gratifying user experience.

While this is an overly simplified example, it illustrates how the bi-directional communication that websockets provide lets two users attempting to reserve the same table see true real-time availability.
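In Socket.IO terms, the difference is simply that the server can push the result to everyone the moment the booking happens; the event names and in-memory bookkeeping below are hypothetical:

```js
// Server-side sketch: notify all connected clients the instant a table is booked
// (assumes `io` is the Socket.IO server instance from the setup document above)
const bookedTables = new Set(); // stand-in for real persistence

io.on('connection', (socket) => {
  socket.on('bookTable', ({ tableId }) => {
    if (bookedTables.has(tableId)) {
      socket.emit('bookingFailed', { tableId, reason: 'Table no longer available' });
      return;
    }
    bookedTables.add(tableId);
    socket.emit('bookingConfirmed', { tableId });            // User A sees success
    socket.broadcast.emit('tableUnavailable', { tableId });  // User B sees it instantly
  });
});
```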

 

Determining whether to use WebSockets vs HTTP:

  • Does your app involve multiple users who need to communicate with each other?
  • Does your app need to work with server-side data that’s constantly changing?

If you answered yes to either of these questions, consider using WebSockets.

 

[Diagram: HTTP connection vs. WebSocket connection (image source linked in the original post)]

 

How do you get websockets to work in a scaled system in AWS?

While building a mobile app for a health startup that required real-time messaging and notifications in a multi-server AWS cloud environment, we discovered that, out of the box, the AWS load balancer would not maintain websocket connections from several clients to multiple servers simultaneously. The app would communicate with the load balancer, but the default AWS load balancer wouldn't communicate with the servers via the websocket protocol. Instead of leaving our client hanging and passing the deployment concerns off to a network administrator, we engineered a solution that we feel may help other devops teams running into this issue. View the documentation here.

 

Most Reviewed Software Development Companies in Phoenix

HSTK Recognized as One of the Most Reviewed Software Development Companies in Phoenix


HSTK Haystack is a client-oriented software development company. We provide our partners with the tools and skills they require to build outstanding experiences. Moreover, we help our customers create solutions with utmost efficiency and precision. Today, we’re pleased to report that we’ve been recognized as a leading service provider in 2022. Right now, you can discover us on The Manifest as one of the most reviewed software development companies in Phoenix!

With the modernization of the business space came a demand for technical proficiency in software development. But we believe that what separates a great developer from the rest is the dedication to build unparalleled software as if it were their own. For this reason, we banded together, and HSTK Haystack was established, all to build exemplary digital products alongside our partners.

Most Recent Review

Riders United is an outdoor-oriented startup that partnered with us to build a one-of-a-kind mobile solution. We helped the client conceptualize and plan the digital side to craft a tailored experience.

The client was impressed with the quality of the deliverables and was satisfied with the project management. Here’s what they said about the engagement:

“They’re very helpful — a lot of the technology is new to me, but they make it clear cut. Overall, they’re a great company, and I can’t wait to work with them in the future.”

— Shane Routery, Owner, Riders United

In 2022, The Manifest announced the top B2B companies in Phoenix, and we're proud to be recognized as one of the most reviewed software development companies in the area. We would like to thank all of our partners for being incredibly supportive and giving us such wonderful feedback.

If you’re interested in building a product with us, please get in touch and let us know how we can help!

The Importance of MVP in Mobile App Development


What are MVPs (Minimum Viable Products)?

An MVP is a method of developing software that defines the bare minimum set of functionality (features, behaviors) needed to solve a specific problem, allowing a company to study how users experience the app. It can also be used to validate an idea in the marketplace.

In short, it's a stripped-down, bare-bones, scrappy version of your big idea so you can test and validate before making a huge commitment. In some cases it can be used to prove adoption and secure funding; in other, higher-risk applications it can be used with a select set of users to identify potential concerns early in the process.

This first version lacks bells and whistles and focuses solely on the features that offer value to the users.

Why are MVPs important in the mobile app development process?

An MVP of your mobile app has many important benefits, giving you the ability to:

  • Test market demand
  • Prove adoption
  • Prevent unnecessary and unwanted feature creep
  • Gather essential early feedback
  • Test in real market conditions
  • Reduce uncertainty of product failure
  • Collect maximum learning from minimum resources
  • Validate or invalidate assumptions
  • Reduce costs, resources, and time

MVP Design Process

Assuming you've done your research to determine market need, nailed your value proposition, and determined your MVP's priorities, it's time to start the design and development process.

  1. Map out User Flow
  2. Design Wireframes
  3. Backend Coding
  4. Designs

User flow example

[Image: example user flows]

Wireframes example

[Image: example wireframes]

Backend Code example

[Image: backend code sample]

 

As one of the top mobile app developers in Indianapolis and Phoenix, we’d love to discuss your project!  Contact us for a Free Consultation.
