One of the things I love about my job at Icelab is that I get to help build complex web applications that are used by thousands of people.
It’s an unfortunate truth of the internet, though, that high-profile sites used by lots of people often become the target of malicious activity, whether that be account enumeration attacks, brute-force login attempts, DDoS attacks, or worse. Aside from the obvious requirement to protect the potentially sensitive data your application deals with, it’s also important that the application is available to your users when they want to use it (and not unavailable because it’s being flooded with requests from a bot farm somewhere).
I recently discovered Rack::Attack, which is a handy middleware for protecting Rack-based apps from poorly-behaved clients. I’ve now implemented Rack::Attack in a couple of our apps and figured it was time to write a blog post detailing how.
Rack::Attack is a ‘middleware’ for Rack, which means it’s a component that sits between your users and your application, processing their requests and returning your application’s responses back to them. Rack::Attack acts as a ‘filter’, comparing each request made to your application against a set of rules you define, either globally or for specific endpoints.
The relevant section of the README explains this more concisely than I can:
The Rack::Attack middleware compares each request against safelists, blocklists, throttles, and tracks that you define. There are none by default.
- If the request matches any safelist, it is allowed.
- Otherwise, if the request matches any blocklist, it is blocked.
- Otherwise, if the request matches any throttle, a counter is incremented in the Rack::Attack.cache. If any throttle’s limit is exceeded, the request is blocked.
- Otherwise, all tracks are checked, and the request is allowed.
Essentially, if a request meets the requirements defined in your configuration it’s allowed; otherwise it’s rejected with either a 429 (Too Many Requests) or a 403 (Forbidden) response, depending on whether the client has been throttled or blocked from accessing the application entirely.
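The sample app below only uses throttles and a safelist, but for completeness, here’s roughly what the other two kinds of rules look like (these aren’t part of the sample app, and the IP address and user agent are made up):
# Not used in the sample app; shown only to illustrate the other rule types.
# Block all requests from a (made-up) abusive IP address; blocked requests get a 403:
Rack::Attack.blocklist("block abusive IP") do |req|
  req.ip == "203.0.113.10"
end

# Track (but still allow) requests from a particular user agent so they can be logged:
Rack::Attack.track("requests from SpecialAgent") do |req|
  req.user_agent == "SpecialAgent"
end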
In my research into Rack::Attack I found that the majority of the blog posts and documentation already out there deal with integrating it into a Rails application, so rather than rehashing those same examples here, in this post I’ll cover adding it to a Ruby app built upon Roda and the dry-rb family of gems, which we’ve been using as the foundation for most of the web apps we build at Icelab for nearly two years now. The specifics I cover here will mostly be relevant in that context, but the general principles should be easily transferable to any Ruby application.
To allow me to provide some concrete code examples, I’ve generated a sample app using dry-web-roda (which is what we typically use to set up new Roda/dry-rb projects), which sets up the following top-level folder structure:
sample_app
├── apps
├── bin
├── db
├── lib
├── log
├── spec
├── system
├── .env
├── .env.test
├── .gitignore
├── .rspec
├── config.ru
├── Gemfile
├── Rakefile
└── README.md
The first step is to add the required gems to the Gemfile and bundle install. I’ve opted to use Redis as the cache store for Rack::Attack, so I need the redis gem, but you can use memcached if you prefer.
gem "rack-attack"
gem "redis"
To eliminate any issues caused by differing versions or configurations between individual developers’ machines, we generally use Docker to run external services such as Redis or Elasticsearch. Using Docker in local development has a tendency to over-complicate things, but for this particular, limited use case I think it’s beneficial. This requires having Docker installed (Docker for Mac makes this pretty easy) and then defining the following config in a docker-compose.yml file added to the root of the project:
version: "2"
services:
redis:
image: registry.hub.docker.com/library/redis:3.2
ports:
- "6379:6379"
Given my app will be deployed to Heroku and will use the Heroku Redis add-on for the Rack::Attack cache store, I’ve opted to match the Redis version used in development to the one that will be used in production (Heroku Redis currently uses 3.2 by default).
Apps generated using dry-web-roda use dry-container together with dry-auto-inject to make low-level dependencies available throughout the application. In this particular case, we need to be able to access the Redis instance we’re running via Docker from within our Rack::Attack config, and we’ll do that by defining a :redis dependency (dependencies defined in system/boot are started when the app is booted):
# system/boot/redis.rb
SampleApp::Container.boot :redis do |container|
  init do
    require "redis"
  end

  start do
    use :settings

    redis = Redis.new(url: container.settings.redis_url)

    container.register :redis, redis
  end
end
We can now access this :redis dependency anywhere in the application by first requiring the container in which it’s registered:
require "sample_app/container"
redis = SampleApp::Container[:redis]
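In most application code, though, we wouldn’t reach into the container directly; dry-auto-inject lets us declare the dependency instead. Here’s a rough sketch (the SampleApp::Import injector constant and its require path follow the usual dry-web-roda conventions but may differ in your generated app, and RedisHealthCheck is just a made-up example class):
# A hypothetical class that declares :redis as a dependency via dry-auto-inject
require "sample_app/import"

class RedisHealthCheck
  include SampleApp::Import["redis"]

  # `redis` is now available as an instance method backed by the container registration
  def call
    redis.ping == "PONG"
  end
end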
With the required gems installed and Redis up and running, the next step is to define the rules to be used by Rack::Attack in processing requests. While you can define global rules to apply to all requests to your application, in my sample app I have two routes that I want to protect with Rack::Attack:
- /sign-in — a sign-in form for users
- /reset-password — a form for users to reset their password
I’ve defined some relevant config like so (the example configuration for Rack::Attack is great, so I’ve kept this example intentionally simple):
# lib/rack_attack.rb
require "sample_app/container"
require "rack/attack"

module Rack
  class Attack
    # First we setup Redis
    redis = SampleApp::Container[:redis]
    cache.store = Rack::Attack::StoreProxy::RedisStoreProxy.new(redis)

    # Then define some rules

    # 1. Throttle POST requests to /sign-in by IP address
    throttle("/sign-in/ip", limit: 10, period: 60) do |req|
      req.ip if req.path == "/sign-in" && req.post?
    end

    # 2. Throttle POST requests to /sign-in by email address
    throttle("/sign-in/email", limit: 10, period: 60) do |req|
      if req.path == "/sign-in" && req.post? && req.params["user"]
        req.params["user"]["email"]
      end
    end

    # 3. Throttle GET requests to /reset-password by IP address
    throttle("/reset-password/ip", limit: 10, period: 60) do |req|
      req.ip if req.path == "/reset-password" && req.get?
    end

    # 4. Allow all requests from localhost
    safelist("allow from localhost") do |req|
      req.ip == "127.0.0.1" || req.ip == "::1"
    end
  end
end
After setting up Rack::Attack to use our previously registered :redis instance as its cache store, we then define some rules:
- throttle POST requests to the /sign-in route to 10 requests every 60 seconds (by both IP address and email address)
- throttle GET requests to the /reset-password route to 10 requests every 60 seconds (this is a contrived example and only serves to demo that other request types can be throttled)
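There’s one more piece of wiring not shown above: the middleware needs to be added to the Rack stack for any of these rules to run, which happens in config.ru. Your generated config.ru will look a little different (the require paths and the app constant below are assumptions), but the important parts are requiring lib/rack_attack.rb and calling use Rack::Attack before the app is run:
# config.ru (a sketch only; paths and the app constant will differ in your app)
require_relative "system/boot"       # boot the container so :redis is registered
require_relative "lib/rack_attack"   # load the Rack::Attack rules defined above

use Rack::Attack                     # add the middleware in front of the app

run SampleApp::Web.freeze.app        # run the generated Roda app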
One of the hurdles I hit the first time I used Rack::Attack was figuring out how to test it. Fortunately, I found a great blog post which pointed me in the right direction.
To be sure that my tests were accurate (and that ultimately Rack::Attack was behaving the way I wanted it to), it was important to make sure that Rack::Attack’s cache was cleared between spec examples. I handled this in spec/support/redis.rb (which also takes care of making :redis available in the test environment) like so:
# spec/support/redis.rb
SampleApp::Container.start(:redis)

module Test
  module RedisHelpers
    module_function

    def redis
      @redis ||= SampleApp::Container[:redis]
    end

    def self.included(rspec)
      rspec.around(:each) do |example|
        with_clean_redis do
          example.run
        end
      end
    end

    def with_clean_redis(&block)
      redis.flushall
      begin
        yield
      ensure
        redis.flushall
      end
    end
  end
end

RSpec.configure do |config|
  config.include Test::RedisHelpers
end
Now, by including require "support/redis" in my Rack::Attack specs, Rack::Attack’s cache will be cleared when each spec example is run.
Now for the specs themselves (big slab of code ahead!):
require "support/redis"
RSpec.describe "Rack Attack" do
describe "throttle excessive POST requests to /sign-in by IP address" do
let(:limit) { 10 } # Limit is 10 requests per 60 seconds
context "number of requests is lower than the limit" do
it "does not change the request status" do
limit.times do |i|
# We increment the email address here so we can be sure that it's the IP address and not email address that's being blocked
post("/sign-in", { user: { email: "sample#{i}@example.com", password: "password" } }, "REMOTE_ADDR" => "1.2.3.4")
expect(last_response.status).to_not eq(429)
end
end
end
context "number of requests is higher than the limit" do
it "changes the request status to 429" do
(limit + 1).times do |i|
# We again increment the email address as above
post("/sign-in", { user: { email: "sample#{i}@example.com", password: "password" }}, "REMOTE_ADDR" => "1.2.3.4")
expect(last_response.status).to eq(429) if i > limit
end
end
end
end
describe "throttle excessive POST requests to /sign-in by email address" do
let(:limit) { 10 } # Limit is 10 requests per 60 seconds
context "number of requests is lower than the limit" do
it "does not change the request status" do
# This time we increment the IP address so we can be sure that it's the email address and not the IP address that's being blocked
limit.times do |i|
post("/sign-in", { user: { email: "[email protected]", password: "password" }}, "REMOTE_ADDR" => "1.2.3.#{i}")
expect(last_response.status).to_not eq(429)
end
end
end
context "number of requests is higher than the limit" do
it "changes the request status to 429" do
# We again increment the IP address as above
(limit + 1).times do |i|
post("/sign-in", { user: { email: "[email protected]", password: "password" }}, "REMOTE_ADDR" => "1.2.3.#{i}")
expect(last_response.status).to eq(429) if i > limit
end
end
end
end
describe "throttle excessive GET requests to /reset-password by IP address" do
let(:limit) { 10 } # Limit is 10 requests per 60 seconds
context "number of requests is lower than the limit" do
it "does not change the request status" do
limit.times do
get("/reset-password", {}, "REMOTE_ADDR" => "1.2.3.4")
expect(last_response.status).to_not eq(429)
end
end
end
context "number of requests is higher than the limit" do
it "changes the request status to 429" do
(limit + 1).times do |i|
get("/reset-password", {}, "REMOTE_ADDR" => "1.2.3.4")
expect(last_response.status).to eq(429) if i > limit
end
end
end
end
end
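One caveat: these specs rely on Rack::Test’s post, get, and last_response helpers, and on an app method that returns the booted Rack application (including the Rack::Attack middleware). dry-web-roda generates its own spec helpers, so your setup may already provide this; if not, a rough sketch of the missing piece might look like the following (the file name and the use of config.ru here are assumptions):
# spec/support/web.rb (a sketch; adjust to however your app is built)
require "rack/test"

module Test
  module WebHelpers
    include Rack::Test::Methods

    # Build the app from config.ru so the Rack::Attack middleware is included.
    # Under Rack 2, parse_file returns [app, options], hence the .first.
    def app
      @app ||= Rack::Builder.parse_file("config.ru").first
    end
  end
end

RSpec.configure do |config|
  config.include Test::WebHelpers
end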
And that’s pretty much it — with just a little work my app is now fairly well protected against misbehaving clients. If I notice any obvious patterns of suspicious behaviour in future (say, a flood of requests from a particular IP address), I have the flexibility to lock the app down further by simply adding the appropriate rules in lib/rack_attack.rb.