Konstanty Koszewski, 2020-11-19

Ways to increase your Rails performance

Despite its numerous advantages, Ruby on Rails is still considered to be a relatively slow web framework. We all know that Twitter left Rails in favor of Scala. However, with a few clever improvements you can run your app significantly faster!


Ruby First

Ruby is a heavily object-oriented language. In fact, (almost) everything in Ruby is an object. Creating unnecessary objects can cost your program a lot of additional memory, so it pays to avoid them.

To measure the difference, we will use the memory_profiler gem for memory usage and the built-in Benchmark module for time performance.

Use bang! methods on strings

require "memory_profiler"

report = MemoryProfiler.report do
    data = "X" * 1024 * 1024 * 100
    data = data.downcase
end

report.pretty_print

In the listing above, we create a 100MB string and downcase each character contained therein. The memory profiler gives us the following report:

Total allocated: 210765044 bytes (6 objects)

However, if we replace data = data.downcase with:

data.downcase!

the report shows roughly half the allocated memory, because downcase! mutates the string in place instead of building a second 100MB lowercased copy.
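A quick way to confirm the in-place behavior is to compare object identities before and after the call:

```ruby
s = "Hello World"
id_before = s.object_id

s.downcase!

puts s                           # the content changed...
puts s.object_id == id_before    # ...but it is still the same object
```

The same pattern applies to other bang methods such as gsub!, strip! or capitalize!, with one caveat: bang methods may return nil when no change was made, so avoid chaining them.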

Read files line by line

Suppose we need to load a huge collection of 2 million records from a csv file. Typically, it would look like this:

require 'benchmark'

Benchmark.bm do |x|
    x.report do
        File.readlines("2mrecords.csv").map! {|line| line.split(",")}
    end
end

user     system      total        real
12.797000   2.437000  15.234000 (106.319865)

It took us more than 106 seconds to fully process the file. Quite a lot! But we can speed this up by replacing the readlines/map! approach with a simple while loop:

require 'benchmark'

Benchmark.bm do |x|
    x.report do
        file = File.open("2mrecords.csv", "r")
        while line = file.gets
            line.split(",")
        end
    end
end

user     system      total        real
6.078000   0.250000   6.328000 (  6.649422)

The runtime has now dropped drastically. File.readlines loads the entire file into memory at once, and map! keeps every parsed line alive until the whole iteration finishes, so Ruby's garbage collector cannot release any of that memory in the meantime. Reading the file line by line lets the garbage collector reclaim the memory used by previous lines as soon as they are no longer needed.
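For real CSV data, Ruby's standard csv library offers the same streaming behavior with proper field parsing via CSV.foreach. A minimal self-contained sketch (a small sample file stands in for the 2mrecords.csv from above):

```ruby
require "csv"

# build a tiny sample file so the snippet runs on its own;
# in the scenario above this would be the 2-million-row file
File.write("records.csv", "1,alice\n2,bob\n")

rows = 0
CSV.foreach("records.csv") do |row|
  # row arrives pre-parsed as an array of fields,
  # and only one row is held in memory at a time
  rows += 1
end
puts rows  # 2
```

Unlike a bare line.split(","), CSV.foreach also handles quoted fields and embedded commas correctly.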

Avoid method iterators on larger collections

This one is an extension of the previous point, with a more common example. As I mentioned, Ruby iterators are object methods and they won't release memory while they are running. On a small scale the difference is negligible (and methods such as map are more readable). However, with larger data sets it is always a good idea to consider replacing them with plainer loops, as in the example below:

number_of_elements = 10000000
randoms = Array.new(number_of_elements) { rand(10) }

randoms.each do |line|
    #do something
end

and after refactoring:

number_of_elements = 10000000
randoms = Array.new(number_of_elements) { rand(10) }

while randoms.count > 0
    line = randoms.shift  # shift removes the element, so GC can reclaim it
    #do something
end
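If you want to keep the iterator style without materializing a full intermediate array after every step, Ruby's lazy enumerators are a middle ground worth knowing. A small sketch:

```ruby
# lazy evaluates the chain element by element instead of building
# a 10-million-element array after each map/select step
result = (1..10_000_000).lazy
                        .map    { |n| n * 2 }
                        .select { |n| (n % 3).zero? }
                        .first(5)

p result  # => [6, 12, 18, 24, 30]
```

Because first(5) stops the pipeline as soon as five matches are found, only a handful of elements are ever processed.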

Use the String#<< method

This is a quick yet particularly useful tip. If you append one string to another using the += operator, Ruby creates an additional object behind the scenes. So, this:

a = "X"
b = "Y"
a += b

Actually means this:

a = "X"
b = "Y"
c = a + b
a = c

The << operator avoids that, saving you some memory:

a = "X"
b = "Y"
a << b
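The difference adds up quickly in a loop. A small benchmark sketch using the same Benchmark module as before (counts are arbitrary):

```ruby
require "benchmark"

n = 50_000
Benchmark.bm(3) do |x|
  x.report("+=") do
    s = ""
    n.times { s += "x" }  # allocates a brand-new, ever-longer string each time
  end
  x.report("<<") do
    s = ""
    n.times { s << "x" }  # appends to the same buffer in place
  end
end
```

Both loops produce an identical 50,000-character string, but the += version allocates tens of thousands of throwaway intermediate strings along the way.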

Let's talk Rails

The Rails framework offers plenty of quick wins that let you optimize your code without much additional effort.

Eager Loading AKA n+1 query problem

Let’s assume that we have two associated models, Post and Author:

class Author < ApplicationRecord
    has_many :posts
end

class Post < ApplicationRecord
    belongs_to :author
end

We want to fetch all the posts in our controller and render them in a view with their authors:

#controller
def index
    @posts = Post.all.limit(20)
end

#view
<% @posts.each do |post| %>
    <%= post %>
    <%= post.author.name %>
<% end %>

In the controller, ActiveRecord will create only one query to find our posts. But later on, the view will trigger another 20 queries to find each author individually, taking up additional time! Luckily, Rails comes with a quick solution that collapses those 20 author queries into a single one. By using the includes method, we can rewrite our controller this way:

def index
    @posts = Post.all.includes(:author).limit(20)
end

Now all the authors are fetched up front in one additional query, instead of one query per post.

You can also use gems such as bullet to detect n+1 queries automatically during development.
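A minimal bullet setup, assuming a standard Rails project layout, might look like this (these are the gem's documented switches; tune them to taste):

```ruby
# config/environments/development.rb
config.after_initialize do
  Bullet.enable       = true  # turn n+1 detection on
  Bullet.alert        = true  # pop a JavaScript alert in the browser
  Bullet.rails_logger = true  # log the offending queries
end
```

With this in place, loading the index view from the example above without includes would trigger a warning pointing at the Post => Author association.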


Call only what you need

Another useful technique to increase ActiveRecord speed is fetching only those attributes which are necessary for your current purposes. This is especially useful when your app starts to grow and the number of columns per table increases as well.

Let’s take our previous code as an example and assume that, for the posts themselves, we only need the id and the author reference. We can restrict the selected columns:

def index
    @posts = Post.includes(:author).select(:id, :author_id).limit(20)
end

Now the posts are loaded without their remaining columns. Note that select only restricts the posts query here; the included authors are still fetched in full. And when you need plain values rather than model objects at all, pluck goes one step further: Author.limit(20).pluck(:name) returns an array of names without instantiating a single Author.

Render Partials Properly

Let’s say we want to create a separate partial for our posts from previous examples:

<% @posts.each do |post| %>
    <%= render 'post', post: post %>
<% end %>

At first glance, this code looks correct. However, with a larger number of posts to render, the whole process becomes significantly slower, because Rails initializes the partial anew on every iteration. We can fix it by using the collection rendering feature:

<%= render @posts %>

Now, Rails will automatically figure out which template should be used and initialize it only once.

Use background processing

Any process that is time-consuming and not crucial to your current request flow is a good candidate for background processing, e.g. sending emails, gathering statistics or generating periodic reports.

Sidekiq is the most commonly used gem for background processing. It uses Redis to store jobs, and it lets you control the flow of your background work, split jobs into separate queues and tune the concurrency of each one.
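A minimal job sketch, assuming Sidekiq 6.3+ (WeeklyReportJob and ReportMailer are illustrative names, not part of any real app):

```ruby
# app/jobs/weekly_report_job.rb
class WeeklyReportJob
  include Sidekiq::Job  # Sidekiq::Worker on older versions

  def perform(user_id)
    # runs in the Sidekiq process, outside the request cycle
    ReportMailer.weekly(user_id).deliver_now
  end
end

# enqueue from a controller or model:
WeeklyReportJob.perform_async(user.id)
```

perform_async serializes the arguments into Redis and returns immediately, so the user's request finishes without waiting for the email to go out.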

Write less code, use more gems

The Rails ecosystem offers an enormous number of gems which not only make your life easier and accelerate the development process, but can also improve the performance of your application. Gems such as Devise or Pundit are battle-tested and usually faster and more secure than custom code written for the same purpose.

