<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[Brandon Hilkert]]></title>
  <link href="http://brandonhilkert.com/atom.xml" rel="self"/>
  <link href="http://brandonhilkert.com/"/>
  <updated>2021-09-12T21:24:15-07:00</updated>
  <id>http://brandonhilkert.com/</id>
  <author>
    <name><![CDATA[Brandon Hilkert]]></name>
    <email><![CDATA[brandonhilkert@gmail.com]]></email>
  </author>
  <generator uri="http://octopress.org/">Octopress</generator>

  
  <entry>
    <title type="html"><![CDATA[Reducing Sidekiq Memory Usage with Jemalloc]]></title>
    <link href="http://brandonhilkert.com/blog/reducing-sidekiq-memory-usage-with-jemalloc/"/>
    <updated>2018-04-28T14:42:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/reducing-sidekiq-memory-usage-with-jemalloc</id>
<content type="html"><![CDATA[<p>Ruby and Rails don&rsquo;t have a reputation for being memory-friendly. That&rsquo;s part of the trade-off of a higher-level language that tends to be more developer-friendly. For me, it works: I&rsquo;m content paying a bit more to scale a large application in exchange for writing it in a language I enjoy.</p>

<p>Turns out&hellip;Ruby&rsquo;s not the memory hog I&rsquo;d previously thought. After some research and experimentation, I&rsquo;ve found <code>jemalloc</code> to offer significant memory savings while preserving performance, if not improving it.</p>

<!--more-->


<h2>The Problem</h2>

<p>At <a href="https://www.bark.us">Bark</a>, we poll external APIs for millions of monitored social media posts, text messages, and emails. This is all done through <a href="http://sidekiq.org/">Sidekiq</a> background jobs. Even though Ruby doesn&rsquo;t truly allow parallelism, we see great benefit from Sidekiq concurrency because the jobs spend much of their time waiting for external APIs to respond. The API responses can often be large, not to mention any media they might include. As a result, we see the memory usage of our Sidekiq workers increase until they&rsquo;re ultimately killed and restarted by <a href="https://www.freedesktop.org/wiki/Software/systemd/"><code>systemd</code></a>.</p>

<p>The following shows a common memory usage pattern for our queue servers:</p>

<p><img class="center" src="http://brandonhilkert.com/images/jemalloc/sidekiq-memory-usage-before.png" title="&#34;Sidekiq servers memory usage before using jemalloc&#34;" alt="&#34;Sidekiq servers memory usage before using jemalloc&#34;"></p>

<p>Two things to notice:</p>

<ol>
<li><p><strong>Memory increased quickly</strong> - The rise in memory happens immediately after the processes are restarted. We deploy multiple times a day, but this was especially problematic on weekends, when deploys happen less frequently.</p></li>
<li><p><strong>Memory wasn&rsquo;t reused until restarted</strong> - The jaggedness of the graph towards the center is the result of the memory limits we imposed on the <code>systemd</code> processes, causing them to be killed and restarted each time they reached the configured max memory setting. Because the processes didn&rsquo;t appear to be reusing memory, we saw this happen just a few minutes after a restart.</p></li>
</ol>
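
<p>For reference, the cap is an ordinary resource limit on the <code>systemd</code> unit running Sidekiq &ndash; something along these lines (illustrative values; on the systemd shipped with Ubuntu 16.04 the option is <code>MemoryLimit=</code>, while newer versions use <code>MemoryMax=</code>):</p>

```ini
[Service]
# Kill the Sidekiq process once it crosses the cap; restart it afterwards
MemoryLimit=2G
Restart=always
```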


<h2>The Solution</h2>

<p>As the <a href="https://brandonhilkert.com/blog/why-i-wrote-the-sucker-punch-gem/">author of a multi-threaded background processing library</a>, I frequently see reports of memory leaks in Rails applications. As a Sidekiq user, <a href="https://github.com/mperham/sidekiq/issues/3824">this one caught my attention</a>. It starts as a classic memory leak report, but later turns towards deeper issues in the underlying operating system, not the application. With <a href="https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html">Nate Berkopec&rsquo;s post on Ruby memory usage in multi-threaded applications</a> referenced, the reporter found that switching to <code>jemalloc</code> fixed their issue.</p>

<p><code>jemalloc</code> describes itself as:</p>

<blockquote><p>a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support</p></blockquote>

<p>The description targets our use-case and the issues we were having with the default allocator: we were seeing terrible fragmentation when running Sidekiq&rsquo;s concurrent workers.</p>

<h3>How to use jemalloc</h3>

<p>Ruby can use <code>jemalloc</code> in a few different ways. Ruby can be compiled against <code>jemalloc</code>, but we already had Ruby installed and wanted to try it with the fewest infrastructure changes.</p>

<p>It turns out the dynamic linker will load <code>jemalloc</code> ahead of the default allocator &ndash; no recompilation required &ndash; if the <a href="https://github.com/jemalloc/jemalloc/wiki/Getting-Started">well-documented environment variable <code>LD_PRELOAD</code></a> is set.</p>

<p>Our Sidekiq servers use Ubuntu 16.04, so we started by installing <code>jemalloc</code>:</p>

<figure class="code"><pre><code class="bash">sudo apt-get install libjemalloc-dev</code></pre></figure>


<p>From there, we configured the <code>LD_PRELOAD</code> environment variable by adding the following to <code>/etc/environment</code>:</p>

<figure class="code"><pre><code class="bash">LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1</code></pre></figure>


<p><em>Note: The location of <code>jemalloc</code> may vary depending on version and/or Linux distribution.</em></p>
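
<p>To confirm the preload took effect, one quick check (a sketch; Linux-only, since it relies on <code>/proc/self/maps</code>) is to look for <code>libjemalloc</code> among the shared libraries mapped into the running Ruby process:</p>

```ruby
# check_jemalloc.rb -- does the current process have libjemalloc mapped?
# /proc/self/maps lists every shared object the dynamic linker has loaded.
def jemalloc_loaded?(maps = File.read("/proc/self/maps"))
  maps.lines.any? { |line| line.include?("libjemalloc") }
end

if File.exist?("/proc/self/maps")
  puts jemalloc_loaded? ? "jemalloc is active" : "using the default allocator"
end
```

Running it with <code>LD_PRELOAD</code> set (e.g. <code>LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ruby check_jemalloc.rb</code>) should report that jemalloc is active.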

<h3>Benchmark</h3>

<p>We benchmarked <code>jemalloc</code> on just one of the queue servers, which allowed us to make a true comparison against servers handling similar activity.</p>

<p><img class="center" src="http://brandonhilkert.com/images/jemalloc/sidekiq-memory-usage-comparison.png" title="&#34;Sidekiq server memory usage with one server using jemalloc&#34;" alt="&#34;Sidekiq server memory usage with one server using jemalloc&#34;"></p>

<p>As we can see, the difference is drastic &ndash; <strong>over 4x decrease in memory usage</strong>!</p>

<p>The more impressive detail was the consistency &ndash; total memory usage doesn&rsquo;t waver much. Given the large payloads and media we process, I assumed we&rsquo;d continue to see the peaks and valleys common to processing social media content. Instead, the Sidekiq processes using <code>jemalloc</code> show a much better ability to reuse previously allocated memory.</p>

<p><img class="center" src="http://brandonhilkert.com/images/jemalloc/sidekiq-memory-usage-with-jemalloc-details.png" title="&#34;Sidekiq server memory usage details with one server using jemalloc&#34;" alt="&#34;Sidekiq server memory usage details with one server using jemalloc&#34;"></p>

<h3>Roll it in to production</h3>

<p>After seeing similar behavior over a three-day period, we decided to roll it out to the remaining queue servers.</p>

<p>The reduced memory usage continues to be impressive, all without any noticeable negative trade-offs.</p>

<p><img class="center" src="http://brandonhilkert.com/images/jemalloc/sidekiq-memory-usage-after.png" title="&#34;Sidekiq server memory usage after using jemalloc&#34;" alt="&#34;Sidekiq server memory usage after using jemalloc&#34;"></p>

<h2>Conclusion</h2>

<p>We were surprised by the significant decrease in memory usage from switching to <code>jemalloc</code>. Based on the other reports, we assumed it would be a reasonable improvement, but not a 4x decrease.</p>

<p>Even after looking at these graphs for the last couple of days, the differences seem too good to be true. But all is well, and it&rsquo;s hard to imagine NOT doing this for any Ruby server we deploy in the future.</p>

<p>Give it a shot. I&rsquo;d love to see your results.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Monitoring Sidekiq using AWS Lambda and CloudWatch]]></title>
    <link href="http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-cloudwatch/"/>
    <updated>2017-03-27T13:58:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-cloudwatch</id>
    <content type="html"><![CDATA[<p>A few articles ago, I wrote about <a href="http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-slack/">how to monitor Sidekiq retries using AWS Lambda</a>. Retries are often the first indication of an issue if your application does a lot of background work.</p>

<p>As <a href="https://www.bark.us">Bark</a> continues to grow, we became interested in how the number of jobs enqueued and retrying trended over time. By using AWS Lambda to post this data to CloudWatch, we were able to visualize those trends.</p>

<!--more-->


<h2>The Problem</h2>

<p><a href="http://sidekiq.org/">Sidekiq</a> offers a way to visualize the jobs processed over time on its dashboard. In fact, <a href="http://brandonhilkert.com/blog/3-ways-to-get-started-contributing-to-open-source/">this graph was one of my first open source contributions</a>.</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-cloudwatch/sidekiq-dashboard.png" title="&#34;Sidekiq Dashboard&#34;" alt="&#34;Sidekiq Dashboard&#34;"></p>

<p>Unfortunately, these graphs don&rsquo;t show the number of retries from 2 am last night, or how long it took to exhaust the queues when 2 million jobs were created.</p>

<p>Historical queue data is important when an application does a lot of background work and its user base is growing. Seeing these performance characteristics over time helps us know when to add more background workers or scale our infrastructure to stay ahead of rapid growth.</p>

<h2>The Solution</h2>

<p>Because Bark is on AWS, we naturally looked to their tools for assistance. We already use CloudWatch to store data about memory, disk, and CPU usage for each server. This has served us well and allows us to set alarms for certain thresholds and graph this data over time:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-cloudwatch/cloudwatch-memory.png" title="&#34;Monitoring memory usage on AWS CloudWatch&#34;" alt="&#34;Monitoring memory usage on AWS CloudWatch&#34;"></p>

<p>Knowing we&rsquo;d have similar data for queue usage, we figured we could do the same with Sidekiq.</p>

<h3>Sidekiq Queue Data Endpoint</h3>

<p>If you remember from the last article on <a href="http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-slack/">monitoring Sidekiq retries using AWS Lambda</a>, we set up an endpoint in our application to return Sidekiq stats:</p>

<figure class='code'><pre><code>require 'sidekiq/api'

class SidekiqQueuesController &lt; ApplicationController
  skip_before_action :require_authentication

  def index
    base_stats = Sidekiq::Stats.new
    stats = {
       enqueued: base_stats.enqueued,
       queues: base_stats.queues,
       busy: Sidekiq::Workers.new.size,
       retries: base_stats.retry_size
    }

    render json: stats
  end
end</code></pre></figure>


<p>along with the route:</p>

<figure class='code'><pre><code>resources :sidekiq_queues, only: [:index]</code></pre></figure>
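
<p>A response from this endpoint looks something like the following (illustrative values; the keys come from the controller above):</p>

```json
{
  "enqueued": 1204,
  "queues": { "default": 950, "mailers": 254 },
  "busy": 25,
  "retries": 3
}
```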


<p>Using this resource, we need to poll at some regular interval and store the results.</p>

<h3>AWS Lambda Function</h3>

<p>AWS Lambda functions are perfect for one-off functions that feel like a burden to maintain in our application.</p>

<p>For the trigger, we&rsquo;ll use &ldquo;CloudWatch Events - Schedule&rdquo;:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/lambda-trigger.png" title="&#34;AWS Lambda trigger&#34;" alt="&#34;AWS Lambda trigger&#34;"></p>

<p>From here, we&rsquo;ll enter a name and description for our rule and define its rate (I chose every 5 minutes). Enable the trigger and we&rsquo;ll move to defining our code. Next, we&rsquo;ll give the function a name and choose the latest NodeJS as the runtime. Within the inline editor, we&rsquo;ll use the following code:</p>

<figure class='code'><pre><code>var AWS = require('aws-sdk');
var url = require('url');
var https = require('https');

if (typeof Promise === 'undefined') {
  AWS.config.setPromisesDependency(require('bluebird'));
}

var cloudwatch = new AWS.CloudWatch();

var sidekiqUrl = '[Sidekiq stat URL]';

var logMetric = function(attr, value) {
    var params = {
        MetricData: [
            {
                MetricName: attr,
                Dimensions: [
                    {
                        Name: "App",
                        Value: "www"
                    }
                ],
                Timestamp: new Date(),
                Unit: "Count",
                Value: value
            }
        ],
        Namespace: "Queues"
    };

    return cloudwatch.putMetricData(params).promise();
}

var getQueueStats = function(statsUrl) {
    return new Promise(function(resolve, reject) {
        var options = url.parse(statsUrl);
        options.headers = {
            'Accept': 'application/json',
        };
        var req = https.request(options, function(res){
            var body = '';

            res.setEncoding('utf8');

            //another chunk of data has been received, so append it to `body`
            res.on('data', function (chunk) {
                body += chunk;
            });

            //the whole response has been received
            res.on('end', function () {
                resolve(JSON.parse(body));
            });
        });

        req.on('error', function(e) {
           reject(e);
        });

        req.end();
    });
}

exports.handler = function(event, context) {
    getQueueStats(sidekiqUrl).then(function(stats) {
        console.log('STATS: ', stats);

        var retryPromise = logMetric("Retries", stats.retries);
        var busyPromise = logMetric("Busy", stats.busy);
        var enqueuedPromise = logMetric("Enqueued", stats.enqueued);

        Promise.all([retryPromise, busyPromise, enqueuedPromise]).then(function(values) {
            console.log(values);
            context.succeed();
        }).catch(function(err){
            console.error(err);
            context.fail("Server error when processing message: " + err );
        });
    })
    .catch(function(err) {
        console.error(err);
        context.fail("Failed to get stats from HTTP request: " + err );
    });
};</code></pre></figure>


<p><em>Note: <code>sidekiqUrl</code> needs to be defined with an appropriate value for this to work.</em></p>

<p>Within CloudWatch, we&rsquo;re defining a new namespace (&ldquo;Queues&rdquo;) where our data will live. Within this namespace, we&rsquo;ll segregate these stats by the Dimension <code>App</code>. As we can see, we chose <code>www</code> for this value. If we wanted to monitor the queues of a few servers, each one could use a unique App name.</p>
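
<p>If you&rsquo;d rather push these metrics from the Rails app itself, the equivalent <code>put_metric_data</code> call with the <code>aws-sdk-cloudwatch</code> gem takes the same shape (a sketch; <code>metric_params</code> is a hypothetical helper mirroring the Lambda&rsquo;s params):</p>

```ruby
# Hypothetical helper: build the same MetricData payload the Lambda sends,
# in the snake_case form the Ruby AWS SDK expects.
def metric_params(name, value, app: "www")
  {
    namespace: "Queues",
    metric_data: [{
      metric_name: name,
      dimensions:  [{ name: "App", value: app }],
      timestamp:   Time.now,
      unit:        "Count",
      value:       value
    }]
  }
end

# cloudwatch = Aws::CloudWatch::Client.new
# cloudwatch.put_metric_data(metric_params("Retries", 42))
```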

<p>Review and save the Lambda function and we&rsquo;re all set!</p>

<h3>Graphing Sidekiq Queue Data</h3>

<p>Once the function has run a few times, under CloudWatch &rarr; Metrics, we&rsquo;ll see the following custom namespace:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-cloudwatch/custom-namespace.png" title="&#34;AWS CloudWatch Custom Namespace&#34;" alt="&#34;AWS CloudWatch Custom Namespace&#34;"></p>

<p>From here, we&rsquo;ll choose the name of our app (<code>www</code>) and graph each of these metrics over whatever timespan we want:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-cloudwatch/sidekiq-queues.png" title="&#34;AWS CloudWatch Monitoring Sidekiq Queues&#34;" alt="&#34;AWS CloudWatch Monitoring Sidekiq Queues&#34;"></p>

<h2>Conclusion</h2>

<p>I&rsquo;ve found AWS Lambda to be a great place for endpoints/functionality that feel cumbersome to include in my applications. Bringing deeper visibility to our Sidekiq queues has given us the ability to see usage trends throughout the day that we weren&rsquo;t previously aware of. This will help us preemptively add infrastructure resources to keep up with our growth.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Using PhantomJS to Capture Analytics for a Rails Email Template]]></title>
    <link href="http://brandonhilkert.com/blog/using-phantomjs-to-capture-analytics-for-a-rails-email-template/"/>
    <updated>2017-02-17T09:20:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/using-phantomjs-to-capture-analytics-for-a-rails-email-template</id>
    <content type="html"><![CDATA[<p>Every Sunday <a href="https://www.bark.us">Bark</a> sends parents a weekly recap of their children&rsquo;s activity online. The first iteration was pretty basic, things like &ldquo;Your children sent X number of messages this past week&rdquo; and &ldquo;You have 10 messages to review&rdquo;. But we wanted to go deeper&hellip;</p>

<p>Using PhantomJS, we were able to take screenshots of a modified version of the application&rsquo;s child analytics page and include the image in the email sent to the parent. The email now contains everything the parent can see from the application, all without leaving their inbox.</p>

<!--more-->


<h2>The Problem</h2>

<p>If you&rsquo;ve ever attempted to style an HTML email with anything more than text, you&rsquo;re sadly familiar with its limitations. Tables and other elements from the 90s are the only tools we have to maintain cross-platform compatibility. One of those tools, the subject of this post, is images.</p>

<p>Our weekly recap email contained a line chart illustrating the number of messages the child exchanged during the past week. While this was somewhat helpful to parents, it didn&rsquo;t tell the full story.</p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/recap-v1.png" title="&#34;First version of the Bark weekly recap email&#34;" alt="&#34;First version of the Bark weekly recap email&#34;"></p>

<p>While this email does include a graph, it&rsquo;s the result of calling out to a service that rendered the graph, stored it, and returned a URL to include as an image. While this service worked well for simple illustrations, it didn&rsquo;t give us the flexibility of modern web tools and visualizations. Aside from that, the styling of the charts was limited.</p>

<p>Elsewhere on Bark, we had already built the full story through other lists and illustrations.</p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/analytics-interactions.png" title="&#34;Bark analytics with interactions&#34;" alt="&#34;Bark analytics with interactions&#34;"></p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/analytics-activities.png" title="&#34;Bark analytics with activities&#34;" alt="&#34;Bark analytics with activities&#34;"></p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/analytics-time.png" title="&#34;Bark analytics over time&#34;" alt="&#34;Bark analytics over time&#34;"></p>

<p>Recreating the same lists and charts just for the email felt like a duplication nightmare, vulnerable to becoming stale. We also wouldn&rsquo;t be able to reuse the same rendering, because most of the charts were SVGs, which aren&rsquo;t compatible with most email clients. Additionally, there were a handful of CSS styles needed for the page that, while possible to include in the email, felt excessive.</p>

<p>Stepping back from the problem, we realized we wanted the majority of the analytics page, just embedded in the email. Was there a way to do that without rewriting it for email clients?</p>

<h2>The Solution</h2>

<p>We could take a screenshot of the analytics page and embed it as an image in the recap email.</p>

<h3>wkhtmltoimage</h3>

<p>Our first attempt used <code>wkhtmltoimage</code> and the <a href="https://github.com/csquared/IMGKit"><code>IMGKit</code></a> Ruby gem. Aside from the headaches of installing a working OS X version of <code>wkhtmltoimage</code> due to a regression, getting a working configuration was non-trivial.</p>

<p><code>wkhtmltoimage</code> doesn&rsquo;t parse CSS or JavaScript, so those have to be explicitly included. Since Bark uses the asset pipeline, we&rsquo;d have to reference the latest version of the compiled assets in both development and production. This proved extremely difficult under the default configuration, given how each group is compiled. We use <a href="https://www.nginx.com/resources/wiki/">Nginx</a> to serve our assets in production, and it felt weird to have a configuration we would <em>hope</em> worked when we pushed to production.</p>

<p>After spending almost a full day trying to get the right combination of settings, we gave up. There had to be a better way&hellip;</p>

<h3>SaaS FTW</h3>

<p>Frankly, our next step was to look for a SaaS product that provided this functionality. Surely we could send a URL to an API and get back an image, perhaps with some configuration options for size and format. To our surprise, there were none (based on a 15-minute internet search &ndash; if you know of one, we&rsquo;d love to hear about it). There were plenty of services focused on rendering PDFs, geared towards invoices and other documents one would want to email customers.</p>

<h3>PhantomJS</h3>

<p>We were reminded of Capybara&rsquo;s ability to capture screenshots on failed test runs. After poking around this functionality, <code>phantomjs</code> emerged as a potential solution.</p>

<p>If we installed <code>phantomjs</code> on the server and ran a command line script to <a href="http://phantomjs.org/screen-capture.html">capture the screenshots</a> prior to sending the email, we could <a href="http://guides.rubyonrails.org/action_mailer_basics.html#complete-list-of-action-mailer-methods">include those images inline</a> in the email.</p>

<p>Installation of <code>phantomjs</code> was simplified using the <a href="https://github.com/colszowka/phantomjs-gem"><code>phantomjs-gem</code> ruby gem</a>, which installs the appropriate <code>phantomjs</code> binary for the operating system and provides an API (<code>#run</code>) to execute commands.</p>

<h3>Script the Screenshot</h3>

<p>Using a <a href="https://github.com/ariya/phantomjs/blob/master/examples/rasterize.js">screenshot example</a> from the <a href="https://github.com/ariya/phantomjs">PhantomJS github repo</a>, we put together a script (<code>vendor/assets/javascripts/phantom-screenshot.js</code>) to capture the analytics page:</p>

<figure class='code'><pre><code>// usage: phantomjs phantom-screenshot.js [url] [output-path]
var page   = require('webpage').create();
var system = require('system');
page.viewportSize = { width: 550, height: 600 };
page.zoomFactor = 0.85;

page.onError = function(msg, trace) {
  var msgStack = ['ERROR: ' + msg];
  if (trace && trace.length) {
    msgStack.push('TRACE:');
    trace.forEach(function(t) {
      msgStack.push(' -&gt; ' + t.file + ': ' + t.line + (t.function ? ' (in function "' + t.function +'")' : ''));
    });
  }

  console.error(msgStack.join('\n'));
};

page.open(system.args[1], function(status) {
  if (status !== 'success') {
    console.log('Unable to load the address!');
    phantom.exit(1);
  } else {
    window.setTimeout(function () {
      page.render(system.args[2]);
      phantom.exit();
    }, 2000);
  }
});</code></pre></figure>


<p><em>Note: a variety of the settings (<code>viewportSize</code>, <code>zoomFactor</code>, and the 2000&nbsp;ms timeout) were found through trial and error for our particular situation.</em></p>

<p>We use Sidekiq to enqueue the thousands of recap emails sent to parents each week. Because this approach relies on our existing website as the source for the screenshots, we have to be conscious of spreading the job processing over a period of time so we don&rsquo;t overload the application for regular users.</p>
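
<p>One simple way to spread the work out (a sketch; <code>stagger_delay</code> and <code>RecapEmailWorker</code> are hypothetical names, but <code>perform_in</code> is Sidekiq&rsquo;s standard scheduled-enqueue API):</p>

```ruby
# Hypothetical helper: delay (in seconds) for the i-th of `total` jobs,
# so the jobs land evenly across a window instead of all at once.
def stagger_delay(index, total, window_seconds)
  return 0 if total <= 1
  (window_seconds.to_f / total * index).round
end

# e.g. spread the recap jobs across an hour:
# children.each_with_index do |child, i|
#   RecapEmailWorker.perform_in(stagger_delay(i, children.size, 3600), child.id)
# end
```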

<h3>Create the Screenshot</h3>

<p>With this script in hand, we can now use the following class to create the image for each child:</p>

<figure class='code'><pre><code>class RecapAnalytics
  ScreenshotError = Class.new(StandardError)

  def initialize(analytics_url:)
    @analytics_url = analytics_url
  end

  def file_path
    unless create_screenshot
      raise ScreenshotError.new("Unable to complete analytics screenshot")
    end

    temp_file_path
  end

  def create_screenshot
    Phantomjs.run screenshot_script, analytics_url, temp_file_path
  end

  private

  attr_reader :analytics_url

  def screenshot_script
    Rails.root.join('vendor', 'assets', 'javascripts', 'phantom-screenshot.js').to_s
  end

  def temp_file_path
    @temp_file_path ||= begin
      file = Tempfile.new("child-analytics")
      file.path + ".png"
    end
  end
end</code></pre></figure>


<p>For each child, we&rsquo;ll provide the URL to the child&rsquo;s analytics page and call the <code>file_path</code> method to get the path of the screenshot:</p>

<figure class='code'><pre><code>RecapAnalytics.new(analytics_url: "https://www.bark.us/children/XXX/analytics").file_path</code></pre></figure>


<h2>Adding as an Inline Email Attachment</h2>

<p>With an image for each child, we can iterate through the children and include each image inline in the mailer:</p>

<figure class='code'><pre><code>file_path = RecapAnalytics.new(analytics_url: "https://www.bark.us/children/XXX/analytics").file_path
attachments.inline["#{child.first_name}.png"] = File.read(file_path)</code></pre></figure>


<p>Then in the email template, we can include the following to render the image:</p>

<figure class='code'><pre><code> &lt;%= link_to image_tag(attachments["#{child.first_name}.png"].url), child_url(child) %&gt;</code></pre></figure>


<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/email-interactions.png" title="&#34;Bark weekly recap email with interactions&#34;" alt="&#34;Bark weekly recap email with interactions&#34;"></p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/email-activities.png" title="&#34;Bark weekly recap email with activities&#34;" alt="&#34;Bark weekly recap email with activities&#34;"></p>

<p><img class="center" src="http://brandonhilkert.com/images/phantomjs/email-time.png" title="&#34;Bark weekly recap email over time&#34;" alt="&#34;Bark weekly recap email over time&#34;"></p>

<h2>Conclusion</h2>

<p>PhantomJS proved to be the simplest solution for the job. With a small script and no further configuration, we were able to lean on the analytics page we&rsquo;d already built to improve the Bark recap emails.</p>

<p>Parents will now have more visibility into their child&rsquo;s online activity without leaving their inbox.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Monitoring Sidekiq Using AWS Lambda and Slack]]></title>
    <link href="http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-slack/"/>
    <updated>2016-10-25T11:54:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/monitoring-sidekiq-using-aws-lambda-and-slack</id>
    <content type="html"><![CDATA[<p>It&rsquo;s no mystery I&rsquo;m a <a href="http://sidekiq.org/">Sidekiq</a> fan &ndash; my background job processing library of choice for any non-trivial applications. My favorite feature of Sidekiq has to be retries. By default, failed jobs will retry 25 times over the course of 21 days.</p>

<p>As a remote company, we use Slack to stay in touch with everyone AND to manage/monitor our infrastructure (hello #chatops). We can deploy from Slack (we don&rsquo;t generally, we have full CI) and be notified of infrastructure and application errors.</p>

<!--more-->


<p>When Sidekiq retries accumulate, it&rsquo;s a good indication that something more severe might be wrong. Rather than get an email we won&rsquo;t see for 30 minutes, we decided to integrate these notifications into Slack. In doing so, we found <a href="https://aws.amazon.com/lambda/">AWS Lambda</a> to be a lightweight way to tie the monitoring of Sidekiq and the notifications in Slack together.</p>

<h2>The Problem</h2>

<p><a href="https://www.bark.us/">Bark</a> is background job-heavy. The web application is a glorified CRUD app that sets up the data needed to poll a child&rsquo;s social media feed and monitor for potential issues. The best-case scenario for a parent is that they will never hear from us.</p>

<p>Because Bark&rsquo;s background jobs commonly interact with 3rd-party APIs, failures aren&rsquo;t a big surprise. APIs can be down, network connections can fail &ndash; Sidekiq&rsquo;s retry logic protects us from transient network errors. Under normal circumstances, jobs retry and ultimately run successfully after subsequent attempts. These are non-issues and something we don&rsquo;t need an engineer to investigate.</p>

<p>There are times when retries accumulate, giving us a strong indication that something more severe may be wrong. Initially, we set up New Relic to notify us of an increased error rate. This worked for simple cases, but sometimes produced false positives. As a result, we started to ignore the alerts, which potentially masked more important issues.</p>

<p>We soon realized one of the gauges of application health was the number of retries in the Sidekiq queue. We have the Sidekiq Web UI mounted within our admin application, so we&rsquo;d browse there a few times a day to make sure the number of retries wasn&rsquo;t outside our expectations (in this case, fewer than 50 was acceptable).</p>

<p>This wasn&rsquo;t a great use of our time. Ideally, we wanted a Slack notification when the number of Sidekiq retries was > 50.</p>

<h2>The Solution</h2>

<p>Because Bark is on AWS, we naturally looked to their tools for assistance. In this case, we needed something that would poll Sidekiq, check the number of retries, and <code>POST</code> to Slack if the number of retries was > 50.</p>

<p>There were a few options:</p>

<ol>
<li>Add the Sidekiq polling and Slack notification logic to our main application and setup a Cron job</li>
<li>Create a new satellite application that ONLY does the above (microservices???)</li>
<li>Setup an AWS Lambda function to handle the above logic</li>
</ol>


<p>The first two options would&rsquo;ve worked, but I was hesitant to add complexity to our main application. I was also hesitant to manage another application (i.e. updates, etc.) for something that seemed simple.</p>

<p>Option &ldquo;AWS Lambda&rdquo; won! Let&rsquo;s take a look at the implementation.</p>

<h3>Sidekiq Queue Data Endpoint</h3>

<p>First, we need to expose the number of Sidekiq retries somehow. As I mentioned above, the Sidekiq web UI is mounted in our admin application, but behind an authentication layer that would&rsquo;ve been non-trivial to publicly expose.</p>

<p>Instead, we created a new Rails route to respond with some basic details about the Sidekiq system.</p>

<figure class='code'><pre><code>require 'sidekiq/api'

class SidekiqQueuesController &lt; ApplicationController
  skip_before_action :require_authentication

  def index
    base_stats = Sidekiq::Stats.new
    stats = {
       enqueued: base_stats.enqueued,
       queues: base_stats.queues,
       busy: Sidekiq::Workers.new.size,
       retries: base_stats.retry_size
    }

    render json: stats
  end
end</code></pre></figure>


<p>along with the route:</p>

<figure class='code'><pre><code>resources :sidekiq_queues, only: [:index]</code></pre></figure>


<p>As you can see, the endpoint is public (there are no job args or names exposed &ndash; just counts). The code digs into the <a href="https://github.com/mperham/sidekiq/wiki/API">Sidekiq API</a> to interrogate the size of the queues.</p>

<h3>Slack Incoming WebHook</h3>

<p>We want to be able to POST to Slack when the number of Sidekiq retries is > 50. To do this, we&rsquo;ll set up a custom incoming webhook integration in Slack.</p>

<p>We&rsquo;ll start by choosing <code>Apps &amp; integrations</code> from within the main Slack options. From there, choose <code>Manage</code> in the top right, and then <code>Custom Integrations</code> on the left. You&rsquo;ll have 2 options:</p>

<ol>
<li>Incoming WebHooks</li>
<li>Slash Commands</li>
</ol>


<p>We&rsquo;ll select <code>Incoming WebHooks</code> and choose <code>Add Configuration</code> to add a new one. From here, we&rsquo;ll supply the information needed to specify the channel where the notifications will appear and how they look.</p>

<p>The most important part of this step is to get the <code>Webhook URL</code>. This will be the URL we <code>POST</code> to from within our Lambda function when retries are above our acceptable threshold.</p>
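
<p>Before wiring up Lambda, it can be handy to exercise the webhook by hand. Here&rsquo;s a minimal Ruby sketch using only the standard library &ndash; the webhook URL below is a placeholder you&rsquo;d swap for your own:</p>

```ruby
require "json"
require "net/http"
require "uri"

# Placeholder -- replace with the Webhook URL from the integration page.
WEBHOOK_URL = "https://hooks.slack.com/services/REPLACE/WITH/TOKEN"

# Build the JSON body Slack's incoming webhook expects.
def slack_payload(channel, text)
  JSON.generate(channel: channel, text: text)
end

# POST the message to the webhook and return the HTTP response.
def post_to_slack(channel, text)
  Net::HTTP.post(
    URI(WEBHOOK_URL),
    slack_payload(channel, text),
    "Content-Type" => "application/json"
  )
end

# post_to_slack("#operations", "Sidekiq retries above threshold")
```

<p>A <code>200</code> response with a body of <code>ok</code> means the message made it to the channel.</p>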

<h3>AWS Lambda Function</h3>

<p>Now that we have our endpoint to expose the number of retries (among other things) and the Slack webhook URL to <code>POST</code> to, we need to set up the AWS Lambda function to tie the two together. We&rsquo;ll start by creating a new Lambda function with the defaults &ndash; using the latest Node.</p>

<p>For the trigger, we&rsquo;ll use &ldquo;CloudWatch Events - Schedule&rdquo;:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/lambda-trigger.png" title="&#34;AWS Lambda trigger&#34;" alt="&#34;AWS Lambda trigger&#34;"></p>

<p>From here, we&rsquo;ll enter a name and description for our rule and define its rate (I chose every 5 minutes). Enable the trigger and we&rsquo;ll move to defining our code. Next, we&rsquo;ll give the function a name and choose the latest NodeJS as the runtime. Within the inline editor, we&rsquo;ll use the following code:</p>

<figure class='code'><pre><code>var AWS = require('aws-sdk');
var url = require('url');
var https = require('https');
var sidekiqUrl, hookUrl, slackChannel, retryThreshold;

sidekiqUrl = '[Sidekiq queue JSON endpoint]';
hookUrl = '[Slack Incoming WebHooks URL w/ token]';
slackChannel = '#operations';  // Enter the Slack channel to send a message to
retryThreshold = 50;

var postMessageToSlack = function(message, callback) {
    var body = JSON.stringify(message);
    var options = url.parse(hookUrl);
    options.method = 'POST';
    options.headers = {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body),
    };

    var postReq = https.request(options, function(res) {
        var chunks = [];
        res.setEncoding('utf8');
        res.on('data', function(chunk) {
            return chunks.push(chunk);
        });
        res.on('end', function() {
            var body = chunks.join('');
            if (callback) {
                callback({
                    body: body,
                    statusCode: res.statusCode,
                    statusMessage: res.statusMessage
                });
            }
        });
        return res;
    });

    postReq.write(body);
    postReq.end();
};

var getQueueStats = function(callback) {
    var options = url.parse(sidekiqUrl);
    options.headers = {
        'Accept': 'application/json',
    };

    var getReq = https.request(options, function(res){
        var body = '';

        res.setEncoding('utf8');

        //another chunk of data has been received, so append it to `body`
        res.on('data', function (chunk) {
            body += chunk;
        });

        //the whole response has been received, so parse it and pass it to the callback
        res.on('end', function () {
            if (callback) {
                callback({
                    body: JSON.parse(body),
                    statusCode: res.statusCode,
                    statusMessage: res.statusMessage
                });
            }
        });
    })

    getReq.end();
}

var processEvent = function(event, context) {
    getQueueStats(function(stats){
        console.log('STATS: ', stats.body);

        var retries = stats.body.retries;

        if (retries &gt; retryThreshold) {
            var slackMessage = {
                channel: slackChannel,
                text: "www Sidekiq retries - " + retries
            };

            postMessageToSlack(slackMessage, function(response) {
                if (response.statusCode &lt; 400) {
                    console.info('Message posted successfully');
                    context.succeed();
                } else if (response.statusCode &lt; 500) {
                    console.error("Error posting message to Slack API: " + response.statusCode + " - " + response.statusMessage);
                    context.succeed();  // Don't retry because the error is due to a problem with the request
                } else {
                    // Let Lambda retry
                    context.fail("Server error when processing message: " + response.statusCode + " - " + response.statusMessage);
                }
            });
        } else {
            console.info('Sidekiq retries were ' + retries + '. Below threshold.');
            context.succeed();
        }
    })
};

exports.handler = function(event, context) {
    processEvent(event, context);
};</code></pre></figure>


<p><em>Note: <code>sidekiqUrl</code> and <code>hookUrl</code> need to be defined with appropriate values for this to work.</em></p>

<p>Review and save the Lambda function and we&rsquo;re all set!</p>

<h3>Review</h3>

<p>We can review the Lambda function logs on CloudWatch. Go to CloudWatch and choose &ldquo;Logs&rdquo; from the left menu. From here, we&rsquo;ll click the link to the name of our Lambda function:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/sidekiq-logs.png" title="&#34;AWS Cloudwatch logs&#34;" alt="&#34;AWS Cloudwatch logs&#34;"></p>

<p>From here, logs for each invocation of the Lambda function will be grouped into a log stream:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/log-streams.png" title="&#34;AWS Cloudwatch log streams&#34;" alt="&#34;AWS Cloudwatch log streams&#34;"></p>

<p>Grouped by time, each link will contain multiple invocations. A single execution is wrapped with a <code>START</code> and <code>END</code>, as shown in the logs. Messages in between will be calls to <code>console.log</code> from within our function. We logged the results of the Sidekiq queue poll for debugging purposes, so you can see that below:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/log.png" title="&#34;AWS Cloudwatch log&#34;" alt="&#34;AWS Cloudwatch log&#34;"></p>

<p>This was an invocation where the number of retries was &lt; 50 and, as a result, didn&rsquo;t need to <code>POST</code> to Slack. Let&rsquo;s take a look at the opposite:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/log-post.png" title="&#34;AWS Cloudwatch log posting to Slack&#34;" alt="&#34;AWS Cloudwatch log posting to Slack&#34;"></p>

<p>We can see the <code>Message posted successfully</code> log indicating our message was successfully sent to Slack&rsquo;s incoming webhook.</p>

<p>Finally, here&rsquo;s what the resulting message looks like in Slack when the number of Sidekiq retries are above our threshold:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq-monitor/slack.png" title="&#34;Slack notifications for Sidekiq retries&#34;" alt="&#34;Slack notifications for Sidekiq retries&#34;"></p>

<h2>Conclusion</h2>

<p>Using new tools is fun, but not when it brings operational complexity. I&rsquo;ve personally found AWS Lambda to be a great place for endpoints/functionality that feels cumbersome to include in my applications. Bringing these notifications into Slack has been a big win for our team. We took a previously untrustworthy notification (NewRelic error rate) and brought some clarity to the state and health of our applications.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Care About What You Build]]></title>
    <link href="http://brandonhilkert.com/blog/care-about-what-you-build/"/>
    <updated>2016-07-10T12:08:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/care-about-what-you-build</id>
    <content type="html"><![CDATA[<p>I&rsquo;ve spent the majority of my career working for companies building products I either wasn&rsquo;t interested in, or wasn&rsquo;t the target user. They were jobs. In exchange for my 40 hours, they supplied me a paycheck. As a result, I went home at the end of the day and was able to disconnect.</p>

<p>Fast forward almost 15 years and I&rsquo;m on the opposite end of the spectrum &ndash; I build a product I want to exist, a parent like me is the target user and furthermore, I have equity in the company.</p>

<!--more-->


<p>This spectrum of motivation and responsibility encapsulates all different types of software development jobs. Personal preferences play a large part in defining motivation. This article explores how caring about what I was building changed my perception of work and the questions I asked myself to get there.</p>

<h2>A Unique Time in Technology</h2>

<p>Knowledge workers&hellip;is that what we&rsquo;re called? As most job boards illustrate, I can&rsquo;t recall another time in my career when there&rsquo;s been such demand for developers. Recognizing the talent pool is limited in most areas, companies are hiring remote workers to grow their engineering teams.</p>

<p>If we, as developers, are in such high demand, <strong>why do we settle for anything short of our dream job?</strong> It&rsquo;s the perfect economic time to make a change if you don&rsquo;t feel fulfilled. Who knows if there&rsquo;ll be another time where we have so much leverage. You deserve a job you love, that also fits your professional ambitions.</p>

<h2>The Soapbox Test</h2>

<p>Most software developers I know are introverts. They generally like writing code and shipping new features, but avoid meetings at all costs. The rise of Slack and other remote-focused tooling further increases the human contact void that most developers experience.</p>

<p>My experience with sales people has been the polar opposite. They thrive on human interaction. Sales people, no matter the company, generally position their product/services as the perfect fit for you and your company. That&rsquo;s ultimately their job &ndash; match the customers&rsquo; need(s) with a solution they hopefully sell.</p>

<p>It&rsquo;s not surprising that being in a sales role for a company/product that you love and genuinely want to see succeed is much easier than one you couldn&rsquo;t care less about. This is where roles at a small company often cross over.</p>

<p>Not too long ago I took a break from writing code to <a href="http://brandonhilkert.com/talks/bark-techcrunch-disrupt.html">pitch Bark at TechCrunch Disrupt</a>. I&rsquo;m not a salesperson and don&rsquo;t strive to be. But because I deeply care about Bark and the differences we hope to make, becoming an advocate for the company is easy. I have to find a way to articulate what we do, how we do it, and why we do it. And who better than me? I spend the majority of my days already thinking about it, which makes me more qualified than anyone else.</p>

<p><strong>Imagine you have a microphone in your hand, a crowd of 2,000 people and 2 minutes to tell them what you’re building and why they should care. Does it feel weird or slimy?</strong></p>

<p>If the answer is &ldquo;yes&rdquo;, you should find another job where you&rsquo;re a natural advocate. If your love for what you&rsquo;re building isn&rsquo;t genuine, you&rsquo;re doing yourself a disservice. Passion and care have a way of turning even the most anti-sales people into advocates.</p>

<h2>The Deathbed Test</h2>

<p>Doctors and nurses that <a href="http://fourhourworkweek.com/2016/04/14/bj-miller/">spend a lot of time around people in the latter stages of their life</a> get an unusual look at the regrets of those in care. They’re often able to apply the lessons to their own life.</p>

<p>If you read about the most common regrets among the dying, it&rsquo;s something like “I wish I spent more time with my family” or “I wish I didn’t work as much”. Work occupies a large majority of our lives. Assuming we agree to work a somewhat standard career, the next question is “Are we happy with what we’re working on?&#8221;</p>

<p>The deathbed question is a useful one beyond your career decisions.</p>

<p><strong>Imagine you have 1 more day to live and you’re left thinking about all the choices you&rsquo;ve made over your lifetime. Would you be happy with the job you have today or the job you&rsquo;re thinking about taking tomorrow?</strong></p>

<p>If the answer is &ldquo;No&rdquo;, it&rsquo;s time to find something better. Life can be short. Don&rsquo;t waste it on a job you&rsquo;re less than excited about.</p>

<h2>The Lottery Test</h2>

<p>The media constantly reminds us about successful entrepreneurs like Mark Zuckerberg, Bill Gates and Elon Musk who don’t <em>have</em> to work, but do. A common thread amongst those founders is their genuine belief in what they’re creating. Given their financial freedom, they <em>could</em> spend hours sun-bathing on private beaches around the world or traveling to the most remote places on the planet, but they choose to work. In fact, my guess is that none of them refer to what they do as “work”.</p>

<p><strong>If you had unlimited financial freedom, like a sizable lottery, would you continue doing what you’re doing now?</strong></p>

<p>Whether it&rsquo;s the position, company, lack of control, or strict hour requirements, most of us would probably adjust our current position in at least a small way. If you&rsquo;d continue doing <em>exactly</em> what you&rsquo;re currently doing after winning the lottery, you&rsquo;ve won. This, of all the questions, is the ultimate. If you can answer &ldquo;Yes&rdquo; to this one, kudos to you. You&rsquo;re probably happier than 99.999% of people in the world.</p>

<h2>Ditch the Product, Love the Craft</h2>

<p>I know what you&rsquo;re thinking: you don&rsquo;t love what you&rsquo;re building, so where do you go from here? Not everyone has the freedom and ability to choose a position where they&rsquo;re 100% happy. There will always be personal trade-offs, some of which might leave you working for a company/product you don&rsquo;t care about and don&rsquo;t have the luxury to leave anytime soon. I&rsquo;ve been there before &ndash; ultimately leaving &ndash; but I made it work for a period of time.</p>

<p>One of the ways I separated myself was to focus on the software development craft. If you&rsquo;re like me, you care about designing things in a maintainable and reliable way. Whether it&rsquo;s good design, well-written tests, or using the latest and greatest, shifting your focus away from a product you have no interest in levels the playing field.</p>

<p>Take processing incoming email, for instance: whether you&rsquo;re making an internal tool for an ISP or adding email features to your latest drip campaign software, the challenge is the same. Perhaps you work for the less interesting of the two (based on personal preference); removing yourself from the end product where the feature will be present allows you to invest yourself 100% in making the best programming decisions possible. Being content with this approach requires a love for software and everything that comes with it. I&rsquo;ve done this dozens of times and it&rsquo;s generally gotten me out of a funk. Looking back, I&rsquo;ve always been proud of what I&rsquo;ve accomplished after investing myself 100% in the craft.</p>

<h2>Career Kickstarter</h2>

<p>Whether you&rsquo;ve just graduated from college or a developer bootcamp, you don&rsquo;t always have the luxury of being picky. You take the best job available at the time, no matter the product, and get some experience under your belt. This is reasonable and expected (I think there&rsquo;s value in targeting the more interesting companies even if you&rsquo;re just starting, but depending on your interests, a job with an interesting company may not always be possible). Within 2&ndash;3 years (or sooner!), you&rsquo;ll have the experience needed to be a little more picky. It&rsquo;s worth keeping an eye on the handful of companies that do interest you in case opportunities arise there in the future.</p>

<h2>Niche Guru</h2>

<p>The other way this comes up is when there&rsquo;s a team or specific developer you want to work with, no matter the company/product. If your interests lie within a specific niche, what better way to learn than from the best? Find the most influential person in that niche and try to work with them, attempting to suck every ounce of know-how out of them. This requires putting your own ego and opinions to the side for a bit. The two times I&rsquo;ve done this in my career, I came out on top.</p>

<h2>Summary</h2>

<p>Happiness is subjective. Every career decision is the result of personally quantifiable trade-offs. When I look back on my career thus far, the happiest I’ve been has coincided with how fulfilled I&rsquo;ve felt about my current position. I’ve recognized this and made changes when necessary. I envy those who can hate their job and block it out when they’re not at work. I can’t. Rather than deal with it, I&rsquo;ve tried to avoid hating my job. Each step I take gets me closer to what I imagine being the <em>perfect</em> job, if there is such a thing.</p>

<p>Care about what you build. Your life will be better for it.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Rails Progress Indicator for Turbolinks Using Nprogress]]></title>
    <link href="http://brandonhilkert.com/blog/rails-progress-indicator-for-turbolinks-using-nprogress/"/>
    <updated>2016-04-29T09:44:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/rails-progress-indicator-for-turbolinks-using-nprogress</id>
    <content type="html"><![CDATA[<p>Contrary to popular opinion, I&rsquo;m a fan of <a href="https://github.com/turbolinks/turbolinks">Turbolinks</a>.
I leave it enabled in all my Rails applications. Most of the negative opinions I hear relate to it &ldquo;breaking&rdquo; third-party jQuery plugins. I say &ldquo;breaking&rdquo; because it&rsquo;s not <em>really</em> changing the plugin&rsquo;s behavior &ndash; it just requires the plugin to be initialized differently.</p>

<!--more-->


<p>If you&rsquo;re upgrading to a newer version of Rails and have a bunch of legacy JavaScript code, I can imagine this being difficult. But if you&rsquo;re green-fielding a new application, there&rsquo;s no reason not to take advantage of it. I wrote extensively about <a href="http://brandonhilkert.com/blog/organizing-javascript-in-rails-application-with-turbolinks/">how to organize JavaScript in a Rails application with Turbolinks enabled</a>. If you&rsquo;re struggling to get your JavaScript code to work as expected on clicks through the application, take a look at that post. I continue to use that organization pattern for all my applications and it never lets me down.</p>

<p>With Turbolinks enabled, interacting with an application feels smooth and fast. No more full page refreshes.</p>

<p>Every once in a while we&rsquo;ll stumble on a page request that takes longer than others. Rather than having the user sit there thinking nothing is happening, we can offer better feedback through a loading progress bar, specifically <a href="http://ricostacruz.com/nprogress/">nprogress</a>. I&rsquo;ve found it to be the perfect companion to Turbolinks to create a great user experience.</p>

<h2>The Problem</h2>

<p>In a traditional web application, when we click a link or submit a form, we get a loading spinner where the favicon typically appears. We might also see text in the status bar saying &ldquo;Connecting&hellip;&rdquo; or &ldquo;Loading&hellip;&rdquo;. These are the loading indications that internet users have become accustomed to.</p>

<p>By adopting Turbolinks, we no longer get those loading feedback mechanisms because the request for the new page is asynchronous. Once the request is complete, the new content is rendered in place of the previous page&rsquo;s body element. For fast page loads, this isn&rsquo;t a problem. However, if you have applications like mine, every once in a while you might have a page request take a few seconds (the reasons for this are beyond the scope of this article). In those cases, a user might click a link and sit there for 2&ndash;3 seconds without any indication the page is loading. While Turbolinks generally improves the user experience of our application, having no user feedback for several seconds is not ideal (ideally, you&rsquo;d want to address a page request that takes multiple seconds). This is where <code>nprogress</code> can help.</p>

<h2>The Solution</h2>

<p><a href="http://ricostacruz.com/nprogress/"><code>nprogress</code></a> is a progress loading indicator, like what you see on YouTube.</p>

<p>Like other JavaScript libraries, there&rsquo;s <a href="https://github.com/caarlos0/nprogress-rails">a Ruby Gem that vendors the code and includes it in the Rails asset pipeline</a>.</p>

<p>We&rsquo;ll first add <code>nprogress-rails</code> to our Gemfile:</p>

<figure class='code'><pre><code>gem "nprogress-rails"</code></pre></figure>


<p>Bundle to install the new gem:</p>

<figure class='code'><pre><code>$ bundle install</code></pre></figure>


<p>Now with <code>nprogress</code> installed, we need to include the JavaScript in our application. We&rsquo;ll do this by adding the following to the <code>app/assets/javascripts/application.js</code> manifest:</p>

<figure class='code'><pre><code>//= require nprogress
//= require nprogress-turbolinks</code></pre></figure>


<p>We first include the <code>nprogress</code> JavaScript source, and then an adapter that&rsquo;ll hook the Turbolinks request to the progress indicator.</p>

<p><em>Note: If you&rsquo;re familiar with Turbolinks and its events, you&rsquo;ll recognize the <a href="https://github.com/caarlos0/nprogress-rails/blob/master/app/assets/javascripts/nprogress-turbolinks.js">events triggered</a>.</em></p>

<p>By default, the <code>nprogress</code> loading bar is anchored to the top of the browser window, but we need to include some CSS to make this work. Let&rsquo;s open the <code>app/assets/stylesheets/application.scss</code> manifest file and add the following:</p>

<figure class='code'><pre><code>*= require nprogress
*= require nprogress-bootstrap</code></pre></figure>


<p><em>Note: Including <code>nprogress-bootstrap</code> isn&rsquo;t necessary if you don&rsquo;t use <a href="http://getbootstrap.com/css/">Bootstrap</a> in your application. I typically do, so I&rsquo;m going to include it.</em></p>

<p>At this point, we&rsquo;ll have a working loading indicator. But what if we want to tweak the styles to match your application&rsquo;s theme?</p>

<h2>Customizing Nprogress Styles</h2>

<p>Because the <a href="https://github.com/caarlos0/nprogress-rails/blob/master/app/assets/stylesheets/nprogress.scss#L1"><code>nprogress</code> styles are Sass</a>, we can overwrite the variables for customization.</p>

<p>There are 3 variables available to overwrite:</p>

<ul>
<li><code>$nprogress-color</code></li>
<li><code>$nprogress-height</code></li>
<li><code>$nprogress-zindex</code></li>
</ul>


<p>For <a href="https://www.bark.us/">Bark</a>, we have an aqua accent color we use throughout the site. It made sense for the <code>nprogress</code> loading indicator to be that same color.</p>

<p>Back in our <code>app/assets/stylesheets/application.scss</code>, I overwrote the variable before including the <code>nprogress</code> source code:</p>

<figure class='code'><pre><code>$nprogress-color: #37c8c9;

@import "nprogress";
@import "nprogress-bootstrap";</code></pre></figure>


<h2>Summary</h2>

<p>I&rsquo;ve found <code>nprogress</code> to be a great companion library to Turbolinks. The two libraries together provide a much better user experience over full page refreshes. Turbolinks helps asynchronously load the page content that&rsquo;s changing and <code>nprogress</code> gives the user feedback that their request is in progress. Now, even when a user has to suffer through multi-second page loads, at least they&rsquo;ll know it&rsquo;s not broken and don&rsquo;t have to click again.</p>

<p>The <a href="https://github.com/turbolinks/turbolinks/blob/master/src/turbolinks/progress_bar.coffee">latest version of Turbolinks has a progress bar
built-in</a>.
I&rsquo;m looking forward to removing the dependency if it performs similarly.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[A Guide to Ruby Gem Post-Install Messages]]></title>
    <link href="http://brandonhilkert.com/blog/ruby-gem-post-install-message/"/>
    <updated>2016-04-22T20:13:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/ruby-gem-post-install-message</id>
    <content type="html"><![CDATA[<p>As gem authors, one of the ways we can provide important information to users of our gems is through post-install messages. Let&rsquo;s explore what they are, how to set them up, what to include and when to use them.</p>

<!--more-->


<h2>What are Post-Install Messages?</h2>

<p>As Rubyists, we have plenty of experience installing gems. By running <code>gem install rails</code>, we&rsquo;re asking Rubygems to install the gem named <code>rails</code> onto our system.</p>

<p>The typical output of installing a gem with no other dependencies (assuming it completes successfully) is minimal:</p>

<figure class="code"><pre><code class="bash">$ gem install so_meta
Successfully installed so_meta-0.1
1 gem installed
</code></pre></figure>


<p>As you can see, we ran <code>gem install so_meta</code> and the output confirmed the install, with nothing more.</p>

<p>If you&rsquo;ve used the <a href="https://github.com/jnunemaker/httparty">HTTParty</a> gem, you&rsquo;ve probably seen the additional line of output it generates when you run <code>gem install httparty</code>:</p>

<figure class="code"><pre><code class="bash">$ gem install httparty
When you HTTParty, you must party hard!
Successfully installed httparty-0.13.7
1 gem installed
</code></pre></figure>


<p>Where did <code>When you HTTParty, you must party hard!</code> come from? It turns out the source of that text was a post-install message defined in the <a href="https://github.com/jnunemaker/httparty/blob/v0.13.7/httparty.gemspec#L22"><code>gemspec</code></a>.</p>

<p>Now, I know what you&rsquo;re probably thinking&hellip;what good is that message? That&rsquo;s up for debate. In fact, that specific message in <code>HTTParty</code> has been the source of much debate over the years.</p>

<h2>How to configure a Post-Install Message</h2>

<p>As we&rsquo;ve seen before, the <code>gemspec</code> file (located at the root of the gem) defines the specification of a Ruby gem. Using bundler to bootstrap a new gem will automatically create this file. Here&rsquo;s an example of a default <code>gemspec</code> file created by bundler using the command <code>bundle gem brandon</code> (<code>brandon</code> being the name of my fake gem):</p>

<figure class="code"><pre><code class="ruby"># coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'brandon/version'

Gem::Specification.new do |spec|
  spec.name          = "brandon"
  spec.version       = Brandon::VERSION
  spec.authors       = ["Brandon Hilkert"]
  spec.email         = ["brandonhilkert@gmail.com"]

  spec.summary       = %q{TODO: Write a short summary, because Rubygems requires one.}
  spec.description   = %q{TODO: Write a longer description or delete this line.}
  spec.homepage      = "TODO: Put your gem's website or public repo URL here."

  # Prevent pushing this gem to RubyGems.org by setting 'allowed_push_host', or
  # delete this section to allow pushing this gem to any host.
  if spec.respond_to?(:metadata)
    spec.metadata['allowed_push_host'] = "TODO: Set to 'http://mygemserver.com'"
  else
    raise "RubyGems 2.0 or newer is required to protect against public gem pushes."
  end

  spec.files         = `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
  spec.bindir        = "exe"
  spec.executables   = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ["lib"]

  spec.add_development_dependency "bundler", "~> 1.11"
  spec.add_development_dependency "rake", "~> 10.0"
  spec.add_development_dependency "minitest", "~> 5.0"
end
</code></pre></figure>


<p>Aside from <code>summary</code>, <code>description</code>, and <code>homepage</code>, we can leave the rest of this file intact. These setter attributes on the <code>Gem::Specification.new</code> instance allow us to define the options and metadata necessary to properly configure and release a Ruby gem (see <a href="http://guides.rubygems.org/specification-reference/">the Rubygems specification reference</a> for an extensive list of options).</p>

<p>As you might have guessed by now, a post-install message is an <a href="http://guides.rubygems.org/specification-reference/#post_install_message">option available in the gemspec</a>. The value can be a simple string or a more complex <a href="https://en.wikipedia.org/wiki/Here_document">heredoc</a>.</p>

<p>The simplest example being:</p>

<figure class="code"><pre><code class="ruby">spec.post_install_message = "My test post-install message."
</code></pre></figure>


<p>With that in our <code>gemspec</code>, now when we install our fake gem <code>brandon</code>, we&rsquo;ll see the following output:</p>

<figure class="code"><pre><code class="bash">$ gem install brandon
My test post-install message.
Successfully installed brandon-0.1.0
1 gem installed
</code></pre></figure>


<p>Easy, huh?</p>

<p>If we wanted to include a more complex message with line breaks and other formatting, another option would be something like:</p>

<figure class="code"><pre><code class="ruby">s.post_install_message = %q{
My test post-install message.

Another post-install message a few lines down.
}
</code></pre></figure>


<p>The formatting of these messages can get weird because whitespace is preserved in multiline strings. If you&rsquo;re looking to include anything more complex than a simple string literal, it&rsquo;s worth experimenting by installing locally and confirming it&rsquo;s what you want.</p>
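
<p>One way to sidestep the whitespace problem on Ruby 2.3+ is a &ldquo;squiggly&rdquo; heredoc (<code>&lt;&lt;~</code>), which strips the common leading indentation so you can indent the literal inside the <code>gemspec</code> without indenting the printed message. A quick comparison (the variable names are just for illustration):</p>

```ruby
# %q{} preserves whitespace exactly as written, so indenting the
# literal inside the gemspec indents every line of the message.
indented = %q{
  Indented line.
}

# A squiggly heredoc strips the common leading indentation,
# leaving the message flush left when the gem is installed.
flush = <<~MSG
  Indented line.
MSG

# e.g. spec.post_install_message = flush
```
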

<p>The <a href="https://github.com/newrelic/rpm">NewRelic gem</a> is another example that comes to mind that commonly includes more than just a simple string. Looking back at an <a href="https://github.com/newrelic/rpm/blob/v2.12.0/newrelic_rpm.gemspec#L193">older version of the NewRelic gem</a> yields the following <code>post_install_message</code>:</p>

<figure class="code"><pre><code class="ruby">s.post_install_message = %q{
Please see http://support.newrelic.com/faqs/docs/ruby-agent-release-notes
for a complete description of the features and enhancements available
in version 2.12 of the Ruby Agent.

For details on this specific release, refer to the CHANGELOG file.

}
</code></pre></figure>


<p>Notice the message includes a line break both before and after the text. This helps set our post-install message apart when it&rsquo;s included in the longer output of a command like <code>bundle install</code>. Again, if you&rsquo;re focused on getting the formatting right, it&rsquo;s worth installing the gem locally into something like a Rails application, which yields more output than <code>gem install [gemname]</code>.</p>

<h2>When to Use Post-Install Messages</h2>

<p>The examples above use post-install messages for different reasons. <code>HTTParty</code>&rsquo;s message wasn&rsquo;t for a serious technical or informational reason &ndash; just a light-hearted message that&rsquo;s garnered quite a bit of negative attention from users who don&rsquo;t appreciate it.</p>

<p>My suggestion would be to avoid any nonsensical messages and only provide a post-install message for something like breaking changes or information you think is critical to the usage of your gem. In most cases, <strong>post-install messages are most useful when a user is upgrading from an older version of your gem and the new version includes backwards-incompatible changes</strong>. Whether it be syntax changes or core functionality, post-install messages give us as gem authors a means to keep our users updated.</p>

<h2>What to Include in Post-Install Messages</h2>

<p>If you&rsquo;re adhering to semantic versioning and introduce any breaking changes in a major release, a post-install message is a great way to warn users about the changes. However, one thing you want to avoid is enumerating your gem&rsquo;s full changelog in the message. In most cases, a short notice about the backwards incompatible changes and a URL for more information is enough.</p>
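<p>Following that advice, a concise breaking-change notice in a gemspec might look like the sketch below. The gem name, version, and URL are hypothetical, included only to show the shape of the field:</p>

```ruby
# my_gem.gemspec -- a hypothetical gem; only the fields relevant to the
# post-install message are shown
spec = Gem::Specification.new do |s|
  s.name    = "my_gem"
  s.version = "2.0.0"
  s.summary = "Example gem"
  s.authors = ["Example Author"]

  # Blank lines before and after help the notice stand out in the
  # longer output of `bundle install`
  s.post_install_message = <<~MSG

    my_gem v2.0 introduces backwards-incompatible changes.
    Please see https://example.com/my_gem/CHANGES.md#20 for details.

  MSG
end
```

<p>A short notice plus a URL keeps the message scannable while still pointing users at the full details.</p>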

<p>I <a href="http://brandonhilkert.com/blog/lessons-learned-from-building-a-ruby-gem-api/">introduced a new public API in Sucker Punch</a>, which warranted a major release. Because of these backwards-incompatible changes, I added a post-install message to the new version:</p>

<figure class="code"><pre><code class="bash">$ gem install sucker_punch
Fetching: sucker_punch-2.0.1.gem (100%)
Sucker Punch v2.0 introduces backwards-incompatible changes.
Please see https://github.com/brandonhilkert/sucker_punch/blob/master/CHANGES.md#20 for details.
Successfully installed sucker_punch-2.0.1
1 gem installed
</code></pre></figure>


<p>&ldquo;Sucker Punch v2.0 introduces backwards-incompatible changes&rdquo; provided the heads up that something was different. The URL on the following line allows users to see a more extensive list of the changes and to make adjustments in their application if necessary.</p>

<h2>Summary</h2>

<p>In addition to documentation through a <code>README</code> or wiki, post-install messages are a great way to keep users of our gems informed. Having access to the output of their console is a privilege, so use it sparingly. Like the boy who cried wolf, if we include a wall of text with each release of our gem, users will learn to ignore it and that would negatively affect its value for everyone.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Solving backwards compatibility in Ruby with a proxy object]]></title>
    <link href="http://brandonhilkert.com/blog/solving-backwards-compatibility-in-ruby-with-a-proxy-object/"/>
    <updated>2016-01-26T07:00:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/solving-backwards-compatibility-in-ruby-with-a-proxy-object</id>
    <content type="html"><![CDATA[<p>In a previous article, I <a href="http://brandonhilkert.com/blog/lessons-learned-from-building-a-ruby-gem-api/">documented the upcoming public API changes slated for Sucker Punch v2</a>. Because of a poor initial design, these API changes are <strong>backwards incompatible</strong>.</p>

<p>When I published the previous article, <a href="https://twitter.com/mperham/status/684529380446441472">Mike Perham rightly pointed out the opportunity to support the previous version&rsquo;s API through an opt-in module</a>. I was hesitant to include support for the old syntax by default, but allowing a developer to require a file to get the old syntax made complete sense to me. My intent was never to abandon existing Sucker Punch users, but the change felt necessary for the success of the project going forward.</p>

<!--more-->


<h2>The Problem</h2>

<p>The following is an example of enqueueing a background job with Sucker Punch using the old syntax:</p>

<figure class='code'><pre><code>LogJob.new.async.perform("new_user")</code></pre></figure>


<p>And with the new syntax:</p>

<figure class='code'><pre><code>LogJob.perform_async("new_user")</code></pre></figure>


<p><em>How do we support the old syntax in the new version?</em></p>

<p>Let&rsquo;s step back and remind ourselves of what a typical job class looks like:</p>

<figure class='code'><pre><code>class LogJob
  include SuckerPunch::Job

  def perform(event)
    Log.new(event).track
  end
end</code></pre></figure>


<p>Important points to notice:</p>

<ol>
<li>Each job includes the <code>SuckerPunch::Job</code> module to gain access to asynchronous behavior</li>
<li>Each job executes its logic using the <code>perform</code> instance method</li>
<li>Each job passes arguments needed for its logic as arguments to the <code>perform</code> instance method</li>
</ol>


<h2>The Solution</h2>

<p>We&rsquo;ll start with the test:</p>

<figure class='code'><pre><code># test/sucker_punch/async_syntax_test.rb

require 'test_helper'

module SuckerPunch
  class AsyncSyntaxTest &lt; Minitest::Test
    def setup
      require 'sucker_punch/async_syntax'
    end

    def test_perform_async_runs_job_asynchronously
      arr = []
      latch = Concurrent::CountDownLatch.new
      FakeLatchJob.new.async.perform(arr, latch)
      latch.wait(0.2)
      assert_equal 1, arr.size
    end

    private

    class FakeLatchJob
      include SuckerPunch::Job

      def perform(arr, latch)
        arr.push true
        latch.count_down
      end
    end
  end
end</code></pre></figure>


<p><em>Note: Some details of this are complex because the job&rsquo;s code is running in another thread. I&rsquo;ll walk through those details in a future article.</em></p>

<p>The basic sequence is:</p>

<ol>
<li>Require <code>sucker_punch/async_syntax</code></li>
<li>Execute a background job using the <code>async</code> syntax</li>
<li>Assert changes made in that job were successful</li>
</ol>

<p>Running the tests above, we get the following error:</p>

<figure class='code'><pre><code>1) Error:
SuckerPunch::AsyncSyntaxTest#test_perform_async_runs_job_asynchronously:
LoadError: cannot load such file -- sucker_punch/async_syntax
  /Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:6:in `require'
  /Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:6:in `setup'

1 runs, 0 assertions, 0 failures, 1 errors, 0 skips</code></pre></figure>


<p>Ok, so the file doesn&rsquo;t exist. Let&rsquo;s create it and re-run the tests:</p>

<figure class='code'><pre><code>1) Error:
SuckerPunch::AsyncSyntaxTest#test_perform_async_runs_job_asynchronously:
NoMethodError: undefined method `async' for #&lt;SuckerPunch::AsyncSyntaxTest::FakeLatchJob:0x007fbc73cbf548&gt;
  /Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:12:in `test_perform_async_runs_job_asynchronously'</code></pre></figure>


<p>Progress! The job doesn&rsquo;t have an <code>async</code> method. Let&rsquo;s add it:</p>

<figure class='code'><pre><code>module SuckerPunch
  module Job
    def async # &lt;--- New method
    end
  end
end</code></pre></figure>


<p><em>Notice: We&rsquo;re monkey-patching the <code>SuckerPunch::Job</code> module. This will allow us to add methods to the background job since it&rsquo;s included in the job.</em></p>

<p>The tests now:</p>

<figure class='code'><pre><code>1) Error:
SuckerPunch::AsyncSyntaxTest#test_perform_async_runs_job_asynchronously:
NoMethodError: undefined method `perform' for nil:NilClass
  /Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:12:in `test_perform_async_runs_job_asynchronously'</code></pre></figure>


<p>More progress&hellip;the <code>async</code> method we added returns nil, and because of the syntax <code>async.perform</code>, there&rsquo;s no <code>perform</code> method on the output of <code>async</code>. In short, we need to return something from <code>async</code> that responds to <code>perform</code> and can run the job.</p>

<p>In its most basic form, suppose we create a proxy object that responds to <code>perform</code>:</p>

<figure class='code'><pre><code>class AsyncProxy
  def perform
  end
end</code></pre></figure>


<p>We&rsquo;ll need to do some work in <code>perform</code> to execute the job, but this&rsquo;ll do for now. Now, let&rsquo;s integrate this new proxy into our <code>async_syntax.rb</code> file and return a new instance of the proxy from the <code>async</code> method:</p>

<figure class='code'><pre><code>module SuckerPunch
  module Job
    def async
      AsyncProxy.new # &lt;--- new instance of the proxy
    end
  end

  class AsyncProxy
    def perform
    end
  end
end</code></pre></figure>


<p>Running our tests gives us the following:</p>

<figure class='code'><pre><code>1) Error:
SuckerPunch::AsyncSyntaxTest#test_perform_async_runs_job_asynchronously:
ArgumentError: wrong number of arguments (2 for 0)
  /Users/bhilkert/Dropbox/code/sucker_punch/lib/sucker_punch/async_syntax.rb:9:in `perform'
  /Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:12:in `test_perform_async_runs_job_asynchronously'</code></pre></figure>


<p>Now we&rsquo;re on to something. We see an error related to the number of arguments on the <code>perform</code> method. Because each job&rsquo;s argument list will be different, we need to find a way to be flexible for whatever&rsquo;s passed in, something like&hellip;the splat operator! Let&rsquo;s try it:</p>

<figure class='code'><pre><code>module SuckerPunch
  module Job
    def async
      AsyncProxy.new
    end
  end

  class AsyncProxy
    def perform(*args) # &lt;--- Adding the splat operator, will handle any # of args
    end
  end
end</code></pre></figure>


<p>The tests now:</p>

<figure class='code'><pre><code>1) Failure:
SuckerPunch::AsyncSyntaxTest#test_perform_async_runs_job_asynchronously [/Users/bhilkert/Dropbox/code/sucker_punch/test/sucker_punch/async_syntax_test.rb:14]:
Expected: 1
Actual: 0</code></pre></figure>


<p>At this point, we&rsquo;ve reached the end of test output suggesting the path forward. This failure is saying, &ldquo;Your assertion failed.&rdquo; That&rsquo;s good news because it means our syntax implementation will work; all that&rsquo;s left is executing the actual job code in the proxy&rsquo;s <code>perform</code> method.</p>

<p>We want to leverage our new syntax (<code>perform_async</code>) to run the actual job asynchronously so it passes through the standard code path. To do so, we&rsquo;ll need a reference to the original job in the proxy object. Let&rsquo;s pass that to the proxy during instantiation:</p>

<figure class='code'><pre><code>module SuckerPunch
  module Job
    def async
      AsyncProxy.new(self) # &lt;--- Pass the job instance
    end
  end

  class AsyncProxy
    def initialize(job) # &lt;--- Handle job passed in
      @job = job
    end

    def perform(*args)
    end
  end
end</code></pre></figure>


<p>Now that the proxy has a reference to the job instance, we can call the <code>perform_async</code> class method to execute the job:</p>

<figure class='code'><pre><code>module SuckerPunch
  module Job
    def async
      AsyncProxy.new(self)
    end
  end

  class AsyncProxy
    def initialize(job)
      @job = job
    end

    def perform(*args)
      @job.class.perform_async(*args) # &lt;---- Execute the job
    end
  end
end
</code></pre></figure>


<p>Lastly, the tests:</p>

<figure class='code'><pre><code>$ bundle exec rake test TEST="test/sucker_punch/async_syntax_test.rb"
Run options: --seed 43886

# Running:

.

1 runs, 1 assertions, 0 failures, 0 errors, 0 skips</code></pre></figure>


<p>Success!</p>

<p>Just like that, new users of Sucker Punch will be able to add <code>require 'sucker_punch/async_syntax'</code> to their projects to use the old syntax. This will allow existing projects using Sucker Punch to take advantage of the reworked internals without the need to make sweeping changes to the enqueueing syntax.</p>

<p>Support for the old syntax will be available for the foreseeable future via this include. All new code/applications should use the new syntax going forward.</p>

<h2>Conclusion</h2>

<p>Before realizing a proxy object would work, I tinkered with <code>alias_method</code> and a handful of other approaches to latching on to the job&rsquo;s <code>perform</code> method and saving it off to execute later. While some combination of these might have worked, the proxy object solution is simple and elegant. There&rsquo;s no magic, which means less maintenance going forward. The last thing I want is to make a breaking change, add support for the old syntax, and then find that support to be bug-ridden.</p>

<p>Ruby is incredibly flexible. Sometimes a 9-line class is enough to get the job done without reaching for an overly complex metaprogramming approach.</p>
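<p>To see the pattern outside of Sucker Punch, here&rsquo;s a self-contained sketch. The <code>Job</code>, <code>OldSyntaxProxy</code>, and <code>Greeter</code> names are made up for illustration, and <code>perform_async</code> runs the job synchronously here to keep the example dependency-free (in Sucker Punch it would go through a thread pool):</p>

```ruby
# A minimal, dependency-free sketch of the proxy-object approach.
module Job
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # New-style class method -- the one supported code path
    def perform_async(*args)
      new.perform(*args)
    end
  end

  # Old-style entry point, kept for backwards compatibility
  def async
    OldSyntaxProxy.new(self)
  end
end

class OldSyntaxProxy
  def initialize(job)
    @job = job
  end

  # Forward through the new public API so both syntaxes share one path
  def perform(*args)
    @job.class.perform_async(*args)
  end
end

class Greeter
  include Job

  def perform(name)
    "Hello, #{name}!"
  end
end

Greeter.perform_async("Ruby")     # new syntax => "Hello, Ruby!"
Greeter.new.async.perform("Ruby") # old syntax, via the proxy => "Hello, Ruby!"
```

<p>Both call sites end up in the same <code>perform_async</code> method, which is exactly why the real implementation stays so small.</p>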

<p>Having said all that, <a href="https://github.com/brandonhilkert/sucker_punch">Sucker Punch <code>v2</code> has been
released</a>!</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Lessons Learned from Building a Ruby Gem API]]></title>
    <link href="http://brandonhilkert.com/blog/lessons-learned-from-building-a-ruby-gem-api/"/>
    <updated>2016-01-04T13:12:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/lessons-learned-from-building-a-ruby-gem-api</id>
    <content type="html"><![CDATA[<p>Sucker Punch was created because I had a <a href="http://brandonhilkert.com/blog/why-i-wrote-the-sucker-punch-gem/">need for background processing without a separate worker</a>. But I also figured others did too, given that adding a worker dyno on Heroku was $35. For hobby apps, this was a significant cost.</p>

<p>Having gotten familiar with Celluloid from my work on Sidekiq, I knew Celluloid had all the pieces of the puzzle to make this easier. In fact, one of the earliest incarnations of Sucker Punch wasn&rsquo;t a gem at all, just some Ruby classes implementing the pieces of Celluloid necessary to put together a background processing queue.</p>

<!--more-->


<p>The resulting code was less than ideal. It worked, but it didn&rsquo;t feel like an API anyone would want to use. From a beginner&rsquo;s perspective, this would stop adoption in its tracks. This is a common challenge with any code we encounter. No doubt, the Ruby standard library has all the tools necessary to make just about anything we can dream of, but sometimes the result isn&rsquo;t ideal. It&rsquo;s the same reason libraries like RSpec and HTTParty can exist. Developers prefer simple <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSLs</a> over convoluted, similarly-functioning code. Ruby has always been a language whose developers tout their ability to write code that reads well, feeding developer happiness.</p>

<h2>Why Rewrite Sucker Punch</h2>

<p>It started when <a href="https://github.com/brandonhilkert/sucker_punch/issues/122">a version of Celluloid was yanked from RubyGems.org</a>. This resulted in a flurry of tweets and GitHub issues from users detailing their inability to bundle their applications.</p>

<p>As of version <code>0.17</code>, methods in Celluloid&rsquo;s public API changed without supporting documentation. On top of that, the core <code>celluloid</code> gem was split into a series of child gems, making the codebase painful to navigate.</p>

<p>This made my life as the Sucker Punch maintainer difficult. There were some requests to upgrade Sucker Punch to use Celluloid <code>~&gt; 0.17</code> and I feared what would happen if I did. This caused me to think about what the future of Sucker Punch looked like without Celluloid. I still use Sucker Punch and believe it&rsquo;s a valuable asset to the community. My goal was to find a way to move it forward productively without experiencing similar pains.</p>

<p>In the end, thanks to some <a href="https://github.com/brandonhilkert/sucker_punch/pull/126">community contributions</a>, <a href="https://github.com/brandonhilkert/sucker_punch/blob/master/CHANGES.md#160">Sucker Punch <code>1.6.0</code> was released with Celluloid <code>0.17.2</code> support</a>.</p>

<h2>Where to now?</h2>

<p>Around that same time, Mike Perham had been writing about his experiences <a href="http://www.mikeperham.com/2015/10/14/optimizing-sidekiq/">optimizing Sidekiq</a> and <a href="http://www.mikeperham.com/2015/10/14/should-you-use-celluloid/">whether continuing with Celluloid made sense for Sidekiq</a>. Having less experience with multi-threading, it didn&rsquo;t make sense for me to reinvent the wheel.</p>

<p>I had been hearing about <a href="https://github.com/ruby-concurrency/concurrent-ruby"><code>concurrent-ruby</code></a> through a variety of outlets, one of which was Rails <a href="https://github.com/rails/rails/pull/20866">replacing the existing concurrency latch with similar functionality from <code>concurrent-ruby</code></a>. After poking around <code>concurrent-ruby</code>, I realized it had all the tools necessary to build a background job processing library. Much like Celluloid, it had the tools but lacked a simple DSL for the use case.</p>

<p>What if Sucker Punch used <code>concurrent-ruby</code> in place of <code>celluloid</code>?</p>

<p>I can hear what you&rsquo;re thinking&hellip;&ldquo;What&rsquo;s the difference? You&rsquo;re swapping one dependency for another!&rdquo;. 100% true. The difference was that the little bit of communication I had with the maintainers of <code>concurrent-ruby</code> felt comfortable, easy, and welcoming. And with <code>concurrent-ruby</code> now a dependency of Rails, it&rsquo;s even more accessible for those using Sucker Punch within a Rails application (a common use case). But like before, there&rsquo;s no way to be sure that  <code>concurrent-ruby</code> won&rsquo;t cause similar pains/frustrations.</p>

<h2>Celluloid Basics</h2>

<p>A basic Sucker Punch job looks like:</p>

<figure class='code'><pre><code>class LogJob
  include SuckerPunch::Job

  def perform(event)
    Log.new(event).track
  end
end</code></pre></figure>


<p>To run the job asynchronously, we use the following syntax:</p>

<figure class='code'><pre><code>LogJob.new.async.perform("new_user")</code></pre></figure>


<p>The most interesting part of this method chain is the <code>async</code>. Removing <code>async</code>, leaves us with a call to a regular instance method.</p>

<p>It so happens that <a href="https://github.com/celluloid/celluloid/wiki/Basic-usage"><code>async</code> is a method in Celluloid that causes the next method to execute asynchronously</a>. And this works because by including <code>SuckerPunch::Job</code>, we&rsquo;re including <code>Celluloid</code>, which gives us the <code>async</code> method on instances of the job class.</p>

<h2>Developing APIs</h2>

<p>If you&rsquo;re familiar with the basics of Celluloid, you&rsquo;ll notice there&rsquo;s not much to Sucker Punch. It adds the Celluloid functionality to job classes and does some things under the hood to ensure there&rsquo;s one queue for each job class.</p>

<p><strong>Early in my <code>concurrent-ruby</code> spike, I realized what a mistake it was to tie Sucker Punch&rsquo;s API to the API of Celluloid</strong>. Tinkering with the idea of removing Celluloid left Sucker Punch with two options:</p>

<ol>
<li>Continue using the <code>async</code> method with the new dependency</li>
<li>Break the existing DSL, create a dependency-independent syntax, and do my best to document and support the backwards-incompatible change</li>
</ol>


<p>Option 1 is the easy way out. Option 2 is more work, far more scary, but the right thing to do.</p>

<p>I decided to set aside thoughts of previous versions and write the API as if it were new today. This will be the basis for the next major release of Sucker Punch (<code>2.0.0</code>).</p>

<p>Settling on abandoning the existing API, the next question is, <strong>&ldquo;What should the new API look like?&rdquo;</strong>.</p>

<p>Being a fan of Sidekiq, it didn&rsquo;t take long for me to realize it could make developers&rsquo; lives easier if Sucker Punch&rsquo;s API were the same.</p>

<p>Switching between Sidekiq and Sucker Punch is not uncommon. I look at Sidekiq as Sucker Punch&rsquo;s big brother and often suggest people use it instead when the use case makes sense.</p>

<p>If you&rsquo;re familiar with Sidekiq, using the <code>perform_async</code> class method should look familiar:</p>

<figure class='code'><pre><code>LogJob.perform_async("new_user")</code></pre></figure>


<p><strong>So why not use the same for Sucker Punch?</strong></p>

<p>If so, switching between Sidekiq and Sucker Punch would be no more than swapping <code>include Sidekiq::Worker</code> for <code>include SuckerPunch::Job</code> in the job class, aside from the gem installation itself. The result would be less context switching and more opportunity to focus on the important parts of the application.</p>

<p>I can hear the same question again, &ldquo;What&rsquo;s the difference? You suggested isolating yourself from a dependency&rsquo;s API and now you&rsquo;re suggesting using another!&rdquo;. I look at this one a little differently&hellip;</p>

<p>Sidekiq is uniquely positioned in the community as a paid open source project. We&rsquo;re happy users of Sidekiq Pro and continue to pay for it because of the support. You can certainly get support for the open source version, but one way to ensure Sidekiq is actively maintained is by paying for it. This financial support from us and others decreases the likelihood Mike will choose to abandon it. Mike&rsquo;s also been public about his long-term interest in maintaining Sidekiq. With all this in mind, I&rsquo;m willing to bank on its existence as the defacto way to enqueue jobs for background processing.</p>

<p>And if for some reason Sidekiq does disappear, there&rsquo;s nothing lost on Sucker Punch. There&rsquo;s no dependency. Just a similar syntax.</p>

<p>Sucker Punch <code>2.0.0</code> will have 2 class methods to enqueue jobs:</p>

<figure class='code'><pre><code>LogJob.perform_async("new_user")</code></pre></figure>


<p>and</p>

<figure class='code'><pre><code>LogJob.perform_in(5.minutes, "new_user")</code></pre></figure>


<p>The latter delays execution of the <code>perform</code> method until 5 minutes from now.</p>

<h2>Summary</h2>

<p>Settling on a library&rsquo;s API isn&rsquo;t easy. Isolating it from underlying dependencies is the best bet for long-term stability. Using the <a href="https://en.wikipedia.org/wiki/Adapter">adapter pattern</a> can help create a layer (adapter) between your code and the dependency&rsquo;s API. But as always, there are exceptions.</p>

<p>I&rsquo;m taking a leap of faith that doing what I believe is right won&rsquo;t leave existing users frustrated, ultimately abandoning Sucker Punch altogether.</p>

<p>Sucker Punch <code>v2.0</code> is shaping up to be the best release yet. I&rsquo;m looking forward to sharing it with you.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Sidekiq As A Microservice Message Queue]]></title>
    <link href="http://brandonhilkert.com/blog/sidekiq-as-a-microservice-message-queue/"/>
    <updated>2015-11-30T12:06:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/sidekiq-as-a-microservice-message-queue</id>
<content type="html"><![CDATA[<p>In the recent series on transitioning to microservices, I detailed a path to move a large legacy Rails monolith to a cluster of a dozen microservices. But not everyone starts out with a legacy monolith. In fact, given Rails&rsquo; popularity amongst startups, <strong>it&rsquo;s likely most Rails applications don&rsquo;t live to see 4+ years in production</strong>. So what if we don&rsquo;t have a huge monolith on our hands? Are microservices still out of the question?</p>

<p>Sadly, the answer is, &ldquo;it depends&rdquo;. The &ldquo;depends&rdquo; part is specific to your context. While microservices may seem like the right move for you and your application, it&rsquo;s also possible it could cause a mess if not done carefully.</p>

<!--more-->


<p>This post will explore opportunities for splitting out unique microservices using <a href="http://sidekiq.org/">Sidekiq</a>, without introducing an enterprise message broker like <a href="https://www.rabbitmq.com/">RabbitMQ</a> or <a href="http://kafka.apache.org/">Apache Kafka</a>.</p>

<h2>When are Microservices right?</h2>

<p>Martin Fowler <a href="http://martinfowler.com/articles/microservice-trade-offs.html">wrote about trade-offs that come when introducing microservices</a>.</p>

<p>The article outlines 6 pros and cons introduced when you move to a microservices-based architecture. The strongest argument for microservices is the strengthening of module boundaries.</p>

<p>Module boundaries are naturally strengthened when we&rsquo;re forced to move code to another codebase. As a result, in most cases a group of microservices appears better constructed than the legacy monolith it was extracted from.</p>

<p>There&rsquo;s no doubt Rails allows developers to get something up and running very quickly. Sadly, you can do so while making a big mess at the same time. It&rsquo;s worth noting there&rsquo;s nothing stopping a monolith from being well constructed. With some discipline, <a href="https://www.youtube.com/watch?v=KJVTM7mE1Cc">your monolith can be the bright and shiny beauty that DHH wants it to be</a>.</p>

<h2>Sidekiq Queues</h2>

<p>Ok, ok. You get it. Microservices can be awesome, but they can also make a big mess. I want to tell you about how I recently avoided a big mess without going &ldquo;all in&rdquo;.</p>

<p>There&rsquo;s no hiding I&rsquo;m a huge <a href="http://sidekiq.org/">Sidekiq</a> fan. It&rsquo;s my goto solution for background processing.</p>

<p>Sidekiq has the notion of <a href="https://github.com/mperham/sidekiq/wiki/Advanced-Options#workers">named queues</a> for both <a href="https://github.com/mperham/sidekiq/wiki/Advanced-Options#workers">jobs</a> and <a href="https://github.com/mperham/sidekiq/wiki/Advanced-Options#queues">workers</a>. This is great because it allows you to put that unimportant long-running job in a different queue without delaying other important fast-running jobs.</p>

<p>A typical worker might look like:</p>

<figure class='code'><pre><code>class ImportantWorker
  include Sidekiq::Worker

  def perform(id)
    # Do the important stuff
  end
end</code></pre></figure>


<p>If we want to send this job to a different queue, we&rsquo;d add <code>sidekiq_options queue: :important</code> to the worker, resulting in:</p>

<figure class='code'><pre><code>class ImportantWorker
  include Sidekiq::Worker
  sidekiq_options queue: :important

  def perform(id)
    # Do the important stuff
  end
end</code></pre></figure>


<p>Now, we need to make sure the worker process that&rsquo;s running the jobs knows to process jobs off this queue. A typical worker might be invoked with:</p>

<figure class='code'><pre><code>bin/sidekiq</code></pre></figure>


<p>Since new jobs going through this worker will end up on the <code>important</code> queue, we want to make sure the worker is processing jobs from the <code>important</code> queue too:</p>

<figure class='code'><pre><code>bin/sidekiq -q important -q default</code></pre></figure>


<p><em>Note: Jobs that don&rsquo;t specify a queue will go to the <code>default</code> queue. We have to include the <code>default</code> queue when using the <code>-q</code> option; otherwise, the default queue will be ignored in favor of the queues passed to <code>-q</code>.</em></p>

<p>The best part: you don&rsquo;t even need multiple worker processes to process jobs from multiple queues. Furthermore, the <code>important</code> queue can be checked twice as often as the <code>default</code> queue:</p>

<figure class='code'><pre><code>bin/sidekiq -q important,2 -q default</code></pre></figure>


<p>This flexibility of where jobs are enqueued and how they&rsquo;re processed gives us an incredible amount of freedom when building our applications.</p>
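<p>The same weighting can also live in a config file rather than CLI flags. Here&rsquo;s a sketch of a <code>config/sidekiq.yml</code>, loaded with <code>bin/sidekiq -C config/sidekiq.yml</code> (the concurrency value is just an example):</p>

```yaml
# config/sidekiq.yml -- equivalent to `bin/sidekiq -q important,2 -q default`
:concurrency: 10
:queues:
  - [important, 2]
  - [default, 1]
```

<p>The numbers are weights: on each fetch, <code>important</code> is twice as likely to be checked first as <code>default</code>.</p>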

<h2>Extracting Worker to a Microservice</h2>

<p>Let&rsquo;s assume we&rsquo;ve deployed our main application to Heroku. The application uses Sidekiq and we&rsquo;ve included a Redis add-on. With the addition of the add-on, our application now has a <code>REDIS_URL</code> environment variable that Sidekiq connects to on startup. We have a web process and a worker process. A pretty standard Rails stack:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq/rails-web-worker.png" title="&#34;Rails with typical worker process&#34;" alt="&#34;Rails with typical worker process&#34;"></p>

<p><strong>What&rsquo;s stopping us from using that same <code>REDIS_URL</code> in another application?</strong></p>

<p>Nothing, actually. And if we consider what we know about the isolation of jobs in queues and workers processing specific queues, there&rsquo;s nothing stopping us from having the workers for a specific queue live in a different application altogether.</p>

<p>Remember <code>ImportantWorker</code>? Imagine the logic for that job was better left to a different application. We&rsquo;ll leave that part a little hand-wavy because there should still be a really good reason to do so. But we&rsquo;ll assume you&rsquo;ve thought long and hard about this and decided the core application was not a great place for this job logic.</p>

<p>Extracting the worker to a separate application might now look something like this:</p>

<p><img class="center" src="http://brandonhilkert.com/images/sidekiq/rails-with-microservice.png" title="&#34;Using Sidekiq as a Message Queue between two Rails microservices&#34;" alt="&#34;Using Sidekiq as a Message Queue between two Rails microservices&#34;"></p>

<h2>Enqueueing Jobs with the Sidekiq Client</h2>

<p>Typically, to enqueue the <code>ImportantWorker</code> above, we&rsquo;d call the following from our application:</p>

<figure class='code'><pre><code>ImportantWorker.perform_async(1)</code></pre></figure>


<p>This works great when <code>ImportantWorker</code> is defined in our application. With the expanded stack above, <code>ImportantWorker</code> now lives in a new microservice, which means we don&rsquo;t have access to the <code>ImportantWorker</code> class from within the application. We <em>could</em> define it in the application just so we can enqueue it, with the intent that the application won&rsquo;t process jobs for that worker, but that feels funny to me.</p>

<p>Rather, we can turn to the underlying Sidekiq client API to enqueue the job instead:</p>

<figure class='code'><pre><code>Sidekiq::Client.push(
  "class" =&gt; "ImportantWorker",
  "queue" =&gt; "important",
  "args" =&gt; [1]
)
</code></pre></figure>


<p><em>Note: We have to be sure to define the <code>class</code> as a string <code>"ImportantWorker"</code>, otherwise we&rsquo;ll get an exception during enqueuing because the worker isn&rsquo;t defined in the application.</em></p>
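<p>One way to keep these raw hashes from spreading through a codebase is a tiny wrapper object. The sketch below is hypothetical (the <code>RemoteJob</code> name and the injected client are my own, not part of Sidekiq); injecting the client means the payload can be exercised without Sidekiq or a Redis connection:</p>

```ruby
# Hypothetical wrapper around Sidekiq::Client.push for jobs defined in
# another service. The client is injected so the payload can be tested
# without Sidekiq or Redis.
class RemoteJob
  def initialize(client:)
    @client = client
  end

  # worker: the job class name as a string, since it isn't defined locally
  def enqueue(worker, queue, *args)
    @client.push(
      "class" => worker,
      "queue" => queue.to_s,
      "args"  => args
    )
  end
end

# In production this would be: RemoteJob.new(client: Sidekiq::Client)
# Here, a fake client just records what it was asked to push.
fake_client = Class.new do
  attr_reader :pushed
  def push(payload)
    @pushed = payload
  end
end.new

RemoteJob.new(client: fake_client).enqueue("ImportantWorker", :important, 1)
fake_client.pushed
# => {"class"=>"ImportantWorker", "queue"=>"important", "args"=>[1]}
```

<p>The fake client exists only to show the payload shape; the wrapper&rsquo;s value is that the string class name and queue convention live in one place.</p>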

<h2>Processing Sidekiq Jobs from a Microservice</h2>

<p>Now we&rsquo;re pushing jobs to the <code>important</code> queue, but have nothing in our application to process them. In fact, our worker process isn&rsquo;t even looking at that queue:</p>

<figure class='code'><pre><code>bin/sidekiq -q default</code></pre></figure>


<p>From our microservice, we set up a worker process to <strong>ONLY</strong> look at the <code>important</code> queue:</p>

<figure class='code'><pre><code>bin/sidekiq -q important</code></pre></figure>


<p>We define the <code>ImportantWorker</code> in our microservice:</p>

<figure class='code'><pre><code>class ImportantWorker
  include Sidekiq::Worker
  sidekiq_options queue: :important

  def perform(id)
    # Do the important stuff
  end
end</code></pre></figure>


<p>And now when the worker picks jobs out of the <code>important</code> queue, it&rsquo;ll process them using the <code>ImportantWorker</code> defined above in our microservice.</p>

<p>If we wanted to go one step further, the microservice could then enqueue a job using the Sidekiq client API to a queue that only the core application works on, sending communication back in the other direction.</p>

<h2>Summary</h2>

<p>Any architectural decision has risks, and microservices are no exception. But this approach can be easier than introducing an enterprise message broker, a cluster of new servers, and a handful of devops headaches.</p>

<p>I originally dubbed this the &ldquo;poor man&rsquo;s message bus&rdquo;. With more thought, there&rsquo;s nothing &ldquo;poor&rdquo; about it. Sidekiq has been a reliable piece of our infrastructure and I have no reason to believe that&rsquo;ll change, even if we are using it for more than just simple background processing from a single application.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[A Path to Services - Part 3 - Synchronous Events]]></title>
    <link href="http://brandonhilkert.com/blog/a-path-to-services-part-3-synchronous-events/"/>
    <updated>2015-10-15T09:07:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/a-path-to-services-part-3-synchronous-events</id>
    <content type="html"><![CDATA[<p><em>This article was originally posted on the <a href="http://plumbing.pipelinedeals.com/">PipelineDeals Engineering
Blog</a></em></p>

<p>In the <a href="http://brandonhilkert.com/blog/a-path-to-services-part-1-start-small/">previous article in this series</a>, we introduced a billing service to determine which features an account could access. If you remember, <a href="http://brandonhilkert.com/our-path-to-services-part-1-start-small/">the email service</a> was a &ldquo;fire and forget&rdquo; operation and was capable of handling throughput delays given its low value to the core application.</p>

<p>This post will explore how we handle synchronous communication for a service like billing where an inline response is required to service a request from the core application.</p>

<!--more-->


<h2>Background</h2>

<p>If you remember from the previous post, we introduced the billing service to an infrastructure that looked like this:</p>

<p><img class="center" src="http://brandonhilkert.com/images/services/app-email-billing.png" title="&#34;Web application with Email and Billing Microservice&#34;" alt="&#34;Web application with Email and Billing Microservice&#34;"></p>

<p>Handling multiple pricing tiers in a SaaS app means you have to control authorization based on account status. Our billing service encapsulates the knowledge of which features correspond to which pricing tier.</p>

<p>For instance, one feature is the ability to send trackable email to contacts in your PipelineDeals account. To service this request, we add an option to the bulk action menu from a list view:</p>

<p><img class="center" src="http://brandonhilkert.com/images/services/send-email.png" title="&#34;Send email feature&#34;" alt="&#34;Send email feature&#34;"></p>

<h2>Service Request</h2>

<p>Before we can conditionally show this option based on the pricing tier, we have to first make a request to the billing service to get the list of features available to that user.</p>

<figure class='code'><pre><code>class Billing::Features
  def initialize(user)
    @user = user
    @account = user.account
  end

  def list
    Rails.cache.fetch("account_#{account.id}_billing_features") do
      response = Billing::Api.get "account/#{account.id}/features"
      response['features']
    end
  end

  private

  attr_reader :user, :account
end</code></pre></figure>


<p><code>Billing::Api</code>, in this case, is a wrapper around the API calls to handle exceptions and other information like security.</p>

<p><em>Note: When making synchronous HTTP calls like this, it&rsquo;s worth considering the failure state and providing a default response set in that case so the user isn&rsquo;t burdened with a failure page. In this case, one option would be to dumb down the features on the page to the most basic tier.</em></p>
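<p><em>A minimal sketch of that fallback, assuming a lowest-tier feature list and using an injected <code>api</code> object in place of the <code>Billing::Api</code> wrapper (both are assumptions, not the original code):</em></p>

```ruby
# Features granted when the billing service can't be reached (assumed list).
BASIC_FEATURES = ["deals", "contacts"].freeze

# `api` stands in for the Billing::Api wrapper described above.
def fetch_features(api, account_id)
  response = api.get("account/#{account_id}/features")
  response["features"]
rescue StandardError
  BASIC_FEATURES # degrade to the basic tier instead of a failure page
end
```

<p>In practice the rescue would likely be narrowed to network and HTTP errors rather than <code>StandardError</code>.</p>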

<h2>Serving a JSON API</h2>

<p>Plenty of articles have been written about how to create a JSON API with Rails, so we won&rsquo;t rehash those techniques here. Instead, we&rsquo;ll highlight patterns we&rsquo;ve used for consistency.</p>

<p>We tend to reserve the root URL namespace for UI-related routes, so we start by creating a unique namespace for the API:</p>

<figure class='code'><pre><code>namespace :api do
  resources :account do
    resource :features, only: :show
  end
end</code></pre></figure>


<p>This setup gives us the path <code>/api/account/:account_id/features</code>. We haven&rsquo;t found a need for versioning internal APIs. If we decided to in the future, we could always add the API version as a request header.</p>

<p>The <code>features</code> endpoint looks like:</p>

<figure class='code'><pre><code>class Api::FeaturesController &lt; Api::ApiController
  skip_before_filter :verify_authenticity_token

  def show
    render json: {
      success: true,
      features: AccountFeatures.new(@account_id).list
    }
  end
end</code></pre></figure>


<p>Notice <code>Api::FeaturesController</code> inherits from <code>Api::ApiController</code>. We keep the API-related functionality in this base controller so each endpoint will get access to security and response handling commonalities.</p>

<p><code>AccountFeatures</code> is a PORO that knows how to list billing features for a particular account. We could&rsquo;ve queried it straight from an ActiveRecord-based model, but our handling of features is a little more complicated than picking them straight from the database.</p>

<p>Another note here is that we haven&rsquo;t introduced a serializing library like <code>active_model_serializers</code> or <code>jbuilder</code>. Using <code>render json</code> alone has served us well for simple APIs. We reach for something more complex when the response has more attributes than shown above.</p>

<h2>Handling Service Response</h2>

<p>By introducing <code>Rails.cache</code>, we can serve requests (after the initial one) without requiring a call to the billing service.</p>
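<p><em>One detail worth calling out: a cached feature list can go stale when an account changes tiers. A sketch of the invalidation (a hypothetical helper; <code>cache</code> stands in for <code>Rails.cache</code>, and the key mirrors the one in <code>Billing::Features#list</code>):</em></p>

```ruby
# Drop the cached feature list so the next request refetches it
# from the billing service. Key mirrors Billing::Features#list.
def bust_feature_cache(cache, account_id)
  cache.delete("account_#{account_id}_billing_features")
end
```

<p>Alternatively, passing <code>expires_in:</code> to <code>Rails.cache.fetch</code> caps how stale the list can get.</p>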

<p>One of the first things we do is serialize the set of features to JavaScript so our client-side code has access:</p>

<figure class='code'><pre><code>&lt;%= javascript_tag do %&gt;
  window.Features = &lt;%= Billing::Features.new(logged_in_user).list.to_json %&gt;;
&lt;% end %&gt;</code></pre></figure>


<p>We also include a helper module into our Rails views/controllers, so we can handle conditional feature logic:</p>

<figure class='code'><pre><code>module Features
  def feature_enabled?(feature)
    Billing::Features.new(logged_in_user).list.include?(feature.to_s)
  end
end</code></pre></figure>


<h2>Synchronous Side Effects</h2>

<p>When we <a href="http://brandonhilkert.com/our-path-to-services-part-2-synchronous-vs-asynchronous/">looked at asynchronous service requests</a>, there was less immediacy associated with the request due to its &ldquo;fire-and-forget&rdquo; nature. A synchronous service, on the other hand, has to serve every request from the core application, so scaling can be a challenge and infrastructure costs can add up.</p>

<p><img class="center" src="http://brandonhilkert.com/images/services/synchronous-service-cost.png" title="&#34;Increased cost by introducing synchronous microservice&#34;" alt="&#34;Increased cost by introducing synchronous microservice&#34;"></p>

<p>In addition to the infrastructure costs, performance can be a factor. If the original page response time was 100ms and we&rsquo;re adding a synchronous service request that takes another 100ms, all of a sudden we&rsquo;ve doubled our users&#8217; response times. And while this architectural decision might seem like an optimization, I&rsquo;m positive none of our users will thank us for making their page load times 2x slower.</p>

<h2>Summary</h2>

<p>As you can see, there&rsquo;s little magic to setting up a synchronous service request.</p>

<p>Challenges appear when you consider failure states at every point in the service communication - the service could be down, or the HTTP request itself could fail due to network connectivity. As mentioned above, providing a default response during service failure is a great start to increasing the application&rsquo;s reliability. Optionally, <a href="https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern">the circuit breaker pattern</a> can provide robust handling of network failures.</p>
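<p><em>For illustration, a minimal circuit breaker could look like the following. This is a sketch, not production code: after a threshold of consecutive failures it returns the fallback immediately instead of waiting on a dead service. A real implementation would also &ldquo;half-open&rdquo; after a timeout to probe for recovery:</em></p>

```ruby
# Minimal circuit breaker: opens after `threshold` consecutive failures
# and then serves the fallback without attempting the call at all.
class Breaker
  def initialize(threshold: 3, fallback:)
    @threshold = threshold
    @fallback = fallback
    @failures = 0
  end

  def call
    return @fallback if open?
    result = yield
    @failures = 0 # a success closes the breaker again
    result
  rescue StandardError
    @failures += 1
    @fallback
  end

  def open?
    @failures >= @threshold
  end
end
```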

<p>Part 4 in this series will cover how we manage asynchronous communication between services, specifically around an <a href="https://github.com/PipelineDeals/mantle">open source gem we built called Mantle</a>.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[A Path to Services - Part 2 - Synchronous vs. Asynchronous]]></title>
    <link href="http://brandonhilkert.com/blog/a-path-to-services-part-2-synchronous-vs-asynchronous/"/>
    <updated>2015-08-14T10:32:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/a-path-to-services-part-2-synchronous-vs-asynchronous</id>
    <content type="html"><![CDATA[<p><em>This article was originally posted on the <a href="http://plumbing.pipelinedeals.com/">PipelineDeals Engineering
Blog</a></em></p>

<p>In the <a href="http://brandonhilkert.com/blog/a-path-to-services-part-1-start-small/">previous article in this series</a>, we moved the responsibility of emails to a separate Rails application. In order to leverage this new service, we created a PORO to encapsulate the specifics of communicating with our new service by taking advantage of Sidekiq&rsquo;s built-in retry mechanism to protect from intermittent network issues.</p>

<p>Communication between microservices can be broken down into two categories: <strong>synchronous</strong> and <strong>asynchronous</strong>. Understanding when to use each is critical to maintaining a healthy infrastructure. This post will explore details about these two methods of communication and their associated use cases.</p>

<!--more-->


<h2>Background</h2>

<p>Continuing the discussion of our architecture from last time, we have a primary Rails web application serving the majority of our business logic. We now have an additional application whose only responsibility is formatting and sending emails.</p>

<p><img class="center" src="http://brandonhilkert.com/images/services/app-email.png" title="&#34;Application service with email microservice&#34;" alt="&#34;Application service with email microservice&#34;"></p>

<p>In this article, we&rsquo;ll discuss the addition of our Billing service. The service&rsquo;s responsibility is to process transactions related to money. This can come in the form of a trial conversion, adding a seat to an additional account, or deleting users from an existing account, among others.</p>

<p>Like many SaaS applications, PipelineDeals has multiple tiers of service. The most expensive is intended for customers needing advanced functionality. Part of the billing service&rsquo;s responsibility is to manage the knowledge of which features an account can access.</p>

<p>So stepping back to the main PipelineDeals web application, the app has to decide which features to render at page load. Because the billing service is our source of truth for this information, a page load will now require a call to this service to understand which features to render.</p>

<p>This new dependency looks a little different than the email dependency from the
<a href="http://brandonhilkert.com/blog/a-path-to-services-part-1-start-small/">previous article</a>. Email has the
luxury of not being in the dependency path of a page load. Very few customers
will complain if an email is 10 seconds late. On the other hand, they&rsquo;ll
complain immediately if their account won&rsquo;t load, and rightfully so.</p>

<p><img class="center" src="http://brandonhilkert.com/images/services/app-email-billing.png" title="&#34;Application service include email and billing microservices&#34;" alt="&#34;Application service include email and billing microservices&#34;"></p>

<p>An interesting benefit from having already extracted the email service is that the billing service sends email regarding financial transactions and account changes. Typically, we would have done the same thing for every other Rails app that needed to send email, which was to integrate <code>ActionMailer</code> and set up the templates and mailers needed to do the work. In this case, we can add those emails to the email service and use the same communication patterns we do from the main web application to trigger the sending of an email from the billing service. This does require making changes to 2 different projects for a single feature (business logic in billing and mailer in email), but removes the necessity to configure another app to send email properly. We viewed this as a benefit.</p>

<h2>Asynchronous Events</h2>

<p>As the easier of the two, asynchronous communication is any communication not necessary for the request/response cycle to complete. Email is the perfect example. Logging also falls into this category.</p>

<p>For the network gurus out there, this would be similar to UDP communication. More of a fire-and-forget approach.</p>

<p>An email, in this case, is triggered due to something like an account sign up.
We send a welcome email thanking the customer for signing up and giving them
some guidance on how to get the most benefit from the application. Somewhere in
the process of signing up, the code triggers an email and passes along the data
needed for the email template.</p>

<p>As shown in the previous article, the call to send the email looks something like this:</p>

<figure class='code'><pre><code>Email.to current_user, :user_welcome</code></pre></figure>


<p>The value in this call is that under the covers, it&rsquo;s enqueuing a Sidekiq job:</p>

<figure class='code'><pre><code>EmailWorker.perform_async(opts)</code></pre></figure>


<p>where <code>opts</code> is a hash of data related to the email and the variables needed for the template.</p>

<p><em>Note: Because the options are serialized to JSON, values in hash must be simple structures. Objects won&rsquo;t work here.</em></p>
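<p><em>To illustrate the note above, arguments round-trip through JSON before the worker sees them, which is also why symbol keys come back as strings:</em></p>

```ruby
require "json"

# Simulate what Sidekiq does to perform_async arguments: serialize to
# JSON on enqueue, parse on dequeue. Simple values survive; objects don't.
opts = { user_id: 42, email_key: "user_welcome" }
restored = JSON.parse(JSON.generate(opts))
# Symbol keys are now strings: { "user_id" => 42, "email_key" => "user_welcome" }
```

<p>This is why we pass something like <code>user.id</code> rather than <code>user</code> itself.</p>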

<p>As you can see above, the code invoking the <code>Email.to</code> method doesn&rsquo;t care about what it returns. In fact, it doesn&rsquo;t return anything we care about at this point. So as long as the method is called, the code can move forward without waiting for the email to finish sending.</p>

<p>Extracting asynchronous operations like this that exist in a code path is a
great way to improve performance. There are times, though, where deferring an operation to a background job might not make sense.</p>

<p>For example, imagine a user changes the name of a person. They click one of their contact&rsquo;s names, enter a new name, and click &ldquo;Save&rdquo;. It doesn&rsquo;t make sense to send the task of updating the actual name in the database to a background job because depending on what else is in the queue at that time, the update might not complete until after the next refresh, which would make the user believe their update wasn&rsquo;t successful. This would be incredibly confusing.</p>

<p>Logging is another perfect candidate for asynchronicity. In most cases, our users don&rsquo;t care if a log of their actions has been written to the database before their next refresh. It&rsquo;s information we may want to store, and as a result, can be a fire-and-forget operation. We can rest easy knowing we&rsquo;ll have that information, soon-ish, and it won&rsquo;t add any additional overhead to the end user&rsquo;s request cycle.</p>

<p>The opposite of asynchronous events like this are <strong>synchronous</strong> events! (surprise right?). Let&rsquo;s explore how they&rsquo;re different.</p>

<h2>Synchronous Events</h2>

<p>We can look at synchronous events as dependencies of the request cycle. We use MySQL as a backend for the main PipelineDeals web application, and queries to MySQL would be considered synchronous. That is, in order to successfully fulfill the current request, we require the information returned from MySQL before we can respond.</p>

<p>In most cases, we don&rsquo;t think of our main datastore as a service. It doesn&rsquo;t necessarily have a separate application layer on top of it, but its behavior and requirements are very much like a service.</p>

<p>If we consider the addition of our billing service above, we require information about the features allowed for a particular account before we can render the page. This allows us to include/exclude modules they should or should not see. The flow goes something like this:</p>

<p><code>Web request -&gt; lookup account in DB -&gt; Request features from Billing service -&gt; render page</code></p>

<p>If the request to the billing service didn&rsquo;t require a response, we would consider this to be an <strong>asynchronous</strong> service, which might change how we invoke the request for data.</p>

<p>Synchronous communication can happen over a variety of protocols. The most
common is a JSON payload over HTTP. In general, it&rsquo;s not the most performant,
but it&rsquo;s one of the easiest to debug and is human-readable, so it tends to be pretty popular.</p>

<p>The synchronous services we&rsquo;ve set up all communicate over HTTP. Rails APIs are a known thing. We&rsquo;re familiar with the stack and the dependencies required to set up a common JSON API, which is a large part of the reason it&rsquo;s our preferred communication protocol between services.</p>

<h2>Summary</h2>

<p>We&rsquo;ve simplified the communication between services into these two categories. Knowing this helps dictate the infrastructure and configuration of the applications.</p>

<p>Next time, we&rsquo;ll take a closer look at the synchronous side and the specifics of the JSON payloads involved in sending an email successfully.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[A Path to Services - Part 1 - Start Small]]></title>
    <link href="http://brandonhilkert.com/blog/a-path-to-services-part-1-start-small/"/>
    <updated>2015-07-27T11:18:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/a-path-to-services-part-1-start-small</id>
    <content type="html"><![CDATA[<p><em>This article was originally posted on the <a href="http://plumbing.pipelinedeals.com/">PipelineDeals Engineering
Blog</a></em></p>

<p>The PipelineDeals web application recently celebrated its ninth birthday. It&rsquo;s
seen its fair share of developers, all of whom had their own idea of &ldquo;clean
code&rdquo;. As a team, we&rsquo;d been brainstorming ways to wrangle certain areas of the
application. The question we&rsquo;d frequently ask ourselves was <em>&ldquo;How do we clean
up [x] (some neglected feature of the application)?</em>&rdquo;.</p>

<!--more-->


<p>Reasonable solutions ended up being:</p>

<ol>
<li>Rewrite it</li>
<li>Rewrite and put it elsewhere</li>
</ol>


<p>In short, we chose to rewrite many of the hairy areas of the app into separate services communicating over HTTP. It&rsquo;s been about a year since our first commit in a separate service, and we&rsquo;ve learned quite a bit since then. This is part 1 in a series of posts related to our transition to microservices.</p>

<h2>How we got here</h2>

<p>This was us 18 months ago. PipelineDeals was a crufty Rails 2 application that many of us were scared to open. It&rsquo;d been several years of adding feature upon feature without consistent knowledge, style, or guidance. And it&rsquo;s probably not surprising we ended up with what we did. Regardless, we needed to fix it.</p>

<p>One of our goals was to move to Rails 3, and later more updated versions, but in order to get there, we had to refactor (or remove) quite a bit of code to make the transition easier.</p>

<p>This, to me, was a huge factor around our decision to move to a more service-focused approach. <a href="https://www.youtube.com/watch?v=KJVTM7mE1Cc">At this year&rsquo;s Railsconf keynote</a>, DHH joked about the &ldquo;majestic monolith&rdquo; and how many companies prematurely piece out services, all to later suffer pain when they realize it was a premature optimization.</p>

<p>The same could be said for our move. Instead of spinning out separate services, we could have cleaned up the mess we had by refactoring every nasty piece of our app. We could have turned our ugly monolith into a majestic one. But while it would&rsquo;ve been possible, our team agreed we were better served by more or less starting over. Not in the big-bang rewrite sense, but instead to stand up brand new service apps when we added new features, and when it made sense. &ldquo;Made sense&rdquo; is the key here. There have been many times when it didn&rsquo;t make sense over the past 12 months. But we&rsquo;re learning and getting better at identifying the things that are good candidates for a more isolated service.</p>

<h2>Now what?</h2>

<p><em>Do we wait for the next requested feature or what?</em></p>

<p>At one of our weekly team hangouts, we watched a talk focused on starting by isolating the responsibility of Email. It was the perfect introduction and motivation for us to get a small win and some experience under our belts. Prior to that, we didn&rsquo;t have a great sense of how to start making the transition.</p>

<p>The idea was to take our emails (and there were plenty) and move them to a separate Rails app whose only responsibility is sending email. While it sounds trivial, the idea alone introduces a lot of interesting questions: <em>What do we do with those really nasty emails that have 30 instance variables? What do we do if the email service is down? How do we trigger an email to be sent?</em></p>

<h2>Rails new</h2>

<p>We created a new Rails 4 app, removed all the stuff we didn&rsquo;t need and created a golden shrine where emails could flourish&hellip;but seriously, that&rsquo;s all it did. And it did it really well.</p>

<p>The next question was how to send emails from the main application. We&rsquo;re very happy <a href="http://sidekiq.org/pro/">Sidekiq Pro</a> users, and one of the benefits we love about Sidekiq is the built-in retries. This gives us a layer of reliability outside of the code layer. So rather than build some ad-hoc retry mechanism by creating a counter in ruby, and rescuing failures within a certain range, we shoot off a job. If it fails because the network is down, or the endpoint isn&rsquo;t available, the job will retry soon after and continue down the happy path. Sidekiq retries are a recurring theme with our infrastructure. We&rsquo;ve made a number of decisions around the fact that we have this advantage already built-in, and we might as well take advantage of it. More on that to come.</p>

<h2>Communicate</h2>

<p>The de facto communication method between services is over HTTP. And we did nothing different. Our services use JSON payloads to exchange data, which lets us easily take advantage of Sidekiq on both ends.</p>

<p>So now, rather than invoking a built-in Rails mailer like:</p>

<figure class='code'><pre><code>UserMailer.welcome(current_user).deliver</code></pre></figure>


<p>we invoke a PORO to send off the communication:</p>

<figure class='code'><pre><code>Email.to current_user, :user_welcome</code></pre></figure>


<p>where <code>Email</code> is defined as</p>

<figure class='code'><pre><code>class Email
  def initialize(users, email_key, opts)
    @users, @email_key, @opts = users, email_key, opts
  end

  def self.to(users, email_key, opts = {})
    new(users, email_key, opts).queue_email
  end

  def queue_email
    opts[:email_key] = email_key
    opts[:to] ||= email_array
    opts[:name] ||= first_users_name
    opts[:user_id] ||= user_id
    opts[:account_id] ||= account_id

    json = JSON.generate(opts)
    RestClient.post(ENV["PIPELINE_EMAIL_URL"], json, :content_type =&gt; :json)
  end

  private

  # email_array, first_users_name, user_id, and account_id are helper
  # methods (omitted here) that derive values from @users
  attr_reader :users, :email_key, :opts
end</code></pre></figure>


<p>There&rsquo;re a number of use-case specific variables above, but the <code>email_key</code> is probably the most important. We used that to describe what email should be invoked on the service.</p>

<p>In the above example, we triggered the <code>welcome</code> email on the <code>UserMailer</code> class. We translated this request into an email key of <code>user_welcome</code>.</p>

<p>This key then gets interpreted by the Email service app and turned into an actual <code>Mailer</code> class and method within it. We could have done this in a variety of ways, but we split the string on the service-side at the <code>_</code>, and the first element described the mailer, the rest the method. So in this case, it gets interpreted as <code>UserMailer#welcome</code>.</p>
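<p><em>A sketch of that service-side translation (the helper name is hypothetical; the real code presumably resolves the pair to an actual mailer with Rails&rsquo; <code>constantize</code>):</em></p>

```ruby
# Split at the first "_": the prefix names the mailer, the rest the method.
# "user_welcome" -> UserMailer#welcome
def interpret_email_key(key)
  mailer, method = key.split("_", 2)
  ["#{mailer.capitalize}Mailer", method]
end
```

<p>On the service side, the resulting pair would then be invoked with something like <code>klass.constantize.public_send(method, opts)</code>.</p>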

<p>One thing this pattern allowed us to do was almost fully copy/paste the old mailer methods into the new Email service application.</p>

<h2>Failures, failures, failures</h2>

<p>&ldquo;What if the service is down?&rdquo; you say, &ldquo;the email request will fail!&rdquo; Sure will.</p>

<p>So let&rsquo;s wrap that request in a Sidekiq job to take advantage of the built-in retries.</p>

<p>Rather than invoke the following method in the email object:</p>

<figure class='code'><pre><code>RestClient.post(ENV["PIPELINE_EMAIL_URL"], json, :content_type =&gt; :json)</code></pre></figure>


<p>we&rsquo;ll shoot off a Sidekiq job instead, changing the <code>queue_email</code> method to:</p>

<figure class='code'><pre><code>def queue_email
  opts[:email_key] = email_key
  opts[:to] ||= email_array
  opts[:name] ||= first_users_name
  opts[:user_id] ||= user_id
  opts[:account_id] ||= account_id

  EmailWorker.perform_async(opts)
end</code></pre></figure>


<p>There we have it. Network-proof email requests!</p>

<p>Not so fast&hellip;</p>

<p>Astute readers will probably recognize that the service-side network communication can potentially also fail. This is becoming a pattern, huh? More communication, more potential for failure and more potential headaches.</p>

<p>On the <strong>service side</strong>, we have a controller that takes in the request for the email and immediately serializes it to a Sidekiq job:</p>

<figure class='code'><pre><code>class EmailsController &lt; ApplicationController # controller name assumed
  def create
    EmailWorker.perform_async(parsed_params)
    head :accepted
  end

  private

  def parsed_params
    JSON.parse(request.body.read) || {}
  end
end</code></pre></figure>


<p>Because we immediately serialize the job to Sidekiq, we&rsquo;ve successfully acknowledged the job was received, and the main app&rsquo;s Sidekiq job completes successfully. Now the email service can move on to doing the heavy lifting in whatever way makes the most sense. In our case, we use Mailgun to send our emails, so the <code>EmailWorker</code> Sidekiq job invokes a new mailer based on the <code>email_key</code> param and sends it off to Mailgun for transport. And because it&rsquo;s wrapped in a Sidekiq job, we can sleep well knowing that the Mailgun request can fail and the job will successfully retry until it goes through.</p>

<h2>Summary</h2>

<p>Service communication is definitely not for the faint of heart, and as a team we can now fully appreciate the challenges that come along with keeping services in sync&mdash;especially having stood up about 8 new services in the last 12 months.</p>

<p>Sidekiq has been the queueing solution we&rsquo;ve leaned on to keep communication in sync and reliable. We&rsquo;ve also written a few internal tools that piggy-back off Sidekiq that we&rsquo;re really excited to share with the community in the near future.</p>

<p>Part 2 in this series will discuss the methods of communication to consider when implementing a service-based architecture.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[The Ruby Book Bundle Is Live]]></title>
    <link href="http://brandonhilkert.com/blog/the-ruby-book-bundle-is-live/"/>
    <updated>2015-07-06T07:01:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/the-ruby-book-bundle-is-live</id>
    <content type="html"><![CDATA[<p>A few fellow authors and friends of mine have put together the <a href="http://rubybookbundle.com/?c=bh">Ruby Book Bundle</a> including some of the best Ruby/Rails books out there. The <a href="http://rubybookbundle.com/?c=bh">bundle went on sale this morning</a>! It&#8217;ll <strong>only be available for a week</strong>, so be sure to pick it up soon if you&rsquo;re interested.</p>

<!--more-->


<p><img class="center" src="http://brandonhilkert.com/images/ruby-book-bundle.png" title="&#34;The Ruby Book Bundle&#34;" alt="&#34;The Ruby Book Bundle&#34;"></p>

<p>Here&rsquo;s a quick rundown of the details:</p>

<p><strong>Is this bundle right for me?</strong></p>

<p>The books in the Ruby Book Bundle span a range from beginner to advanced developers. <strong>As long as you understand the Ruby basics, though, you’ll learn a lot from these books.</strong></p>

<p>You’ll probably get the <em>most</em> out of the bundle if you’re an intermediate Ruby dev, focused on building more specific development skills. But if you’re interested in mastering the most important and most challenging parts of Rails, like testing, refactoring, gem-building, metaprogramming/DSL-writing, and app-building, this bundle will be perfect for you.</p>

<h2>What will I get?</h2>

<p>When you buy the bundle, you’ll immediately get the following to download:</p>

<ul>
<li><strong>Build a Ruby Gem</strong> (pdf, epub, mobi) with source code and screencasts</li>
<li><strong>Minitest Cookbook</strong> (pdf, epub, mobi) and source code examples</li>
<li><strong>Practicing Rails</strong> (pdf, epub, mobi)</li>
<li><strong>Fearless Refactoring</strong> (pdf)</li>
<li><strong>Ruby DSL Handbook</strong> (pdf, epub, mobi) with cheat sheets, sample code and screencasts</li>
<li><strong>Rebuilding Rails</strong> (pdf)</li>
</ul>


<h2>How long will the sale run?</h2>

<p>The bundle will be <strong>available for 1 week only</strong> - ending <strong>July 10th, 11:59PM PDT</strong>.</p>

<h2>What if I’m not happy with the bundle?</h2>

<p>We want you to be happy with what you’ve bought! <strong>So if you’re not 100% satisfied with the value you got from the bundle, shoot me an email at <a href="mailto:brandonhilkert%2Bbundle@gmail.com">brandonhilkert+bundle@gmail.com</a> within 30 days, and I’ll refund your money.</strong></p>

<p>I feel fortunate to be part of such a great group of developers. They&rsquo;ve
provided a ton of value to the community and their books continue to impress
me.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Organizing Javascript in Rails Application with Turbolinks]]></title>
    <link href="http://brandonhilkert.com/blog/organizing-javascript-in-rails-application-with-turbolinks/"/>
    <updated>2015-06-30T16:10:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/organizing-javascript-in-rails-application-with-turbolinks</id>
    <content type="html"><![CDATA[<p>It&rsquo;s impossible to escape Javascript in a Rails application. From a tiny script to a full-on Javascript framework, websites are becoming more and more reliant on Javascript, whether we like it or not.</p>

<p>Several articles back, I documented <a href="http://brandonhilkert.com/blog/page-specific-javascript-in-rails/">how I handle page-specific Javascript in a Rails application</a>. My solution included a third-party jQuery plugin that did some magic on the <code>$(document).ready</code> function in combination with CSS style scoping to limit the functionality.</p>

<!--more-->


<p>The plugin worked well for a while, but with the advent of Turbolinks, the solution felt less and less appropriate. I&rsquo;ve since settled on some techniques to not only handle page-specific Javascript, but also the overall organization and structure of Javascript within a Rails application. I&rsquo;ve used it in a handful of large applications over the past few months and it&rsquo;s held up incredibly well.</p>

<h2>The Problem</h2>

<p>Using &ldquo;sprinkles&rdquo; of Javascript throughout a Rails application can get unwieldy fast if we&rsquo;re not consistent. What we ideally want are some techniques and guidelines that can keep the Javascript organized in our projects. <strong>We also don&rsquo;t want to have to disable Turbolinks to make our application work as we expect</strong>.</p>

<h2>The Solution</h2>

<p>Generally, Javascript behavior can be boiled down to the following categories:</p>

<ol>
<li>Behavior that&rsquo;s &ldquo;always on&rdquo;</li>
<li>Behavior that&rsquo;s triggered from a user action</li>
</ol>


<p>But first, a few things that will help us stay organized&hellip;</p>

<h2>Class Scoping</h2>

<p>I still like to scope the body element of the layout(s) with the controller and action name:</p>

<figure class='code'><pre><code>&lt;body class="&lt;%= controller_name %&gt; &lt;%= action_name %&gt;"&gt;
  &lt;%= yield %&gt;
&lt;/body&gt;</code></pre></figure>


<p>This not only lets us control access to the DOM through jQuery if we need to, but also provides some top-level styling classes to allow us to easily add page-specific CSS.</p>

<p>In the case we&rsquo;re working on the proverbial blog posts application, the body tag ends up looking like:</p>

<figure class='code'><pre><code>&lt;body class="posts index"&gt;
  &lt;%= yield %&gt;
&lt;/body&gt;</code></pre></figure>


<p>This gives us the opportunity to scope CSS and Javascripts to all <code>posts</code>-related pages in the controller with the <code>.posts</code> class, or down to the specific page using a combination of both the controller and action: <code>.posts.index</code>.</p>

<h2>Default Application Manifest</h2>

<p>Here&rsquo;s the default <code>app/assets/javascripts/application.js</code>:</p>

<figure class='code'><pre><code>// This is a manifest file that'll be compiled into application.js, which will include all the files
// listed below.
//
// Any JavaScript/Coffee file within this directory, lib/assets/javascripts, vendor/assets/javascripts,
// or any plugin's vendor/assets/javascripts directory can be referenced here using a relative path.
//
// It's not advisable to add code directly here, but if you do, it'll appear at the bottom of the
// compiled file.
//
// Read Sprockets README (https://github.com/rails/sprockets#sprockets-directives) for details
// about supported directives.
//
//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require_tree .</code></pre></figure>


<p>I start by removing the line <code>//= require_tree .</code>. I do this because if you don&rsquo;t, the javascript files in the folder will be loaded in alphabetical order. As you&rsquo;ll see below, there&rsquo;s an initialization file that needs to be loaded before other Javascript. We&rsquo;ll also remove the comments from the top of the file to preserve space.</p>

<p>So we&rsquo;re left with:</p>

<figure class='code'><pre><code>//= require jquery
//= require jquery_ujs
//= require turbolinks</code></pre></figure>


<h2>Initialization</h2>

<p>Let&rsquo;s start by adding the file <code>app/assets/javascripts/init.coffee</code> with the following:</p>

<figure class='code'><pre><code>window.App ||= {}

App.init = -&gt;
  $("a, span, i, div").tooltip()

$(document).on "turbolinks:load", -&gt;
  App.init()</code></pre></figure>


<p>Let&rsquo;s dig in to each part of this:</p>

<figure class='code'><pre><code>window.App ||= {}</code></pre></figure>


<p>We&rsquo;re creating the <code>App</code> object on window so the functionality added to the object is available throughout the application.</p>
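<p>As an aside, CoffeeScript&rsquo;s <code>||=</code> compiles down to a simple guard in plain Javascript. A rough sketch of the compiled form (using a local <code>globalScope</code> object as a stand-in for the browser&rsquo;s <code>window</code> so it runs anywhere):</p>

```javascript
// Stand-in for the browser's `window` so the sketch is self-contained.
var globalScope = {};

// Equivalent of CoffeeScript's `window.App ||= {}`:
// only create the namespace if it doesn't already exist.
globalScope.App || (globalScope.App = {});

globalScope.App.init = function () { return "initialized"; };

// Running the guard again leaves the existing object (and init) untouched.
globalScope.App || (globalScope.App = {});

console.log(globalScope.App.init()); // "initialized"
```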

<p>Next, we define an <code>init()</code> function on <code>App</code> to initialize common jQuery plugins and other Javascript libraries:</p>

<figure class='code'><pre><code>App.init = -&gt;
  $("a, span, i, div").tooltip()</code></pre></figure>


<p>The call to <code>$("a, span, i, div").tooltip()</code> initializes Bootstrap tooltips. This is an example of the type of library that can/should be set up here. Obviously, if you&rsquo;re not using Bootstrap tooltips, you wouldn&rsquo;t have this here, but coupled with the next line, we&rsquo;ll see why this setup works.</p>

<p>As many have found out the hard way, <strong>when Turbolinks is enabled</strong> in a project, jQuery <code>$(document).ready</code> functions <strong>don&rsquo;t get fired from page to page</strong>. In order to call the <code>init()</code> function on each page transition, we&rsquo;ll hook in to the <code>turbolinks:load</code> event:</p>

<figure class='code'><pre><code>$(document).on "turbolinks:load", -&gt;
  App.init()</code></pre></figure>


<p><em>Note: the <code>turbolinks:load</code> event is also triggered on the well-known document ready event, so there&rsquo;s no need to add any special handling for the first page load.</em></p>

<p>Lastly, we need to add <code>init.coffee</code> to the asset pipeline:</p>

<figure class='code'><pre><code>//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require init</code></pre></figure>


<h2>&ldquo;Always On&rdquo; Javascript Functionality</h2>

<p>Now with the defaults out of the way, let&rsquo;s take a look at adding some behavior.</p>

<p>Let&rsquo;s assume one of our pages will show a Javascript graph of data. We&rsquo;ll start by adding a file with a name related to that responsibility.</p>

<figure class='code'><pre><code># app/assets/javascripts/app.chart.coffee

class App.Chart
  constructor: (@el) -&gt;
    # intialize some stuff

  render: -&gt;
    # do some stuff

$(document).on "turbolinks:load", -&gt;
  chart = new App.Chart $("#chart")
  chart.render()
</code></pre></figure>


<p>A few things to note here&hellip;</p>

<h3>Structure</h3>

<p>I created a class in the <code>App</code> namespace &ndash; the same one we initialized in <code>app/assets/javascripts/init.coffee</code>. This gives us an isolated class that has a clear responsibility. Like our Ruby code, we want to do our best to keep its responsibilities to a minimum.</p>

<p>You might notice the file takes the form:</p>

<figure class='code'><pre><code>|
|
class definition
|
|


|
invocation
|</code></pre></figure>


<p>While this may seem obvious, it&rsquo;s an important point to keep in mind. I&rsquo;ve found it offers a predictable structure that allows me to open any coffeescript file that we&rsquo;ve written in the project and generally know where to look for what.</p>

<h3>Turbolinks-Proof</h3>

<p>We called this &ldquo;Always On&rdquo; functionality because, as you probably noticed, it&rsquo;s invoked from the event listener <code>$(document).on "turbolinks:load", -&gt;</code>, which we know Turbolinks triggers on every page transition.</p>

<h3>Add to Manifest</h3>

<p>Because we removed the <code>//= require_tree .</code> line in the default <code>application.js</code> manifest file, we&rsquo;ll have to add our chart file to be included in the asset pipeline (last line):</p>

<figure class='code'><pre><code>//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require init
//= require app.chart</code></pre></figure>


<h3>Page-Specific Javascript</h3>

<p>Uh oh, so maybe we don&rsquo;t want the graph to show up on every page! In this case, we&rsquo;re looking for &ldquo;Always On&rdquo; functionality for specific pages <strong>ONLY</strong>.</p>

<p>We can limit the pages certain functionality runs on by using the classes we added to the body of the layout. In this case, a small conditional in the invocation can prevent it from being triggered on pages it shouldn&rsquo;t be.</p>

<figure class='code'><pre><code>$(document).on "turbolinks:load", -&gt;
  return unless $(".posts.index").length &gt; 0
  chart = new App.Chart $("#chart")
  chart.render()</code></pre></figure>


<p>We added <code>return unless $(".posts.index").length &gt; 0</code> to make sure <code>App.Chart</code> never gets instantiated unless we&rsquo;re on the <code>.posts.index</code> page. While this may seem verbose, I&rsquo;ve found that it&rsquo;s not very common to need page-specific functionality. There are probably plenty of libraries that do something similar, like <a href="http://brandonhilkert.com/blog/page-specific-javascript-in-rails/">the one I previously suggested</a>. <strong>However, because limiting Javascript to a single page this way is very explicit when I read the code, it&rsquo;s almost never worth dragging in a separate plugin for this. YMMV.</strong></p>

<h2>User-Triggered Javascript</h2>

<p>This type of Javascript is exactly what you&rsquo;d think &ndash; Javascript invoked as a result of a user clicking or performing some type of action. You&rsquo;re probably thinking, &ldquo;I know how to do this, I&rsquo;ll just add a random file to the javascripts directory and throw in some jQuery&rdquo;. While this will functionally work just fine, I&rsquo;ve found that keeping the structure of these files similar will give you great peace of mind going forward.</p>

<h3>&ldquo;data-behavior&rdquo; Attribute</h3>

<p>Let&rsquo;s assume there&rsquo;s a link in the user&rsquo;s account that allows them to update their credit card. In this case, we have the following:</p>

<figure class='code'><pre><code>&lt;%= link_to "Update Credit Card", "#", data: { behavior: "update-credit-card" } %&gt;</code></pre></figure>


<p>You&rsquo;ll probably notice the <code>data-behavior</code> tag being added to the link. This is the key we&rsquo;ll use to attach the Javascript behavior.</p>

<p>We could have added a unique class to the link:</p>

<figure class='code'><pre><code>&lt;%= link_to "Update Credit Card", "#", class: "update-credit-card" %&gt;</code></pre></figure>


<p>or, perhaps, even assign an ID:</p>

<figure class='code'><pre><code>&lt;%= link_to "Update Credit Card", "#", id: "update-credit-card" %&gt;</code></pre></figure>


<p>Neither of these techniques really indicates whether we added <code>update-credit-card</code> for CSS styling or to attach Javascript behavior. So in my applications, I leave classes for <strong>styling ONLY</strong>.</p>

<p>So now to the Javascript:</p>

<figure class='code'><pre><code>App.Billing =
  update: -&gt;
    # do some stuff

$(document).on "click", "[data-behavior~=update-credit-card]", =&gt;
  App.Billing.update()</code></pre></figure>


<p>We can use the selector <code>[data-behavior~=update-credit-card]</code> to latch on to the <code>data-behavior</code> tag we defined in the view. We use the <code>on</code> jQuery method to ensure that we&rsquo;re listening to this event whether the element&rsquo;s on the page or not. This is what allows us to load this Javascript when on other pages and have it still work when a user clicks through to the page with the actual link on it.</p>
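<p>As an aside, the <code>~=</code> form of the attribute selector matches whole space-separated tokens, so an element with <code>data-behavior="update-credit-card highlight"</code> would still match, while <code>update-credit-card-now</code> would not. A rough sketch of that matching rule in plain Javascript (independent of jQuery; the function name is made up):</p>

```javascript
// Rough sketch of CSS's [attr~=value] rule: the attribute value is treated
// as a space-separated token list, and one token must match exactly.
function matchesBehavior(attributeValue, behavior) {
  if (!attributeValue) return false;
  return attributeValue.split(/\s+/).indexOf(behavior) !== -1;
}

console.log(matchesBehavior("update-credit-card", "update-credit-card"));           // true
console.log(matchesBehavior("update-credit-card highlight", "update-credit-card")); // true
console.log(matchesBehavior("update-credit-card-now", "update-credit-card"));       // false
```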

<p>We could latch on to the <code>change</code> event, or whatever is appropriate to the element we&rsquo;re adding behavior.</p>

<h3>Add to Manifest</h3>

<p>Again, because Javascript assets we add to <code>app/assets/javascripts</code> won&rsquo;t automatically be inserted in to the asset pipeline, we&rsquo;ll add <code>//= require app.billing</code> to the manifest file:</p>

<figure class='code'><pre><code>//= require jquery
//= require jquery_ujs
//= require turbolinks
//= require init
//= require app.chart
//= require app.billing</code></pre></figure>


<h2>Summary</h2>

<p>Using the techniques above, we can keep the Javascript in our Rails applications organized and predictable. We can rest easy knowing the files will all generally look the same. There haven&rsquo;t been any use cases where this structure hasn&rsquo;t worked for me personally.</p>

<p>One thing that makes me feel good about this approach is there&rsquo;s no real magic or extra plugins. It&rsquo;s using all the tools we already have in a basic Rails application, which is one less thing to maintain and keep up to date. Fewer dependencies == less pain down the road.</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[How to Start with Ruby?]]></title>
    <link href="http://brandonhilkert.com/blog/how-to-start-with-ruby/"/>
    <updated>2015-05-06T06:03:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/how-to-start-with-ruby</id>
<content type="html"><![CDATA[<p>In light of RailsConf last month, I spent some time thinking about my experience learning Ruby and Rails back in 2009. The conference included quite a few seasoned veterans, but like any popular technology, there were also plenty of people who had either just started learning Rails or were considering doing so in the near future.</p>

<p>Turning the clocks back to when you knew much less about something is hard. But putting yourself back in that position can offer valuable insight to the opportunities available and how they might be improved in the future.</p>

<!--more-->


<h2>How I Started</h2>

<p>Most come to the Rails community not knowing much about Ruby. Learning any new technology is hard. And learning a few at the same time is even harder.</p>

<p>This was me in 2009. A relatively new Rails 2.3 app was dropped in to my lap, and despite only having experience with PHP, my job was to aggressively ship new features. I read Agile Web Development with Rails cover to cover and dove in head first. Little did I know it would be one of the best career decisions of my life.</p>

<p>I spent the next few months pounding my head against my desk. The days and weeks of frustration seemed endless. And then&hellip;it just went away. The pain I&rsquo;d endured merged in to an intense desire to dig in harder. There were more light bulb moments in the months that followed than any other time I can remember.</p>

<p>During those intense months of frustration, I leaned heavily on the Rails and Philly.rb IRC rooms. In the former, I tried not to say anything too stupid. Fortunately, the latter felt more welcoming and approachable, and I owe that group a lot for holding my hand through what might have otherwise been a death wish for me and the Ruby language.</p>

<p>A few questions stand out in my head&hellip;I was confused about filtering an ActiveRecord query and was surprised to learn methods that were built in to the Ruby language would do exactly what I wanted. At the time, if it wasn&rsquo;t in ActiveRecord, it might as well have not existed to me.</p>

<p>From someone who <a href="http://brandonhilkert.com/blog/7-reasons-why-im-sticking-with-minitest-and-fixtures-in-rails/">advocates for using tools without fancy DSL&rsquo;s</a>, this is hysterical to me. Ruby, of all things, had the answer. At that point, I&rsquo;m almost certain I&rsquo;d never seen the Ruby standard library documentation.</p>

<h2>Rails Starts Where Ruby Stops</h2>

<p>Out of convenience, Rails does a lot to make our experience with the Ruby language easier than it is out of the box. Like <code>2.hours.ago</code>&hellip;none of this is possible if Rails doesn&rsquo;t <a href="http://api.rubyonrails.org/classes/Integer.html">monkey patch the <code>Integer</code> class</a>. For someone who doesn&rsquo;t know any better (me in 2009!), being able to call <code>#hours</code> on an integer just seems like something the language would do. Because Ruby was created for developer happiness, right?</p>
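<p>For illustration, here&rsquo;s a stripped-down sketch of how such a monkey patch might work. This is <strong>not</strong> ActiveSupport&rsquo;s actual implementation &ndash; the real one returns <code>ActiveSupport::Duration</code> objects rather than plain integers &ndash; but it shows the reopened-class mechanic:</p>

```ruby
# Simplified sketch only -- ActiveSupport's real version is more involved
# and returns ActiveSupport::Duration objects, not plain integers.
class Integer
  def hours
    self * 3600 # an hour in seconds
  end

  def ago
    Time.now - self # treat the receiver as a number of seconds
  end
end

2.hours     # => 7200
2.hours.ago # a Time roughly two hours in the past
```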

<p>So perhaps the approach of monkey patching doesn&rsquo;t offer a clear indication of where functionality is coming from. The flip side of that argument is convenience. If I had to instantiate a time-related class every time I wanted &ldquo;noon time yesterday&rdquo;, maybe I&rsquo;d be slightly less enthralled with my ability to get stuff done in Rails. Perhaps it would cater more to the true OO neckbeards, but also may have resulted in far less adoption. Who knows!</p>

<p>I don&rsquo;t have strong numbers to back this up, but I&rsquo;m guessing a large majority of developers that get paid to write Ruby do so within the context of a Rails application. And whether we want to admit it or not, a large reason new developers learn Ruby is to learn Rails. So does it matter that newcomers don&rsquo;t know Ruby?</p>

<p>From the standpoint of creating a web application fast and being able to iterate quickly, maybe not. But certainly if the person is interested in understanding the inner workings of what&rsquo;s happening within the application, knowing where Ruby stops and Rails starts is ideal.</p>

<h2>How to Start?</h2>

<p>I&rsquo;ve talked to quite a few people that are new to Ruby and I always struggle to suggest a good starter project when they ask. Everyone learns differently and has different interests, but in general, I think there are core-level motivating factors that can keep someone focused and interested.</p>

<p>To me, it&rsquo;s the following:</p>

<ol>
<li>Does the project have real-world value?</li>
<li>Does the project offer immediate feedback?</li>
</ol>


<p>Why does it matter if the project has real-world value? For one, continuing on something that doesn&rsquo;t improve our lives is sometimes hard to keep up with. And learning Ruby/Rails is definitely something that&rsquo;ll take more than a few nights and weekends. If the project we&rsquo;re driving towards continues to seem desirable, we&rsquo;ll have a better chance not to lose focus.</p>

<p>Second, I don&rsquo;t want to get too philosophical, but there are plenty of resources that suggest if you want something badly enough and can visualize the end goal, there&rsquo;s a higher likelihood that it&rsquo;ll come to be. Our desires will be stronger when we can see the end goal and know there is an increased real-world value for this application to be in existence.</p>

<p>The immediate feedback piece shortens the time we&rsquo;re able to see changes and progress. This brings a lower rate of abandonment and better chance we&rsquo;ll see the project through.</p>

<p>Rails answers both of these questions with a resounding, &ldquo;<strong>YES!</strong>&rdquo;. Think about it&hellip;</p>

<p><strong>Does a Rails application have real world value?</strong> Of course it does. It&rsquo;s a web application. There has been no better time to be focused on Rails, whether it be for the web or the backend of a mobile app.</p>

<p><strong>Does the project offer immediate feedback?</strong> Sure does! A couple keystrokes and a refresh can give you instant gratification in the browser (or the occasional disappointment!).</p>

<p>Whether it&rsquo;s a command-line tool, game, or other utility, I struggle to find other opportunities to get people started. Frankly, many newcomers to Rails have never used a terminal before. So why would we suggest a command-line application as a good place to start? This is especially true for someone with little programming experience.</p>

<p>For the experienced developer, it&rsquo;d be much easier to suggest writing something like a markdown processor, but only because they have the context of another language. At that point, they&rsquo;re really just comparing the Ruby language to what they already know and figuring out how to translate the things they <em>do</em> know to Ruby.</p>

<p>For a new developer altogether, they would get little value out of writing a markdown processor; in fact, it probably seems more like a research project than a sure-fire way to learn a developer-friendly programming language.</p>

<p>So now we&rsquo;re back to suggesting Rails, but then you consider everything else a newcomer would need familiarity with to traverse the Rails ecosystem: HTML, CSS, Javascript, Coffeescript, Sass, SQL&hellip;</p>

<p>The list goes on and on. Not so easy after all. No wonder people get intimidated and bail. I suppose there&rsquo;s starting with Sinatra, but that doesn&rsquo;t remove the HTML and CSS requirements. Perhaps those are impossible to dodge, given the medium. And even with Sinatra, we often end up recreating the functionality that&rsquo;s in Rails anyway.</p>

<p>I wish I had a better answer.</p>

<p><strong>How would you suggest someone start with Ruby?</strong></p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Adding Functionality to Ruby Classes with Decorators]]></title>
    <link href="http://brandonhilkert.com/blog/adding-functionality-to-ruby-classes-with-decorators/"/>
    <updated>2015-03-09T15:37:00-07:00</updated>
    <id>http://brandonhilkert.com/blog/adding-functionality-to-ruby-classes-with-decorators</id>
    <content type="html"><![CDATA[<p>In my <a href="http://brandonhilkert.com/blog/using-the-sucker-punch-ruby-gem-to-cache-stripe-data-in-rails/">last article</a>, I presented some code that wrapped up accessing a customer&rsquo;s Stripe data and added a caching layer on top. I wanted to take some time to dig in to that code and see how we can make it better.</p>

<p>Decorators give us a tool to add additional functionality to a class while still keeping the public API consistent. From the perspective of the client, this is a win-win! Not only do they get the added behavior, but they don&rsquo;t need to call different methods to do so.</p>

<!--more-->


<h2>The Problem</h2>

<p>Our original class accessed data from Stripe <strong>AND</strong> cached the response for some time period. I accentuated &ldquo;AND&rdquo; because it&rsquo;s generally the word to be on alert for when considering whether functionality can be teased apart in to separate responsibilities.</p>

<p>The question becomes, can we make one class that accesses Stripe data, and another that&rsquo;s only responsible for caching it?</p>

<p>Of course we can!</p>

<h2>The Solution</h2>

<p>Let&rsquo;s start with the most basic form of accessing our Stripe customer data with the <a href="https://github.com/stripe/stripe-ruby">Stripe gem</a>:</p>

<figure class='code'><pre><code>class AccountsController &lt; ApplicationController
  before_action :require_authentication

  def show
    @customer = Stripe::Customer.retrieve(current_user.stripe_id)
    @invoices = @customer.invoices
    @upcoming_invoice = @customer.upcoming_invoice
  end
end</code></pre></figure>


<h2>Extract an Adapter</h2>

<p>Because we&rsquo;re interfacing with a third-party system (Stripe), it makes sense to create a local adapter to access the Stripe methods. It&rsquo;s probably not likely we&rsquo;re going to switch out the official Stripe gem for another one that accesses the same data, but a better argument might be that we could switch billing systems entirely in the future. And if we make a more generic adapter to our third-party billing system, we would only need to update our adapter when that time comes.</p>

<p>While the adapter optimization may seem like overkill here, we&rsquo;ll see how that generic adapter helps us implement our caching layer shortly.</p>

<p>Let&rsquo;s start by removing the notion that it&rsquo;s Stripe at all and call it <code>Billing</code>. Here we can expose the methods needed by the <code>AccountsController</code> above:</p>

<figure class='code'><pre><code>class Billing
  attr_reader :billing_id

  def initialize(billing_id)
    @billing_id = billing_id
  end

  def customer
    Stripe::Customer.retrieve(billing_id)
  end

  def invoices
    customer.invoices
  end

  def upcoming_invoice
    customer.upcoming_invoice
  end
end</code></pre></figure>


<p>There we have it. A simple <code>Billing</code> class that wraps the methods that we used in the first place &ndash; no change in functionality. But certainly more organized and isolated.</p>

<p>Let&rsquo;s now use this new class in the accounts controller from earlier:</p>

<figure class='code'><pre><code>class AccountsController &lt; ApplicationController
  before_action :require_authentication

  def show
    billing = Billing.new(current_user.stripe_id)

    @customer = billing.customer
    @invoices = billing.invoices
    @upcoming_invoice = billing.upcoming_invoice
  end
end</code></pre></figure>


<p>Not too bad! At this point we&rsquo;ve provided the exact same functionality we had before, but we have a class that sits in the middle between the controller and the Stripe gem &ndash; an adapter, if you will.</p>

<h2>Create a Decorator</h2>

<p>Now that we have our adapter set up, let&rsquo;s look at how we can add caching behavior to improve the performance of our accounts page.</p>

<p>The most basic form of a decorator is to pass in the object we&rsquo;re decorating (<code>Billing</code>) and define the same methods it exposes, adding the additional functionality on top of them.</p>

<p>Let&rsquo;s create a base form of  <code>BillingWithCache</code> that <strong>does nothing more</strong> than call the host methods:</p>

<figure class='code'><pre><code>class BillingWithCache
  def initialize(billing_service)
    @billing_service = billing_service
  end

  def customer
    billing_service.customer
  end

  def invoices
    customer.invoices
  end

  def upcoming_invoice
    customer.upcoming_invoice
  end

  private

  attr_reader :billing_service
end</code></pre></figure>


<p>So while we haven&rsquo;t added any additional functionality, we have created the ability for this class to be used in place of our existing <code>Billing</code> class because it responds to the same API (<code>#customer</code>, <code>#invoices</code>, <code>#upcoming_invoice</code>).</p>

<p>Integrating this new class with <code>AccountsController</code> looks like:</p>

<figure class='code'><pre><code>class AccountsController &lt; ApplicationController
  before_action :require_authentication

  def show
    billing = BillingWithCache.new(Billing.new(current_user.stripe_id))

    @customer = billing.customer
    @invoices = billing.invoices
    @upcoming_invoice = billing.upcoming_invoice
  end
end</code></pre></figure>


<p>As you can see, we only had to change one line &ndash; the line where we decorated the original billing class:</p>

<figure class='code'><pre><code>BillingWithCache.new(Billing.new(current_user.stripe_id))</code></pre></figure>


<p>I know what you&rsquo;re thinking, &ldquo;But it doesn&rsquo;t actually cache anything!&rdquo;. You&rsquo;re right! Let&rsquo;s dig in to the <code>BillingWithCache</code> class and add that.</p>

<h2>Adding Caching Functionality</h2>

<p>In order to cache data using <code>Rails.cache</code>, we&rsquo;re going to need a cache key of some kind. Fortunately, the original <code>Billing</code> class provides a reader for <code>billing_id</code> that will allow us to make this unique to that user.</p>

<figure class='code'><pre><code>def cache_key(item)
  "user/#{billing_service.billing_id}/billing/#{item}"
end</code></pre></figure>


<p>In this case, <code>item</code> can refer to things like <code>"customer"</code>, <code>"invoices"</code>, or <code>"upcoming_invoice"</code>. This gives us a method we can use internally within <code>BillingWithCache</code> to provide a cache key unique to both the user and the type of data we&rsquo;re caching.</p>
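<p>For example, with a hypothetical <code>billing_id</code> of <code>cus_123</code> (a made-up Stripe customer ID), the scheme produces keys like the following. This is a standalone sketch of the same interpolation, outside the class:</p>

```ruby
# Standalone sketch of the key scheme, with a made-up billing_id.
billing_id = "cus_123"

cache_key = ->(item) { "user/#{billing_id}/billing/#{item}" }

cache_key.call("customer")         # => "user/cus_123/billing/customer"
cache_key.call("upcoming_invoice") # => "user/cus_123/billing/upcoming_invoice"
```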

<p>Adding in the calls to actually cache the data:</p>

<figure class='code'><pre><code>class BillingWithCache
  def initialize(billing_service)
    @billing_service = billing_service
  end

  def customer
    key = cache_key("customer")

    Rails.cache.fetch(key, expires_in: 15.minutes) do
      billing_service.customer
    end
  end

  def invoices
    key = cache_key("invoices")

    Rails.cache.fetch(key, expires_in: 15.minutes) do
      customer.invoices
    end
  end

  def upcoming_invoice
    key = cache_key("upcoming_invoice")

    Rails.cache.fetch(key, expires_in: 15.minutes) do
      customer.upcoming_invoice
    end
  end

  private

  attr_reader :billing_service

  def cache_key(item)
    "user/#{billing_service.billing_id}/billing/#{item}"
  end
end</code></pre></figure>


<p>The code above caches the call to each of these methods for 15 minutes. We could go further and move that to an argument with a default value, but I&rsquo;ll leave that as an exercise for another time.</p>

<h2>Summary</h2>

<p>Separating your application and third-party services helps keep your applications flexible &ndash; offering the freedom to switch to another service when one no longer fits the bill.</p>

<p>Another benefit of an adapter is you have the freedom to name the class and methods whatever you like. The base gem for a service might not have the best names, or it may be that the names don&rsquo;t make sense when dragged in to your application&rsquo;s domain. This is a small but important point as applications get larger and their code more complex. The more variable/method names you need to think about when you poke around the code, the harder it&rsquo;ll be to remember what was going on. Not to mention the pain new developers will have if they inherit the code. Whether it&rsquo;s you or the next developer, the time you invest in creating great names will be greatly appreciated.</p>

<p>Using decorators in this way makes it easier for clients of the code to avoid change, but keep your applications flexible. The <code>Billing</code> class above was relatively simple &ndash; intentionally so. If the class being decorated has more than a few methods, it might be worth incorporating <code>SimpleDelegator</code> to ensure the methods that don&rsquo;t need additional functionality still continue to respond appropriately.</p>
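<p>To make the <code>SimpleDelegator</code> suggestion concrete, here&rsquo;s a minimal sketch &ndash; not code from the application above. The <code>Billing</code> stand-in returns hardcoded data, and crude in-object memoization stands in for <code>Rails.cache</code>; the point is that only the decorated method is defined, while every other method falls through to the wrapped object automatically:</p>

```ruby
require "delegate"

# Stand-in for the article's Billing adapter, with hardcoded data.
class Billing
  attr_reader :billing_id

  def initialize(billing_id)
    @billing_id = billing_id
  end

  def customer
    "customer-#{billing_id}"
  end

  def invoices
    ["inv_1", "inv_2"]
  end
end

# Only #customer is decorated; #invoices and #billing_id fall through
# to the wrapped object via SimpleDelegator.
class BillingWithCache < SimpleDelegator
  def customer
    @customer ||= __getobj__.customer # crude memoization standing in for Rails.cache
  end
end

billing = BillingWithCache.new(Billing.new("cus_123"))
billing.customer   # => "customer-cus_123"
billing.invoices   # => ["inv_1", "inv_2"] (delegated, no override needed)
billing.billing_id # => "cus_123"
```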
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Using the Sucker Punch Ruby Gem to Cache Stripe Data in Rails]]></title>
    <link href="http://brandonhilkert.com/blog/using-the-sucker-punch-ruby-gem-to-cache-stripe-data-in-rails/"/>
    <updated>2015-02-26T20:46:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/using-the-sucker-punch-ruby-gem-to-cache-stripe-data-in-rails</id>
    <content type="html"><![CDATA[<p>With so many services available these days, it&rsquo;s almost impossible to find or build an application that doesn&rsquo;t rely on a third-party service. Most developers that have dealt with billing systems within the past few years have likely heard of <a href="https://stripe.com/">Stripe</a>. Stripe is, by far, the most developer-friendly billing service I&rsquo;ve implemented.</p>

<p>While Stripe does provide a number of features and plugins that make updating a credit card or signing up for a service simple, there are occasions when data needs to be fetched from Stripe in real-time. For these cases, it&rsquo;s great to be able to fetch and cache this data before-hand, and only expire if you know there&rsquo;s been a change.</p>

<!--more-->


<p>Combining <a href="https://github.com/brandonhilkert/sucker_punch">Sucker Punch</a> with Rails cache allows you to cache Stripe customer data so that billing pages are just as snappy as the rest of the application.</p>

<h2>The Pain</h2>

<p>Even though Stripe is generally pretty fast, retrieving customer data on the fly can be expensive. In order to optimize page load times, we can look to cache this data before it&rsquo;s actually used.</p>

<p>If you&rsquo;re familiar with the Stripe gem, you&rsquo;ve probably seen something like this:</p>

<figure class='code'><pre><code>customer = Stripe::Customer.retrieve(user.stripe_id)</code></pre></figure>


<p>With the response of <code>customer</code>, we can further query customer data with the following methods:</p>

<figure class='code'><pre><code>invoices = customer.invoices
upcoming_invoice = customer.upcoming_invoice</code></pre></figure>


<p>If we make all 3 of these method calls on page load, we&rsquo;d have 3 separate lookups from Stripe. This is pretty common for the typical billing page where you might want to show the customer&rsquo;s current credit card on file, their past invoices, and charges they can expect for the next invoice.</p>

<p>Three lookups like this could potentially add another second or so to page load, which is not ideal.</p>

<p>So how can we improve this?</p>

<h2>The Solution</h2>

<p>First, we can move the code that fetches the relevant Stripe data in to a class of its own, which wraps caching around the data retrieval.</p>

<figure class='code'><pre><code>class StripeCache
  def initialize(user)
    @user = user
  end

  def refresh
    purge_all
    cache_all
    self
  end

  def customer
    return @customer if @customer

    @customer = Rails.cache.fetch(cache_key("customer"), expires_in: 15.minutes) do
      Stripe::Customer.retrieve(user.stripe_id)
    end
  end

  def invoices
    Rails.cache.fetch(cache_key("invoices"), expires_in: 15.minutes) do
      customer.invoices
    end
  end

  def upcoming_invoice
    Rails.cache.fetch(cache_key("upcoming_invoice"), expires_in: 15.minutes) do
      customer.upcoming_invoice
    end
  end

  private

  attr_reader :user

  def cache_all
    customer
    invoices
    upcoming_invoice
  end

  def purge_all
    # The pattern must cover the full prefix built by cache_key below;
    # delete_matched takes a glob or a regexp depending on the cache store
    Rails.cache.delete_matched("user/#{user.id}/stripe/*")
  end

  def cache_key(item)
    "user/#{user.id}/stripe/#{item}"
  end
end</code></pre></figure>
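<p>One thing to watch with <code>purge_all</code>: the pattern handed to <code>delete_matched</code> needs to cover the full key prefix built by <code>cache_key</code>, and depending on the cache store it&rsquo;s treated as a glob or a regular expression. A quick pure-Ruby sketch of the key scheme (the <code>stripe_cache_key</code> helper here is illustrative, not part of the class):</p>

```ruby
# Illustrative helper mirroring StripeCache#cache_key
def stripe_cache_key(user_id, item)
  "user/#{user_id}/stripe/#{item}"
end

key = stripe_cache_key(42, "customer")
# A glob anchored at the full prefix matches this user's keys...
File.fnmatch("user/42/stripe/*", key)                           # => true
# ...but does not accidentally match another user's keys
File.fnmatch("user/42/stripe/*", stripe_cache_key(7, "customer")) # => false
```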


<p>To use this on a billing page, we could do:</p>

<figure class='code'><pre><code>stripe = StripeCache.new(current_user).refresh</code></pre></figure>


<p>And from the response of that class, we could access the <code>customer</code>, <code>invoices</code>, and <code>upcoming_invoice</code> respectively:</p>

<figure class='code'><pre><code>@customer = stripe.customer
@invoices = stripe.invoices
@upcoming_invoice = stripe.upcoming_invoice</code></pre></figure>


<p>This is great! All future calls to this customer&rsquo;s Stripe data will be fast &ndash; for 15 minutes, of course.</p>

<p>However, the first time the page loads, the user is still burdened with the initial fetch of the data. So the approach above only helps requests to the billing page after the first one.</p>

<p>But let&rsquo;s be honest: how many users visit the billing page multiple times during a session? Probably not many. So we still need to fix that initial load somehow.</p>

<p>This is where <a href="https://github.com/brandonhilkert/sucker_punch">Sucker Punch</a> comes in. Like other Ruby background processing libraries, Sucker Punch lets you move work to the background. Unlike the others, though, it doesn&rsquo;t require additional infrastructure like Redis, and doesn&rsquo;t require a separate worker process to monitor and execute enqueued jobs. Because of this, the time it takes to extract code into a Sucker Punch job and incorporate it into your application is much lower.</p>

<p>In this case, rather than sending a transactional email or performing some database calculation, we can write a job whose only responsibility is to run the Stripe caching code.</p>

<figure class='code'><pre><code>class StripeCacheJob
  include SuckerPunch::Job

  def perform(user)
    StripeCache.new(user).refresh
  end
end</code></pre></figure>


<p>The next question is, when do you run this?</p>

<p>Well, I chose to run it on user login, but you could run it anywhere you think would give you a head start before the user hits the billing page. In my case, running it on login meant that if they didn&rsquo;t visit the billing page at all, the data would simply expire from the cache after 15 minutes, so no harm done.</p>
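<p>Hooking it into login could be as simple as one extra line in the sessions controller. A sketch, assuming a conventional authentication flow (the <code>User.authenticate</code> and <code>sign_in</code> helpers here are hypothetical; use whatever your app provides):</p>

```ruby
class SessionsController < ApplicationController
  def create
    # Hypothetical authentication helper -- substitute your own
    user = User.authenticate(params[:email], params[:password])

    if user
      sign_in(user)
      # Warm the Stripe cache in the background; the login response isn't blocked
      StripeCacheJob.new.async.perform(user)
      redirect_to root_path
    else
      render :new
    end
  end
end
```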

<p>But if the user did navigate to the billing page during that session, they would have the latest Stripe customer and invoice data waiting for them &ndash; all without a request to Stripe on page load.</p>

<p>One other thing to keep in mind: there may be times when we&rsquo;d want to invalidate the cached data. One example would be when the user&rsquo;s card information is updated. In that case, we can slip in another call to the Stripe cache job, which will invalidate the previous cache and re-request the customer&rsquo;s billing information:</p>

<figure class='code'><pre><code>module Accounts
  class CardsController &lt; ApplicationController
    before_action :require_authentication

    def create
      cust = StripeCache.new(current_user).customer
      cust.save(card: params[:stripeToken])

      StripeCacheJob.new.async.perform(current_user)

      redirect_to account_path, notice: t("card.update.success")
    end
  end
end</code></pre></figure>


<h2>Summary</h2>

<p>Using Sucker Punch in combination with Rails cache feels like a great way to optimize third-party data requests. This article focused on using it to fetch Stripe data, but it could be used with another service just as easily.</p>

<p>The beauty of Sucker Punch is that it doesn&rsquo;t require a separate worker process to be running in the background. On a platform like Heroku, this saves the cost of an additional dyno.</p>

<p>Sucker Punch excels at background jobs that are relatively fast and, if missed, aren&rsquo;t critical to the operation. In this case, if a cache job is lost, it&rsquo;s not the end of the world. At worst, the user&rsquo;s Stripe data would be requested on the fly and the page would be slower than usual. But the majority of the time, the request is fast because the data&rsquo;s been cached beforehand.</p>

<p>What other jobs have you used Sucker Punch for?</p>
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[Using Rails Fixtures To Seed a Database]]></title>
    <link href="http://brandonhilkert.com/blog/using-rails-fixtures-to-seed-a-database/"/>
    <updated>2015-02-04T06:13:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/using-rails-fixtures-to-seed-a-database</id>
    <content type="html"><![CDATA[<p>It’s no mystery that <a href="http://brandonhilkert.com/blog/7-reasons-why-im-sticking-with-minitest-and-fixtures-in-rails/">I’ve grown to love Rails fixtures</a>. And I tend to mostly use relational databases in my applications, <a href="http://brandonhilkert.com/blog/rails-gemfile-teardown-2014/">specifically PostgreSQL</a>.</p>

<p>Most applications have ancillary data that’s required to support the main function of the application — think drop-downs with states for shipping or credit card type.</p>

<!--more-->


<p>This data is almost never interesting, but completely necessary for the application to work as expected. So when it comes time to send your little baby to production, only to find your users can’t pay because they can’t pick their credit card type, your world comes crashing down.</p>

<p>If you have those credit card types in fixtures from the start, loading them into your development or production database is just a <code>rake</code> task away.</p>

<h2>The Problem</h2>

<p>Let’s assume our application requires us to have a list of supported credit card types, and the user is required to pick from the list to pay for the awesome stuff we sell. A sample fixture might look like:</p>

<figure class='code'><pre><code>visa:
  name: Visa

mastercard:
  name: Mastercard

amex:
  name: American Express</code></pre></figure>


<p>This is a somewhat trivial example, because the <code>name</code> matches what one might expect to see in a transaction record if we had denormalized it into a <code>credit_card_type</code> field or something similar.</p>

<p>Perhaps we have a field <code>credit_card_type_id</code> in a <code>transactions</code> table that references the foreign key of the related <code>CreditCardType</code> record.</p>

<p>So how do we get these records in to our development and production databases?</p>

<h2>The Solution</h2>

<p>Fortunately, Rails has our backs. The following rake task is available in a default Rails application:</p>

<figure class='code'><pre><code>$ bin/rake -T
...
rake db:fixtures:load # Load fixtures into the current environment's database</code></pre></figure>


<p>The <code>db:fixtures:load</code> task is an interesting start, but we quickly realize it might be a little heavy-handed. If this application has users, we probably wouldn&rsquo;t want to copy them to production. They might, however, be a great starting point for development.</p>

<p>So how do we handle getting trivial model data into production for only specific models?</p>

<p>It turns out that we can specify <strong>ONLY</strong> the models we want to load by using the <code>FIXTURES</code> environment variable:</p>

<figure class='code'><pre><code>rake db:fixtures:load FIXTURES=credit_card_types</code></pre></figure>


<p><em>Note: The name of the fixture file (usually the database table name) should be used as the value for <code>FIXTURES</code>, not the model name.</em></p>

<p>With that single command, any environment we specify will immediately get the data for our 3 credit card types.</p>

<p>A word of warning: if we run this command multiple times, it will seed the table multiple times. It&rsquo;s not idempotent.</p>
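<p>If you need the load to be safe to re-run, one option is a small rake task that only invokes the fixture load when the table is empty. A sketch, assuming the <code>CreditCardType</code> model from above (the task name is an assumption):</p>

```ruby
# lib/tasks/reference_data.rake (hypothetical file)
namespace :db do
  desc "Load credit card types once, skipping if already seeded"
  task load_credit_card_types: :environment do
    if CreditCardType.count.zero?
      # Scope the load to the single fixture file, as above
      ENV["FIXTURES"] = "credit_card_types"
      Rake::Task["db:fixtures:load"].invoke
    else
      puts "credit_card_types already seeded, skipping"
    end
  end
end
```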

<p>Additionally, if we wanted to load more than just a single fixture, we can specify the names of the files separated by commas:</p>

<figure class='code'><pre><code>rake db:fixtures:load FIXTURES=credit_card_types,states,cities</code></pre></figure>


<p>Let&rsquo;s take a quick look at how Rails implements this functionality, specifically the determination of single models:</p>

<figure class='code'><pre><code>fixtures_dir = if ENV['FIXTURES_DIR']
                 File.join base_dir, ENV['FIXTURES_DIR']
               else
                 base_dir
               end

fixture_files = if ENV['FIXTURES']
                  ENV['FIXTURES'].split(',')
                else
                  # The use of String#[] here is to support namespaced fixtures
                  Dir["#{fixtures_dir}/**/*.yml"].map {|f| f[(fixtures_dir.size + 1)..-5] }
                end

ActiveRecord::FixtureSet.create_fixtures(fixtures_dir, fixture_files)</code></pre></figure>


<p>If the <code>FIXTURES</code> variable is present, the code teases apart the comma-separated names, looks in the fixtures directory, and loads the YAML fixture file for each table name.</p>

<p>An interesting side note: it&rsquo;s possible to specify an alternate directory for fixtures using the <code>FIXTURES_DIR</code> variable. I personally haven&rsquo;t taken advantage of this, but it could be useful if you want to keep fixture files for production that differ from those in <code>test/fixtures/*</code>.</p>

<p>I wouldn&rsquo;t suggest using this approach for anything that needs to reference other foreign keys. When you transfer to a different database, the foreign keys will not match and your application will likely not work as expected.</p>

<h2>Summary</h2>

<p>This approach has saved me quite a bit of time in my last few applications. Build it once, use it everywhere. As mentioned above, avoid using this approach to seed database records that carry foreign keys.</p>

<p>Most applications have a number of tasks needed for a developer to get up and running. Combining fixture data with additional seed data placed in <code>db/seeds.rb</code> can give you the best of both worlds, while still ensuring you have robust data to test against.</p>
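<p>Combining the two in <code>db/seeds.rb</code> might look something like this, using the same <code>ActiveRecord::FixtureSet.create_fixtures</code> call Rails uses internally (a sketch; the paths, table names, and the <code>User</code> attributes are assumptions):</p>

```ruby
# db/seeds.rb (hypothetical sketch)
require "active_record/fixtures"

# Reference data shared with the test suite, loaded straight from fixtures
ActiveRecord::FixtureSet.create_fixtures("test/fixtures", %w[credit_card_types])

# Environment-specific seed data that doesn't belong in fixtures
User.find_or_create_by!(email: "admin@example.com") do |user|
  user.name = "Admin"
end
```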
]]></content>
  </entry>
  
  <entry>
    <title type="html"><![CDATA[2014 In Review]]></title>
    <link href="http://brandonhilkert.com/blog/2014-in-review/"/>
    <updated>2014-12-29T14:57:00-08:00</updated>
    <id>http://brandonhilkert.com/blog/2014-in-review</id>
    <content type="html"><![CDATA[<p>For the past 2 years, I’ve committed myself to <a href="http://brandonhilkert.com/blog/be-ambitious/">specific
goals</a> for the year to come. Most
people call them New Year’s Resolutions. Heck, I probably even referred to them
as “resolutions” too. But the more I thought about it, the more it dawned on me
that a “resolution” felt more like a fix for something — something that didn&rsquo;t
go well in the previous year. Think weight loss (everyone makes this resolution at least once in their life) or a dedication to be more focused.</p>

<!--more-->


<p>Mine have been more of the bucket list variety. The first was <a href="http://brandonhilkert.com/blog/loyalty/">completing an Ironman</a>. The second, <a href="http://brandonhilkert.com/blog/be-ambitious/">writing a book</a>.</p>

<p>Both felt almost too big initially, but ultimately led to opportunities and
lifestyle changes I would’ve never expected once finished. So naturally, with
2014 winding down, the question becomes <strong>&ldquo;what’s the goal for 2015?&rdquo;</strong>.</p>

<p>And the answer is&hellip;<em>I don’t know</em>.</p>

<p>It doesn’t mean I’m not going to do anything. In fact, it probably means the
opposite. I’m just not going to set out with a specific goal in mind. If
halfway through something, I want to stop and do something else, so be it.</p>

<p>I remember during my training for the Ironman (9 months total), I constantly thought about what I would do with my free time when it was over. Every time my alarm went off at 5am, I thought about what it would feel like to get another 2 hours of sleep. It was endless. Nine months was a long time to have those thoughts and, perhaps, why I was so well positioned to write and launch the book (I had 9 months to think about what was next and how to accomplish it).</p>

<p>Always having your sights set on the future can wear on you though.</p>

<p>Without those feelings now, I’m going to let 2015 take me wherever it does. I don’t have any expectations financially or professionally. I’m going to do my best to make the most out of every moment and appreciate more of the small things. It’s so easy to skip over the small things and in many cases, the small things are actually the best things. And we don’t realize it until they’re gone.</p>

<p>2014 was a great year in all. Here are a few of the events that stand out the most:</p>

<ul>
<li>my wife and I <a href="http://brandonhilkert.com/about/">welcomed our son, Cruz</a></li>
<li>my wife and I celebrated our 4 year anniversary</li>
<li>wrote a book, <a href="http://brandonhilkert.com/books/build-a-ruby-gem/">Build a Ruby Gem</a></li>
<li>provided a <a href="http://brandonhilkert.com/courses/build-a-ruby-gem/">free email course on building a Ruby gem</a> to 1,218 people</li>
<li>connected with 2,645 people through my <a href="http://brandonhilkert.com/newsletter/">newsletter</a></li>
<li>built a <a href="https://funneloptimizer.herokuapp.com/">funnel optimization service for bloggers selling
  products</a></li>
<li>published 24 <a href="http://brandonhilkert.com/blog/archives/">articles</a></li>
<li>saw <a href="https://github.com/brandonhilkert/sucker_punch">Sucker Punch</a> <a href="https://rubygems.org/gems/sucker_punch">downloaded over 225k times</a></li>
<li>celebrated <a href="https://github.com/brandonhilkert/sucker_punch">Sucker Punch</a> being <a href="http://guides.rubyonrails.org/active_job_basics.html">integrated in to Rails</a></li>
<li>made my <a href="https://github.com/rails/rails/pull/16898">first commit to Rails</a> (even if it was small!)</li>
<li>traveled to Jackson Hole, WY</li>
<li>built a <a href="https://vuier.com/">pay-to-view video platform</a> with a few friends</li>
<li>built a <a href="https://perform.io/">performance management system</a></li>
<li>renovated my
  <a href="http://brandonhilkert.com/images/2014/bathroom-before.jpg">kids&#8217;</a>
  <a href="http://brandonhilkert.com/images/2014/bathroom-after.jpg">bathroom</a></li>
<li>built a deployment system for <a href="https://www.pipelinedeals.com/">PipelineDeals</a> and supporting services that I’m very proud of</li>
<li>built a staging server management application</li>
<li>built <a href="https://chrome.google.com/webstore/detail/how-to-win-friends-and-in/cbmeigkjdnilgodhnhagokhoehbpkdcc?hl=en-US">3</a> <a href="https://chrome.google.com/webstore/detail/pipelinedeals-crm-contact/ieaafnaonfabpgpkkeglkeodkpiijjdd?hl=en-US">chrome</a> <a href="https://chrome.google.com/webstore/detail/pipelinedeals-gmail/fdfifknmbmalmgdjmnhkcfholdgacikl?hl=en-US">extensions</a>, the latter being my first <a href="http://facebook.github.io/react/">React</a> app</li>
<li>started a <a href="http://walnutstlabs.com/event/walnut-st-labs-night-owls/">weekly tech gathering</a> at a <a href="http://walnutstlabs.com">local co-working space</a></li>
<li>stopped push email notifications on all my devices (strongly recommended)</li>
<li>saw <a href="http://defriendnotifierapp.com/">Defriend Notifier</a> be used by 30,128 people</li>
<li>started exploring other programming languages, specifically <a href="https://golang.org/">Go</a> and <a href="http://elixir-lang.org/">Elixir</a></li>
</ul>


<p>I’m excited for 2015!</p>
]]></content>
  </entry>
  
</feed>
