Hi I'm Tony.

  • @freedrull
  • freedrull
01 April 2014

Right now the metadata for the currently streaming song or live DJ on datafruits is updated via simple polling with Ajax requests. The metadata is stored in a simple Redis key, and a Sinatra app with a /currentsong endpoint returns this key in JSON format. The Ajax request simply polls this app every few seconds.

I'd like to use EventSource instead of polling via setInterval. I thought this would be a good opportunity to try out the new ActionController::Live feature in Rails. Luckily Aaron Patterson has a good write-up on his blog.

I ran into a few caveats along the way. First off, some good news: an implementation of Aaron's SSE emitter class has been merged into Rails, so you no longer need to write your own.

sse = SSE.new(response.stream)
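To get a feel for what an emitter like that does, here is a minimal sketch of an SSE-style writer (the class name and details are my own assumptions, not Rails' actual implementation). It frames each payload with `event:`/`data:` lines and a terminating blank line, which is what EventSource expects on the wire:

```ruby
require "json"
require "stringio"

# Minimal sketch of an SSE emitter: frame each payload in the
# EventSource wire format (optional "event:" line, a "data:" line,
# then a blank line).
class SimpleSSE
  def initialize(io)
    @io = io
  end

  def write(object, options = {})
    @io.write("event: #{options[:event]}\n") if options[:event]
    @io.write("data: #{JSON.generate(object)}\n\n")
  end
end

io = StringIO.new
sse = SimpleSSE.new(io)
sse.write({ title: "some song" }, event: "refresh")
io.string
# => "event: refresh\ndata: {\"title\":\"some song\"}\n\n"
```

In a real controller you would hand it `response.stream` instead of a StringIO.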

Another caveat: it seems that if almost any kind of error occurs in your controller, nothing happens at all. The Rails developers seem to have decided this is the correct behaviour for the most part: https://github.com/rails/rails/pull/9604 There is an on_error callback, although I couldn't find any documentation on how to use it.

The final caveat is that you are probably going to need a different server than you are used to. I tried out Puma, mostly since that's what Aaron used in his guide.

I thought about the implications of every datafruits visitor keeping a connection open to my site. Am I going to need a thread for each connection? Is each connection additionally going to hold a database connection from the ActiveRecord pool?

I think I can come up with a simpler solution: extract this functionality into a smaller service, perhaps running on Faye or Sinatra. All I really need is the Redis connection; this doesn't actually have anything to do with the rest of my Rails application anyway.

03 December 2013

On the jPlayer Google group, I recently saw someone point out that mozGetMetadata() exists. It's a commonly asked question in that group whether it's possible to pull Icecast metadata directly from the stream.

Here is the description of this method from that page: "The mozGetMetadata method returns a JavaScript object whose properties represent metadata from the playing media resource as {key: value} pairs. A separate copy of the data is returned each time the method is called. This method must be called after the loadedmetadata event fires."

https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement

I tried this out with the ogg version of my stream and got this object back:

Server: "Icecast 2.3.3"
Title: "Unknown"

When I tried the mp3 version, however, it didn't return this data.

It seems like it could be useful, although it's a shame it's Firefox-only. This type of functionality really needs to be standardized.

30 September 2013

I recently added Backbone.js to one of my Rails apps. Here is a description of the process, which might help you transition to Backbone or another JS MVC framework in your own app. I will break this up into a series of a few articles. You can check out my current progress in the 'backbone' branch of this repository: https://github.com/mcfiredrill/forttree/tree/backbone

Many articles have stated this, but if you aren't aware, you need to know that Backbone thinks about MVC in a different way than Rails. Views in Backbone are more like controllers in Rails, in that they set up data for display. Templates in Backbone are more like views in Rails; they are just simple templates with embedded JS for displaying data. There is no controller per se in Backbone, although there is a router, which does some of the work you might see done in a Rails controller.

Honestly this seems a little better to me, I think that controllers in Rails often have too much responsibility. This seems like a decent way to divide up the work a little better.

I went with this gem for adding Backbone to Rails: https://github.com/meleyal/backbone-on-rails

Now, your first instinct might be to create a Backbone model for every Rails model that you have. However, there is really no need for an exact mapping between your frontend Backbone models and your backend.

By default, Rails will not fetch your associations when returning JSON. A nice way to control the JSON that Rails controllers give you is the active_model_serializers gem: https://github.com/rails-api/active_model_serializers
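For example, a serializer for the boards might look something like this (a sketch: the attribute and association names are assumptions based on my models, and the gem's DSL may differ between versions):

```ruby
# app/serializers/board_serializer.rb
# Hypothetical ActiveModel::Serializer subclass controlling which
# attributes and associations appear in the JSON for a Board.
class BoardSerializer < ActiveModel::Serializer
  attributes :id, :name

  # the association is only serialized because we ask for it here
  has_many :threads
end
```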

Additionally, Backbone does not really have any built-in support for relations. There are a couple of libraries like backbone-relational, and a couple of simple ways to roll it on your own. I will go into more detail in the next post in this series.

The application that I am converting is a forum. There are three main models: boards, threads, and posts. Boards have many threads, and threads have many posts. I will start out by just converting the boards to Backbone.

First you need to create a Backbone model and collection. In Rails there are no separate classes for single models and collections of models, but Backbone makes this distinction.

// app/assets/javascripts/models/board.js
Forttree.Models.Board = Backbone.Model.extend({
});

// app/assets/javascripts/collections/boards.js
Forttree.Collections.Boards = Backbone.Collection.extend({
  model: Forttree.Models.Board,
  url: '/boards'
});

We'll start by converting the index action to Backbone. You can create a view for the boards index.

// app/assets/javascripts/views/boards_index.js
Forttree.Views.BoardsIndex = Backbone.View.extend({
  render: function() {
    this.$el.html(JST['boards/index']({ boards: this.collection }));
    return this;
  }
});

Then set up your boards router for the index action.

// app/assets/javascripts/router/boards.js
Forttree.Routers.Boards = Backbone.Router.extend({
  routes: {
    "": "index"
  },
  index: function() {
    var view = new Forttree.Views.BoardsIndex({ collection: Forttree.boards });
    $('body').html(view.render().$el);
  }
});

And finally the simple template.

<!-- app/assets/templates/boards/index.jst.ejs -->
<h1>boards</h1>
<% boards.each(function(board) { %>
  <%= board.escape('name') %>
<% }); %>

You can bootstrap your app with some data in your rails template:

<!-- app/views/layouts/application.html.erb -->
    <%= javascript_tag do %>
      Forttree.initialize({boards: <%== @boards.to_json %> });
    <% end %>

Visit your app at the root path and you will see that Backbone has rendered the boards. In the next article, I will cover setting up the associations for this app.

29 September 2013

I recently tried to apply ideas about testing at the correct boundaries to testing a decorator. Consider this first attempt:

class Foo < ActiveRecord::Base
  def get_row(headers)
    row = []
    headers.each do |header|
      cell_decorator = CellDecorator.decorate self
      row << cell_decorator.to_cell(header)
    end
    row
  end
end

describe Foo do
  let(:foo){ create :foo }
  it "generates the row correctly" do
    headers = [:first_header, :second_header]
    row = foo.get_row(headers)
    expect(row).to eq(["data", "more_data"])
  end
end

If you test the return value of get_row, you are effectively doing an integration test. Testing outgoing messages for state is no good; you are effectively testing the behaviour of another class. CellDecorator's own spec should be responsible for that. You are also increasing your test maintenance costs: you will have to change both tests (Foo's and CellDecorator's) if the interface changes.

A good way to handle this is to inject the decorator as a dependency. This makes it easy to use a double and assert that the decorator received the message in the test. Then you can worry about testing that the decorator generates the string properly in the decorator's own test, where it belongs.

Although it might seem like this is adding production code just to make the tests easier to write, I think decoupling like this is always good. It's not that much code to add, either, and the default argument makes it even less of a big deal.

class Foo < ActiveRecord::Base
  def get_row(headers, decorator = CellDecorator.new(self))
    row = []
    headers.each do |header|
      row << decorator.to_cell(header)
    end
    row
  end
end

describe Foo do
  let(:decorator){ double }
  let(:foo){ create :foo }
  it "generates the row correctly" do
    headers = [:first_header, :second_header]
    headers.each do |h|
      expect(decorator).to receive(:to_cell).with(h)
    end
    foo.get_row(headers, decorator)
  end
end

describe CellDecorator do
  it "decorates the cell" do
    foo = double
    foo.stub(:first_header){ "data" }
    foo.stub(:second_header){ "more_data" }
    decorator = CellDecorator.new(foo)
    headers = [:first_header, :second_header]
    cells = headers.map { |header| decorator.to_cell(header) }
    expect(cells).to eq(["data", "more_data"])
  end
end

21 September 2013

I finally switched over to a hosted CI service last week. I am beyond happy with the new setup. The hours spent toiling away configuring Jenkins and its many plugins are no more.

No more trying to get Janky set up, either. It seemed that I couldn't use Janky without creating a GitHub organization: the Janky GitHub user needed admin permissions on the repository, and we couldn't grant that without creating an organization and placing the repo under it. Unfortunately we couldn't do that without upgrading to the GitHub business plan. That's something we plan on doing soon anyway, but it was still another hurdle.

The service I chose was CircleCI. I logged in via GitHub and was running my tests in seconds. The only thing I really had to do was specify ruby 2.0.0 in my Gemfile to make sure the tests ran on Ruby 2.

I'm not trying to drink too much SaaS Kool-Aid here, but this really seemed like a great win. So my question to anyone managing Jenkins themselves is: why not use a hosted service? It is cheaper than paying your engineers to manage a CI system. Is there anything Jenkins provides that a hosted solution cannot?

17 September 2013

I decided to try to make a small game in CoffeeScript. I wanted to carry over my TDD practices from my Ruby work, but I was a bit unfamiliar with the CoffeeScript/Node ecosystem. After a bunch of research, this is what I've decided to use for now.

Grunt seems to be a great equivalent to Rake. You install different tasks via npm packages. First you need to create a package.json.

{
  "name": "dumbgame",
  "version": "0.0.0",
  "description": "dumb game i made"
}

Then just install grunt via npm like so.

$ npm install grunt --save-dev

This will save the dependency to your package.json.

Grunt tasks are distributed as npm packages as well. I am using the grunt-contrib-coffee and grunt-contrib-watch tasks.

$ npm install grunt-contrib-coffee --save-dev
$ npm install grunt-contrib-watch --save-dev

Then you can create this Gruntfile.coffee to use these tasks.

module.exports = (grunt) ->

  grunt.initConfig
    coffee:
      app:
        options:
          sourceMap: true
        files:
          './lib/dumbgame.js': './src/*.coffee'
    watch:
      app:
        files: './src/*.coffee'
        tasks: ['coffee']

  grunt.loadNpmTasks 'grunt-contrib-coffee'
  grunt.loadNpmTasks 'grunt-contrib-watch'

  grunt.registerTask 'default', ['coffee']

I use the watch task to compile the CoffeeScript whenever a .coffee file changes. I wasn't sure I would like this workflow at first, but it's not so bad. I just run the watch task in another tmux pane and inspect it whenever there is a compile error.

You can use mocha for testing. It can run scripts through a headless browser and has 'should' and 'expect' syntax. There is a grunt task for running mocha tests, although it doesn't handle CoffeeScript by default unless you pass it a require option for the coffee-script module.

The coffee task that is called by the watch task concatenates all my CoffeeScript files into a single file and provides a source map for Chrome to use. I was surprised the source map worked right away; I didn't need to change any settings in Chrome.

Finally, the grunt connect task starts an HTTP server for testing my compiled JS in the browser.

My setup will hopefully get a little more sophisticated as the need arises, but this should do fine for now.

10 September 2013

When you start working with background jobs, you're going to want a reliable way to monitor those processes. I first started out with monit, but found its configuration file a bit ugly. Here is an example config file:

check process resque_worker
  with pidfile /var/www/vhosts/myapp/shared/tmp/pids/resque_worker.pid
  start program = "/usr/bin/env HOME=/home/deploy RACK_ENV=production
  PATH=/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH /bin/sh -l -c 'cd /var/www/vhosts/myapp/current; nohup bundle exec rake environment resque:work  RAILS_ENV=production QUEUE=my_queue VERBOSE=1 PIDFILE=tmp/pids/resque_worker.pid COUNT=2 >> log/resque_worker.log 2>&1'" as uid deploy and gid deploy with timeout 60 seconds
  stop program = "/bin/sh -c 'cd /var/www/vhosts/myapp/current && kill -9 `cat tmp/pids/resque_worker.pid` && rm -f tmp/pids/resque_worker.pid; exit 0;'"
  if totalmem is greater than 800 MB for 10 cycles then restart  # eating up memory?
  group resque_workers

And that's just one process!

I also ran into lots of problems where the pid file wasn't being found, and so on. What happens when the pid file doesn't exist? Should it be created? What if there is already a pid file that didn't get deleted properly last time? These are all problems a process manager should solve for you.

I tried bluepill next. The configuration syntax is nice; it's written in Ruby.
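For comparison, a bluepill config for the same worker might look roughly like this (a sketch from memory of bluepill's README; the application name, paths, and grace time here are my own assumptions):

```ruby
# Hypothetical bluepill equivalent of the monit config above.
Bluepill.application("myapp") do |app|
  app.process("resque_worker") do |process|
    process.working_dir = "/var/www/vhosts/myapp/current"
    process.start_command = "bundle exec rake environment resque:work QUEUE=my_queue"
    process.pid_file = "/var/www/vhosts/myapp/shared/tmp/pids/resque_worker.pid"
    process.start_grace_time = 10.seconds
  end
end
```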

However, it seemed painfully slow. I'm also afraid I have to agree with this rather brief and undetailed bug report from Jeff Atwood: https://github.com/arya/bluepill/issues/193

So I found a more recent project called eye. It takes its inspiration from bluepill, but it's incredibly fast and has worked very reliably for me.

https://github.com/kostya/eye

The configuration is quite nice:

Eye.application "resque" do
    env 'PATH' => '/usr/local/bin:/usr/local/ruby/bin:/usr/bin:/bin:$PATH',
        'VERBOSE' => '1',
        'COUNT' => '2'
  process :resque_worker do
    start_command "/usr/bin/env /bin/sh -l -c 'bundle exec rake environment resque:work'"
    pid_file "/var/www/vhosts/myapp/shared/tmp/pids/resque_worker.pid"
    daemonize true
    env 'RAILS_ENV' => 'production',
        'RACK_ENV' => 'production',
        'QUEUE' => 'my_queue'
    working_dir "/var/www/vhosts/myapp/current"
    stdall "/var/www/vhosts/myapp/shared/log/resque_worker.log"
  end
end

You can add more process blocks for more resque workers.

I've been using eye to manage lots of different processes now, including hubots, Sinatra apps, liquidsoap, and more. It's great to use with Capistrano to restart the processes on deploy. All you have to do is send your process or group of processes the stop, start, and restart commands. Be sure to reload your config file to pick up any changes you may have made.

namespace :resque do
  desc "Start resque process"
  task :start, :roles => :app do
    run "cd #{latest_release} && #{sudo} eye l config/resque.eye"
    run "#{sudo} eye start resque_worker"
  end

  desc "Stop resque process"
  task :stop, :roles => :app do
    run "#{sudo} eye stop resque_worker"
  end

  desc "Restart resque process"
  task :restart, :roles => :app do
    run "#{sudo} eye restart resque_worker"
  end
end

after "deploy:start", "resque:start"
after "deploy:stop", "resque:stop"
after "deploy:restart", "resque:restart"

Now I am working on integrating eye with hubot to restart processes via chat!

01 September 2013

I've thought of a hack to work around the fact that most Icecast clients don't let you specify the 'source' username field. At first I thought of finding a user by password rather than username. Then I thought: why not just send both the username and password in the password field, separated by something like a semicolon? Here's how I implemented this in liquidsoap:

def get_user(user,password) =
  if user == "source" then
    x = string.split(separator=';',password)
    list.nth(x,0)
  else
    user
  end
end

def get_password(user,password) =
  if user == "source" then
    x = string.split(separator=';',password)
    list.nth(x,1)
  else
    password
  end
end

# auth function
def dj_auth(user,password) =
  u = get_user(user,password)
  p = get_password(user,password)
  # get the output of the auth script
  ret = get_process_lines("bundle exec ./dj_auth.rb #{u} #{p}")
  # ret now holds "true" if the credentials were valid
  ret = list.hd(ret)
  # return true to let the client transmit data, or false to tell harbor to decline
  if ret == "true" then
    title_prefix := "LIVE NOW ♫✩ -- #{u} ✩♪"
    true
  else
    false
  end
end

# use the auth function with input.harbor
live_dj = input.harbor("datafruits",port=9000,auth=dj_auth,on_disconnect=on_disconnect)

It simply checks whether the username sent was 'source', and if so splits the password string at ';'.
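The dj_auth.rb script itself isn't shown above, but here is a rough sketch of what such a script could look like (the hard-coded user hash is a stand-in for a real database lookup):

```ruby
#!/usr/bin/env ruby
# Sketch of a dj_auth-style script: liquidsoap's get_process_lines
# captures whatever this prints, so we print "true" or "false".

# Stand-in for a real user database lookup.
USERS = { "dj1" => "secret" }

def authenticate(username, password)
  # fetch with a sentinel so an unknown user never matches a nil password
  USERS.fetch(username, :missing) == password ? "true" : "false"
end

puts authenticate(ARGV[0], ARGV[1])
```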

27 July 2013

If you find yourself checking for nil constantly, a null object of some sort can help.

I wanted to make a null object for dates. Here is the NullDate class I came up with.

class NullDate
  include Comparable

  def strftime(format)
    "No date yet."
  end

  def to_s
    "No date yet."
  end

  def <=>(other_date)
    Time.new(0, 1, 1) <=> other_date
  end

  def to_datetime
    Time.new(0, 1, 1).to_datetime
  end
end

I found myself checking for nil all over the place when comparing dates, so I made my NullDate class implement <=> and just have it compare as Time.new(0, 1, 1), a date earlier than anything I'll realistically encounter. Be sure to implement to_datetime too, or you'll get an error when something tries to coerce the null date.
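As an illustration, here's the kind of calling code this enables (the Show class is a made-up example, and the NullDate is trimmed to just the method used):

```ruby
# Trimmed-down NullDate, as in the class above.
class NullDate
  def strftime(format)
    "No date yet."
  end
end

# Hypothetical model: fall back to a NullDate instead of checking nil.
class Show
  attr_accessor :air_date

  def display_date
    (air_date || NullDate.new).strftime("%Y-%m-%d")
  end
end

show = Show.new
show.display_date # => "No date yet."
```

The caller never has to know whether a date was set; it just calls strftime either way.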

Here's an ascii.io screencast I made that demonstrates use of this class!

23 July 2013

Although I added some features to B.U.T.T. (Broadcast Using This Tool), it's still not the ideal broadcasting tool for all DJs. A web-based client would be ideal. Some of the Liquidsoap developers are working on what I think is an interesting solution using WebSockets and lame.js. They currently have support for this in a branch, and it will probably be merged soon.

Of course, most of the bottleneck is in the browser; lame.js is simply not fast enough. As toots points out in this discussion, this could be sped up with asm.js, which is only available in Firefox. Unfortunately, Firefox doesn't seem to implement the audio APIs we need: AudioContext appears to be supported only in the nightly builds for now. So until either Chromium supports something like asm.js, or Firefox implements the audio APIs, this solution is going to be a bit janky. I wonder if a Chrome app using NaCl would be another possible solution.

13 June 2013

While working on the podcast for datafruits I did lots of repetitive tasks. I was inspired by companies like GitHub that have a passion for internal tools, so I decided to whip up a few tools of my own to make this work a little more efficient and fun.

The tools I created are simple command-line scripts in Ruby. It's quite easy to make something quickly if you use hoe and an option-parsing library like Trollop.

I made a tool for uploading the mp3s to S3 and updating the podcast XML. It uses Nokogiri and the aws gem. This tool might be too specific to my workflow for anyone else to use; I don't know how other people make their podcasts. :) https://github.com/datafruits/podcast_updater

I made another tool for tagging mp3s. I was using eyeD3, but I got kind of annoyed with its syntax for adding album art. I could never remember the exact syntax with all the options, so I made my mp3 tagger use a simpler one: just --pic <path_to_file>. There are other options in eyeD3, for setting the front or back album art, that I never use, so I didn't bother including them in my tagger. https://github.com/datafruits/rupeepeethree
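To illustrate the kind of interface I mean, here's the idea sketched with the stdlib's OptionParser rather than Trollop (rp3's actual implementation may differ):

```ruby
require "optparse"

# Parse a simplified tagging interface: just --pic <path_to_file>.
options = {}
parser = OptionParser.new do |opts|
  opts.banner = "Usage: rp3 [options] <mp3>"
  opts.on("--pic PATH", "attach album art from PATH") do |path|
    options[:pic] = path
  end
end

parser.parse(["--pic", "cover.jpg"])
options[:pic] # => "cover.jpg"
```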

I decided to upload my mp3 tagger rp3 to rubygems.org, since it might be marginally useful to other people. I'm not sure about podcast_updater, however, so I'll hold off on that for now.

It's educational and rewarding to make your own tools, and the benefits you get from speeding up your workflow are worth it.

22 May 2013

I'm running into a bit of trouble with my clever/hacky Icecast DJ authentication scheme. It turns out that most Icecast clients out there don't support changing the source username from the default 'source', which I use in my setup to authenticate source clients against a user database. The client I have been using, ShoutVST, supports this field just fine, but since it's a VST plugin, it's not the best solution for all my users.

After filing bug reports for various clients and not getting any response, I am trying to add support for changing the source username to a client called B.U.T.T. (Broadcast Using This Tool). I'm not a Windows dev, however.

I'm not sure what alternatives exist if I am unable to patch BUTT. It might be nice if Mixlr supported multiple users for a single account; I might consider paying for their pro plan. Apparently you can also authenticate Icecast sources by appending parameters to the stream URL, but this seems less convenient for my users.

If anyone is interested in helping me fix icecast clients or has any ideas for alternative solutions, please let me know in a comment. :)

27 March 2013

My internet radio station datafruits.fm needed metadata in its audio stream, and I wanted to make a cool hack to set the metadata using liquidsoap. This would solve two problems:

  • DJs are unable to set metadata with their clients, so I'll do it for them when they authenticate.
  • the <audio> tag is unable to read the metadata from the stream, so I'll have to get it to the page some other way.

I'll have liquidsoap call a Ruby script when the metadata in the stream changes, using on_metadata. The script will publish the metadata to Redis pub/sub, and then I'll set up a controller to relay it to the browser.

As a bonus I now effectively have an api for getting the current song that I can use in my mobile apps for this radio.

pub_metadata.rb

# adapted from https://github.com/gorsuch/sinatra-streaming-example/blob/master/worker.rb

require 'redis'
require 'uri'

redis_url = ENV["REDISTOGO_URL"] || "redis://localhost:6379"
uri = URI.parse(redis_url)
r = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)

meta = ARGV[0]

puts "setting metadata..."
r.publish "metadata", meta

radio.liq

# the function must be defined before it is referenced below
def pub_metadata(m) =
  log("metadata changed: #{m}")
  title = m["title"]
  result = get_process_lines("./pub_metadata.rb \"#{title}\"")
  log("pub_metadata: #{result}")
end

source = on_metadata(pub_metadata, source)

I wanted to use streaming in Sinatra to update the metadata quickly, instead of polling.

sinatra_app.rb

require 'redis'
require 'sinatra'
require 'uri'

configure do
  redis_url = ENV["REDISTOGO_URL"] || "redis://localhost:6379"
  uri = URI.parse(redis_url)
  set :redis, Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
end

get '/metadata' do
  puts "connection made"
  content_type 'text/event-stream'

  stream do |out|
    settings.redis.subscribe 'metadata' do |on|
      on.message do |channel, message|
        # EventSource expects the SSE wire format: an optional event
        # name, a data line, and a terminating blank line
        out << "event: refresh\ndata: #{message}\n\n"
      end
    end
  end
end

Then we can do something like this in the JavaScript:

var source = new EventSource('/metadata');
source.addEventListener('refresh', function(e){
  console.log("got sse");
  console.log(e.data);
  $('#nowplaying').html(e.data);
});

This was a bit more difficult to manage than I expected. It worked sporadically, and it turns out you need to manage all the connections by hand; you can't just code one connection and expect it to work.

So I decided to go with the certainly-not-as-cool solution of polling. I think it's a perfectly fine solution for this situation, though: polling is simple, fairly cheap, and truly 'live' updating of the metadata is not really necessary here.

So I ended up just using a regular redis key.

pub_metadata.rb

# adapted from https://github.com/gorsuch/sinatra-streaming-example/blob/master/worker.rb

require 'redis'
require 'uri'

redis_url = ENV["REDISTOGO_URL"] || "redis://localhost:6379"
uri = URI.parse(redis_url)
r = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)

meta = ARGV[0]

puts "setting metadata..."
puts r.set "currentsong", meta

We can just set up a normal GET route to read the key.

sinatra_app.rb

  redis_url = ENV["REDISTOGO_URL"] || "redis://localhost:6379"
  uri = URI.parse(redis_url)
  set :redis, Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)

  get '/metadata' do
    settings.redis.get("currentsong").to_s
  end

A simple setInterval will work for polling and updating the HTML.

  setInterval(function(){
    $.get("/metadata",function(data){
      console.log("got data: "+data);
      $('#nowplaying').html(data);
    });
  },5000);

While not as cool, this is a lot easier to work with for the moment. Perhaps I can think of other radio data to store in Redis and expose through an API from my app.