Running Opa Applications on Heroku

As I’ve mentioned before, Opa is a new web framework that introduces not only the framework itself but a whole new language. A lot has changed in Opa since I last posted about it: it now has a JavaScript-like syntax and runs on Node.js, but it still has the excellent type system that makes Opa a joy to code in.

The currently available Heroku buildpack for Opa only supports the old, pre-Node.js version of Opa. So I’ve created an all-new buildpack, and here I will show both a bit of how I created that buildpack and how to use it to run your Opa apps on Heroku.

The first step was creating a tarball of Opa that would work on Heroku. For this I used the build tool vulcan. Vulcan builds software on Heroku itself, which ensures that what it produces will work on Heroku when used by your buildpack.

vulcan build -v -s ./opalang/ -c "mkdir /app/mlstate-opa && yes '' | ./opa-1.0.7.x64.run" -p /app/mlstate-opa

This command tells vulcan to build what is in the opalang directory, using a command that creates the directory /app/mlstate-opa and then runs the install script Opa provides to unpack the system. This is much simpler than building Opa from source, but it is still necessary to use vulcan to create the tarball from the install script’s output so that the paths in the Opa-generated scripts are correct.

After this run, vulcan by default leaves the result at /tmp/opalang.tgz. I upload this to S3 so that the buildpack is able to retrieve it.
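For example, the upload can be a one-liner with s3cmd (the bucket name here is just a placeholder):

s3cmd put --acl-public /tmp/opalang.tgz s3://my-opa-bucket/opalang.tgz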

Since Opa now relies on Node.js, the new buildpack must install both Node.js and the opalang.tgz that was generated. To do this I simply copied from the Node.js buildpack.

If you look at the Opa buildpack you’ll see, as with any buildpack, it consists of three main scripts under ./bin/: compile, detect and release. There are three important parts for understanding how your Opa app must be changed to be supported by the buildpack.

First, the detect script relies on there being an opa.conf file to identify the app as an Opa application. For now this is less important, since we will explicitly tell the heroku command which buildpack to use. Second, the compile script relies on there being a Makefile in your application to drive the build; there is no support at this time for simply running opa against the code in your tree. Third, since Opa relies on Node.js and Node modules from npm, you must provide a package.json file that the compile script uses to install the necessary modules.
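To give a sense of how detection works, a minimal bin/detect script for this layout might look roughly like the following (the exact output string in the real buildpack may differ):

#!/usr/bin/env bash
# bin/detect <build-dir>: succeed (exit 0) only if this looks like an Opa application
if [ -f "$1/opa.conf" ]; then
  echo "Opa"
  exit 0
else
  exit 1
fi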

To demonstrate this I converted Opa’s hello_chat example to work on Heroku, see it on GitHub here.
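Because the compile script drives the build with make, the app needs a Makefile at its root, which the hello_chat example already provides. A minimal, hypothetical version might look like this (the actual Makefile in the example may differ):

# hypothetical Makefile; compiles the Opa source into the executable the Procfile runs
all: hello_chat.exe

hello_chat.exe: hello_chat.opa
	opa hello_chat.opa -o hello_chat.exe

clean:
	rm -f hello_chat.exe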

There are two necessary changes. First, add a Procfile. A Procfile defines the processes required by your application and how to run them. For hello_chat we have:

web: ./hello_chat.exe --http-port $PORT

This tells Heroku that our web process is run from the binary hello_chat.exe. We must pass the $PORT environment variable to the Opa binary so that it binds to the port Heroku expects it to be listening on in order to route our traffic.

Lastly, a package.json file is added so that our buildpack’s compile script installs the necessary Node.js modules:

{
  "name": "hello_chat",
  "version": "0.0.1",
  "dependencies": {
      "mongodb" : "*",
      "formidable" : "*",
      "nodemailer" : "*",
      "simplesmtp" : "*",
      "imap" : "*"
  },
  "engines": {
    "node": "0.8.7",
    "npm": "1.1.x"
  }
}

With these additions to hello_chat we are ready to create an Opa app on Heroku and push the code!

$ heroku create --stack cedar --buildpack https://github.com/tsloughter/heroku-buildpack-opa.git
$ git push heroku master

The output from the push will show Node.js and npm being installed, followed by Opa being unpacked and finally make being run against hello_chat. The web process from the Procfile will then be started, and the output will provide a link to our new application. I have the example running at http://mighty-garden-9304.herokuapp.com

Next time I’ll delve into database and other addon support in Heroku with Opa applications.

Cowboy and Batman.js for Erlang Web Development

Why Cowboy and Batman.js

There are a lot of Erlang web frameworks out there today. Not all are modeled after MVC (see Nitrogen), but I think all of them are addressing the problem the wrong way. I recently gave a presentation, slides here and the code for this example here, describing my preferred method of using Erlang for web development and why I think it is the best way to go. In this post, I’ll go into more detail on how to build the Erlang backend for the TodoMVC clone I did with Batman.js. I will not spend time on Batman.js but instead only give a quick list of reasons I prefer it to other JavaScript frameworks.

Batman.js advantages:

  • Automatic URL generation based on model
  • HTML data-bind templates
  • Coffeescript

Cowboy is a newer Erlang web server that provides a REST handler based on Webmachine. Both are a great fit for developing a RESTful API because they follow the HTTP standard closely, and when you are building an API on top of HTTP, being able to reason properly about how the logic of the application maps to the protocol eases development and makes it easier to get REST “right”.

Nginx

Any non-dynamic content should be served by Nginx: no application logic is needed, and serving static files is something Nginx is great at, so why have Erlang do it? The snippet below configures Nginx to listen on port 80 and serve files from bcmvc’s priv directory. Each request is checked to see whether it is a POST or any other method with a JSON Accept header. If either is true, the request is proxied on to a server listening on port 8080, in our case the Cowboy server.

server {
  listen 80;
  server_name localhost;

  location / {
    root   <PATH TO CLONE>/bcmvc/lib/bcmvc_web/priv/;

    if ($request_method ~* POST) {
      proxy_pass        http://localhost:8080;
    }

    if ($http_accept ~* application/json) {
      proxy_pass        http://localhost:8080;
    }
  }
}

The API

Batman.js knows what endpoints to use and what data to send based on the name of the model we created and the encoded variables, code here. This results in the following API (a curl example follows below):

 
POST    /todos
        Data: {todo : {body: "bane wants to meet, not worried", isDone: false}}

PUT     /todos/33e93b30-2371-4071-afc5-2d48226d5dba
        Data: {todo : {body: "bane wants to meet, not worried", isDone: false}}

GET     /todos
        Returns: [{todo : {id: "33e93b30-2371-4071-afc5-2d48226d5dba", body: "bane wants to meet, not worried", isDone: false}}]

DELETE  /todos/33e93b30-2371-4071-afc5-2d48226d5dba
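As a quick sanity check, you could exercise the create endpoint with curl through the Nginx front end described above (adjust the host as needed):

curl -X POST \
     -H "Content-Type: application/json" -H "Accept: application/json" \
     -d '{"todo":{"body":"bane wants to meet, not worried","isDone":false}}' \
     http://localhost/todos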

Cowboy Dispatch and Supervisor

Cowboy matches requests against dispatch rules to know which handler to send each request to. Here we have two rules: one matches just the URL /todos, and one matches the URL with an additional element, which will be bound to the atom todo. Requests matching either rule are sent to the module bcmvc_todo_handler.

Dispatch = [{'_', [{[<<"todos">>], bcmvc_todo_handler, []},
                   {[<<"todos">>, todo], bcmvc_todo_handler, []}]}],

Cowboy provides a useful function, child_spec, for creating a child specification to use in our supervisor. The child spec here tells Cowboy we want a TCP listener on port 8080 that speaks the HTTP protocol. We additionally provide our dispatch list for it to match requests against and pass them on to our handler.

ChildSpec = cowboy:child_spec(bcmvc_cowboy, 100, cowboy_tcp_transport, 
                              [{port, 8080}], cowboy_http_protocol, [{dispatch, Dispatch}]),
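For context, here is a minimal sketch of a supervisor that starts this listener. The module name and restart strategy are illustrative, not necessarily what bcmvc uses:

-module(bcmvc_web_sup).
-behaviour(supervisor).

-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% same dispatch rules and child spec as shown above
    Dispatch = [{'_', [{[<<"todos">>], bcmvc_todo_handler, []},
                       {[<<"todos">>, todo], bcmvc_todo_handler, []}]}],
    ChildSpec = cowboy:child_spec(bcmvc_cowboy, 100, cowboy_tcp_transport,
                                  [{port, 8080}], cowboy_http_protocol,
                                  [{dispatch, Dispatch}]),
    {ok, {{one_for_one, 10, 10}, [ChildSpec]}}.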

Cowboy Handler

Now that we have a server on port 8080 that knows to send certain requests to our todo handler, we can build the handler module. The first required export is init/3. This function lets Cowboy know we have a REST handler, which is how Cowboy knows which callback functions to call to handle the request (some have defaults, some are implemented in our module).

init(_Transport, _Req, _Opts) ->
    {upgrade, protocol, cowboy_http_rest}.

Knowing that this is a REST handler, Cowboy passes the request on to allowed_methods/2 to find out whether our handler can handle this method. Next, the content types accepted and provided by the handler are checked against the incoming request. If any of these checks fail, the corresponding HTTP status code is returned: 405 Method Not Allowed for allowed_methods, 415 Unsupported Media Type for content_types_accepted and 406 Not Acceptable for content_types_provided.

allowed_methods(Req, State) ->
    {['HEAD', 'GET', 'PUT', 'POST', 'DELETE'], Req, State}.

content_types_accepted(Req, State) ->
    {[{{<<"application">>, <<"json">>, []}, put_json}], Req, State}.

content_types_provided(Req, State) ->
    {[{{<<"application">>, <<"json">>, []}, get_json}], Req, State}.

Now the request is sent to the function that handles the HTTP method type of the request.

For a POST, a request to create a new todo item, the request is sent to the function process_post/2. Here we retrieve the body, a JSON object, from the request, convert it to a record and save the model. We’ll see how this record conversion is done when we look at the model module. To inform the frontend of the id of our new resource, we set the Location response header to the path containing the new id.

process_post(Req, State) ->
    {ok, Body, Req1} = cowboy_http_req:body(Req),
    Todo = bcmvc_model_todo:to_record(Body),
    bcmvc_model_todo:save(Todo),

    NewId = bcmvc_model_todo:get(id, Todo),
    {ok, Req2} = cowboy_http_req:set_resp_header(
                   <<"Location">>, <<"/todos/", NewId/binary>>, Req1),

    {true, Req2, State}.

For this handler we expect a PUT for an update to an object, since that is what Batman.js sends, though a PATCH would make more sense. For a PUT, the URL contains the id of the todo item to be updated, which is retrieved with the binding/2 function. The todo record is created the same way as in process_post/2, but then this id is set on the model and the update/1 function is used to save it to the database.

put_json(Req, State) ->
    {ok, Body, Req1} = cowboy_http_req:body(Req),
    {TodoId, Req2} = cowboy_http_req:binding(todo, Req1),
    Todo = bcmvc_model_todo:to_record(Body),
    Todo2 = bcmvc_model_todo:set([{id, TodoId}], Todo),
    bcmvc_model_todo:update(Todo2),    
    {true, Req2, State}.

For a GET request (this application does not deal with requests for a single todo item), all todo items are retrieved from the model module. Each is passed to the model’s to_json/1 function, and the results are joined into a binary string and wrapped in brackets so that the Batman.js frontend receives a proper JSON list of JSON objects.

get_json(Req, State) ->
    JsonModels = lists:foldr(fun(X, <<"">>) ->
                                 X;
                            (X, Acc) ->
                                 <<Acc/binary, ",", X/binary>>
                         end, <<"">>, [bcmvc_model_todo:to_json(Model) || Model <- bcmvc_model_todo:all()]),

    {<<"[", JsonModels/binary, "]">>, Req, State}.

And lastly, DELETE. As with PUT, the todo item’s id is retrieved from the binding created by the dispatch rules and passed to the model’s delete/1 function.

delete_resource(Req, State) ->
    {TodoId, Req1} = cowboy_http_req:binding(todo, Req),
    bcmvc_model_todo:delete(TodoId),
    {true, Req1, State}.

Models

Models are represented as records and must provide serialization functions to convert between JSON and a record. Each model uses a parse transform that creates functions for creating and updating the record. The transform is a modified version of Ulf Wiger’s exprecs that also uses the type definitions in the record to ensure that a field being set is of the correct type. For example, in the todo model isDone is a boolean, so when the model is created the boolean convert clause will match and convert the string representation to an atom:

convert(boolean, <<"false">>) ->
    false;
convert(boolean, <<"true">>) ->
    true;

So the key pieces of the bcmvc_model_todo are:

-compile({parse_transform, bcmvc_model_transform}).

-record(bcmvc_model_todo, {id = ossp_uuid:make(v1, text) :: string(),
                           body                          :: binary(),
                           isDone                        :: boolean()}).

to_json(Record) ->
    ?record_to_json(?MODULE, Record).

to_record(JSON) ->
    ?json_to_record(?MODULE, JSON).

The ?record_to_json and ?json_to_record macros are defined in jsonerl.hrl. These macros are generic and work for any record that is typed and uses the model transform.

Conclusion

Clearly, much of what the resource handler and model do is generic and can be abstracted out so that implementing new models and resources is even simpler. This is the goal of my project Maru. It is currently based on Webmachine but is being converted to Cowboy.

In the end, using Cowboy to build a RESTful interface for your application allows you to build frontends entirely separately from backend development, and if you want multiple interfaces (like native mobile and web), they can both talk directly to the same backend. From the beginning you also have the option of opening up your application with an API so other developers can take it new places, and, shameless plug here, you can add your API to Mashape to spread your new app!