Deploy Erlang Target System to Heroku

In this post we will use a few new tools: a rebar fork with binary package support, relx, and the hk client with its new slug endpoint.

First, clone minasan and create the Heroku application on Cedar-14:

$ git clone https://github.com/tsloughter/minasan.git
$ cd minasan
$ heroku create --stack cedar-14

Now that Heroku has the cedar-14 stack, if you are also running a recent Linux distro you can upload the target system created by relx directly to your app. Before now, we would have had to build it on Heroku or on a system with an older glibc for it to work on Heroku’s Ubuntu 10.04.

Since minasan is using binary packages and a fork of rebar, be sure to use the rebar included in the repo; the same goes for relx, so that including the Procfile in the tarball works. The first step is to update the package index for rebar, then compile and build the release tarball (with erts included and dev-mode off so the Erlang runtime is bundled):

$ ./rebar update
$ ./rebar compile
$ ./relx -i true --dev-mode false release tar

Using the new Slug API endpoint through hk slug, the tarball can be pushed directly as a slug to your app, and then you scale up the web process to at least 1:

$ hk slug _rel/minasan/minasan-0.0.1.tar.gz
$ hk scale web=1
$ hk open

Your browser should now open to your new app.

A few things to note:

  • ‘./rebar pkgs’ will show you a list of available packages to use in rebar deps
  • Currently ‘hk slug’ only supports sending a tarball that does not yet have the structure of a slug, so it is unpacked and repacked. I plan to support directories and properly formatted tarballs.

Designing for Actor Based Systems

Many people are intrigued and excited by Erlang style concurrency. Once they have the capability in their hands, though, they realize that they don’t know how to take advantage of what processes or actors provide. To do this we need to understand how to decompose systems with process based concurrency in mind. Keep in mind that this material works equally well for actors in Scala or agents in F#; the differences between actors and processes don’t much matter for the sake of this discussion. Before we dive into process based design it will be helpful to look at a more familiar approach so we can contrast the two.

If you come from an OO background your natural instinct is to design much like you do when decomposing a problem for OO programming. After all, processes are much like objects in that they send messages to one another and they hold state. It goes something like this:

  1. Determine your use cases
  2. Create a narrative of what it is you are trying to design
  3. Run through the narrative and pull out the nouns as potential classes
  4. Do the same for the verbs acting on the nouns as potential methods on the classes
  5. Clean all this up, consolidating any duplication

For example, let’s say you were trying to build software to run a vending machine. The use cases might be paying for and getting a soda. Another might be paying too little and getting change back. So one of the narratives might be:

As a customer I put sufficient coins into the vending machine and then press the selection button for coke and then press the button to vend and the robotic arm fetches a coke and dumps it into the pickup tray. The coke is nice and cold because the cooling system keeps the air in the vending machine at 50 degrees.

Now we think about all the unique nouns in our narrative, which are: customer, coins, vending machine, selection button, coke, vend button, robotic arm and cooling system. We generally turn them into objects. Next we consider the verbs that act on those nouns and consider them for methods.

selection button : push

vend button : push

robotic arm : pickup (coke)

etcetera… After this you apply some lovely object oriented design principles and voila – you have a system that nicely models your narrative but does not take advantage of more than a single core on your system and is positively undistributable.

Oh come on, you say, don’t be daft, substitute the word object for actor and you are good to go. Well, as it turns out, not quite. Do we really need a coin process? How about a vend button process, and let’s not forget the coke process? Darn it, this makes no sense! Let’s back up and see what we can do about this.

Designing for Process Based Concurrency

The first thing you must do before we move on is to say this three times:

“Processes are not threads. Processes are really cheap. Ohmmmm”

“Processes are not threads. Processes are really cheap. Ohmmmm”

“Processes are not threads. Processes are really cheap. Ohmmmm”

This trips up folks new to process based systems. They want to be stingy with processes, worrying that they will take a long time to create, have massive context switching times, pollute L1 cache, etc… Remember that in almost all such systems, certainly for Erlang, Scala and F#, processes/actors/agents are green threads. They are run by schedulers built into the VM itself; you never have to swap out an OS thread to switch from running one process to another. With Erlang based systems you usually configure one Erlang scheduler per core on the system, and those scheduler threads stay relatively constant while the cheap processes are multiplexed across them.
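If you want to convince yourself just how cheap they are, try something like this in an Erlang shell (a throwaway sketch; the process body and the count are arbitrary):

%% Spawn 100,000 idle processes and time it in microseconds.
{Micros, _Pids} =
    timer:tc(fun() ->
                 [spawn(fun() -> receive stop -> ok end end)
                  || _ <- lists:seq(1, 100000)]
             end),
io:format("Spawned 100000 processes in ~p ms~n", [Micros div 1000]).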

With that in mind we can address the sticking points many people new to process based systems run into: not taking advantage of all the concurrency in the system, or resorting to complex “process pooling”.

One process for each truly concurrent activity in the system.

That is the rule. Going back to our vending machine, what do we have that is really concurrent in that system? Coins? Not really. Slots? Not really. Buttons? Not really. Those are not activities; they are things. What are the truly concurrent activities, the activities that do not have to happen in synchronous lock step?

  • Putting coins into the slot
  • Handling coins
  • Handling selections
  • Fetching the coke and putting it into the pickup tray
  • Cooling the soda

We can use a process for each of these activities. You can name them for the nouns that perform the activities if you want – but remember, we are not making them processes because they are nouns. Notice how granular we went here; we did not just create a process for the customer and the vending machine. We created one for each of the truly concurrent activities in our narrative – in this way we leverage more of the concurrency available to us. Now we know what processes we need; the next step is to organize them.
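Before we organize them, here is a rough sketch of what one process per activity can look like in Erlang. This is my own illustration, not code from the post; the module name and message shapes are invented:

-module(vending).
-export([start/0]).

%% Hypothetical sketch: one process per truly concurrent activity.
start() ->
    Cooler   = spawn(fun cool_loop/0),                     % cooling the soda
    Arm      = spawn(fun arm_loop/0),                      % fetching the coke, dumping it in the tray
    Selector = spawn(fun() -> select_loop(Arm) end),       % handling selections
    Coins    = spawn(fun() -> coin_loop(Selector, 0) end), % handling coins
    {Cooler, Arm, Selector, Coins}.

cool_loop() ->
    receive after 5000 -> cool_loop() end.                 % keep the cabinet at 50 degrees

arm_loop() ->
    receive {vend, Item} ->
            io:format("dumping ~p into the pickup tray~n", [Item]),
            arm_loop()
    end.

select_loop(Arm) ->
    receive {select, Item} -> Arm ! {vend, Item}, select_loop(Arm) end.

coin_loop(Selector, Total) ->
    receive {coin, Value} -> coin_loop(Selector, Total + Value) end.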

Organizing Processes

The various languages that use process based concurrency have differing levels of sophistication here. I am going to draw on the concepts from the Erlang language which have been used in the Akka system for Scala and which I have rolled successfully myself in F#.

Again, forget all of your OO modeling techniques. Processes are not objects; they are fundamental units of concurrency. Forget all your thread modeling techniques too – it’s not even close. Share nothing, copy everything changes the game. To get started, think about which of your processes have to cooperate with one another. In this case, what do we have?

  • Putting coins in a slot, cooperates with
  • Handling coins, cooperates with
  • Handling selections, cooperates with
  • Fetching the coke and putting it into the pickup tray

and nothing cooperates with cooling the soda – it happens whether or not other processes are there to support it. Putting coins in the slot, however, makes no sense if there is no way to handle them, and handling them makes no sense if you can’t make a selection, and making a selection… well, you get the drift.

To model this we are going to use a tree of “supervisors”. Supervisors create and watch over processes. Because of the copy-everything, share-nothing properties of actors, one can’t corrupt another. So a supervisor can watch over an actor and restart it when it blows up in the presence of some error. This means we get some incredible fault tolerance. But, that aside, let’s talk about how to model these dependencies. We do so in a tree. First, we set up a supervisor at the top of the tree which models no dependencies between any of the processes it starts. In this layer we add the cooling system, and then we add another supervisor which will start the group of dependent processes in the order in which they depend on one another. This supervisor will restart processes that die according to their dependencies: if a dependency dies, the supervisor will kill and restart the dependent processes so that everything down the chain starts from a known base state.
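In Erlang/OTP this dependent group maps naturally onto a rest_for_one supervisor nested under a one_for_one parent. Here is a hedged sketch of that shape; the module and child names are invented for illustration, and the worker modules (cooling, coin_slot, and so on) are assumed to exist:

-module(vending_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, top).

%% Top layer: no dependencies between children, so one_for_one.
init(top) ->
    {ok, {{one_for_one, 5, 10},
          [{cooling, {cooling, start_link, []},
            permanent, 5000, worker, [cooling]},
           {vend_sup, {supervisor, start_link, [?MODULE, vend]},
            permanent, infinity, supervisor, [?MODULE]}]}};
%% Dependent group: rest_for_one restarts the child that died plus
%% everything started after it, restoring a known base state.
init(vend) ->
    {ok, {{rest_for_one, 5, 10},
          [{coin_slot,    {coin_slot,    start_link, []}, permanent, 5000, worker, [coin_slot]},
           {coin_handler, {coin_handler, start_link, []}, permanent, 5000, worker, [coin_handler]},
           {selection,    {selection,    start_link, []}, permanent, 5000, worker, [selection]},
           {robot_arm,    {robot_arm,    start_link, []}, permanent, 5000, worker, [robot_arm]}]}}.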


[Figure: proctree – the supervision hierarchy just described]

Now, with things decomposed into processes, dependencies fleshed out and placed into a supervision hierarchy, you are basically ready to go. Is there more to designing for actor based concurrency? Yes, of course there is, but here you have the fundamentals. Now it’s time to go play with it and generate questions. Feel free to ask them here or on twitter at @martinjlogan. I may do a second installment on some more advanced topics based on feedback.

If you want to learn more come to Erlang Camp Oct 10 and 11, 2014 in Austin!

How to use Vim for Erlang Development


This post sponsored by ErlangCamp 2013 in Nashville which was epic!

You are about to learn to use Vim as your editor for Erlang development. You will learn how to install and use a variety of really powerful Vim plugins to make Erlang dev with Vim smooth and satisfying!

I have been developing Erlang now for about 13 years, many of them full time, and I even wrote a book on Erlang: Erlang & OTP in Action. I have loved every minute of it, but there was always one thing that made me sad, and probably makes you sad too – Emacs. Emacs is the de-facto editor for Erlang, and the emacs mode included with the Erlang distro is quite wonderful. The fact still remains: Emacs, we do not like it. ctrl ~, ctrl x ctrl f, etc… Nope!

Setting up Vim for Erlang

Let’s get started setting up Vim for Erlang development. The first thing we need to do is set up pathogen so that installing subsequent packages is really simple. Start by creating the directory $HOME/.vim/autoload. Download pathogen.vim from here and place it in this directory. Now add the following two commands to your $HOME/.vimrc file.


call pathogen#infect()
call pathogen#helptags()

At this point pathogen will install and generate help documentation for any plugin you place into the $HOME/.vim/bundle directory – which you should of course create.

With this created, we are ready to start installing plugins to make your life easier. Try these on for size by cloning their git repos directly into the $HOME/.vim/bundle directory. They will simply work the next time you start vim.

  • vimerl.vim – indenting, autocomplete and more for Erlang
  • ctrlp.vim – press ctrl p to open a powerful fuzzy file finder; makes navigating file trees a thing of the past
  • NERDTree – a powerful file tree navigator right in vim (I don’t use it much since I installed ctrlp, though)
  • NERDTree Tabs – adds the NERDTree file finder to all tabs you have open in vim

Before we get into basics on how to use all these plugins to create Erlang magic I want to show you two bonus tricks I really love. First, get a better color scheme. To do this create the directory $HOME/.vim/colors and find yourself a slick color scheme to drop into it. I recommend vividchalk.vim by TPope.
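Then, assuming you dropped vividchalk.vim into that colors directory, enable it in your $HOME/.vimrc:

" load the scheme from ~/.vim/colors
colorscheme vividchalk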

Pro Tip
For Dropbox or other file sync users: keep all your vim installs in sync easily like so. Take your .vim directory and your .vimrc and move them into your Dropbox directory. Then run:


ln -s ~/Dropbox/.vim ~/.vim
ln -s ~/Dropbox/.vimrc ~/.vimrc

Now all your machines’ vim installs will run just the same. If you have compatibility problems on any one of them, just skip this for that machine.

Ok, so now on to how to use these plugins for Erlang/Vim greatness.

How to Use our Vim Plugins for Erlang Dev

I am going to use the source for Erlware Commons as an example. So I clone it first, change into the erlware_commons directory and run vim. Now let’s say I know what file I want to update, specifically the “ec_date.erl” file. The first thing I do is press <ctrl> p and then start typing ec_date.erl.

                                                                                                                                                          
~                                                                               
[No Name] [TYPE= unix] [0/1 (100%)]                                             
> test/ec_dictionary_proper.erl
> src/ec_dictionary.erl
> src/ec_date.erl                                                               
 prt  path  ={ files }=  >> ec_da

You can see that as I start typing and get to “ec_da”, ctrlp has already displayed a narrowed down list of matching files in the directory tree under where I have opened vim. The file on the bottom, ec_date.erl, is the one selected, so just pressing enter here will open it. If I wanted to select “test/ec_dictionary_proper.erl” then I could simply press the up arrow and select it, or keep typing until it was the only selection.

Now, what if I don’t know what file I want to select? This is where NERDTree comes into play. Run :NERDTree and you will pop open the file browser. Like this:

  Press ? for help             |
                               |~                                               
.. (up a dir)                  |~                                               
<lang-projects/erlware_commons/|~                                               
▸ doc/                         |~                                               
▸ priv/                        |~                                               
▸ src/                         |~                                               
▸ test/                        |~                                               
  CONTRIBUTING.md              |~                                               
  COPYING                      |~                                               
  Makefile                     |~                                               
  README.md                    |~                                               
  rebar.config                 |~                                               
  rebar.config.script          |~                                               
~                              |~                                               
~                              |~                                               
~                              |~                                                                                                                                       

Here we can see the directory tree for Erlware Commons. Each of the directories can be easily selected and expanded. Individual files can be selected and opened. There are a variety of ways to open a file. Below are the most common:

  • <enter> will open the file in the right pane
  • T will open in a new tab within vim and keep focus in NERDTree
  • t will open in a new tab and bring focus to the new tab

If you want to see the NERDTree browser in all your tabs use :NERDTreeTabsToggle to toggle it on and off. It will be the exact same NERDTree in the exact same state and cursor position on all tabs – nice! Once you are focused on the code in a given tab and you want to jump back into the NERDTree pane on the left, use <ctrl> ww.

Once you have a load of tabs open you need to switch between them, and to do this you need only two commands:

  • gt will go to the next tab
  • gT will go to the previous tab

Pro Tip
Map the tab commands and the NERDTreeTabsToggle command by adding the following to your vimrc.


map <C-t> :tabn<Enter>
map <C-n> :tabnew<Enter>
map nt :NERDTreeTabsToggle<Enter>

Ok, now on to editing Erlang with vimerl.

Editing with vimerl

This is not going to be an exhaustive list of vimerl editing commands but just a few of the goodies. The 20% you will use 80% of the time.

Auto-indenting

vimerl will auto-indent for you as you type. But if you come across a line that you want to re-indent, try typing ==. Let’s say you want to indent a block of code. Simple: mark the line that starts the block with ma, then go to the end of the block and tell vimerl to indent to the mark as such: ='a. Now if your whole file is a mess, try gg to go to the beginning of your file and then =G to indent all the way to the end. You can do this all in one step as gg=G.

Code Completion

ctrl-x ctrl-o after typing a module name and a : will cause vimerl to suggest function names for you. It does this by searching the .beam and .erl files in the Erlang code path (run code:get_path() in a shell to see what they are) as well as looking at your rebar deps_dir if you are using rebar.config as part of your project.

Skeletons

This is the feature that I loved most about the emacs mode for Erlang, well this and the auto indenting (most of the time, the fun() indenting still feels like a kick in the teeth). Here is a list of the most useful skeletons and the commands to generate them from within vimerl.

  • :ErlangApplication generates the skeleton for an OTP application behaviour.
  • :ErlangSupervisor generates the skeleton for an OTP supervisor behaviour.
  • :ErlangGen[Server|Fsm|Event] generates skeletons for gen_server, gen_fsm and gen_event – yay!

Brilliant, isn’t it? Before I let you go, there is one more invaluable command you should know about: :help vimerl, which will give you a list of all the other useful commands you may want to use. Remember, to get it working be sure you added call pathogen#helptags() to the top of your .vimrc file. Goodbye Emacs, welcome back old friend Vim.

Follow me on twitter @martinjlogan

<esc>:wq

Universal Makefile for Erlang Projects That Use Rebar

This post sponsored by ErlangCamp 2013 in Nashville and Amsterdam

At this point in the game nearly every Erlang project uses Rebar. The problem with that is that Rebar’s approach to the command line and command dependency chaining leaves a lot to be desired. You tend to end up typing the same command with the same options list over and over again during the course of your work. Because of the poor dependency chaining you often must retype the same sequence of commands as well. Finally, there are certain things (like Dialyzer support) that Rebar does not provide.

In our Erlware projects, we want a consistent and recognizable entry point into the build process. For that reason we tend to treat Rebar as a low level tool and drive it and the other build tools I mentioned with a Makefile. That makes it far easier for us, as developers, to chain rules as needed and create additional rules that add features to the build system. This allows us to integrate other tools seamlessly into the build experience. At Erlware, we have developed a pretty standard Makefile that can be used with little or no changes from project to project. You can find the whole of that Makefile here. However, I will work my way through a few parts of it explaining so you understand what is going on and can make changes relevant to your project.

The main targets this Makefile supports are as follows:

  • deps: Pull the project dependencies (called automatically as needed)
  • update-deps: Update the dependencies (never called automatically)
  • compile: Compiles the project
  • doc: Builds the edoc documentation
  • test: Compiles the code and runs the tests (designed to be called by a human)
  • dialyzer: Build the dependency PLT and run dialyzer on the project
  • typer: Run Typer on the project
  • shell: Bring up an Erlang shell with all the dependencies already loaded and unit tests compiled and available.
  • pdf: Turn your README.md into a pdf using pandoc (pretty useful at times, but completely optional)
  • clean: Delete the build output files
  • distclean: Remove the build output files as well as the project PLT file and all the dependencies
  • rebuild: Do a dist clean, rebuild everything from scratch and run both the tests and dialyzer

Now that we have an idea of the targets available, let’s work through the major points of the Makefile.

Defining Variables

ERLFLAGS= -pa $(CURDIR)/.eunit -pa $(CURDIR)/ebin -pa $(CURDIR)/deps/*/ebin

DEPS_PLT=$(CURDIR)/.deps_plt
DEPS=erts kernel stdlib

At the top of the Makefile a few variables are set. For the most part you don’t ever have to touch any of these, with the exception of DEPS. The DEPS variable provides a list of dependent applications that are used by Dialyzer to build the dependency PLT file. The others are ERLFLAGS, which is used by the shell command to correctly make your code available in the shell, and DEPS_PLT, which points to the location where the project PLT file will be located.
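For example, if your project also uses crypto and ssl, you would widen DEPS so Dialyzer adds them to the PLT. This is a hypothetical edit; the rest of the Makefile stays untouched:

DEPS=erts kernel stdlib crypto public_key ssl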

PLT Files and Dialyzer

$(DEPS_PLT):
	@echo Building local plt at $(DEPS_PLT)
	@echo
	dialyzer --output_plt $(DEPS_PLT) --build_plt \
	   --apps $(DEPS) -r deps

dialyzer: $(DEPS_PLT)
	dialyzer --fullpath --plt $(DEPS_PLT) -Wrace_conditions -r ./ebin

This is how the Dialyzer command is run. The main things to notice here are that a PLT file specific to the project is built using the dependencies that you described at the top of the file in the DEPS variable. Building a per project PLT solves a raft of potential problems but has the downside that the first run of Dialyzer or the first run after a rebuild can take several minutes as it analyzes all of the dependencies to build the PLT file.

Rebuilding

Rebuilding is basically a completely clean rebuild and test of the system. You should run this target before you submit a PR or share code with your peers. It basically tries to ensure that you have not forgotten or left off anything that is needed.
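As a sketch, the rule can be little more than a chain of the targets listed above (the linked Makefile is the authoritative version):

# wipe everything, then rebuild and verify from scratch
rebuild: distclean deps compile dialyzer test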

Conclusion

You can, quite literally, drop this makefile into your project and use it today with only some very minor modification to the DEPS variable. If you are not already using something like this in your project I encourage you to add this Makefile now. It will save you a lot of tedious typing and make your build process much clearer to your users.

Alternatives

There are a few alternatives to this approach out there. They are quite good, if somewhat more complex.

Running Opa Applications on Heroku

TL;DR

As I’ve mentioned before, Opa is a new web framework that introduces not only the framework itself but a whole new language. A lot has changed in Opa since I last posted about it. Now Opa has a Javascript-esque look and runs on Node.js. But it still has the amazing typing system that makes Opa a joy to code in.

The currently available Heroku buildpack for Opa only supported the old, pre-Node.js version of Opa. So I’ve created an all new buildpack, and here I will show both a bit of how I created that buildpack and how to use it to run your Opa apps on Heroku.

The first step was creating a tarball of Opa that would work on Heroku. For this I used the build tool vulcan. Vulcan builds software on Heroku itself, which helps assure that what is built will work on Heroku with your buildpack.

vulcan build -v -s ./opalang/ -c "mkdir /app/mlstate-opa && yes '' | ./opa-1.0.7.x64.run" -p /app/mlstate-opa

This command tells vulcan to build what is in the directory opalang, with a command that creates the directory /app/mlstate-opa and then runs the Opa provided install script to unpack the system. This is much simpler than building Opa from source, but it is still necessary to use vulcan to create the tarball from the output of the install script, to ensure paths are correct in the Opa generated scripts.

After this run, by vulcan’s defaults, we will have /tmp/opalang.tgz. I upload this to S3 so that our buildpack is able to retrieve it.

Since Opa now relies on Node.js, the new buildpack must install both Node.js and the opalang.tgz that was generated. To do this I simply copied from the Node.js buildpack.

If you look at the Opa buildpack you’ll see, as with any buildpack, it consists of three main scripts under ./bin/: compile, detect and release. There are three important parts for understanding how your Opa app must be changed to be supported by the buildpack.

First, the detect script relies on there being an opa.conf to detect that this is an Opa application. This is less important for now, since we will be specifying the buildpack explicitly to the heroku script. Second, in the compile script we rely on there being a Makefile in your application for building. There is no support for simply running opa to compile the code in your tree at this time. Third, since Opa relies on Node.js and Node modules from npm, you must provide a package.json file that the compile script uses to install the necessary modules.

To demonstrate this I converted Opa’s hello_chat example to work on Heroku; see it on Github here.

There are two necessary changes. One, add the Procfile. A Procfile defines the processes required for your application and how to run them. For hello_chat we have:

web: ./hello_chat.exe --http-port $PORT

This tells Heroku that our web process is run from the binary hello_chat.exe. We must pass the $PORT variable to the Opa binary so that it binds to the port Heroku expects it to be listening on to route our traffic.

Lastly, a package.json file is added so that our buildpack’s compile script installs the necessary Node.js modules:

{
  "name": "hello_chat",
  "version": "0.0.1",
  "dependencies": {
      "mongodb" : "*",
      "formidable" : "*",
      "nodemailer" : "*",
      "simplesmtp" : "*",
      "imap" : "*"
  },
  "engines": {
    "node": "0.8.7",
    "npm": "1.1.x"
  }
}

With these additions to hello_chat we are ready to create an Opa app on Heroku and push the code!

$ heroku create --stack cedar --buildpack https://github.com/tsloughter/heroku-buildpack-opa.git
$ git push heroku master

The output from the push will show Node.js and npm being installed, followed by Opa being unpacked, and finally make being run against hello_chat. The web process in the Procfile will then be run and the output will provide a link to go to our new application. I have the example running at http://mighty-garden-9304.herokuapp.com

Next time I’ll delve into database and other addon support in Heroku with Opa applications.

Projmake-mode: Flymake Replacement

There is a great new Emacs plugin from Eric Merritt that, like Flymake, builds your code and highlights any errors or warnings within Emacs, but unlike Flymake builds across the whole project. You can clone the mode from its repo, projmake-mode.

After cloning the repo to your desired location, add this bit to your dot emacs file, replacing <PATH> with the path to where you cloned the repo:
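;; A sketch of that snippet (an assumption on my part: adjust the
;; load-path entry if the elisp lives in a subdirectory of the clone)
(add-to-list 'load-path "<PATH>/projmake-mode")
(require 'projmake-mode)
;; start projmake-mode whenever erlang-mode loads
(add-hook 'erlang-mode-hook 'projmake-mode)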

This Emacs code also uses add-hook to start projmake-mode when erlang-mode is loaded. Projmake by default knows how to handle rebar and Make based builds, so there is no setup after this, assuming your project is built one of those ways.

Here is my Makefile for building Erlang projects with rebar; replace PROJECT with the name of your project:
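# A minimal stand-in sketch (hypothetical, not the original file;
# recipe lines must be tab-indented):
PROJECT=my_project

.PHONY: all compile clean
all: compile

compile:
	./rebar compile

clean:
	./rebar clean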

Now you can load Emacs and open a file from your project; if it is an Erlang file, the add-hook call in our dot emacs file will automatically start projmake-mode. You can add hooks for other modes or simply run M-x projmake-mode.

For more documentation and how to extend to other types of projects check out the documentation.

Getting Flymake and Rebar to Play Nice

TL;DR
Copy and paste the following into your elisp erlang-mode configuration to get flymake working with Rebar projects.

    (defun ebm-find-rebar-top-recr (dirname)
      (let* ((project-dir (locate-dominating-file dirname "rebar.config")))
        (if project-dir
            (let* ((parent-dir (file-name-directory (directory-file-name project-dir)))
                   (top-project-dir (if (and parent-dir (not (string= parent-dir "/")))
                                        (ebm-find-rebar-top-recr parent-dir)
                                      nil)))
              (if top-project-dir
                  top-project-dir
                project-dir))
          project-dir)))

    (defun ebm-find-rebar-top ()
      (interactive)
      (let* ((dirname (file-name-directory (buffer-file-name)))
             (project-dir (ebm-find-rebar-top-recr dirname)))
        (if project-dir
            project-dir
          (erlang-flymake-get-app-dir))))

    (defun ebm-directory-dirs (dir name)
      "Find all directories named NAME under DIR."
      (unless (file-directory-p dir)
        (error "Not a directory `%s'" dir))
      (let ((dir (directory-file-name dir))
            (dirs '())
            (files (directory-files dir nil nil t)))
        (dolist (file files)
          (unless (member file '("." ".."))
            (let ((absolute-path (expand-file-name (concat dir "/" file))))
              (when (file-directory-p absolute-path)
                (if (string= file name)
                    (setq dirs (append (cons absolute-path
                                             (ebm-directory-dirs absolute-path name))
                                       dirs))
                  (setq dirs (append
                              (ebm-directory-dirs absolute-path name)
                              dirs)))))))
        dirs))

    (defun ebm-get-deps-code-path-dirs ()
        (ebm-directory-dirs (ebm-find-rebar-top) "ebin"))

    (defun ebm-get-deps-include-dirs ()
       (ebm-directory-dirs (ebm-find-rebar-top) "include"))

    (fset 'erlang-flymake-get-code-path-dirs 'ebm-get-deps-code-path-dirs)
    (fset 'erlang-flymake-get-include-dirs-function 'ebm-get-deps-include-dirs)

Intro

It’s probably no great surprise to anyone that I dislike Rebar a lot. That said, there are times when I have no choice but to use it. This is always either because a company I am contracting for uses it, or an open source project I am contributing to uses it. When I am forced to use it there are a few things I don’t want to give up. Most important among these is Flymake for Erlang. The default setup for Flymake doesn’t work for Rebar projects because Flymake does not know where the code and include paths for dependencies are. Fortunately, we can fix this with a few lines of elisp.

Flymake For Erlang

First make sure you have Flymake for Erlang installed. It is easiest just to follow the directions available on the Erlang Website.

The Elisp Additions for Erlang Flymake

There are two defvars that point to functions used to search for the correct code paths and include paths respectively. We are going to replace those functions with our own. Both of our functions search upwards from the directory that contains the file pointed to by the current buffer, looking for the topmost ‘rebar.config’ in the directory path. Each then uses that as a base and searches down the directory structure looking for ‘ebin’ or ‘include’ directories.

There are two things to note here. The first is that you must have already run `get-deps` for rebar for this to work. The second is that if your project is truly huge, or you have way more dependencies than you probably need, this search could take a second or two; that is a second or two too long in an interactive compiler like Flymake. That said, the likelihood that you will run into this second problem is quite low.

Getting Started

The very first thing you want to do is ensure that you have required the erlang-flymake module. Most of what we do below depends on this.

(require 'erlang-flymake)

Finding the Top rebar.config

The second thing we want to do is look for the top rebar.config in the project. If a rebar project contains more than one OTP application, it’s quite likely that it will contain more than one rebar.config. The very topmost `rebar.config` is the right one to serve as the root of our search. So we introduce a set of recursive functions to look for that top level dir.

    (defun ebm-find-rebar-top-recr (dirname)
      (let* ((project-dir (locate-dominating-file dirname "rebar.config")))
        (if project-dir
            (let* ((parent-dir (file-name-directory (directory-file-name project-dir)))
                   (top-project-dir (if (and parent-dir (not (string= parent-dir "/")))
                                        (ebm-find-rebar-top-recr parent-dir)
                                      nil)))
              (if top-project-dir
                  top-project-dir
                project-dir))
          project-dir)))

ebm-find-rebar-top-recr will return either the topmost directory or nil. Our next function takes that result and does something useful with it.

    (defun ebm-find-rebar-top ()
      (interactive)
      (let* ((dirname (file-name-directory (buffer-file-name)))
             (project-dir (ebm-find-rebar-top-recr dirname)))
        (if project-dir
            project-dir
          (erlang-flymake-get-app-dir))))

In this function we get the directory containing the file pointed at by the current buffer. We then call our recr function. If it returns a directory we return that; if it returns nil, however, we call the original erlang-flymake-get-app-dir function.

At this point we should have our project root. Now it’s a simple matter of recursively searching down the directory tree looking for directories of a certain name. So we create a function that does just that: given a directory and a name, it returns a list of absolute paths for each subdirectory that matches the specified name.

    (defun ebm-directory-dirs (dir name)
      "Find all directories named NAME under DIR."
      (unless (file-directory-p dir)
        (error "Not a directory `%s'" dir))
      (let ((dir (directory-file-name dir))
            (dirs '())
            (files (directory-files dir nil nil t)))
        (dolist (file files)
          (unless (member file '("." ".."))
            (let ((absolute-path (expand-file-name (concat dir "/" file))))
              (when (file-directory-p absolute-path)
                (if (string= file name)
                    (setq dirs (append (cons absolute-path
                                             (ebm-directory-dirs absolute-path name))
                                       dirs))
                  (setq dirs (append
                              (ebm-directory-dirs absolute-path name)
                              dirs)))))))
        dirs))

Now we write a couple of functions to replace the corresponding functions in `erlang-flymake`. The first looks for all `ebin` directories while the second looks for all `include` directories.

    (defun ebm-get-deps-code-path-dirs ()
        (ebm-directory-dirs (ebm-find-rebar-top) "ebin"))

    (defun ebm-get-deps-include-dirs ()
       (ebm-directory-dirs (ebm-find-rebar-top) "include"))

Finally we replace the `erlang-flymake` versions of those functions with our implementations.

(fset 'erlang-flymake-get-code-path-dirs 'ebm-get-deps-code-path-dirs)
(fset 'erlang-flymake-get-include-dirs-function 'ebm-get-deps-include-dirs)

Conclusion

This approach is a bit of a hack: we basically use some heuristics to find a root and then just grab everything under it that looks remotely like a code or include directory. While it’s a bit hacky, it has the valuable upside that it’s flexible and robust.

Sinan Releases and Being Right

Fred, of Learn You Some Erlang for Great Good, today posted on his blog about the problems around how rebar handles releases: Rebar Releases and Being Wrong. The problems he mentions, and a few others, are why, despite giving it a legitimate shot, I have found rebar unusable: with it my workflow cannot be both efficient and stable while adhering to OTP standards.

I suggest first reading his post, if you already use rebar, and then continuing on with the rest of this.

I’ll start with an example of generating a project containing two applications, one of which depends on cowboy. Next, I’ll create a release (and in the process a deployable target system) to show the difference in how sinan handles this process.

TL;DR Sinan does OTP the right way, rebar does not.

First, you can download the latest version of sinan from this link; it is simply an executable escript, so ‘chmod +x sinan‘, put it in your PATH and you are good to go.

Sinan provides a ‘gen’ command to create your project. I include the output of the steps I took to build this project. Sinan assumes this is a multiple application project, but if you answer “y” instead it will create a directory structure similar to rebar’s default structure, with a src/ directory instead of a lib/ directory.

$ sinan gen
Please specify your name 
your name> Tristan Sloughter
Please specify your email address 
your email> tristan@mashape.com
Please specify the copyright holder 
copyright holder ("Tristan Sloughter")> 
Please specify name of your project
project name> rel_example
Please specify version of your project
project version> 0.0.1
Please specify the ERTS version ("5.9")> 
Is this a single application project ("n")> 
Please specify the names of the OTP apps that will be developed under this project. One application to a line. Finish with a blank line.
app> app_1
app ("")> app_2
app ("")> 
Would you like a build config? ("y")> y
Project was created, you should be good to go!

We now have a project named rel_example and can see the generated contents.

$ cd rel_example/
$ ls
config  doc  lib  sinan.config

Before going further I add the line {include_erts, true}. to sinan.config so that a generated tarball of the release contains erts and can be booted on a machine without Erlang installed.

$ cat sinan.config
{project_name, rel_example}.
{project_vsn, "0.0.1"}.

{build_dir,  "_build"}.

{ignore_dirs, ["_", "."]}.

{ignore_apps, []}.

{include_erts, true}.

A tree structure view of the generated project is below:

.
├── config
│   └── sys.config
├── doc
├── lib
│   ├── app_1
│   │   ├── doc
│   │   ├── ebin
│   │   │   └── overview.edoc
│   │   ├── include
│   │   └── src
│   │       ├── app_1_app.erl
│   │       ├── app_1.app.src
│   │       └── app_1_sup.erl
│   └── app_2
│       ├── doc
│       ├── ebin
│       │   └── overview.edoc
│       ├── include
│       └── src
│           ├── app_2_app.erl
│           ├── app_2.app.src
│           └── app_2_sup.erl
└── sinan.config

You’ll see we have a lib directory with two applications containing their source files under a src directory. Now in order to boot the release we’ll create, we need to remove a couple of things from each supervisor. Instead of creating something for them to supervise, just remove the variable AChild and replace [AChild] with [].
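After that edit, each generated supervisor’s init/1 returns an empty child list, roughly like this (the restart intensities in the generated skeleton may differ):

init([]) ->
    RestartStrategy = {one_for_one, 1000, 3600},
    {ok, {RestartStrategy, []}}.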

Next, so we have a third party dependency in the example, add cowboy to the applications in lib/app_1/src/app_1.app.src:

{applications, [kernel, stdlib, cowboy]},

Sinan provides a depends command to show the dependencies of the project and where they are located:

$ sinan depends -v
starting: depends
Using the following lib directories to show resolved dependencies and where it found them:

    /home/tristan/.kerl/installs/r15b/lib
    /home/tristan/Devel/rel_example/_build/rel_example/lib

compile time dependencies:

runtime dependencies:

    kernel                    2.15       : /home/tristan/.kerl/installs/r15b/lib/kernel-2.15
    stdlib                    1.18       : /home/tristan/.kerl/installs/r15b/lib/stdlib-1.18
    cowboy                    0.5.0      : /home/tristan/.kerl/installs/r15b/lib/cowboy-0.5.0

project applications:

    app_1                     0.1.0      : /home/tristan/Devel/rel_example/_build/rel_example/lib/app_1-0.1.0
    app_2                     0.1.0      : /home/tristan/Devel/rel_example/_build/rel_example/lib/app_2-0.1.0

Now let’s build a release and target system.

$ sinan dist

After running the dist command we have a _build directory in which we find the following structure. I removed the files/dirs under each app to shorten the listing.

_build/
├── rel_example
│   ├── bin
│   │   ├── rel_example
│   │   └── rel_example-0.0.1
│   ├── erts-5.9
│   │   ├── 
│   ├── lib
│   │   ├── app_1-0.1.0
│   │   │   ├── 
│   │   ├── app_2-0.1.0
│   │   │   ├── 
│   │   ├── cowboy-0.5.0
│   │   │   ├── 
│   │   ├── kernel-2.15
│   │   │   ├── 
│   │   └── stdlib-1.18
│   │       ├── 
│   └── releases
│       └── 0.0.1
│           ├── rel_example.boot
│           ├── rel_example.rel
│           ├── rel_example.script
│           └── sys.config
└── tar
    └── rel_example-0.0.1.tar.gz

Sinan has created a lib directory containing all the necessary applications for our release, as well as the files needed for booting it. Additionally, the dist command creates a tar.gz for easy deployment. But if we simply want to run our release where we are, we can:

$ _build/rel_example/bin/rel_example
Erlang R15B (erts-5.9)

[64-bit] [smp:4:4] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.9  (abort with ^G)
1>

This is only the tip of the iceberg of what sinan is capable of. I can’t go into all of it here, but I’ll mention that you are able to define multiple releases for a project and choose which of your project apps to include in each. Additionally, you are able to provide a custom rel file if you require tweaks.

The important part to take away from this post is the structure of what you are working with when using sinan and how it is based on OTP standards, both for the source you work on and the results of the build process under _build/.

Cowboy and Batman.js for Erlang Web Development

Why Cowboy and Batman.js

There are a lot of Erlang web frameworks out there today. Not all are modeled after the MVC pattern (see Nitrogen), but I think all of them are addressing the problem the wrong way. I recently gave a presentation (slides here, and the code for this example here) describing my preferred method for using Erlang for web development and why I think it is the best way to go. In this post I’ll go into more detail on how to build the Erlang backend for the TodoMVC clone I did with Batman.js. I will not spend time on Batman.js, but instead only give a quick list of reasons I prefer it to other Javascript frameworks.

Batman.js advantages:

  • Automatic URL generation based on model
  • HTML data-bind templates
  • Coffeescript

Cowboy is a newer Erlang web server that provides a REST handler based on Webmachine. Both of these are perfect for developing a RESTful API because they follow the HTTP standard closely; when you are building an API on HTTP, being able to properly reason about how the logic of the application maps to the protocol eases development and helps you get REST “right”.

Nginx

Any non-dynamic content should be served by Nginx since there is no logic needed and it is something Nginx is great at, so why have Erlang do it? The snippet below configures Nginx to listen on port 80 and serve files from bcmvc’s priv directory. Each request is checked to see if it is a POST, or any method whose Accept header asks for JSON. If either is true, the request is proxied to a server listening on port 8080, in our case the Cowboy server.

server {
  listen 80;
  server_name localhost;

  location / {
    root   <PATH TO CLONE>/bcmvc/lib/bcmvc_web/priv/;

    if ($request_method ~* POST) {
      proxy_pass        http://localhost:8080;
    }

    if ($http_accept ~* application/json) {
      proxy_pass        http://localhost:8080;
    }
}

The API

Batman.js knows what endpoints to use and what data to send based on the name of the model we created and the encoded variables, code here. This results in the following API:

 
  • POST todos
    Data: {todo : {body:"bane wants to meet, not worried", isDone:false}}
  • PUT /todos/33e93b30-2371-4071-afc5-2d48226d5dba
    Data: {todo : {body:"bane wants to meet, not worried", isDone:false}}
  • GET todos
    Return: [{todo : {id:"33e93b30-2371-4071-afc5-2d48226d5dba", body:"bane wants to meet, not worried", isDone:false}}]
  • DELETE /todos/33e93b30-2371-4071-afc5-2d48226d5dba
    (no body)

Cowboy Dispatch and Supervisor

Dispatch rules are matched by Cowboy to decide which handler to send a request to. Here we have two rules: one that matches just the URL /todos, and one that matches the URL with an additional element, which will be bound to the atom todo. Both send requests to the module bcmvc_todo_handler.

Dispatch = [{'_', [{[<<"todos">>], bcmvc_todo_handler, []},
                   {[<<"todos">>, todo], bcmvc_todo_handler, []}]}],

Cowboy provides a useful function child_spec for creating a child specification to use in our supervisor. The child spec here tells Cowboy we want a TCP listener on port 8080 that handles the HTTP protocol. We additionally provide our dispatch list for it to match against and pass on requests.

ChildSpec = cowboy:child_spec(bcmvc_cowboy, 100, cowboy_tcp_transport, 
                              [{port, 8080}], cowboy_http_protocol, [{dispatch, Dispatch}]),

Cowboy Handler

Now that we have a server on port 8080 that knows to send certain requests to our todo handler, we can build the module. The first required function to export is init/3. This function lets Cowboy know we have a REST handler; this is how it knows what functions to call (some have defaults and some exist in our module) to handle the request.

init(_Transport, _Req, _Opts) ->
    {upgrade, protocol, cowboy_http_rest}.

Knowing that this is a REST handler, Cowboy will pass the request on to allowed_methods/2 to find out if our handler is able to handle this method. Next, the content types accepted and provided by the handler are checked against the incoming request. The expected HTTP response status codes are returned if any of these fail: 405 for allowed_methods, 415 for content_types_accepted and 406 for content_types_provided.

allowed_methods(Req, State) ->
    {['HEAD', 'GET', 'PUT', 'POST', 'DELETE'], Req, State}.

content_types_accepted(Req, State) ->
    {[{{<<"application">>, <<"json">>, []}, put_json}], Req, State}.

content_types_provided(Req, State) ->
    {[{{<<"application">>, <<"json">>, []}, get_json}], Req, State}.

Now the request is sent to the function that handles the HTTP method type of the request.

For a POST, a request to create a new todo item, the function process_post/2 is sent the request. Here we retrieve the body, a JSON object, from the request, convert it to a record and save the model. We’ll see how this record conversion is done when we look at the model module. To inform the frontend of the id of our new resource we set the location header to be the path with the id.

process_post(Req, State) ->
    {ok, Body, Req1} = cowboy_http_req:body(Req),
    Todo = bcmvc_model_todo:to_record(Body),
    bcmvc_model_todo:save(Todo),

    NewId = bcmvc_model_todo:get(id, Todo),
    {ok, Req2} = cowboy_http_req:set_resp_header(
                   <<"Location">>, <<"/todos/", NewId/binary>>, Req1),

    {true, Req2, State}.

For this handler we expect PUT for an update to an object, since that is what Batman.js sends, though a PATCH would make more sense. For a PUT the URL contains the id of the todo item to be updated. That is retrieved with the binding/2 function. The todo record is created the same way as in process_post/2, but then this id is set on the model and the update/1 function is used to save it to the database.

put_json(Req, State) ->
    {ok, Body, Req1} = cowboy_http_req:body(Req),
    {TodoId, Req2} = cowboy_http_req:binding(todo, Req1),
    Todo = bcmvc_model_todo:to_record(Body),
    Todo2 = bcmvc_model_todo:set([{id, TodoId}], Todo),
    bcmvc_model_todo:update(Todo2),    
    {true, Req2, State}.

For a GET request (this application never requests a single todo item), all todo items are retrieved from the model module. Each is passed to the model’s to_json/1 function, and the results are combined into a binary string and placed between brackets so the Batman.js frontend receives a proper JSON list of JSON objects.

get_json(Req, State) ->
    JsonModels = lists:foldr(fun(X, <<"">>) ->
                                 X;
                            (X, Acc) ->
                                 <<Acc/binary, ",", X/binary>>
                         end, <<"">>, [bcmvc_model_todo:to_json(Model) || Model <- bcmvc_model_todo:all()]),

    {<<"[", JsonModels/binary, "]">>, Req, State}.

And lastly, DELETE. As in PUT, the todo item’s id is retrieved from the bindings created by the dispatch rules, and it is passed to the model’s delete function.

delete_resource(Req, State) ->
    {TodoId, Req1} = cowboy_http_req:binding(todo, Req),
    bcmvc_model_todo:delete(TodoId),
    {true, Req1, State}.

Models

Models are represented as records and must provide serialization functions to go between JSON and a record. Each model uses a parse transform that creates functions for creating and updating the record. The transform is a modified version of exprecs from Ulf Wiger that also uses the type definitions in the record to ensure, when setting a field, that the value is the correct type. For example, in the todo model isDone is a boolean, so when the model is created the boolean convert function will be matched to convert the string representation to an atom:

convert(boolean, <<"false">>) ->
    false;
convert(boolean, <<"true">>) ->
    true;

So the key pieces of the bcmvc_model_todo are:

-compile({parse_transform, bcmvc_model_transform}).

-record(bcmvc_model_todo, {id = ossp_uuid:make(v1, text) :: string(),
                           body                          :: binary(),
                           isDone                        :: boolean()}).

to_json(Record) ->
    ?record_to_json(?MODULE, Record).

to_record(JSON) ->
    ?json_to_record(?MODULE, JSON).

The ?record_to_json and ?json_to_record macros are defined in jsonerl.hrl. These macros are generic and work for any record that is typed and uses the model transform.

Conclusion

Clearly, much of what the resource handler and model do is generic and can be abstracted out so that implementing new models and resources is even simpler. This is the goal of my project Maru. Currently it is based on Webmachine but is now being converted to Cowboy.

In the end, using Cowboy to build a RESTful interface for your application lets you develop frontend interfaces entirely separated from backend development, and if you want multiple interfaces (like native mobile and web), they both talk directly to the same backend. Also, from the beginning you have the option to open up your application with an API for other developers to take your application new places, and, shameless plug here, add your API to Mashape to spread your new app!