Universal Makefile for Erlang Projects That Use Rebar

This post sponsored by ErlangCamp 2013 in Nashville and Amsterdam

At this point in the game nearly every Erlang project uses Rebar. The problem is that Rebar's approach to the command line and command dependency chaining leaves a lot to be desired. You tend to end up typing the same command with the same option list over and over again during the course of your work, and because of the poor dependency chaining you often must retype the same sequence of commands as well. Finally, there are certain things (like Dialyzer support) that Rebar does not provide at all.

In our Erlware projects, we want a consistent and recognizable entry point into the build process. For that reason we tend to treat Rebar as a low-level tool and drive it and the other build tools with a Makefile. That makes it far easier for us, as developers, to chain rules as needed and create additional rules that add features to the build system. This allows us to integrate other tools seamlessly into the build experience. At Erlware, we have developed a pretty standard Makefile that can be used with little or no change from project to project. You can find the whole of that Makefile here. However, I will work my way through a few parts of it, explaining what is going on so that you can make changes relevant to your project.

The main targets this Makefile supports are as follows:

  • deps: Pull the project dependencies (called automatically as needed)
  • update-deps: Update the dependencies (never called automatically)
  • compile: Compiles the project
  • doc: Builds the edoc documentation
  • test: Compiles the code and runs the tests (designed to be called by a human)
  • dialyzer: Build the dependency PLT and run dialyzer on the project
  • typer: Run Typer on the project
  • shell: Bring up an Erlang shell with all the dependencies already loaded and unit tests compiled and available.
  • pdf: Turn your README.md into a pdf using pandoc (pretty useful at times, but completely optional)
  • clean: Delete the build output files
  • distclean: Remove the build output files as well as the project PLT file and all the dependencies
  • rebuild: Do a distclean, rebuild everything from scratch, and run both the tests and Dialyzer

Now that we have an idea of the targets available, let's work through the major points of the Makefile.
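To give a flavor of how these rules chain together, here is a minimal sketch (not the full Erlware Makefile linked above, which is the authoritative version), assuming rebar is on your PATH:

```makefile
# Minimal sketch of rule chaining: test depends on compile,
# which depends on deps, so `make test` does everything needed.
REBAR=$(shell which rebar)

.PHONY: all deps compile test clean

all: deps compile

deps:
	$(REBAR) get-deps

compile: deps
	$(REBAR) compile

test: compile
	$(REBAR) eunit skip_deps=true

clean:
	$(REBAR) clean
```

With chaining like this, a single `make test` replaces the sequence of rebar invocations you would otherwise retype by hand.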

Defining Variables

  
ERLFLAGS= -pa $(CURDIR)/.eunit -pa $(CURDIR)/ebin -pa $(CURDIR)/deps/*/ebin  

DEPS_PLT=$(CURDIR)/.deps_plt  
DEPS=erts kernel stdlib  

At the top of the Makefile a few variables are set. For the most part you don't ever have to touch any of these, with the exception of DEPS. The DEPS variable provides the list of applications that Dialyzer uses to build the dependency PLT file. The others are ERLFLAGS, which is used by the shell rule to make your code available in the shell, and DEPS_PLT, which points to the location where the project PLT file will be written.

PLT Files and Dialyzer

  
$(DEPS_PLT):  
    @echo Building local plt at $(DEPS_PLT)  
    @echo  
    dialyzer --output_plt $(DEPS_PLT) --build_plt \  
       --apps $(DEPS) -r deps  

dialyzer: $(DEPS_PLT)  
    dialyzer --fullpath --plt $(DEPS_PLT) -Wrace_conditions -r ./ebin  

This is how the Dialyzer command is run. The main things to notice here are that a PLT file specific to the project is built using the dependencies that you described at the top of the file in the DEPS variable. Building a per project PLT solves a raft of potential problems but has the downside that the first run of Dialyzer or the first run after a rebuild can take several minutes as it analyzes all of the dependencies to build the PLT file.

Rebuilding

The rebuild target does a completely clean rebuild and test of the system. You should run it before you submit a PR or share code with your peers; it helps ensure that you have not forgotten or left off anything that is needed.
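The rebuild rule itself is nothing more than dependency chaining; a sketch consistent with the description above (the exact prerequisite names may differ slightly from the full Makefile) looks like this:

```makefile
# A full clean followed by a complete build and verification pass
rebuild: distclean deps compile dialyzer test
```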

Conclusion

You can, quite literally, drop this makefile into your project and use it today with only some very minor modification to the DEPS variable. If you are not already using something like this in your project I encourage you to add this Makefile now. It will save you a lot of tedious typing and make your build process much clearer to your users.

Alternatives

There are a few alternatives to this approach out there. They are quite good, if somewhat more complex.

Deal of the Day - Half off Erlang and OTP in Action

Here is your chance to get our book Erlang and OTP in Action for half price on April 16th. Use code dotd0416au at www.manning.com/logan/

Running Opa Applications on Heroku

TL;DR

As I've mentioned before, Opa is a new web framework that introduces not only the framework itself but a whole new language. A lot has changed in Opa since I last posted about it. Now Opa has a Javascript-esque look and runs on Node.js. But it still has the amazing typing system that makes Opa a joy to code in.

The currently available Heroku buildpack for Opa only supported the old, pre-Node version of the language. So I've created an all-new buildpack, and here I will show both a bit of how I created that buildpack and how to use it to run your Opa apps on Heroku.

The first step was creating a tarball of Opa that would work on Heroku. For this I used the build tool vulcan. Vulcan builds software on a Heroku dyno, which ensures that what is built will actually run on Heroku through your buildpack.

vulcan build -v -s ./opalang/ -c "mkdir /app/mlstate-opa && yes '' | ./opa-1.0.7.x64.run" -p /app/mlstate-opa

This command tells vulcan to build what is in the directory opalang with a command that creates the directory /app/mlstate-opa and then runs the Opa-provided install script to unpack the system. This is much simpler than building Opa from source, but it is still necessary to use vulcan to create the tarball from the output of the install script so that the paths in the Opa-generated scripts are correct.

After this run, vulcan by default produces /tmp/opalang.tgz. I upload this to S3 so that the buildpack is able to retrieve it.

Since Opa now relies on Node.js, the new buildpack must install both Node.js and the opalang.tgz that was generated. To do this I simply copied from the Node.js buildpack.

If you look at the Opa buildpack you'll see, as with any buildpack, it consists of three main scripts under ./bin/: compile, detect and release. There are three important parts for understanding how your Opa app must be changed to be supported by the buildpack.

First, the detect script relies on there being an opa.conf to detect that this is an Opa application. For now this is less important, since we will specify the buildpack explicitly when creating the Heroku app. Second, the compile script relies on there being a Makefile in your application for building; there is no support for simply running opa against the code in your tree at this time. Third, since Opa relies on Node.js and Node modules from npm, you must provide a package.json file that the compile script uses to install the necessary modules.
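Since the buildpack's compile step just runs make, the app only needs a Makefile whose default rule produces the binary. As a purely hypothetical sketch (the source file name, the -o flag, and the output name are assumptions for illustration, not taken from the hello_chat repo):

```makefile
# Hypothetical Makefile for a single-file Opa app; names and
# flags here are illustrative assumptions.
all: hello_chat.exe

hello_chat.exe: hello_chat.opa
	opa hello_chat.opa -o hello_chat.exe

clean:
	rm -f hello_chat.exe
```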

To demonstrate this I converted Opa's hello_chat example to work on Heroku; see it on Github here.

There are two necessary changes. First, add a Procfile. A Procfile defines the processes required for your application and how to run them. For hello_chat we have:

web: ./hello_chat.exe --http-port $PORT

This tells Heroku that our web process is run from the binary hello_chat.exe. We must pass the $PORT variable to the Opa binary so that it binds to the proper port that Heroku expects it to be listening on to route our traffic.

Lastly, a package.json file is added so that our buildpack's compile script installs the necessary Node.js modules:

{  
  "name": "hello_chat",  
  "version": "0.0.1",  
  "dependencies": {  
      "mongodb" : "*",  
      "formidable" : "*",  
      "nodemailer" : "*",  
      "simplesmtp" : "*",  
      "imap" : "*"  
  },  
  "engines": {  
    "node": "0.8.7",  
    "npm": "1.1.x"  
  }  
}

With these additions to hello_chat we are ready to create an Opa app on Heroku and push the code!

$ heroku create --stack cedar --buildpack https://github.com/tsloughter/heroku-buildpack-opa.git  
$ git push heroku master

The output from the push will show Node.js and npm being installed, followed by Opa being unpacked, and finally make being run against hello_chat. The web process in the Procfile will then be run, and the output will provide a link to our new application. I have the example running at http://mighty-garden-9304.herokuapp.com

Next time I'll delve into database and other addon support in Heroku with Opa applications.

Projmake-mode: Flymake Replacement

There is a great new Emacs plugin from Eric Merritt that, like Flymake, builds your code and highlights any errors or warnings within Emacs, but unlike Flymake it builds across the whole project. You can clone the mode from the projmake-mode repository.

After cloning the repo to your desired location add this bit to your dot emacs file, replacing <PATH> with the path to where you cloned the repo.

[gist]3794732[/gist]

This Emacs code also uses add-hook to start projmake-mode when erlang-mode is loaded. Projmake by default knows how to handle rebar- and Make-based builds, so there is no further setup after this, assuming your project is built one of those ways.

Here is my Makefile for building Erlang projects with rebar; replace PROJECT with the name of your project:

[gist]3795007[/gist]

Now you can load Emacs and open a file from your project; if it is an Erlang file, the add-hook call in your dot emacs file will automatically load projmake-mode. You can add hooks for other modes or simply run M-x projmake-mode.

For more documentation and how to extend to other types of projects check out the documentation.

Maru Models: JSON to Erlang Record with Custom Types

When working with Erlang to write RESTful interfaces, JSON is the communication "language" of choice. To simplify converting JSON into a model the backend can work with efficiently, I've created maru_models. This app decodes the JSON with jiffy and uses functions generated by a modified version of Ulf's exprecs to create an Erlang record. The generated functions are created with type information from the record definition, and when a property is set on the record through these functions it is first passed to the convert function of maru_model_types to do any necessary processing.

I put this application in a separate repo to make it simpler for people to try the examples, but the real development will be done in the main Maru repo.