Rebar3 Features (part 4): Profiles

Running tests and need meck or proper? Building docs and want edown? Bundling up a target system and want to include erts and turn off relx's dev_mode? Rebar3 now has you covered for these scenarios through profiles.

A profile can be named with any atom. It can add new items to the configuration or prepend new options to existing elements, and multiple profiles can be combined.

The two special profiles are default and global. default is the profile everything runs under, with output going to _build/default/, unless another profile is specified in addition to default. When multiple profiles are used together, the output directory is the profile names concatenated with +: for example, running rebar3 as test,prod <task> produces _build/test+prod/, even though the actual combination of profiles used in that run is default,test,prod. In the output directory, default is always dropped from the beginning unless it is the only profile in use.
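The naming rule described above can be sketched as a small function; this is an illustration only, not rebar3's actual implementation:

```erlang
%% Illustrative sketch of the output-directory naming rule;
%% not rebar3's actual code.
profile_dir([default])        -> "default";
profile_dir([default | Rest]) -> profile_dir(Rest);
profile_dir(Profiles) ->
    string:join([atom_to_list(P) || P <- dedup(Profiles)], "+").

%% Keep the first occurrence of each profile, preserving order.
dedup([])       -> [];
dedup([P | Ps]) -> [P | dedup([X || X <- Ps, X =/= P])].
```

With this sketch, profile_dir([default, test, prod]) gives "test+prod" and profile_dir([default]) gives "default".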

The other special case is global, whose output always goes to ~/.cache/rebar3/.

Providers are able to declare the profiles they run under (in addition to default) with the {profiles, [atom()]} option to providers:create/1. Four providers that come with rebar3 specify a profile: eunit, ct and cover use test, and edoc uses docs.
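As a minimal sketch of that option, a custom provider might opt in to the test profile like this; my_task and my_task_prv are hypothetical names:

```erlang
%% Sketch: a provider declaring that it runs under the test
%% profile. The name and module here are placeholders.
Provider = providers:create([{name, my_task},
                             {module, my_task_prv},
                             {profiles, [test]}]).
```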

Examples of profile usage can help give an idea of how you might use them in your projects.

Since eunit, ct and cover run with the test profile, deps specific to tests, like meck and eunit_formatters, can be added under it; they will be used when running rebar3 ct or any of the others, with no need to include as test in the run. To be clear, profiles are deduplicated, so rebar3 as test ct will still output to _build/test and not _build/test+test. For example:

  {profiles,
   [{test,
     [{deps, [
        {eunit_formatters, {git, "git://", {branch, "master"}}}
      ]},
      {eunit_opts, [
        {report, {eunit_progress, [colored, profile]}}
      ]}]}]}.

Another common dependency that in rebar2 would be included in the main dependency list, and thus be fetched even when the project is used as a dependency, is edown. With the docs profile that edoc runs under, this is solved by moving edown under the profile:

{profiles, [{docs,
             [{deps, [
                {edown, {git, "git://", {branch, "master"}}}
              ]}]}]}.

When developing a release, it is useful to use relx's dev_mode and to set include_erts to false, but when building a release for production you'll want the opposite. In this case, unlike with tests and docs, you must specify the profile to run the command with. Running rebar3 release runs as default, so with dev_mode true and include_erts false, while rebar3 as prod release pulls in the settings from the prod profile, making dev_mode false and include_erts true.

{relx, [...,
        {dev_mode, true},
        {include_erts, false}]}.

{profiles,
  [{prod, [{relx, [
                   {dev_mode, false},
                   {include_erts, true}
                  ]}]}]}.

The global profile is used in particular for plugins that the user defines in their personal rebar.config. For example, I run rebar3 as global plugins upgrade to upgrade the two plugins in my ~/.config/rebar3/rebar.config:

{plugins, [rebar3_hex, rebar3_run]}.

Profiles are an important addition to the rebar configuration, making development and dependency management simpler. Please be a good Erlang citizen and separate your dependencies into the appropriate profiles; those who depend on your application will appreciate it.

Rebar3 Features (part 3): Overrides

What do you do when a dependency has settings in its rebar.config that are causing you problems? Maybe it includes dependencies that are not needed in the general case, like meck or edown. Or it could have set a required OTP version that isn't accurate and that you want to remove. Or the app could contain C code that needs compiling and relied on rebar2's port compiler. These problems often lead to forks of projects, which isn't good for anyone, so in rebar3 we've added a feature called overrides.

Overrides allow any rebar.config at a higher level than a dependency to either replace or add to the configuration of all or an individual application at a lower level.

The type spec for overrides looks like:

{overrides, [{add, atom(), [{atom(), any()}]}
             | {override, atom(), [{atom(), any()}]}
             | {override, [{atom(), any()}]}]}.
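For instance, the {add, ...} form appends settings to a single dependency without replacing its existing configuration. A hedged sketch, where some_dep is a hypothetical dependency name:

```erlang
%% Append debug_info to some_dep's compiler options, leaving
%% the rest of its configuration untouched.
%% some_dep is a placeholder, not a real dependency.
{overrides, [{add, some_dep, [{erl_opts, [debug_info]}]}]}.
```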

The bitcask application from Basho is configured to be built with rebar2, but instead of forking the project and patching the config, a user can instead simply add the below overrides section to their rebar.config:

    {overrides,
      [{override, bitcask,
        [{deps, []},
         {plugins, [pc]},
         {artifacts, ["priv/"]},
         {provider_hooks, [{post,
                            [{compile, {pc, compile}},
                             {clean, {pc, clean}}]}]}]}]}.

These overrides for bitcask replace its deps entry with an empty list; this removes the meck dependency, which is only needed when running tests, and the cuttlefish dependency, which isn't required. Plus, since the port compiler functionality was removed in rebar3, the port compiler plugin must be added and hooked in to compile and clean with provider_hooks.

The rest of the guts of the override will be covered in future posts; see the docs for more information on provider_hooks, artifacts and plugins.

Rebar3 Features (part 2): Dependency Tree

rebar3 tree is a new command that lets the user view which dependency pulled in each transitive dependency. This is especially useful with rebar3's dependency resolution strategy of "first wins".

Thanks go to Heinz N. Gies for pushing for this feature; the inspiration comes from leiningen's command lein deps :tree.

For an example I've cloned chef-server and built oc_erchef under src/oc_erchef. It is unique because it has both a top-level app oc_erchef under src/ and additional project apps under apps/.

Additionally, I've added _checkouts/erlware_commons to show how a checkout dependency, an application linked to under _checkouts/, is moved to the top level and marked as (checkout app), and I've switched lager to a Hex package dependency, the rest being git-sourced dependencies.

$ rebar3 tree
├─ chef_db─12.1.2-6047c67 (project app)
├─ chef_index─12.1.2-6047c67 (project app)
├─ chef_objects─12.1.2-6047c67 (project app)
├─ chef_test─12.1.2-6047c67 (project app)
├─ depsolver─12.1.2-6047c67 (project app)
├─ erlware_commons─0.16.0 (checkout app)
├─ oc_chef_authz─12.1.2-6047c67 (project app)
├─ oc_chef_wm─12.1.2-6047c67 (project app)
└─ oc_erchef─12.1.2-6047c67 (project app)
   ├─ bcrypt─0.0.0+build.87.ref085eb59 (git repo)
   ├─ chef_authn─0.0.0+build.86.refe7850d0 (git repo)
   ├─ darklaunch─0.0.0+build.72.ref05881cb (git repo)
   │  └─ meck─0.8.3 (git repo)
   ├─ efast_xs─0.1.0 (git repo)
   ├─ ej─0.0.0+build.87.ref132a9a3 (git repo)
   ├─ envy─0.0.0+build.38.ref954c87a (git repo)
   ├─ eper─0.90.0 (git repo)
   ├─ folsom─0.0.0+build.335.ref38e2cce (git repo)
   │  └─ bear─0.0.0+build.32.ref1192345 (git repo)
   ├─ folsom_graphite─0.0.0+build.41.refd4ce9bf (git repo)
   ├─ gen_bunny─0.1 (git repo)
   │  ├─ amqp_client─0.0.0 (git repo)
   │  └─ rabbit_common─0.0.0 (git repo)
   │     └─ gen_server2─1.0.0 (git repo)
   ├─ ibrowse─ (git repo)
   ├─ jiffy─0.0.0+build.131.reff661ee9 (git repo)
   ├─ lager─2.1.1 (hex package)
   │  └─ goldrush─0.1.6 (hex package)
   ├─ mini_s3─0.0.1 (git repo)
   ├─ mixer─0.1.1 (git repo)
   ├─ neotoma─0.0.0+build.125.ref760928e (git repo)
   ├─ opscoderl_folsom─0.0.1 (git repo)
   ├─ opscoderl_httpc─0.0.1 (git repo)
   ├─ opscoderl_wm─0.0.1 (git repo)
   │  └─ webmachine─0.0.0+build.526.ref7677c24 (git repo)
   │     └─ mochiweb─2.9.0 (git repo)
   ├─ pooler─0.0.0+build.159.ref7bb8ab8 (git repo)
   ├─ sqerl─1.0.0 (git repo)
   │  └─ epgsql─3.1.0 (git repo)
   ├─ stats_hero─0.0.0+build.73.refff00041 (git repo)
   │  └─ edown─0.2.4+build.66.ref30a9f78 (git repo)
   ├─ sync─0.1.3 (git repo)
   └─ uuid─1.3.2 (git repo)
      └─ quickrand─1.3.2 (git repo)

Rebar3 Features (part 1): Local install and upgrade

Rebar3 comes with a lot of new and improved features. I'll be publishing posts here to highlight some of these features over the coming weeks.

Installing and Upgrading Rebar3

Rebar is an escript bundle, and this has been very important to its ease of use, mainly because escripts let Erlang users have a single file acting as an executable, regardless of the underlying Erlang installation, which can even be committed to a project's repository.

So, rebar3 is also an escript. Pre-built escripts can be downloaded from s3:

$ wget

or clone and bootstrap the git repo:

$ git clone
$ cd rebar3
$ ./bootstrap

However, escripts do have their drawbacks. They are slower to start; they rely on the old Erlang io server, making rebar3 shell not act exactly the same as an erl shell; and bundling the escript can lead to repos storing it along with the code but never upgrading it.

So in rebar3 we've introduced the ability to extract the escript archive along with a run script to ~/.cache/rebar3/, plus a command that will fetch and do the same for the latest escript release of rebar3 from s3.

The install command lives under a provider namespace, currently unstable, the namespace for experimental features that are likely to change in the near future and that aren't yet considered stable:

$ ./rebar3 unstable install
===> Extracting rebar3 libs to $HOME/.cache/rebar3/lib...
===> Writing rebar3 run script $HOME/.cache/rebar3/bin/rebar3...
===> Add to $PATH for use: export PATH=$HOME/.cache/rebar3/bin:$PATH

Follow the instructions for adding the rebar3 bin directory to your $PATH, and optionally add it to your shell's configuration, such as ~/.bashrc or ~/.zshrc.

To upgrade rebar3 use rebar3 unstable upgrade which will fetch the latest escript and extract it:

$ rebar3 unstable upgrade

We hope this new method for installing, upgrading and running rebar3 will allow people to be comfortable not bundling rebar3 in their projects' repositories, resolve any compilation-speed differences between rebar3 and alternative Erlang build tools, and help keep everyone up to date with the latest features and bug fixes.

Monolith vs Microservices: Where to start

There is a debate going on about how to do the initial design of a new system in the context of Microservices. Should you start with a Monolithic approach and move to Microservices later, or use Microservices from the beginning?


Martin Fowler recently wrote an article called 'Monolith First' that talks about how to get started on a new Microservice project. Stefan Tilkov wrote a response, 'Don't start with a monolith', arguing the reverse. As you would expect from the authors involved, both articles are well thought out and full of great information.

Mr. Fowler posits that breaking things into services is error prone. You can't know ahead of time what the correct breakdown is, so you should postpone it as late as possible. His implication is that the main benefit of Microservices is their ability to scale. Mr. Tilkov hits on the point that the best time to architect a system is when you are starting out. Both authors seem to miss the point that these two options are not mutually exclusive.

The Modeling failure

These engineers are thinking about Microservices as a unit of distribution rather than a unit of decomposition. Instead of thinking about Microservices as a way to structure their programs, they should think of them as a modeling construct for those programs. Conflating Microservices-as-modeling-construct with Microservices-as-unit-of-distribution is like conflating CORBA with an Object System itself. It only makes sense in a very narrow context.

I have the good fortune of having spent a significant part of my career writing systems in Erlang. Erlang is a distributed, fault-tolerant language that uses processes as the fundamental unit of decomposition, in the same way that Objects are the fundamental unit of decomposition in languages like Java and C++. The difference is that processes are Microservices. They are completely independent and have an explicit API and life cycle. In fact, I have been describing programming in Erlang as modeling using very fine-grained SOA for almost fifteen years. I wish I had coined the term 'Microservices' instead, but the idea is the same. In Erlang there is no other approach to modeling, so you build your systems based on these tiny communicating services. It allows you to build your system as a group of small services that provide and receive work from one another.

Erlang has another feature that makes all this possible: Location Transparency. Location Transparency means that I don't have to worry about where a service is to be able to communicate with it. I only need some name, a unique identifier. I can use the system facilities to communicate with the service identified by that name and have some high expectation that the service will receive it.
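As a small illustration of the idea (a sketch, not production code), an Erlang process with a registered name and an explicit message API already looks like a tiny service:

```erlang
%% A minimal "microservice": an independent process with an
%% explicit API, addressed only by name.
-module(echo_service).
-export([start/0, echo/1]).

start() ->
    register(echo_service, spawn(fun loop/0)).

%% The public API: send a request, wait for the reply.
echo(Msg) ->
    echo_service ! {echo, self(), Msg},
    receive {reply, R} -> R end.

loop() ->
    receive
        {echo, From, Msg} ->
            From ! {reply, Msg},
            loop()
    end.
```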

With those two features Erlang has given me enough flexibility that I can get all the benefits of composing my system from Microservices and all the benefits of having the system be monolithic when it makes sense for it to be monolithic. Essentially, I don't have to worry about distribution during design and implementation at all. Erlang allows me to develop, test and do the initial deployments on a single node and then, as the scale increases, add new hardware to the system. This lets me push taking on the additional conceptual load of distribution to as late a point as possible.

Doing It Right in Other Languages

This isn't really a Microservice vs Monolith debate. It's a reaction to the lack of certain fundamental properties in existing Microservice platforms for other languages. One of the most important properties is Location Transparency. Often DNS or other location metadata is hard-coded into the system, forcing the implementer to make decisions at compile time that constrain what is deployed at run time. Location Transparency allows you to design your system using Microservices as a modeling construct without worrying about where those services will be deployed.

In total, there are four properties that a Microservice framework needs to support. These are:

  • Microservices are cheap to define.
  • Microservices are Locationally Transparent.
  • Service calls are context aware.
  • Groups of services can be declaratively described for deployment.

A framework that fully supports these four properties allows us to use Microservices as a unit of modeling in any language. Let's dive into each property individually.

Microservices are cheap to define

Using Microservices must be as efficient as possible to design and implement. In the best case it's at least as efficient as the native module constructs in a language, usually an Object or a Module. You also can't force the definition of a Microservice into some foreign syntax like XML or JSON. There will be a lot of Microservices, and the creation of those services needs to flow with the language itself. So, in Java, we might use annotations to take an object and expose its methods to a service compiler. This compiler could be run at compile time or at runtime; the details don't matter. In C++ we could use templates. In OCaml, the solution would be camlp4. All of these solutions use the same approach: they give the developer the ability to mark a Microservice as a Microservice and expose its API without significant development overhead.

Microservices are Locationally Transparent

Microservices must be named in some unique way, and those names must be propagated throughout the system. That may be a single node on a developer's desktop or ten thousand nodes spread throughout data centers in cities around the world. Systems that have this property allow names to be used for composition and provide useful Location Transparency.
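In Erlang, for example, a globally registered name gives exactly this: the caller addresses the service by name, wherever it happens to run. A sketch, where my_service is a hypothetical name:

```erlang
%% Wherever the service starts, it registers a cluster-wide name...
global:register_name(my_service, ServicePid),

%% ...and any node can then reach it by that name alone, without
%% knowing which node it lives on.
global:send(my_service, {request, self(), make_ref()}).
```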

Service calls are context aware

Service calls are the ubiquitous form of work delivery in Microservice-based systems. These calls must be as inexpensive as possible. The best way to accomplish this goal is to make the call infrastructure context aware. When the Provider and the Consumer of the service are both on the same node, the call must be made in the most efficient way possible. Preferably this would devolve into a simple function call. At the very least, serialization and deserialization should be avoided. Without this feature, the performance of a system that uses Microservices as a modeling approach will suffer.

Declaratively describe groups of services as nodes

It is important that we can easily change the composition of a node: that we can, through some declarative means, change which services run where in our system. This must also be done without recompiling the system. In the best case, it would also be done without redeploying the system, though in many cases that is too much to ask for.

So there must be some configuration based, preferably declarative approach to describing nodes in the network that comprises the distributed system.
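One hedged sketch of what such a description might look like, as an Erlang-style config fragment; the node_services key and the service names are entirely hypothetical:

```erlang
%% Hypothetical per-node configuration: which services this node
%% runs, read at boot rather than compiled into the system.
{node_services,
 [auth_service,
  billing_service,
  audit_service]}.
```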


The debate about how to design the initial architecture of a new system, Monolith vs Microservice, is predicated on the false assumption that these two approaches are mutually exclusive: that Microservices are a means of distribution rather than an architectural choice. By questioning that assumption and creating a Microservice framework that supports the key properties we have talked about in this article, we can build flexible, scalable systems that are also easy to design, test and build.