Rebar3 Features (part 3): Overrides

What do you do when a dependency has settings in its rebar.config that are causing you problems? Maybe it includes dependencies that are not needed in the general case, like meck or edown. Or it could have set a required OTP version that isn't accurate and that you want to remove. Or the app could contain C code that needs compiling and relied on rebar2's port compiler. These problems often lead to forks of projects, which isn't good for anyone, so in rebar3 we've added a feature called overrides.

Overrides allow any rebar.config at a higher level than a dependency to either replace or add to the configuration of all applications, or of an individual application, at a lower level.

The type spec for overrides looks like:

{overrides, [{add, atom(), [{atom(), any()}]}
             | {override, atom(), [{atom(), any()}]}
             | {override, [{atom(), any()}]}]}
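
The add form appends options to a dependency's existing configuration rather than replacing it. As a quick sketch of my own (the dependency name and option are only illustrative), appending a compiler option to a single dependency would look like:

{overrides, [{add, some_dep, [{erl_opts, [debug_info]}]}]}.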

The bitcask application from Basho is configured to be built with rebar2, but instead of forking the project and patching the config, a user can instead simply add the below overrides section to their rebar.config:

    {overrides,
     [{override, bitcask,
       [{deps, []},
        {plugins, [pc]},
        {artifacts, ["priv/"]},
        {provider_hooks, [{post,
                           [{compile, {pc, compile}},
                            {clean, {pc, clean}}]}]}]}]}.

This overrides entry for bitcask replaces its deps entry with an empty list, which removes the meck dependency (only needed for running tests) and the cuttlefish dependency (not required here). Plus, since the port compiler functionality was removed in rebar3, the port compiler plugin must be added and hooked in to compile and clean with provider_hooks.

The rest of the guts of the override will be covered in future posts; see the docs for more information on provider_hooks, artifacts and plugins.

Rebar3 Features (part 2): Dependency Tree

rebar3 tree is a new command that lets the user view which dependency pulled in each transitive dependency. This is especially useful with rebar3's dependency resolution strategy of "first wins".

Thanks go to Heinz N. Gies for pushing for this feature; the inspiration comes from leiningen's lein deps :tree command.

As an example, I've cloned chef-server and built oc_erchef under src/oc_erchef. It is unique in that it has both a top-level app, oc_erchef, under src/ and additional project apps under apps/.

Additionally, I've added _checkouts/erlware_commons to show how a checkout dependency (an application linked under _checkouts/) is moved to the top level and marked as (checkout app), and I've switched lager to a Hex package dependency; the rest are git-sourced dependencies.
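
For reference, a Hex package dependency and a git-sourced dependency are declared differently in rebar.config. Here is a rough sketch of my own (the version and URL below are illustrative, not taken from the chef-server config):

{deps, [
        %% Hex package dependency: just a name and a version
        {lager, "2.1.1"},
        %% git-sourced dependency: a name plus an explicit source
        {jiffy, {git, "https://github.com/davisp/jiffy.git", {branch, "master"}}}
       ]}.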

$ rebar3 tree
├─ chef_db─12.1.2-6047c67 (project app)
├─ chef_index─12.1.2-6047c67 (project app)
├─ chef_objects─12.1.2-6047c67 (project app)
├─ chef_test─12.1.2-6047c67 (project app)
├─ depsolver─12.1.2-6047c67 (project app)
├─ erlware_commons─0.16.0 (checkout app)
├─ oc_chef_authz─12.1.2-6047c67 (project app)
├─ oc_chef_wm─12.1.2-6047c67 (project app)
└─ oc_erchef─12.1.2-6047c67 (project app)
   ├─ bcrypt─0.0.0+build.87.ref085eb59 (git repo)
   ├─ chef_authn─0.0.0+build.86.refe7850d0 (git repo)
   ├─ darklaunch─0.0.0+build.72.ref05881cb (git repo)
   │  └─ meck─0.8.3 (git repo)
   ├─ efast_xs─0.1.0 (git repo)
   ├─ ej─0.0.0+build.87.ref132a9a3 (git repo)
   ├─ envy─0.0.0+build.38.ref954c87a (git repo)
   ├─ eper─0.90.0 (git repo)
   ├─ folsom─0.0.0+build.335.ref38e2cce (git repo)
   │  └─ bear─0.0.0+build.32.ref1192345 (git repo)
   ├─ folsom_graphite─0.0.0+build.41.refd4ce9bf (git repo)
   ├─ gen_bunny─0.1 (git repo)
   │  ├─ amqp_client─0.0.0 (git repo)
   │  └─ rabbit_common─0.0.0 (git repo)
   │     └─ gen_server2─1.0.0 (git repo)
   ├─ ibrowse─ (git repo)
   ├─ jiffy─0.0.0+build.131.reff661ee9 (git repo)
   ├─ lager─2.1.1 (hex package)
   │  └─ goldrush─0.1.6 (hex package)
   ├─ mini_s3─0.0.1 (git repo)
   ├─ mixer─0.1.1 (git repo)
   ├─ neotoma─0.0.0+build.125.ref760928e (git repo)
   ├─ opscoderl_folsom─0.0.1 (git repo)
   ├─ opscoderl_httpc─0.0.1 (git repo)
   ├─ opscoderl_wm─0.0.1 (git repo)
   │  └─ webmachine─0.0.0+build.526.ref7677c24 (git repo)
   │     └─ mochiweb─2.9.0 (git repo)
   ├─ pooler─0.0.0+build.159.ref7bb8ab8 (git repo)
   ├─ sqerl─1.0.0 (git repo)
   │  └─ epgsql─3.1.0 (git repo)
   ├─ stats_hero─0.0.0+build.73.refff00041 (git repo)
   │  └─ edown─0.2.4+build.66.ref30a9f78 (git repo)
   ├─ sync─0.1.3 (git repo)
   └─ uuid─1.3.2 (git repo)
      └─ quickrand─1.3.2 (git repo)

Rebar3 Features (part 1): Local install and upgrade

Rebar3 comes with a lot of new and improved features. I'll be publishing posts here to highlight some of these features over the coming weeks.

Installing and Upgrading Rebar3

Rebar is an escript bundle, and this has been very important to its ease of use: an escript gives Erlang users a single file that acts as an executable, regardless of the underlying Erlang installation, and it can even be committed to a project's repository.

So, rebar3 is also an escript. Pre-built escripts can be downloaded from s3:

$ wget

or clone and bootstrap the git repo:

$ git clone
$ cd rebar3
$ ./bootstrap

However, escripts do have their drawbacks. They are slower to start, they rely on the old Erlang io server (which makes rebar3 shell not act exactly the same as an erl shell), and bundling the escript in a repository alongside the code often means it never gets upgraded.

So in rebar3 we've introduced the ability to extract the escript archive along with a run script to ~/.cache/rebar3/, plus a command that will fetch and do the same for the latest escript release of rebar3 from s3.

The install command will be under a provider namespace; currently it lives under unstable, the namespace for experimental features that are likely to change in the near future and that aren't yet considered stable:

$ ./rebar3 unstable install
===> Extracting rebar3 libs to $HOME/.cache/rebar3/lib...
===> Writing rebar3 run script $HOME/.cache/rebar3/bin/rebar3...
===> Add to $PATH for use: export PATH=$HOME/.cache/rebar3/bin:$PATH

Follow the instructions for adding the rebar3 bin directory to your $PATH, and optionally add it to your shell's configuration, such as ~/.bashrc or ~/.zshrc.

To upgrade rebar3, use rebar3 unstable upgrade, which will fetch the latest escript and extract it:

$ rebar3 unstable upgrade

We hope this new method for installing, upgrading and running rebar3 will allow people to be comfortable with not bundling rebar3 in their project's repositories, resolve any compilation speed differences between rebar3 and alternative Erlang build tools and help keep everyone up to date with the latest features and bug fixes.

Erlang Postgres Connection Pool with Episcina

Almost exactly a year ago I was looking to merge the many forks of Will Glozer's Postgres client for use in a project at Heroku. Instead, Semiocast released their client; I gave it a try and never looked back. (But note that David Welton, a braver person than me, is working on merging the forks of epgsql at this time.) I found Semiocast's client to be clean and stable, and I liked the interface better.

At the same time I was in need of a connection pooler. Many have relied on poolboy or pooler for this purpose, but neither actually fits the use case of connection pooling that well. Luckily Eric and Jordan were in need at the same time and created the Erlware project episcina, which they based on Joseph Wecker's fork of Will Glozer's epgsql pool. Episcina differs in that it is purely for connection pooling: it is not for pooling workers and it is not for pooling generic processes.

Here I'll show how I combined the two in a simple example.

To start we have a sys.config file to configure episcina:

{episcina, [{pools, [{primary,
                      [{size, 10},
                       {timeout, 10000},
                       {connect_provider, {pp_db, open,
                                           [[{host, "localhost"}
                                            ,{database, "postgres_pool"}
                                            ,{port, 5432}
                                            ,{user, "postgres"}
                                            ,{password, "password"}]]}},
                       {close_provider, {pp_db, close, []}}]}]}]}

A key thing to note here is that the connect and close providers are function calls to modules within the project, not to the Postgres client directly. Episcina requires a return value of {ok, pid()} while the Semiocast client returns {pgsql_connection, pid()}, so we wrap the connection calls to get around that:

-spec get_connection(atom()) -> {pgsql_connection, pid()} | {error, timeout}.
get_connection(Pool) ->
    case episcina:get_connection(Pool) of
        {ok, Pid} ->
            {pgsql_connection, Pid};
        {error, timeout} ->
            {error, timeout}
    end.

-spec return_connection(atom(), {pgsql_connection, pid()}) -> ok.  
return_connection(Pool, {pgsql_connection, Pid}) ->  
    episcina:return_connection(Pool, Pid).  

-spec open(list()) -> {ok, pid()}.  
open(DBArgs) ->  
    {pgsql_connection, Pid} = pgsql_connection:open(DBArgs),  
    {ok, Pid}.  

-spec close(pid()) -> ok.  
close(Pid) ->  
    pgsql_connection:close({pgsql_connection, Pid}).  

And here is the query function to get a connection and return it to the pool after completion:

-spec query(string()) -> tuple().
query(Query) ->
    C = get_connection(primary),
    try
        pgsql_connection:simple_query(Query, [], infinity, C)
    after
        return_connection(primary, C)
    end.
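
To make the checkout/return pattern harder to get wrong, the same idea can be wrapped in a higher-order helper. This is a sketch of my own for illustration; with_connection/2 is not part of episcina or the original code:

%% Check out a connection, run Fun on it and always return the connection
%% to the pool, even if Fun throws.
-spec with_connection(atom(), fun(({pgsql_connection, pid()}) -> term())) ->
          term() | {error, timeout}.
with_connection(Pool, Fun) ->
    case episcina:get_connection(Pool) of
        {ok, Pid} ->
            C = {pgsql_connection, Pid},
            try
                Fun(C)
            after
                episcina:return_connection(Pool, Pid)
            end;
        {error, timeout} ->
            {error, timeout}
    end.

With that helper, query/1 could be written as with_connection(primary, fun(C) -> pgsql_connection:simple_query(Query, [], infinity, C) end).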

This example project uses relx to build a release which will start episcina on boot:

{release, {postgres_pool, "0.0.1"},
 [postgres_pool]}.

{sys_config, "./config/sys.config"}.  
{dev_mode, true}.  

{include_erts, true}.  
{extended_start_script, true}.  

Boot the release to an interactive shell and play around:

λ _rel/bin/postgres_pool console  
(postgres_pool@localhost)1> pp_db:query("SELECT 1").

Some Thoughts on Go and Erlang

UPDATE: I'm seeing that I did not make the point of this post clear. I am not saying Go is wrong or should change because it isn't like Erlang. What I am attempting to show are the choices Go made that keep it from being an alternative to Erlang for backends where availability and low latency under high numbers of concurrent requests are requirements. And notice I'm not writing this about a language like Julia. I have heard Go pitched as an alternative to Erlang, not only for new projects but for replacing old ones. No one would say the same about Julia, but Go and Node.js are seen by some as friendlier alternatives. And no, Erlang isn't the solution for everything! But this post is specifically about where Erlang is appropriate and Go is lacking.

I'm going to attempt to leave out my subjective reasons for disliking parts of Go, such as the syntax or the lack of pattern matching, and explain objective reasons why the language and runtime are not fit for certain types of systems. But I'll start with the good.

Where Go Shines


As Rob Pike wrote, his biggest surprise was that Go is mostly gaining developers from Python and Ruby, not C++. To me this trend has been great to see. No more slow clients installed through pip or gems! (Though for some reason Node.js for clients is growing, wtf Keybase?)

Go provides developers with a fast, easy-to-use, high-level, statically typed language with garbage collection and concurrency primitives. It would be great for C++ developers to move to Go as well; the programs that crash constantly on my machine are proprietary C++ applications that love to misuse memory -- Hipchat and Spotify. But as Rob Pike pointed out, the C++ developers don't want the simplified, yet powerful, world of Go. Ruby and Python developers, rightly, do.


Getting up and running building executables that depend on third-party libraries is easy and doesn't require third-party tools; it all comes with Go. While the tooling isn't perfect -- there are tools like Godep to fill in some gaps -- it is still a huge win for the language.

Where Go Falls Short

Some of Go's design decisions are detrimental when it comes to writing low-latency fault-tolerant systems.


Concurrency

Yes, I listed concurrency primitives as a plus in the first section. They are a plus in the case of replacing Ruby or Python or C++ for clients. But when it comes to complex backends that need to be fault-tolerant, Go is as broken as any other language with shared state.

Pre-emptive Scheduling

Here Go has gotten much better. Go's pre-emption used to happen only on syscalls, but now pre-emption can happen at the stack check a goroutine performs on every function call: the check may be marked to fail (causing pre-emption) if the goroutine has been running for longer than some time period. While this is an improvement, it still lags behind Erlang's reduction counting and the newly added dirty schedulers for improved integration with C.

Garbage Collection

In Go, garbage collection is a global mark and sweep, which pauses all goroutines during the sweep, and that is terrible for latency. Again, low latency is hard; the more the runtime can do for you, the better.

Error Handling

This isn't just about having no exceptions and having to check whether a second return value is nil. Goroutines have no identity, which means Go lacks the ability to link or monitor goroutines. No linking (instead using panic and defer) and no process isolation mean you cannot fall back on crashing and restarting in a stable state. There will be bugs in production, and a lot of those bugs will be Heisenbugs, so being able to lay out processes, isolated from each other but linked based on their dependencies, is key for fault tolerance.
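
For contrast, here is a minimal sketch of my own (not from any Go or Erlang documentation) of what process identity buys you in Erlang: any process can be monitored by its pid, and its exit arrives as an ordinary message, which is the primitive supervisors are built on.

-module(monitor_example).
-export([run/0]).

%% Spawn a worker and monitor it atomically; when it exits, the runtime
%% delivers a 'DOWN' message carrying its pid and exit reason.
run() ->
    {Pid, Ref} = spawn_monitor(fun() -> exit(boom) end),
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            io:format("worker ~p exited with reason ~p~n", [Pid, Reason])
    end.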

And on top of these major omissions in dealing with faults Go has nil. How in 2014 this is considered OK I haven't wrapped my mind around yet. I'll just leave it at that, with a befuddled look.


Introspection

Not having a REPL is annoying for development, but having no remote shell for running systems is a deal breaker. Erlang has impressive tracing capabilities and tools built on them, like recon_trace. Erlang's introspection greatly improves development as well as maintenance of complex running systems.

Static Linking

Yes, another thing that was in the positives but becomes a negative in systems that are expected to be long running. While not being statically linked does mean slower execution, it gives Erlang the advantage of code replacement on running systems. It is important to note that due to Erlang's scheduling and garbage collection strategies, many of these speed tradeoffs do not mean an Erlang system will be slower than an implementation in another language, especially if the Erlang implementation is the only one still running.
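
As a small illustration of that advantage (a sketch of my own, where my_module would be any module you just recompiled), replacing a module on a running node takes a couple of calls:

-module(hot_reload).
-export([reload/1]).

%% Purge any old object code for the module, then load the newly compiled
%% version into the running node; callers using fully qualified calls pick
%% up the new code on their next call.
reload(Module) ->
    code:purge(Module),
    {module, Module} = code:load_file(Module),
    ok.

For example, hot_reload:reload(my_module) swaps in the new version of my_module without restarting the system.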

Code Organization

The OTP framework provides libraries for common patterns. OTP not only means less code to write and better abstractions, it also improves readability. Following the OTP standards with applications, supervisors and workers (gen_server, gen_fsm, gen_event) means a developer new to the program is able to work down through the tree of processes and see how they interact. Go's channels, unidentifiable goroutines and lack of patterns for separating goroutines into separate modules lead to code that is much harder to reason about.
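
As a rough illustration of that layout (a sketch of my own, where example_worker is a hypothetical gen_server module), a one_for_one supervisor looks like this; a new developer can start here and work down to the workers it restarts:

-module(example_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Restart any crashed child, allowing at most 5 restarts in 10 seconds
%% before the supervisor itself gives up and escalates.
init([]) ->
    {ok, {{one_for_one, 5, 10},
          [{example_worker,                          % child id
            {example_worker, start_link, []},        % {Module, Function, Args} to start the child
            permanent, 5000, worker, [example_worker]}]}}.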

Can or Even Should Go Change?

Erlang has been around for decades and Go is the new kid on the block, so can Go improve in these areas? Some of them, yes, but most of it cannot change, because the choices made in the design of the language did not prioritize fault tolerance and low latency.

This doesn't mean Go is "bad" or "wrong". It simply makes different choices and thus is better suited for different problems than a language like Erlang.