Adding JS to All Opa Resources: Use Case Google Analytics

I decided I wanted to add Google Analytics to OpaDo but had no idea how to easily tell each page to include the necessary JavaScript. I asked on the Opa mailing list and got a quick and simple response: Frederic Ye pointed me to Resource.register_external_js.

It couldn’t have been any easier. You simply place your google_analytics.js file in your project and use the Resource.register_external_js function to modify the default customization of all Resources. See the code below or in the GitHub repo.

package opado.main

import opado.user
import opado.admin
import opado.todo

urls : Parser.general_parser(http_request -> resource) =
  parser
  | {Rule.debug_parse_string(s -> Log.notice("URL",s))} Rule.fail -> error("")
  | "/todos" result={Todo.resource} -> result
  | "/user" result={User.resource} -> result
  | "/login" result={User.resource} -> result
  | "/admin" result={Admin.resource} -> result
  | (.*) result={Todo.resource} -> result

do Resource.register_external_js("/resources/js/google_analytics.js")
server = Server.of_bundle([@static_resource_directory("resources")])
server = Server.make(urls)

For a longer article/tutorial on dealing with external resources, check out this blog post from the Opa team: Dealing with External Resources.

Opa Database Migrations

Nicolas Glondu posted a comment on an earlier post detailing ways of doing database migrations with Opa. I thought it was useful enough that I should put up a post around it:

If you have complex changes to database structures in an Opa program, you have two choices:

1 – Keep both structures in your database and create a function which populates the new empty field from the other fields. Launch your program once with --db-force-upgrade and run this function once (adding fields is safe, but the new fields start out empty). After this, remove the old fields and the migration function, and re-run your program with --db-force-upgrade once more (removing fields is also safe). After this, you can keep using your program normally. A bit long, but convenient for changes in small applications (with one instance running). You don’t have to relaunch your program immediately, and you can spread your migration over two program updates.

2 – Create another Opa program with both database definitions but with root nodes named differently [1]. Your program should take data from the original node and put it in the new node with your changes. That way you have a standalone Opa application for your migration, which is more convenient if your program is used by other users. Once your program is compiled, you can use command line options (--db-local) to use your true program’s database as the input root node and to put the output root node in a safe place. However, I don’t know if it works if your program has no specific root node defined.

[1] You can define a root node with `database my_root = @meta` and use it with `db /my_root/toto : stringmap(int)`. You can have multiple root nodes in one program.

OpaDo: Personal ToDo Lists

This is a continuation of two past posts (one, two) on my first application with Opa, called OpaDo. You can try the live demo here and check out the full source code on GitHub.

In updating OpaDo to add user accounts, the project structure has been changed a bit and modularized. Below is the new project layout.

opado/
├── Makefile
├── README.md
├── dotcloud.yml
├── resources
│   ├── destroy.png
│   └── todos.css
└── src
    ├── main.opa
    ├── todo.opa
    └── user.opa

Now there are main, todo and user modules. The main module is the entry point for the app and looks like:

package opado.main

import opado.user
import opado.todo

urls : Parser.general_parser(http_request -> resource) =
  parser
  | {Rule.debug_parse_string(s -> Log.notice("URL",s))} Rule.fail -> error("")
  | "/todos" result={Todo.resource} -> result
  | "/user" result={User.resource} -> result
  | "/login" result={User.resource} -> result
  | (.*) result={Todo.resource} -> result

server = Server.of_bundle([@static_resource_directory("resources")])
server = Server.make(urls)

Here we define the name of this package and import the user and todo modules. Next is the URL matching code. urls is a parser that takes an HTTP request and returns a resource. The matching is pretty straightforward. For example:

  | "/todos" result={Todo.resource} -> result

Here we are matching on URLs that begin with /todos but could have anything after that. Whatever is contained after /todos is passed to Todo.resource, whose output the variable result is set to. And finally that result is returned.

The last two lines simply define the resource directory for the server and pass in the matching function for the HTTP requests.

The todo resource isn’t important to us in this post since it has hardly changed. But there are two important changes:

db /todo_items : stringmap(stringmap(todo_item))
db /todo_items[_][_]/done = false

Here we see that the /todo_items database is no longer simply a stringmap of todo_item‘s but a stringmap of stringmaps of them. This is so we can reference items by a user identifier. For example, a user identified by the string "user01" who has a todo item identified by "aaa" would be read from the database as /todo_items["user01"]["aaa"].

There are a few other changes to the todo module so that items are properly inserted for the logged-in user and deletes are done in the second stringmap. But we’ll move on to the user module now.

Much of the user module was taken from Matthieu Guffroy’s OpaCMS code on GitHub, but I’ve made a number of modifications for my needs.

@abstract type User.password = string
@abstract type User.ref = string

type User.t =
  {
    username : string
    fullname : string
    password : User.password
  }

type User.status = { logged : User.ref } / { unlogged }
type User.info = UserContext.t(User.status)
type User.map('a) = ordered_map(User.ref, 'a, String.order)

db /users : User.map(User.t)

User_data = {{
  mk_ref( login : string ) : User.ref =
    String.to_lower(login)

  ref_to_string( login : User.ref ) : string =
    login

  save( ref : User.ref, user : User.t ) : void =
    /users[ref] <- user

  get( ref : User.ref ) : option(User.t) =
    ?/users[ref]
}}

Above we have the data, types and database definitions necessary to handle the users.

User.t provides the record for storing the necessary user data. Next, we have types for tracking the user’s status: whether they are logged in or not.

UserContext is a module provided by Opa for associating user values with the client — via cookies. The data for a user can only be accessed by the user that owns it.

The User_data module provides functions for accessing and manipulating users.

Now we can look at the User module.

User = {{

  @private state = UserContext.make({ unlogged } : User.status)

  create(username, password) =
    do match ?/users[username] with
      | {none} ->
          user : User.t =
            { username=username ;
              fullname="" ;
              password = Crypto.Hash.sha2(password) }
          /users[username] <- user

      | _ -> void
    Client.goto("/login")

At the beginning of the User module we declare a UserContext and a function for creating new users. The function simply checks, with the match statement, whether the user already exists, and if not creates a new User.t record and inserts it into the users database.

If we wish to log in, we must also modify the UserContext:

  login(login, password) =
    useref = User_data.mk_ref(login)
    user = User_data.get(useref)
    do match user with
     | {some = u} -> if u.password == Crypto.Hash.sha2(password) then
                       UserContext.change(( _ -> { logged = User_data.mk_ref(login) }), state)
     | _ -> void
    Client.goto("/todos")

The function attempts to read the user from the database and checks whether the passwords match. If so, it sets the UserContext to logged in. The function then tells the client to go to /todos. If the login was unsuccessful, it doesn’t matter: the user will simply end up redirected to the sign-up page.

Obviously, better error handling and notification is the next step for the application.

The last interesting part of this, I think, is the request matching. The rest of the code is mostly just HTML and piecing together the functions I already described.

  resource : Parser.general_parser(http_request -> resource) =
    parser
    | "/new" ->
      _req -> Resource.styled_page("New User", ["/resources/todos.css"], new())
    | "/edit" ->
      _req -> edit()
    | "/view/" login=(.*) ->
      _req -> view(Text.to_string(login))
    | .* ->
      _req -> start()

The key match to look at is:

    | "/view/" login=(.*) ->
      _req -> view(Text.to_string(login))

This shows the request matching /view, which in this case comes after the main module matches /user and routes to the User module’s resource. Then we have login=(.*), which matches the variable login to the rest of the URL. This variable can then be used in view(Text.to_string(login)) so the view function knows which user is being asked for.

There’ll be more to come. Next, I need to add some validation, an admin page and then the ability for users to have categories to organize their todo items under.

And let me know what else people would like to see!

Announcing ErlangDC: An Epic One-Day Erlang Conference in the Washington, DC Area

We are happy to announce ErlangDC: An Epic One-Day Erlang Conference in the Washington, DC area.

New to Erlang? Learn the basics — and find out why Erlang should be in your programmer’s toolkit — during the morning bootcamp. Meet fellow DC-area Erlang enthusiasts at lunch. Learn advanced Erlang techniques in the afternoon tech talks. Swap Erlang war stories and make lifelong friends over pints at the post-conference Happy Hour.

It will be reliably awesome. Just like Erlang.

ErlangDC is organized by the local DC Erlang Meetup Group, with help from Erlang Solutions and Erlang Factory. The event will be hosted at the AOL Headquarters in Dulles, VA.

Sign up when you get a chance; the early-bird tickets have been selling out fast! Tickets are only $40.

OpaDo Data Storage

OpaDo (a port of the TodoMVC app to Opa) now persists todo items to the Opa database. The new version is up on dotcloud: http://opado-tristan.sloughter.dotcloud.com/

I’ve added a todo_item type which stores the item’s value and two other attributes we won’t use until the next post, when users will have accounts with their own todo_item stores.

type todo_item = { user_id : string
                 ; value : string
                 ; created_at : string
                 }

To tell Opa where to store the records we’ll create, we provide a path to the Opa db function and set its type. For our todo items we use a stringmap, since currently the ids are randomly generated strings (I know, I know, but it’s just an example!). We can then reference a record in the database with the path /todo_items[some_id_string].

db /todo_items : stringmap(todo_item)

Now we can insert todo_item‘s at this db path like so:

/todo_items[id] <- { value=x user_id="" created_at="" }

For now user_id and created_at are empty, but I’ll be updating that when I add user accounts.

Since we are storing each item, we need to populate the list on page load with what’s already stored:

add_todos() =
  items = /todo_items
  StringMap.iter((x, y -> add_todo_to_page(x, y.value)), items)

The first line of the function sets the variable items to all the todo_item records in the database. We use StringMap.iter to take each todo_item and add it to the page. The first argument to the anonymous function is the id the item is stored in the database with (the id we will use in the HTML as well) and the second is the actual todo_item, so we take its value field and pass that to the add_todo_to_page function along with the id.

To have add_todos run when the list element is ready, we add an onready attribute that calls add_todos:

<ul id=#todo_list onready={_ -> add_todos() } ></ul>

Lastly, we want to be able to delete a todo_item from the database:

remove_item(id: string) =
  do Dom.remove(Dom.select_parent_one(#{id}))
  do Db.remove(@/todo_items[id])
  update_counts()

remove_all_done() =
  Dom.iter(x -> remove_item(Dom.get_id(x)), Dom.select_class("done"))

The main piece to notice here is @/todo_items[id] in Db.remove(). The @ is saying that we are passing the path itself to remove() and not the value at that path.

Nice and easy! No database to set up or deploy, just Opa. Next time we’ll add user accounts, so we don’t all have to share the same todo list.

TodoMVC in Opa

Edit: I just learned that dotcloud supports Opa! So I’ve pushed OpaDo and you can see a demo here http://opado-tristan.sloughter.dotcloud.com/

I wanted something quick and simple to do in Opa to give it a try, so I decided to implement the TodoMVC example that has been redone in almost all JavaScript frameworks: https://github.com/addyosmani/todomvc.

The code can be found on GitHub here: https://github.com/tsloughter/OpaDo

Opa is unique in that it is not only a new language but also a new web server and database. While Opa’s page pushes the idea that it’s for the cloud with easy distribution, I found the nicest parts to be the static typing and having no need to write JavaScript.

The functions below handle interactions with the Todo items. It somewhat reminds me of Lift but taken even farther.

/**
 * {1 User interface}
 */
update_counts() =
  num_done = Dom.length(Dom.select_class("done"))
  total = Dom.length(Dom.select_class("todo"))
  do Dom.set_text(#number_done, Int.to_string(num_done))
  Dom.set_text(#number_left, Int.to_string(total - num_done))

make_done(id: string) =
  do if Dom.is_checked(Dom.select_inside(#{id}, Dom.select_raw("input"))) then
       Dom.add_class(#{id}, "done")
     else
       Dom.remove_class(#{id}, "done")
  update_counts()

remove_item(id: string) =
  do Dom.remove(#{id})
  update_counts()

remove_all_done() =
  do Dom.remove(Dom.select_parent_one(Dom.select_class("done")))
  update_counts()

add_todo(x: string) =
  id = Random.string(8)
  li_id = Random.string(8)
  line = <li id={ li_id }>
           <div class="todo" id={ id }>
             <div class="display">
               <input class="check" type="checkbox" onclick={_ -> make_done(id) } />
               <div class="todo_content">{ x }</div>
               <span class="todo_destroy" onclick={_ -> remove_item(li_id) }></span>
             </div>
             <div class="edit">
               <input class="todo-input" type="text" value="" />
             </div>
           </div>
         </li>
  do Dom.transform([#todo_list +<- line ])
  do Dom.scroll_to_bottom(#todo_list)
  do Dom.set_value(#new_todo, "")
  update_counts()

It is unique in combining the HTML into the language itself. Some have argued against this, but when it works well it makes perfect sense. I don’t want to have to convert a designer’s HTML into some other representation! And being able to have type-checked dynamic functionality within the HTML is a boon. Even with just this simple program, I found the usefulness of the type checker outstanding.

Next we have the main outline of the page and the entry part for the program.

start() =
  <div id="todoapp">
    <div class="title"> <h1>Todos</h1> </div>
    <div class="content">
      <div id=#create_todo>
        <input id=#new_todo placeholder="What needs to be done?" type="text"
               onnewline={_ -> add_todo(Dom.get_value(#new_todo)) } />
      </div>
      <div id=#todos> <ul id=#todo_list></ul> </div>
      <div id="todo_stats">
        <span class="todo_count">
          <span id=#number_left class="number">0</span>
          <span class="word">items</span> left.
        </span>
        <span class="todo_clear">
          <a href="#" onclick={_ -> remove_all_done() }>
            Clear <span id=#number_done class="number-done">0</span>
            completed <span class="word-done">items</span>
          </a>
        </span>
      </div>
    </div>
  </div>

/**
 * {1 Application}
 */

/**
 * Main entry point.
 */
server = Server.one_page_bundle("Todo",
       [@static_resource_directory("resources")],
       ["resources/todos.css"], start)

It won’t be able to replace my use of Erlang for the backend and CoffeeScript for the frontend, but it looks very promising.

I’ll be extending this example to include persistence, sessions and users and will add posts as I complete those.

CentOS 6: Chef Node Creation

I thought I’d share the scripts I use to take a fresh CentOS 6 install and configure it to work with a Chef server. Maybe it’s not as easy as when running in a virtualized environment, but it saves plenty of time.

On the new node I run the setup_client.sh script, which, once everything is installed on the node, calls client_gen.sh on the Chef server. I left the version numbers for RubyGems and Chef in the script so you know which versions I’ve tested this with.

setup_client.sh

#!/bin/bash
CHEF_IP=XXX.XXX.XXX.XXX
CHEF=http://$CHEF_IP:4000
CHEF_USER=XXXXX
NODE=XXXXXXXX
RUBYGEMS_VSN=1.3.7
CHEF_VSN=0.9.16

sudo rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm

sudo yum update

sudo yum install ruby ruby-shadow ruby-ri ruby-rdoc gcc gcc-c++ ruby-devel ruby-static

cd /tmp
wget http://production.cf.rubygems.org/rubygems/rubygems-$RUBYGEMS_VSN.tgz
tar zxf rubygems-$RUBYGEMS_VSN.tgz
cd rubygems-$RUBYGEMS_VSN
sudo ruby setup.rb --no-format-executable

sudo gem install chef -v $CHEF_VSN 

mkdir ~/.chef

cat > ~/.chef/knife.rb <<EOF
log_level              :info
log_location           STDOUT
node_name              '$NODE'
client_key             '/home/$USER/.chef/$NODE.pem'
validation_client_name 'chef-validator'
validation_key         '/etc/chef/validation.pem'
chef_server_url        '$CHEF'
cache_type             'BasicFile'
cache_options( :path => '/home/$USER/.chef/checksums' )
EOF

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub $CHEF_IP

ssh $CHEF_IP "yes | knife client delete $NODE"
ssh $CHEF_IP "yes | /home/$CHEF_USER/client_gen.sh $NODE"
scp $CHEF_IP:/tmp/$NODE ~/.chef/$NODE.pem

client_gen.sh

#!/bin/bash
knife client create $1 -n -a -f /tmp/$1
knife node create $1 --no-editor

Mixed Erlang and Scala with Scalang

This is a summary of a talk by Cliff Moon (@moonpolysoft) given at Strangeloop about building mixed Erlang and Scala systems with Scalang. Boundary does network analytics as a service, and their architecture uses a mixture of Erlang and Scala. Erlang is very good at things like zero-downtime deploys: the public-facing parts of the system can keep very low downtime and don’t even have to go down for deploys. On the data processing side, Erlang is very bad at dealing with numbers, and generally at anything where mutability has a high value.

Making the Scala side talk to the Erlang side was required by the language choices for the system. It turns out that Erlang ships with Jinterface, which is just the thing – or so it seemed. Unfortunately it ended up being really cumbersome. Jinterface is at the wrong level of abstraction: Erlang is all about actors, and Jinterface only exposes mailboxes. All the rich interface you get in Erlang with actors goes away when you are stuck with only mailboxes. The other problem is that it is not performant. Primitives end up getting wrapped twice, first by Jinterface and then by a case class in Scala, which is just too heavyweight when trying to process millions of pieces of data.

They decided to take a step back and build something that would be easier to use. They were looking for more correctness in behavior: things that behave like Erlang actors. They wanted performance, and then simplicity: not having to deal with custom serializers and other such cruft. The internal architecture is built on NIO sockets and Netty. There are also a bunch of codecs to do encoding and decoding between Erlang and Scala, and a delivery system which deals with registration and with actors, which run in Jetlang, an actor framework for the JVM.

The main interface into the system on the JVM side is something called a Node – this should be very familiar to Erlangers. It takes a node name and a magic cookie. So, pretty much exactly what you would expect.

Once you have a node, you want to make a process. Processes are spawned and messages are sent with the ! operator, just like in Erlang. You can send messages to a Pid, to a local registered name, or to a remote registered name by supplying a {name, node} tuple. So, basically just like Erlang.
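Since Scalang nodes register with EPMD like any other distributed Erlang node (as the editorial below notes), nothing special is needed on the Erlang side. Here is a minimal sketch of the Erlang half of the conversation, with 'scala@host' and echo as made-up node and process names:

ping_and_send() ->
    pong = net_adm:ping('scala@host'),       % the Scalang node answers like any Erlang node
    {echo, 'scala@host'} ! {self(), hello},  % same send syntax as between two Erlang nodes
    receive
        Reply -> {ok, Reply}
    after 5000 ->
        timeout
    end.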

[Photo: Cliff Moon talking about processes in Scalang]

Error Handling in Scalang

Scalang fires a link-breakage exit signal any time a Scalang process throws an uncaught exception, and this works between Erlang and the JVM. The one problem is that this is not preemptive on the JVM side, as lightweight preemptive actors seem hard to do on the JVM.

Erlang to Scala Type Mappings

Most things are a one-to-one mapping for primitives. Anything that does not fit, like numbers, will be turned into something reasonable on the Scala side. If that is not quite good enough for you and you want rich type mappings, you can use a rich type mapping plugin to turn rich types into records and vice versa.

Scalang Services

One of the big things about Erlang is OTP. You typically use gen_servers and other behaviours, which give you messaging primitives for sync and async calls and lots of other good stuff. Scalang wants to be able to interact with gen_servers transparently on the other side, so three callbacks are implemented for Scalang processes:

handleCall
handleCast
handleInfo

These will look very familiar to most Erlangers (if you are coding with OTP like you should be). Scalang also supports anonymous processes: you can just spawn processes with funs for those times when you don’t want a gen_server.
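Because these services are meant to interoperate with gen_servers, the Erlang side should be able to address them with ordinary OTP calls. A sketch, with my_service and 'scala@host' as made-up names: a synchronous call lands in handleCall, an asynchronous cast in handleCast, and a bare message in handleInfo.

talk_to_service() ->
    %% synchronous request, handled by the service's handleCall
    Result = gen_server:call({my_service, 'scala@host'}, {lookup, some_key}),
    %% fire-and-forget, handled by handleCast
    ok = gen_server:cast({my_service, 'scala@host'}, {update, some_key, 42}),
    %% a plain message, handled by handleInfo
    {my_service, 'scala@host'} ! ping,
    Result.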

Runtime Metrics

This is what Boundary does, so they wanted to bake metrics into all of their JVM stuff. Scalang has a full suite of runtime metrics. You get things like meters showing how many messages have come across the wire for each process, histograms for process performance, time spent in serialization, message queue sizes, and quite a number of other metrics. The idea was to make it similar to pulling up a remote shell into an Erlang instance and being able to query it to see where the bottlenecks are.

Scalang JVM Performance Tuning

Scalang is all about running fast, and it aims to make things easily tunable. It turns out one of the best ways to performance tune is to screw around with the thread pools. The ThreadPoolFactory lets you swap in different implementations. There are four kinds:

Boss Pool – initial connection and accept handling
Worker Pool – non blocking reads and writes
Actor Pool – process callbacks
Batch executor – per process execution logic

Editorial: The system really looks to be quite powerful. It allows for the features I described above as well as easy remote shell invocation of JVM nodes. It actually interacts nicely with EPMD for native-feeling messaging. The system seems to be abstracted more appropriately than any Erlang-to-X interop library I have run across. I look forward to hearing about experiences using it.

– Martin Logan (@martinjlogan). Also, if you are into distributed systems and metrics, you should check out Camp DevOps Conf in Chicago this October.

Here is where you can find the code and example usage information: https://github.com/boundary/scalang

Batman.js vs Knockout.js

The following is NOT a tutorial for either Batman.js or Knockout.js. It is instead a side-by-side comparison of the two for creating a user creation form that POSTs the new user’s data as JSON to the backend.

The method of web development I’ve come to like best is based on a heavy JavaScript frontend (though written in CoffeeScript) communicating with a backend via a RESTful interface. This is appealing because you are not cluttering the application logic with view-related code. It also allows me to use Erlang, my language of choice, which is great at implementing RESTful interfaces with Webmachine but not so great for trying to build a site the way you would with Rails.
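For flavor, here is a minimal sketch of the kind of Webmachine resource such a backend might expose; the module name and route are assumptions, and the JSON handling is elided:

-module(user_resource).
-export([init/1, allowed_methods/2, process_post/2]).

-include_lib("webmachine/include/webmachine.hrl").

init([]) -> {ok, undefined}.

%% Only POST is accepted on this resource.
allowed_methods(ReqData, State) ->
    {['POST'], ReqData, State}.

%% Receives the JSON body the frontend POSTs.
process_post(ReqData, State) ->
    _Json = wrq:req_body(ReqData),   % decode and store the new user here
    {true, ReqData, State}.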

I recently began using Knockout.js and found it to be a great fit for this development paradigm. When Batman.js came out, I saw that it could provide me with what Knockout.js does, while also taking care of pieces I would otherwise develop myself on the frontend, like RESTful persistence.

First, we have the Knockout.js logic, written in CoffeeScript, for setting up a User class and an object that observes the input fields.

class @User
  constructor : ->
    @firstname = ko.observable ""
    @lastname  = ko.observable ""
    @email     = ko.observable ""
    @username  = ko.observable ""
    @password  = ko.observable ""

  save : ->
    if $("form").validate().form()
      $.post('/user', ko.toJSON(this), (data) ->
        window.location = "/login.html"
        return false
      ).error ->
        alert("error")
        return false

user = new User
ko.applyBindings user

Now for the HTML: displaying the fields, configuring the submit handler, binding the inputs to Knockout.js observables and setting validators.

<form data-bind="submit: save">
    <div class="new_user_box">
    <label>First Name: </label>
    <input type=text name=firstname data-bind="value: firstname" minlength=2 maxlength=25 class="required" />

    <label>Last Name: </label>
    <input type=text name=lastname data-bind="value: lastname"  minlength=2 maxlength=25 class="required" />

    <label>Username: </label>
    <input type=text name=username data-bind="value: username" remote="/user/check"  minlength=6 maxlength=25 class="required" />

    <label>Password: </label>
    <input type=password id="password1" data-bind="value: password, uniqueName: true" minlength=8 class="required password" />

    <label>Retype Password: </label>
    <input type=password data-bind="uniqueName: true" id="password2" equalto="#password1" class="required" />

    <label>Email Address: </label>
    <input type=email name=email data-bind="value: email"  remote="/user/email_check"  class="required email" />

    <input type=submit value="Save" id="saveSubmit" />
    </div>
</form>

With Batman.js, our validation is configured in the user model. Here we only need to set @persist to Batman.RestStorage, and when a model is saved with the save method it will POST the encoded fields as JSON to /users.

class CT extends Batman.App
    @global yes
    @root 'users#index'

class CT.User extends Batman.Model
    @global yes
    @persist Batman.RestStorage
    @encode 'firstname', 'lastname', 'username', 'email', 'password'
    @validate 'firstname', presence: yes, maxLength: 255
    @validate 'lastname', presence: yes, maxLength: 255
    @validate 'username', presence: yes, lengthWithin: [6,255]
    @validate 'email', presence: yes
    @validate 'password', 'passwordConfirmation', presence: yes, lengthWithin: [6,255]

class CT.UsersController extends Batman.Controller
    user: null

    index: ->
        @set 'user', new User
        return false

    create: =>
        @user.save()
        return false

CT.run()

We can see below that the Batman.js HTML is a bit cleaner than the Knockout.js example above.

        <form data-formfor-user="controllers.users.user" data-event-submit="controllers.users.create">
          <div class="new_user_box">
          <label>First Name: </label>
          <input type=text name=firstname data-bind="user.firstname" />

          <label>Last Name: </label>
          <input type=text name=lastname data-bind="user.lastname" />

          <label>Username: </label>
          <input type=text name=username data-bind="user.username" remote="/user/check" />

          <label>Password: </label>
          <input type=password data-bind="user.password" />

          <label>Retype Password: </label>
          <input type=password data-bind="user.passwordConfirmation" />

          <label>Email Address: </label>
          <input type=email data-bind="user.email" remote="/user/email_check" />

          <input type=submit value="Save" id="saveSubmit" />
          </div>

        </form>

Property based testing for unit testers with PropEr – Part 1

This tutorial is brought to you by ErlangCamp 2011 (click here) – Boston, August 12th and 13th – It’s gonna be totally sweet!

Main contributors: Torben Hoffmann, Raghav Karol, Eric Merritt

The purpose of this short document is to help people who are familiar
with unit testing understand how property based testing (PBT)
differs, but also where the thinking is the same.

This document focuses on the PBT tool
PropEr for Erlang since that is
what I am familiar with, but the general principles apply to all PBT
tools regardless of which language they are written in.

The approach taken here is that we hear from people who are used to
working with unit testing regarding how they think when designing
their tests and how a concrete test might look.

These descriptions are then “converted” into the way it works with
PBT, with a clear focus on what stays the same and what is different.

Testing philosophies

A quote from Martin Logan (@martinjlogan):

For me unit testing is about contracts. I think about the same things
I think about when I write statements like {ok, Resp} =
Mod:Func(Args). Unit testing and writing specs are very close for me.
Hypothetically speaking, let’s say a function should return {ok,
string()} | {error, term()} for all given input parameters; then my
unit tests should be able to show that for a representative set of
input parameters those contracts are honored. The art comes in
thinking about what that set is.

The trap in writing all your own tests can often be that we think
about the set in terms of what we coded for and not what may indeed
be asked of our function. As the code is exercised in further
exploratory testing and in production, new input parameter sets for
which the given function does not meet the stated contract are
discovered and added to the test cases once a fix has been put into
place.

This is a very good description of what the ground rules for unit
testing are:

  • Checking that contracts are obeyed.
  • Creating a representative set of input parameters.

The former is very much part of PBT – each property you write will
check a contract, so that thinking is the same.
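To make that concrete, consider the contract from the quote above: a
function specced to return {ok, string()} | {error, term()}. With
PropEr, a property checking that contract could look roughly like
this (mod:func/1 is a placeholder, not a real module):

-include_lib("proper/include/proper.hrl").

prop_func_honors_contract() ->
    ?FORALL(Arg, any(),
            case mod:func(Arg) of
                {ok, Result}  -> io_lib:printable_list(Result); % must be a string
                {error, _Rsn} -> true                           % any term is allowed
            end).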

xUnit vs PBT

Unit testing has become popular for software testing with the advent
of xUnit tools like JUnit for Java. xUnit-like tools typically
provide a testing framework with the following functionality:

  • test fixture setup
  • test case execution
  • test fixture teardown
  • test suite management
  • test status reporting and management

While xUnit tools provide a lot of functionality to execute and
manage test cases and suites and to report results, they offer no
real support for the test input generation step, which is the main
focus area of property-based testing (PBT).

Consider the following function specification

sort(list(integer())) -> list(integer()) | error

A verbal specification of this function is:

For all input lists of integers, the sort function returns a sorted
list of integers.

For any other kind of argument the function returns the atom error.

The specification above may be a requirement of how the function
should behave or even how the function does behave. This distinction
is important; the former is the requirement for the function, the
latter is the actual API. Both should be the same and that is what our
testing should confirm. Test cases for this function might look like

assertEqual(sort([5,4,3,2,1]), [1,2,3,4,5])
assertEqual(sort([1,2,3,4,5]), [1,2,3,4,5])
assertEqual(sort([]         ), []         )
assertEqual(sort([-1,0, 1]  ), [-1, 0, 1] )

How many test cases should we write to be convinced that the actual
behaviour of the function is the same as its specification? Clearly,
it is impossible to write test cases for all possible input values,
here all lists of integers; the art of testing is finding individual
input values that are representative of a large part of the input
space. We hope that the test cases cover enough of the
specification. xUnit tools offer no support for this, and this is
where PBT and PBT tools like PropEr and QuickCheck come in.

PBT introduces testing with a large set of random input values and
verifying that the specification holds for each input value
selected. The functions used to generate input values, called
generators, are specified using rules and can be composed together
to construct complicated values. So, a property-based test for the
function above may look like:

prop_sort_ordered() ->
    ?FORALL({I, J, InputList},
            ?SUCHTHAT({I0, J0, L},
                      {pos_integer(), pos_integer(), list(integer())},
                      I0 < J0 andalso J0 < length(L)),
            begin
                SortedList = sort(InputList),
                length(SortedList) == length(InputList)
                    andalso lists:nth(I, SortedList) =< lists:nth(J, SortedList)
            end).

The property above works as follows

  • Generate a random list of integers InputList and two natural numbers
    I, J, such that I < J < size of InputList
  • Check that size of sorted and input lists is the same.
  • Check that the element with the smaller index I is less than or
    equal to the element with the larger index J in SortedList.

Notice that in the property above, we only specify the property
itself. Verification of the property against random input values is
done by the property-based tool, so we can generate a large number
of test cases with random input values and have a higher level of
confidence in the function than when using unit tests alone.

But it does not stop at generation of input parameters. If you have
more complex tests where you have to generate a series of events and
keep track of some state, then your PBT tool will generate random
sequences of events which correspond to legal sequences of events
and test that your system behaves correctly for all such sequences.

So when you have written a property with associated generators you
have in fact created something that can create numerous test cases -
you just have to tell your PBT tool how many test cases you want to
check the property on.
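With PropEr, for instance, the number of cases is just an argument
to the runner; both calls below ask for 1000 cases instead of the
default 100 (prop_sort_ordered/0 is the property sketched earlier):

proper:quickcheck(prop_sort_ordered(), 1000).
proper:quickcheck(prop_sort_ordered(), [{numtests, 1000}]).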

Shrinking the bar

At this point you might still have the feeling that introducing the
notion of some sort of generators to your unit testing tool of
choice would bring you on par with PBT tools, but wait, there is
more to come.

When a PBT tool creates a test case that fails, there is a real
chance that it has created a long test case or some big input
parameters - trying to debug that is very much like receiving a
humongous log from a system in the field and trying to figure out
what caused the system to fail.

Enter shrinking…

When a test case fails, the PBT tool will try to shrink the failing
test case down to the essentials by stripping out input elements or
events that do not cause the failure. In most cases this results in
a very short counterexample that clearly states which events and
inputs are required to break a property.

As we go through some concrete examples later the effects of shrinking
will be shown.

Shrinking makes it a lot easier to debug problems and is as key to the
strength of PBT as the generators.
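As a tiny preview, a deliberately false property makes the mechanics
visible. The property below claims that reversing a list leaves it
unchanged; it fails for any list with two distinct elements, and a
shrinking tool like PropEr will typically cut whatever long random
list it first failed on down to a minimal counterexample such as
[0,1]:

prop_reverse_is_identity() ->
    %% False on purpose: lists:reverse([0,1]) is [1,0].
    ?FORALL(L, list(integer()), lists:reverse(L) =:= L).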

Converting a unit test

We will now take a look at one possible way of translating a unit
test into a PBT setting.

The example comes from Eric Merritt and is about the add/3 function
in the ec_dictionary instance ec_gb_trees.

The add function has the following spec:

-spec add(ec_dictionary:key(), ec_dictionary:value(), Object::dictionary()) ->
          dictionary().

and it is supposed to do the obvious: add the key and value pair to
the dictionary and return a new dictionary.

Eric states his basic expectations as follows:

  1. I can put arbitrary terms into the dictionary as keys
  2. I can put arbitrary terms into the dictionary as values
  3. When I put a value in the dictionary by a key, I can retrieve that same value
  4. When I put a different value in the dictionary by a key, it does not change other key/value pairs
  5. When I update a value, the new value is available by that key
  6. When a value does not exist for a key, a not found exception is thrown

The first two expectations, regarding being able to use arbitrary
terms as keys and values, are a job for generators.

The latter four are prime candidates for properties and we will create
one for each of them.

Generators

key() -> any().

value() -> any().

For PropEr this approach has the drawback that creation and
shrinking become rather time consuming, so it might be better to
narrow the generators to something like this:

key() -> union([integer(),atom()]).

value() -> union([integer(),atom(),binary(),boolean(),string()]).

What is best depends on the situation and intended usage.

Now, being able to generate keys and values is not enough. You also
have to tell PropEr how to create a dictionary, and in this case we
will use a symbolic generator (details to be explained later).

sym_dict() ->
    ?SIZED(N,sym_dict(N)).

sym_dict(0) ->
    {'$call',ec_dictionary,new,[ec_gb_trees]};
sym_dict(N) ->
    ?LAZY(
       frequency([
                  {1, {'$call',ec_dictionary,remove,[key(),sym_dict(N-1)]}},
                  {2, {'$call',ec_dictionary,add,[value(),value(),sym_dict(N-1)]}}
                 ])).

sym_dict/0 uses the ?SIZED macro to control the size of the
generated dictionary. PropEr will start out with small numbers and
gradually raise it.

sym_dict/1 is building a dictionary by randomly adding key/value
pairs and removing keys. Eventually the base case is reached which
will create an empty dictionary.

The ?LAZY macro is used to defer the calculation of sym_dict(N-1)
until it is needed, and frequency/1 is used to ensure that twice as
many adds as removes are done. This should give rather more
interesting dictionaries in the long run; if not, one can alter the
frequencies accordingly.

But does it really work?

That is a good question and one that should always be asked when
looking at generators. Fortunately there is a way to see what a
generator produces, provided that the generator functions are
exported.

Hint: in most cases it will not hurt to throw in a
-compile(export_all). in the module used to specify the
properties. And here we actually have a sub-hint: specify the
properties in a separate file to avoid peeking inside the
implementation! Base the tests on the published API, as this is what
the users of the code will be restricted to.

When the test module has been loaded you can test the generators by
starting up an Erlang shell (this example uses the erlware_commons
code so get yourself a clone to play with):

$ erl -pz ebin -pz test
1> proper_gen:pick(ec_dictionary_proper:key()).
{ok,4}
2> proper_gen:pick(ec_dictionary_proper:key()).
{ok,35}
3> proper_gen:pick(ec_dictionary_proper:key()).
{ok,-5}
4> proper_gen:pick(ec_dictionary_proper:key()).
{ok,48}
5> proper_gen:pick(ec_dictionary_proper:key()).
{ok,'36\207_là ´?\nc'}
6> proper_gen:pick(ec_dictionary_proper:value()).
{ok,2}
7> proper_gen:pick(ec_dictionary_proper:value()).
{ok,-14}
8> proper_gen:pick(ec_dictionary_proper:value()).
{ok,-3}
9> proper_gen:pick(ec_dictionary_proper:value()).
{ok,27}
10> proper_gen:pick(ec_dictionary_proper:value()).
{ok,-8}
11> proper_gen:pick(ec_dictionary_proper:value()).
{ok,[472765,17121]}
12> proper_gen:pick(ec_dictionary_proper:value()).
{ok,true}
13> proper_gen:pick(ec_dictionary_proper:value()).
{ok,<<>>}
14> proper_gen:pick(ec_dictionary_proper:value()).
{ok,<<89,69,18,148,32,42,238,101>>}
15> proper_gen:pick(ec_dictionary_proper:sym_dict()).
{ok,{'$call',ec_dictionary,add,
        [[114776,1053475],
         'fª20\227\215',
         {'$call',ec_dictionary,add,
             ['',true,
              {'$call',ec_dictionary,add,
                  ['2^Ø¡',
                   [900408,886056],
                   {'$call',ec_dictionary,add,[[48618|...],<<...>>|...]}]}]}]}}
16> proper_gen:pick(ec_dictionary_proper:sym_dict()).
{ok,{'$call',ec_dictionary,add,
        [10,'a¯\21431fõC',
         {'$call',ec_dictionary,add,
             [false,-1,
              {'$call',ec_dictionary,remove,
                  ['d·ÉV÷[',
                   {'$call',ec_dictionary,remove,[12,{'$call',...}]}]}]}]}}

That does not look too bad, so we will continue with that for now.

Properties of add/3

The first expectation Eric had about how the dictionary works was that
if a key had been stored it could be retrieved.

One way of expressing this could be with this property:

prop_get_after_add_returns_correct_value() ->
    ?FORALL({Dict,K,V}, {sym_dict(),key(),value()},
         begin
             try ec_dictionary:get(K,ec_dictionary:add(K,V,Dict)) of
                    V ->
                        true;
                    _ ->
                        false
             catch
                   _:_ ->
                       false
             end
          end).

This property reads: for all dictionaries, get/2 using a key from a
key/value pair just inserted with the add/3 function will return
that value. If that is not the case, the property will evaluate to
false.

Running the property is done using proper:quickcheck/1:

proper:quickcheck(ec_dictionary_proper:prop_get_after_add_returns_correct_value()).
....................................................................................................
OK: Passed 100 test(s).
true

This was as expected, but at this point we will take a little detour
and introduce a mistake in the ec_gb_trees implementation and see
how that works.