Invoking Builds: Deployment and Project Task Patterns

Using Fabric and the new Invoke to simplify and codify both development and deployment patterns.

We’ve been using Fabric to manage deployments for a while, and lately Invoke to add similar functionality where SSH-based deployment isn’t required. It’s been fun to compare notes about workflow with developers in other dev shops, especially around deployment, so we thought we’d share some of what we do.

Anywhere an Invoke task is provided as an example, a Fabric task could be used in the same way.

Running tests

By wrapping your test commands in a task you gain a modicum of simplicity, but you can also fold other options into that command.

invoke test

from invoke import task, run


@task
def test():
    """Run Django tests and linters."""
    run("python manage.py test --with-coverage "
        "--cover-package=app1,app2,app3")
    print("Running static analysis...")
    # warn=True keeps Invoke from aborting when flake8 exits non-zero
    r = run("flake8", warn=True)
    if not r.return_code:
        print("Code checks out!")

Updating the local environment

You’ve just pulled down fresh changes from the remote repository, including changes across several branches that involve new dependencies, static files, and database migrations. Just run one command to get up to speed.

In many of our projects we use Vagrant to manage virtual machines for development, getting closer to production parity and simplifying configuration across developer workstations. But for some projects it’s just as simple to make sure Postgres.app is running and use a virtualenv on your laptop.

invoke update

from invoke import task, run


@task
def update():
    """Update local project based on upstream changes."""
    print("Updating requirements...")
    with open("brews.txt", "r") as brews:
        for brew in brews:
            # strip() drops the trailing newline from each formula name
            run("brew install {0}".format(brew.strip()))
    run("pip install -r dev-requirements.txt")
    run("python manage.py syncdb")
    run("python manage.py migrate")
    run("python manage.py collectstatic --noinput")
    run("python manage.py compress")

This task installs from a development mode pip requirements file which includes the primary requirements file but adds in development-only dependencies (like Sphinx).
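Such a file just chains the primary requirements file and adds the development-only extras; the contents here are illustrative:

```
# dev-requirements.txt -- development-only dependencies
-r requirements.txt
Sphinx
coverage
flake8
```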

One last note: this task takes an extra step and assumes your team is using Homebrew on the Mac. That particular step could be removed or replaced with something else, and it’s rarely necessary unless you have C dependencies like libmemcached. Of course, if these start piling up it probably makes more sense to use a virtual machine.
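A brews.txt is simply one Homebrew formula name per line (these formulae are examples):

```
libmemcached
libevent
```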

Building documentation

Presuming your project documentation is built with Sphinx, this is a pretty simple task: just “make html”. With an Invoke or Fabric task there’s no need to specify or change directories, and we’ll make it easier to access the results.

invoke docs

This default task builds the docs and then opens the documentation index in your default browser. If you just want to build them you can do that of course.

invoke docs.build

And if you want to start perusing the documentation without building, you can skip that step.

invoke docs.browse

Here’s the simple code.

from invoke import task, run


@task
def build(clean=False):
    """Build the HTML docs with Sphinx."""
    if clean:
        run("cd docs && make clean")
    run("cd docs && make html")


@task
def browse():
    """Open the current dev docs in the default browser."""
    run("open docs/_build/html/index.html")


@task(default=True)
def build_browse():
    """Build the docs and open them in the browser."""
    build()
    browse()

Deploying (Capistrano style)

What we’ll call the Capistrano-style deploy works like so: update a cached copy of the repository (Git or Hg) on the remote server, then copy the app files into a new release directory. Run the necessary deployment commands against this release location and, upon completion, symlink the active (or “current”) app directory to the latest release.
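On disk the result looks something like this (paths illustrative):

```
/srv/myapp/
    releases/
        20140101120000/
        20140102093000/    <- newest release
    current -> releases/20140102093000/
```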

fab production deploy

This calls several tasks in order, so as to update the remote cached copy of the repository, create a new release directory, run remote tasks required for the release, and then restart the application server using the latest release.

import getpass
from datetime import datetime

import cuisine
import requests
from fabric.api import abort, cd, env, sudo, task
from fabric.contrib.console import confirm

import dj  # project-local helpers wrapping manage.py commands


def notify_hipchat(message="(present) {user} deployed {sha} to {env}", **kwargs):
    sha = kwargs.get('sha', 'N/A')
    message = message.format(user=getpass.getuser(), env=env.environment[0],
            sha=sha)
    data = {"from": "myapp", "auth_token": env.hipchat_auth_token,
            "message_format": "text", "color": "green",
            "room_id": env.hipchat_room_id, "message": message}
    r = requests.post("http://api.hipchat.com/v1/rooms/message", data=data)
    if r.status_code != 200:
        print("There was a problem sending your message:\n\n{0}\n\n".format(
                r.text))


@task
def refresh():
    """
    Updates the source cache by pulling from the remote repository

    Returns the SHA of the current commit
    """
    with cd(env.code_dir):
        sudo("git checkout {0}".format(env.branch))
        sudo("git pull origin {0}".format(env.branch))
        sudo("git submodule init")
        sudo("git submodule update")
        return sudo("git rev-parse HEAD")


@task
def restart():
    """Restarts the application"""
    sudo("service myapp-gunicorn restart")
    sudo("service myapp-celery restart")


def release(provision=False, db=True, pip=True, notify=True):
    """Creates a new release on the remote server and restarts the server"""

    # Force a repository update and get the SHA value of the repo HEAD
    release_log = {'sha': refresh()}

    # Create the release directory
    with cd(env.code_dir):
        timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
        # ensure dir is there
        cuisine.dir_ensure("{0}/releases".format(env.app_path))
        release = "{path}/releases/{ts}".format(path=env.app_path, ts=timestamp)
        sudo("cp -R . {release}".format(release=release))

    # Run release tasks
    with cd(release):
        print "Creating release"
        sudo("chown -R myapp:www-data {release}".format(release=release))
        if pip:
            dj.pip(path=release)
        dj.static(path=release)
        if db:
            dj.update(path=release)

    # Update current app directory
    cuisine.file_link(release, env.latest, symbolic=True, owner=env.app_user,
            group="www-data")

    restart()
    return release_log


@task(default=True)
def full(provision=False, notify=True):
    """Creates a new release on the server, checks requirements, udpates DB"""

    if 'production' in env.environment:
        if not confirm('Deploy to production?', default=False):
            abort('Production deployment aborted.')

    summary = release(provision=provision, notify=notify)

    if notify:
        notify_hipchat(**summary)

The release task uses Cuisine, a “Chef-like” library for Fabric, to simplify the directory updates.

Deploying (to Heroku)

For when deploying to Heroku consists of more than just pushing a commit.

git push heroku master

Works until you need to run additional tasks, like migrations or static asset generation. We’ll replace the Git push command with this Invoke task.

invoke deploy

This task will push to our Heroku remote and then run the additional tasks like migrating database schema changes.

from invoke import task, run


@task
def deploy():
    """Push to Heroku and run any necessary ancillary tasks."""
    print("Pushing latest changes to Heroku...")
    run("git push heroku master")
    run("heroku run python manage.py syncdb")
    run("heroku run python manage.py migrate")
    run("heroku run python manage.py compress")

If you deploy frequently to Heroku then a custom buildpack integrating these steps might prove superior.

Viewing logs

Sometimes you want to be able to watch the logs on a server.

fab logs

This is just a simple task for tailing a default or specified log file.

from fabric.api import task, run, sudo


@task
def tail(filename, watch=True, sudoer=False):
    """Tail the specified file"""
    executor = sudo if sudoer else run
    flags = '-f' if watch else ''
    executor("tail {flags} {filename}".format(flags=flags,
            filename=filename))


@task(default=True)
def gunicorn(watch=True):
    """Tail the application log file"""
    tail('/var/log/myapp/myapp.log', watch=watch)
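Assuming these tasks live in a logs module/namespace (as the default fab logs suggests), hypothetical invocations look like:

```
fab logs                                   # default: tail the app log
fab logs.tail:/var/log/myapp/myapp.log
fab logs.tail:/var/log/syslog,sudoer=True
```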

Executing remote commands

A great example of this is accessing the remote shell when you need it. Much as the Heroku CLI lets you run a one-off command remotely (e.g. a Django management command), this task wraps remote management commands on your own servers.

fab staging dj:shell_plus
fab staging dj:update_index,--remove
fab staging dj:import_locations,http://dataurl.com/locations.csv

from fabric.api import cd, env, sudo, task


@task(default=True)
def manage(command, *args, **kwargs):
    """Run a Django management command on the remote server."""
    user = kwargs.pop('user', None) or env.app_user
    path = kwargs.pop('path', None) or u"{0}/bdmbooks".format(env.code_dir)
    cmd_args = u" ".join(args)
    cmd_kwargs = u" ".join([u"{k}={v}".format(k=k, v=v)
                            for k, v in kwargs.items()])
    with cd(path):
        # sudo (not run) so the command can execute as the app user
        sudo(u"{python} manage.py {command} {args} {kwargs}".format(
            python=env.python, command=command, args=cmd_args,
            kwargs=cmd_kwargs), user=user)

Applying Puppet manifests

For systems with only a few servers it’s really simple to just apply Puppet manifests locally rather than use a master-agent setup. One thing this task does depend on: the application repo has already been set up on the server and Puppet is already installed.

This can be run upon making system changes by updating the repo cache and reapplying the Puppet configuration.

fab production db deploy.refresh puppet

@task(default=True)
def apply(system=None):
    """Applies the Puppet configuration currently in the source cache"""
    if not system:
        system = env.system
    sudo("puppet apply --modulepath={0}/puppet/modules"
        " {0}/puppet/manifests/{1}.pp".format(env.code_dir, system))

system would specify the manifest for the node type, e.g. “web”, “db”, “search”, “dev” (a box running all services).

For this last use case, at least, we’re looking to replace Puppet entirely. Our exploration of Ansible has been pretty basic so far, but the feedback from very different corners has been so enthusiastically positive that we expect this to be the direction we take.