Puppet profile with acceptance test

At work, my team agreed to improve our Puppet modules. Ideally this means providing both unit and acceptance tests, especially when updating a module. The process is tedious for existing code, but it gets a bit easier when working on new modules.

Recently we started rolling out our new monitoring solution, based on SignalFX. This requires us to install the SignalFX agent on each node.

Puppet profiles

The idea behind a profile is to gather all the pieces that define a stack. In this case I built profile_signalfx, which in turn relies on the signalfx_agent Puppet module provided by SignalFX.
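Following the roles and profiles pattern, a role class would then pull this profile in. A minimal sketch (the role name and path here are illustrative, not from my actual codebase):

```puppet
# site/role/manifests/monitored_node.pp (illustrative name)
class role::monitored_node {
  include profile_signalfx
}
```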

PDK

Nowadays the best way to build a module from scratch is with the PDK (Puppet Development Kit). It provides a basic scaffold, although not without some nuances; more on this later.

$ pdk new module profile_signalfx

I already know some of my dependencies, so the first thing to do is lock them down in the metadata file.

  • metadata.json: be careful with the dependencies you define here. My first mistake was pinning a wrong version of a module.
"dependencies": [
      {"name":"apt","version_requirement":">= 4.20.0 < 5.0.0"},
      {"name": "signalfx/signalfx_agent"}
  ],
  "operatingsystem_support": [
    {
      "operatingsystem": "Ubuntu",
      "operatingsystemrelease": [
        "18.04"
      ]
    }
  ],
...

In this case, I pinned apt to an older version without knowing that signalfx_agent depends on a newer version. This led to several failures while trying to install the signalfx_agent.
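After hitting that, the fix was to widen the apt requirement so it satisfies both modules. Something along these lines (the exact upper bound is an assumption; check signalfx_agent's own metadata.json for the real constraint):

```json
"dependencies": [
    {"name": "apt", "version_requirement": ">= 4.20.0 < 8.0.0"},
    {"name": "signalfx/signalfx_agent"}
],
```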

Profile

This profile is pretty straightforward: it installs the signalfx_agent and sets some basic parameters.

# @summary A profile to configure the SignalFX agent.
#
# Installs the signalfx_agent module and sets some basic parameters.
#
# @param enable_filtering
#   Enables or disables built-in filtering.
#
# @param version
#   SignalFX agent version.
#
# @param token
#   SignalFX access token.
#
# @example
#   include profile_signalfx
#
class profile_signalfx (
  Boolean $enable_filtering,
  String $version = '4.20.2',
  Optional[String] $token = undef,
) {
  $config = {
    'signalFxAccessToken'    => $token,
    'enableBuiltInFiltering' => $enable_filtering,
    'signalFxRealm'          => 'us1',
    'observers'              => [{ 'type' => 'host' }],
    'monitors'               => [
      { 'type' => 'collectd/cpu' },
      { 'type' => 'collectd/cpufreq' },
      { 'type' => 'collectd/df' },
      { 'type' => 'collectd/disk' },
      { 'type' => 'collectd/interface' },
      { 'type' => 'collectd/load' },
      { 'type' => 'collectd/memory' },
      { 'type' => 'collectd/protocols' },
      { 'type' => 'collectd/signalfx-metadata' },
      { 'type' => 'host-metadata' },
      { 'type' => 'collectd/uptime' },
      { 'type' => 'collectd/vmem' },
    ],
  }

  class { 'signalfx_agent':
    config        => $config,
    agent_version => $version,
  }
}

In my opinion, the key points around building modules are:

  • provide a summary doc
  • define default values
  • encrypt sensitive data (hiera-eyaml)
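For the last point, an encrypted token in Hiera data looks roughly like this (the key name matches the class above; the ciphertext is a truncated placeholder you would produce with eyaml encrypt):

```yaml
# data/common.eyaml (illustrative)
profile_signalfx::token: >
  ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEh...]
```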

Acceptance test

This is one of the things I did not enjoy. My team decided to use Beaker for acceptance testing. Unfortunately, there is a lot of boilerplate that is not generated by the PDK; you need to create the structure by hand.

profile_signalfx
├── spec
│   ├── acceptance
│   │   ├── nodesets
│   │   │   └── default.yml
│   │   └── profile_signalfx_spec.rb
│   ├── default_facts.yml
│   ├── spec_helper.rb
│   └── spec_helper_acceptance.rb

Except for spec_helper.rb, which is created by the PDK, all of the files above need to be created by hand. Once you have a few modules with tests, you can replicate them easily.
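Creating that skeleton is a quick one-off. A sketch, assuming you run it from the module root:

```shell
mkdir -p spec/acceptance/nodesets
touch spec/acceptance/nodesets/default.yml \
      spec/acceptance/profile_signalfx_spec.rb \
      spec/spec_helper_acceptance.rb \
      spec/default_facts.yml
```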

It’s worth mentioning that in spec_helper_acceptance you can define any of the special variables Beaker provides, such as BEAKER_PUPPET_COLLECTION. If you want to pin a specific agent version, use BEAKER_PUPPET_AGENT_VERSION.

# frozen_string_literal: true

require 'beaker-puppet'
require 'puppet'
require 'beaker-rspec/spec_helper'
require 'beaker-rspec/helpers/serverspec'
require 'beaker/puppet_install_helper'
require 'beaker/module_install_helper'
require 'beaker-task_helper'
require 'pathname'

def deploy_hiera_fixture(host, from, dest = '/etc/puppetlabs/code/environments/production/data')
  raise(ArgumentError, 'host must be a Beaker::Host') unless host.is_a? Beaker::Host
  raise(ArgumentError, 'from must be a path') unless from.is_a? String
  raise(ArgumentError, 'dest must be a path') unless dest.is_a? String
  raise(IOError, "file #{from} does not exist") unless Pathname(from).exist?

  scp_to(host, from, dest)
end

ENV['BEAKER_PUPPET_COLLECTION'] ||= 'puppet5'

run_puppet_install_helper
configure_type_defaults_on(hosts)

install_module_on(hosts)
install_module_dependencies_on(hosts)

RSpec.configure do |c|
  # c.filter_run focus: true

  # Readable test descriptions
  c.formatter = :documentation

  c.before :suite do
    hosts.each do |host|
      hiera_eyaml_keys = "#{__dir__}/../keys"
      hiera_eyaml_dest = '/etc/puppetlabs/puppet/eyaml'
      scp_to(host, hiera_eyaml_keys, hiera_eyaml_dest)
    end
  end
end

shared_examples 'an idempotent resource' do |debug|
  it 'applies with no errors' do
    apply_manifest(pp, debug: debug, catch_failures: true)
  end

  it 'applies a second time without changes', :skip_pup_5016 do
    apply_manifest(pp, debug: debug, catch_changes: true)
  end
end

For this profile, I added encryption with the hiera-eyaml backend. I created a symlink to the keys directory, so the keys are injected into Vagrant while the test is running. Ideally, these keys should be stored somewhere safe such as Vault, but that requires Puppet 6.x and unfortunately we are still on 5.5.

hiera_eyaml_keys = "#{__dir__}/../keys"
hiera_eyaml_dest = '/etc/puppetlabs/puppet/eyaml'
scp_to(host, hiera_eyaml_keys, hiera_eyaml_dest)
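For reference, the hiera.yaml on the test node then points the eyaml backend at those keys. A sketch — the hierarchy below is an assumption, only the key paths match the scp destination above:

```yaml
---
version: 5
hierarchy:
  - name: 'Encrypted data'
    lookup_key: eyaml_lookup_key
    paths:
      - 'common.eyaml'
    options:
      pkcs7_private_key: /etc/puppetlabs/puppet/eyaml/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppetlabs/puppet/eyaml/public_key.pkcs7.pem
```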

Next, you need to create a spec/acceptance/nodesets directory and add a default.yml configuration with the operating systems to run the acceptance test against.

---
HOSTS:
  ubuntu-1804-x64.vagrant.local:
    platform: ubuntu-18.04-amd64
    box: ubuntu/bionic64
    hypervisor: vagrant
    roles:
      - master
CONFIG:
  type: aio
  log_level: verbose
  trace_limit: 200

And last but not least a profile_signalfx_spec.rb test:

# frozen_string_literal: true

require 'spec_helper_acceptance'

describe 'profile_signalfx' do
  context 'with filtering enabled' do
    let(:pp) do
      <<~PP
        class { 'profile_signalfx':
          enable_filtering => true,
        }
      PP
    end

    it_behaves_like 'an idempotent resource'
  end
end

Running tests

Obviously, the first time I ran the tests everything failed, and the machine was destroyed. After delving into it, I discovered that you need to set an environment variable to stop Beaker from destroying the test environment.

BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_destroy=no BEAKER_provision=yes bundle exec rspec spec/acceptance

This is my most useful piece of advice: if your test fails, set BEAKER_destroy=no, cd into the .vagrant/beaker_vagrant_files/default.yml directory at the top level of your module, and run vagrant ssh.

└── beaker_vagrant_files
    └── default.yml
        ├── Vagrantfile
        └── ubuntu-bionic-18.04-cloudimg-console.log

Although I defined BEAKER_provision=yes after a test failed, I didn’t manage to avoid getting my current environment destroyed. This is especially annoying given the time it takes to run a single test.

Rake tasks

A colleague suggested that it would be nice to have a Rake task that runs everything involved in testing. I came up with this Rake task:

require 'puppet-lint/tasks/puppet-lint'

...

PuppetLint::RakeTask.new :lint do |config|
  config.disable_checks = ['arrow_alignment']
end

desc 'Run Full CI test'
task ci: [:validate, :lint, :metadata_lint, :rubocop, 'strings:generate', :beaker]
...

Now I just have to run $ bundle exec rake ci and wait until it fails or succeeds.