Managing vCenter and vShield with Puppet

Today, we released a set of open source Puppet modules for managing vCenter Server Appliance 5.1 and vCloud Networking and Security 5.1 (vCNS, previously known as vShield). They provide a framework for managing resources within vCenter and vCNS via Puppet [1].

The modules can be obtained from forge.puppetlabs.com:

$ puppet module install vmware/vcsa
$ puppet module install vmware/vcenter
$ puppet module install vmware/vshield

For development, use the GitHub repos, which can be installed via the following librarian-puppet Puppetfile:

mod "puppetlabs/stdlib"
mod "nanliu/staging"
mod "vmware_lib", :git => "git://github.com/vmware/vmware-vmware_lib.git"
mod "vcsa",       :git => "git://github.com/vmware/vmware-vcsa.git"
mod "vcenter",    :git => "git://github.com/vmware/vmware-vcenter.git"
mod "vshield",    :git => "git://github.com/vmware/vmware-vshield.git"

The Puppet management host needs network connectivity to the vCenter and vCNS appliances. We are currently using a custom version of RbVmomi, which is included in the module. The management host should deploy all dependent software packages before managing any vCenter/vCNS resources:

node 'management_server' {
  include 'vcenter::package'
}

One of the gems in the package requires Nokogiri. If you use Puppet Enterprise, install the pe-rubygem-nokogiri package on the management host (it's not typically installed for agents). See the Nokogiri documentation for additional information on open source Puppet agents.
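On a Puppet Enterprise management host, that dependency can be declared as an ordinary package resource alongside the class above. A minimal sketch (the node name is illustrative, and the `before` ordering is one reasonable way to ensure the gem is present first):

```puppet
node 'management_server' {
  # Install PE's Nokogiri packaging so the bundled RbVmomi gem
  # can parse vSphere API responses on this host.
  package { 'pe-rubygem-nokogiri':
    ensure => installed,
    before => Class['vcenter::package'],
  }

  include 'vcenter::package'
}
```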

In last week's sneak preview, I showed the debugging output for the ssh transport. For the observant readers, those commands were the steps to initialize the vCenter Server Appliance [2].

These are the corresponding Puppet manifests [3]:

vcsa { 'demo':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  db_type  => 'embedded',
  capacity => 'm',
}

If we dig into the defined resource type, it simply passes the user account to the ssh transport and initializes the device in the appropriate sequence:

define vcsa (
...
) {
  transport { $name:
    username => $username,
    password => $password,
    server   => $server,
  }

  vcsa_eula { $name:
    ensure    => accept,
    transport => Transport[$name],
  } ->

  vcsa_db { $name:
    ensure    => present,
    type      => $db_type
  ...
}

Once the vCenter Server Appliance is initialized, we can manage vCenter resources using the vSphere API. The example below specifies a vSphere API transport, along with a datacenter, a cluster, and an ESX host [4]:

transport { 'vcenter':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  # see rbvmomi documentation for available options:
  options  => { 'insecure' => true },
}

vc_datacenter { 'dc1':
  ensure    => present,
  path      => '/dc1',
  transport => Transport['vcenter'],
}

vc_cluster { '/dc1/clu1':
  ensure    => present,
  transport => Transport['vcenter'],
}

vc_cluster_drs { '/dc1/clu1':
  require   => Vc_cluster['/dc1/clu1'],
  before    => Anchor['/dc1/clu1'],
  transport => Transport['vcenter'],
}

vc_cluster_evc { '/dc1/clu1':
  require   => [
    Vc_cluster['/dc1/clu1'],
    Vc_cluster_drs['/dc1/clu1'],
  ],
  before    => Anchor['/dc1/clu1'],
  transport => Transport['vcenter'],
}

anchor { '/dc1/clu1': }

vcenter::host { 'esx1':
  path      => '/dc1/clu1',
  username  => 'root',
  password  => 'esx_password',
  dateTimeConfig => {
    'ntpConfig' => {
      'server' => 'us.pool.ntp.org',
    },
    'timeZone' => {
      'key' => 'UTC',
    },
  },
  transport => Transport['vcenter'],
}

The next task is to connect the vCloud Networking and Security appliance to the vCenter appliance to form a cell:

transport { 'vshield':
  username => 'admin',
  password => 'default',
  server   => '192.168.1.11',
}

vshield_global_config { '192.168.1.11':
  # This is the vcenter connectivity info. See vShield API doc:
  vc_info   => {
    ip_address => '192.168.1.10',
    user_name  => 'root',
    password   => 'vmware',
  },
  time_info => { 'ntp_server' => 'us.pool.ntp.org' },
  dns_info  => { 'primary_dns' => '8.8.8.8' },
  transport => Transport['vshield'],
}

In the vShield API, all vCenter resources are referred to by their vSphere Managed Object Reference (MoRef). 'esx-13' might be understandable to a computer, but for configuration purposes the name of the ESX host makes much more sense to an admin. For this reason, we developed the transport resource to support multiple connections during a single Puppet run:

transport { 'vcenter':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  options  => { 'insecure' => true },
}

transport { 'vshield':
  username => 'admin',
  password => 'default',
  server   => '192.168.1.11',
}

vshield_edge { '192.168.1.11:dmz':
  ensure             => present,
  datacenter_name    => 'dc1',
  resource_pool_name => 'clu1',
  enable_aesni       => false,
  enable_fips        => false,
  enable_tcp_loose   => false,
  vse_log_level      => 'info',
  fqdn               => 'dmz.vm',
  vnics              => [
    { name          => 'uplink-test',
      portgroupName => 'VM Network',
      type          => 'Uplink',
      isConnected   => 'true',
      addressGroups => {
        'addressGroup' => {
          'primaryAddress' => '192.168.2.1',
          'subnetMask'     => '255.255.255.128',
        },
      },
    },
  ],
  transport          => Transport['vshield'],
}

This should provide a general overview of the modules' capabilities. Additional resources are available beyond what's covered in this post; however, some of them, such as vc_vm, are not yet operational, and the modules do not currently offer comprehensive coverage of the vSphere and vShield APIs. I hope other users will find these modules useful in their environments.

Thanks again for the support from the R&D team at VMware, especially Randy Brown and Shawn Holland for contributing the vCenter and vShield modules. Also thanks to Rich Lane for releasing RbVmomi, and to Christian Dickmann for resolving an issue in that library.

Reference:

VMware github repository:

  - github.com/vmware/vmware-vmware_lib
  - github.com/vmware/vmware-vcsa
  - github.com/vmware/vmware-vcenter
  - github.com/vmware/vmware-vshield

Video walkthrough:

  1. See Nick's blog post for more info.

  2. Thanks to Will Lam's post on the vCenter appliance.

  3. In the module test manifests, import 'data.pp' is a pattern that simplifies testing for developers in different environments; please do not use the import function in your production Puppet manifests.

  4. The resources should also work against a vCenter installation on Windows; however, this hasn't been tested.