Wednesday, March 20, 2013

CloudStack 4.1 - Regions are coming to town!

This post is based on a presentation that Chip Childers gave on the new features in CloudStack 4.1.

There are a few new features, but I thought I would talk about AWS-style regions:

Ahem.. AWS-style regions you say? This is a BIG move. It takes CloudStack zones and adds a new layer on top. Why would you want to do that? CloudStack zones do not spread well geographically. In fact, multiple CloudStack zones are often deployed within a single data center to work around the 4095-VLAN limit, keeping the VLANs within a rack and using L3 routing to move to another rack. That is OK, but customers cannot easily route between zones (yet). Regions allow additional segregation, with one important new feature: the CloudStack management server is split up, so each management server (CSM) manages one region. This is perfect if your customers are spread geographically.

Here are the requirements around regions in CloudStack:

• Admin should be able to install a management server for a Region and then connect to management servers in other Regions. Each Management Server in each region should know about all other regions and their management servers. The following operation lets each CSM learn about the CSMs in the other regions:
 

Connect To Region
 

• The CS Admin should be able to create multiple Regions within a CloudStack cloud. I.e., CloudStack should manage multiple regions within a cloud. The following operations should be supported:
 

Create Region
Delete Region
List Regions
 

• Admin should be able to create multiple zones within a Region. The management server cluster manages all the zones within the region. The following zone operations should be supported on a per-region basis (a sketch of both the region and zone calls follows the list):

Create Zone
Delete Zone
List Zones
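
To make this concrete, here is a rough sketch of what these region and zone calls might look like from CloudMonkey, the CloudStack CLI. The ids, names and endpoint below are made up for illustration, and the exact parameter names may differ in the final 4.1 API:

# On the region 1 management server, register region 2 (hypothetical values)
add region id=2 name=Region2 endpoint=http://csm-region2.myorg.net:8080/client/api
list regions

# Zone operations within the current region (illustrative parameters)
create zone name=Zone1 networktype=Advanced dns1=8.8.8.8 internaldns1=10.10.10.2
list zones
delete zone id=<zone-id>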

• The Management Server cluster for each region should only control the infrastructure within that region. In other words, if a management server cluster fails, then only the region managed by that server cluster should be inaccessible for admins and users. It should not have any impact on the other Regions in the cloud.
 

• Each Region should have access to an object store. The object store should be common to all Zones within the Region. I.e., any object stored from a Zone should be visible across all the other zones within the region.
 

• EIP functionality should be available across both basic and advanced Zones within a Region.
 

• ELB functionality should be available across both basic and advanced Zones within a Region.
 

• The administrative hierarchy (Users, Accounts, Domains, Projects) should be valid across all the regions. I.e., if admins create domains, accounts, users, etc. in one region, it should be reflected in the other regions as well.

This is key. I create a user in AZ1 and I want that user to be able to create a VM in AZ2 using the same credentials: API key and secret key.

• Authentication credentials – username, password, keys – should be valid across all regions.


^ Oh, I just said that! ^
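
A sketch of how that could look with CloudMonkey, pointing the same API key and secret key at two different region endpoints (the hostnames and placeholder ids are illustrative):

set host csm-region1.myorg.net
set apikey <api-key>
set secretkey <secret-key>
deploy virtualmachine zoneid=<zone-in-region-1> serviceofferingid=<id> templateid=<id>

set host csm-region2.myorg.net
deploy virtualmachine zoneid=<zone-in-region-2> serviceofferingid=<id> templateid=<id>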

• Switching between Regions should not require the user to sign-on again (SSO should be supported).

• Resource management should be extended to Region level
 

Available compute, storage, and network (IPs, VLANs, etc.) resources that are currently tracked should be aggregated and displayed at the region level.


Appropriate global parameters should be available at the Region level, while the rest would be available at the Zone level.
 

• Usage: all the (per-account) usage records should be aggregated at the Region level.

This is important for billing purposes.
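
For example, a billing system could pull the per-account records with the existing listUsageRecords API call; in CloudMonkey that might look something like this (the dates, account and domain id are illustrative):

list usagerecords startdate=2013-03-01 enddate=2013-03-31 account=customer1 domainid=1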

• Global Configurations: all of the administrative hierarchy (Domains, Accounts, Users, Projects) related Global Configurations should have consistent values across all regions, and changes must be propagated when a user modifies a configuration in one of the regions.
• Each region should maintain a list of all other regions and their region endpoints.


As you can see, this really is a big change. How people will upgrade will also be a big area of work. However, this is the way the product needs to move to ensure that it can keep delivering a top-class service.

Here are a couple of links that you can look at to find out how the guys did it. This is why open source is so great. No hiding, which means we can all get a better idea of how this beast will work:

https://issues.apache.org/jira/browse/CLOUDSTACK-241

https://issues.apache.org/jira/browse/CLOUDSTACK-815

Wednesday, February 6, 2013

Running multiple puppet masters




When running multiple puppet masters, you need to dedicate a single puppet master to be your CA server. If you are adding additional puppet masters, there is some key config that needs to be set to disable the puppet CA functionality on those masters. Additionally, you need to set up a proxy pass to forward certificate requests to the puppet CA server. This is detailed below.

 


1.    Install puppet agent and puppet master

Install the puppet and epel repo

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh http://yum.puppetlabs.com/el/6/products/x86_64/puppetlabs-release-6-6.noarch.rpm


Install the puppet agent and puppet master

yum install puppet puppet-server -y

2.    Edit the puppet.conf file and stop the puppet master from acting as a CA

vi /etc/puppet/puppet.conf

[master]

    ca = false
    ca_server = puppet-CA-server.domain.net


3.    Start the agent and create and sign the puppet master cert

•    On the additional puppet master

puppet agent --test


•    On puppet CA server

puppet cert sign <puppet-master-server>


•    On the puppet master

puppet agent --test
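
To confirm the new master's certificate was signed, you can list all certs on the CA server; signed certificates are prefixed with a +:

puppet cert list --all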


4.    Configure iptables

iptables -I INPUT 5 -s 10.10.10.0/24 -m tcp -p tcp --dport 8140 -j ACCEPT

service iptables save
service iptables restart


Note: If you are adding multiple bridges, assign the correct iptables rules to each vnic adapter and the applicable source network address, as in the sketch below.
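
For example, with two bridges you might end up with something like this (the interface names and subnets are illustrative):

iptables -I INPUT 5 -i br0 -s 10.10.10.0/24 -p tcp --dport 8140 -j ACCEPT
iptables -I INPUT 6 -i br1 -s 10.10.20.0/24 -p tcp --dport 8140 -j ACCEPT
service iptables save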

5.    Install the required packages for the puppet master

yum install sudo mod_ssl rubygem-passenger mod_passenger policycoreutils-python rsync -y

6.    Copy the example puppet virtual host config to /etc/httpd/conf.d/

cp /usr/share/puppet/ext/rack/files/apache2.conf /etc/httpd/conf.d/puppet-master.conf

7.    Edit the puppet-master.conf file and update accordingly

# you probably want to tune these settings
PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off

Listen 8140

<VirtualHost *:8140>

        SSLProxyEngine On
        ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet.myorg.net:8140/$1

        SSLEngine on
        SSLProtocol -ALL +SSLv3 +TLSv1
        SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

        SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet-server2.myorg.net.pem
        SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet-server2.myorg.net.pem
        SSLCertificateChainFile /var/lib/puppet/ssl/certs/ca.pem
        SSLCACertificateFile    /var/lib/puppet/ssl/certs/ca.pem

        # If Apache complains about invalid signatures on the CRL, you can try disabling
        # CRL checking by commenting the next line, but this is not recommended.
        SSLCARevocationFile     /var/lib/puppet/ssl/crl.pem
        SSLVerifyClient optional
        SSLVerifyDepth  1

        # The `ExportCertData` option is needed for agent certificate expiration warnings
        SSLOptions +StdEnvVars +ExportCertData

        # This header needs to be set if using a loadbalancer or proxy
        RequestHeader unset X-Forwarded-For

        RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        DocumentRoot /usr/share/puppet/rack/puppetmaster/public/
        RackBaseURI /
        <Directory /usr/share/puppet/rack/puppetmaster/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>


Note: We are using ProxyPassMatch. This matches any inbound requests on /certificate and redirects them to the puppet CA server. Change the config accordingly.

Note: As the puppet master you are building is not a CA server, note that the SSLCertificateChainFile, SSLCACertificateFile and SSLCARevocationFile paths are different from those on a puppet master running the CA server. They reflect where the CA certs reside when the puppet master is not a CA itself.
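
A quick way to check the proxying later (once Apache is running) is to request the CA certificate from the new master; the request should be forwarded to the CA server (hostname as used in the config above):

curl -k https://puppet-server2.myorg.net:8140/production/certificate/ca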

8.    Create rack directories

mkdir -p /usr/share/puppet/rack/puppetmaster/{public,tmp}

9.    Copy config.ru rack file to rack web directory

cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmaster/

10.    Change ownership of config.ru rack file to puppet

chown puppet:puppet /usr/share/puppet/rack/puppetmaster/config.ru

11.    Set httpd to start on boot and puppet master to not start

chkconfig httpd on
chkconfig puppetmaster off
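
Then start Apache and point an agent at the new master to confirm Passenger is serving it correctly:

service httpd start
puppet agent --test --server puppet-server2.myorg.net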


12.    Set up a puppet master to connect to puppetdb

•    Run the following on each of your puppet masters:

sudo puppet resource package puppetdb-terminus ensure=latest

•    Add this to /etc/puppet/puppetdb.conf. Note: you may have to create this file.

[main]

  server = <puppetdb>
  port = 8081


•    Add this to /etc/puppet/puppet.conf

[master]

  storeconfigs = true
  storeconfigs_backend = puppetdb


•    Add this to /etc/puppet/routes.yaml. Note: you may have to create this file.

master:
  facts:
    terminus: puppetdb
    cache: yaml
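
After restarting the master (Apache, in this setup) and running an agent, you can sanity-check the PuppetDB connection; the v2 nodes endpoint below is from the PuppetDB 1.x series (it may differ on other releases) and the hostname is illustrative:

service httpd restart
curl 'http://puppetdb.myorg.net:8080/v2/nodes'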


13.    Enable puppet-dashboard reports on the puppet master

•    Add to each puppet master in the puppet environment in /etc/puppet/puppet.conf:

 [master]

    reports = store, https, puppet_dashboard
    reporturl = https://<puppet-dashboard-server>/reports/upload


14.    Copy the following script to the puppet dashboard server and all the puppet masters

•    Create /usr/lib/ruby/site_ruby/1.8/puppet/reports/https.rb with the following code

require 'puppet'
require 'net/http'
require 'net/https'
require 'uri'

Puppet::Reports.register_report(:https) do

  desc <<-DESC
  Send report information via HTTPS to the `reporturl`. Each host sends
  its report as a YAML dump, and this sends that YAML to the server via
  HTTPS POST. The YAML is the `report` parameter of the request.
  DESC

  def process
    url = URI.parse(Puppet[:reporturl].to_s)
    http = Net::HTTP.new(url.host, url.port)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_NONE

    req = Net::HTTP::Post.new(url.path)
    req.body = self.to_yaml
    req.content_type = "application/x-yaml"

    http.start do |http|
      response = http.request(req)
      unless response.code == "200"
        Puppet.err "Unable to submit report to #{Puppet[:reporturl].to_s} [#{response.code}] #{response.msg}"
      end
    end

  end
end


•    Remove the following file from the puppet dashboard and puppet masters

/usr/lib/ruby/site_ruby/1.8/puppet/reports/http.rb

rm -f /usr/lib/ruby/site_ruby/1.8/puppet/reports/http.rb
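
To confirm the master picked up the new report settings, you can print them back and then trigger a run from any agent; the node should then show up with a fresh report in the dashboard:

puppet master --configprint reports
puppet master --configprint reporturl
puppet agent --test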


15.    Edit the auth.conf file on each puppet master so inventory can pick up facts.

vim /etc/puppet/auth.conf

path /facts
method find
auth any
allow *

path /inventory
auth any
method search, find
allow dashboard

# this one is not strictly necessary, but it has the merit
# of showing the default policy, which is to deny everything else

path /
auth any


Note: The config for /facts and /inventory must go above the config for `path /` - otherwise you may get a 403 Forbidden error when running the inventory service on puppet-dashboard.
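
With those rules in place, a quick check from the dashboard server against the facts endpoint should return YAML for a known node rather than a Forbidden error (the node name is illustrative):

curl -k -H 'Accept: yaml' https://puppet-server2.myorg.net:8140/production/facts/agent1.myorg.net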