Tuesday, 11 February 2014

DR: vSphere to AWS using vSphere snapshots and S3 sync

Background

Disaster Recovery has always been a hot topic for business, and for the longest time DR was an exclusive feature for those with huge IT budgets and expensive SANs doing replication at the array level.

Fortunately things have moved on and there are a number of purely software-based technologies which bring disaster recovery functionality within reach of even small users; for VMware-based VMs there are many options, including vSphere Replication, Site Recovery Manager, Veeam and Zerto.

I'm a big fan of both VMware and Amazon Web Services and the lack of any useful product to link these technologies together has often been a point of discussion in client meetings; an on-site VMware solution needs somewhere else to replicate to - AWS should be a perfect fit for this!

At $dayjob we've often quoted customers for just this, based around Veeam backups. However, I have often played through a scenario in my head that would allow regular vSphere snapshots to be replicated to a 'pilot light' environment in AWS using only native tools.

Last weekend, I made it happen.

Theory

In my head at least, it's quite straightforward: take an initial snapshot, replicate it to AWS, and then make use of vSphere Changed Block Tracking so that for every future snapshot only the differences between snapshots need to be replicated - vastly reducing the overhead of getting that data into AWS!

Step One: Take a snapshot


I use VMware's PowerCLI PowerShell cmdlets to connect to the vCenter Server (a single ESXi host should also work) and take a snapshot of the VM I want to protect.

The first time I do this, I pass the 'QueryChangedDiskAreas' function a ChangeId of "*" - vSphere interprets this to mean 'give me a list of all the active extents for that disk'.

Many thanks to Scott Herold, the VMguru, for this great post which told me how QueryChangedDiskAreas worked and how to get the ChangeId for each virtual hard disk!

Scott does a great job of writing function-based code that handles errors. For brevity, the necessary code can be distilled right down to this:


 Connect-VIServer -Server ... -User ... -Password ...
 $vm = Get-VM -Name dc02
 $vmview = $vm | Get-View
 $newSnap = $vm | New-Snapshot -Name "Karma Sync"
 $newSnapview = $newSnap | Get-View
 # Disk key 2000 is the first virtual hard disk; ChangeId "*" means 'all active extents'
 $changes = $vmview.QueryChangedDiskAreas($newSnapview.MoRef, 2000, 0, "*")
 $changes.ChangedArea | Format-Table -HideTableHeaders > thefile.txt
 # Save this ChangeId - it replaces "*" on the next run (see Step Five)
 $thischange = ($newSnapview.Config.Hardware.Device | where { $_.Key -eq 2000 }).Backing.ChangeId
 #$newSnap | Remove-Snapshot -Confirm:$false # run at the end of Step Two
  
To give a little context, 'thefile.txt' will contain something like this after running the PowerShell code:


              0       2097152  
        7340032      15728640  
       31457280       5242880  
      104857600      23068672  
      133169152       8388608  
      145752064      24117248  
      173015040      176160768  
      351272960      49283072  
      401604608      319815680  
      724566016       1048576  
      728760320       5242880  
      735051776     2194669568  
     2930769920      328204288  
     3260022784       8388608  
     3280994304      108003328  
     3536846848     2985295872  
     6525288448       6291456  
     6532628480     1857028096  
     8398045184       8388608  
     8412725248      15728640  
     8431599616      20971520  
     8455716864       3145728  
     8459911168      11534336  
     8474591232       7340032  
     8484028416       5242880  
     10209984512      158334976  
     42947575808       1048576  

This list describes the byte offsets and data lengths of every area of the vmdk which is actually in use - this way we don't need to transfer every byte of the hard disk (most of which will be zeroes).
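To see the saving, summing the second column of the changed-area list gives the total bytes that actually need to move. A minimal sketch, using a shortened, illustrative thefile.txt rather than the real one:

```shell
# Build a three-extent sample of thefile.txt (offset, length per line)
cat > /tmp/thefile.txt <<'EOF'
              0       2097152
        7340032      15728640
       31457280       5242880
EOF
# Sum the 'length' column - this is all we have to replicate
awk '{ sum += $2 } END { printf "bytes to replicate: %d\n", sum }' /tmp/thefile.txt
```

For the full list above, the same one-liner shows just how much of the 40 GiB disk can be skipped.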

Step Two: Retrieve the block payload



thefile.txt tells us which parts of the disk we should be interested in, and we now need to fetch those bytes. I amended the 'vixDiskLibSample.cpp' sample program which comes with VMware's Virtual Disk Development Kit.

I edited the 'DoDump' function, which originally let you specify an offset and byte count and would spit out a hex-dump of that portion of the disk. I removed the hex-dump feature and changed the function so it instead writes a series of files to a 'delta' subdirectory of the current directory.

For every line of 'thefile.txt', the DoDump function will now write a file whose name is the byte offset. The length of that file is, you guessed it, exactly the size of the change.


After running the code, you end up with an 'ls -l' output like this:
-rw-r--r-- 1 root root   2097152 Feb  9 22:21 0
-rw-r--r-- 1 root root  15728640 Feb  9 22:21 7340032
-rw-r--r-- 1 root root   5242880 Feb  9 22:21 31457280
-rw-r--r-- 1 root root  23068672 Feb  9 22:21 104857600
-rw-r--r-- 1 root root   8388608 Feb  9 22:21 133169152
-rw-r--r-- 1 root root  24117248 Feb  9 22:21 145752064
-rw-r--r-- 1 root root 176160768 Feb  9 22:21 173015040
... and so on
You should now remove the vSphere snapshot using the PowerCLI 'Remove-Snapshot' cmdlet.


Step Three: Sync to AWS


Amazon Web Services recently launched a unified CLI tool to bring together a ton of individual tools. This is great news since the configuration and usage for each of those tools was different - you'd end up with a whole stack of Python, Ruby and Java and duplicated configs. Oh dear lord so much Java. I digress.

I create an S3 bucket called 'gdh-vm-replication' in which to store the change sets and then run 'aws s3 sync' to push the changes up:

# aws s3 mb s3://gdh-vm-replication/
# aws s3 sync ./ s3://gdh-vm-replication/
upload: ./10266279936 to s3://gdh-vm-replication/10266279936
upload: ./10223616 to s3://gdh-vm-replication/10223616
upload: ./10255335424 to s3://gdh-vm-replication/10255335424
Completed 3 of 508 part(s) with 497 file(s) remaining
... and so on

Step Four: Sync to disk file


Now in AWS, I have an EC2 instance with an initially empty directory /tmp/spool. I'm dealing with a 40 GiB VMware disk, which means 40960 MiB (power-of-two mebibytes rather than marketing megabytes), so I created a 40 GiB sparse file just to get things going:
# dd if=/dev/zero of=/mnt/dc02.raw bs=1M seek=40959 count=1
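The same trick can be sketched at a smaller scale (40 MiB instead of 40 GiB): seeking past all but the last block gives the full apparent size while consuming almost no real space. The /tmp/demo.raw path is just for illustration:

```shell
# Seek to the final 1 MiB block and write only that block -
# the file reports 40 MiB but occupies roughly 1 MiB on disk
dd if=/dev/zero of=/tmp/demo.raw bs=1M seek=39 count=1 2>/dev/null
ls -l /tmp/demo.raw    # apparent size: 41943040 bytes (40 MiB)
du -k /tmp/demo.raw    # actual blocks used: ~1 MiB worth
```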
The instance performs two simple tasks in normal operation:

  • Pull down the changes:

    # aws s3 sync s3://gdh-vm-replication/ /tmp/spool/

  • Apply those changes to the local disk file:

    # ls -1 /tmp/spool | awk '{print "dd if=/tmp/spool/" $1 " seek=" $1/65536 " of=/mnt/dc02.raw bs=64k conv=notrunc"}' | sh

    (conv=notrunc stops dd from truncating the image file after each chunk it writes)

Using the same 'thefile.txt' as before, the output that gets piped into 'sh' looks like this:

    dd if=/tmp/spool/0 seek=0 of=/mnt/dc02.raw bs=64k conv=notrunc
    dd if=/tmp/spool/7340032 seek=112 of=/mnt/dc02.raw bs=64k conv=notrunc
    dd if=/tmp/spool/31457280 seek=480 of=/mnt/dc02.raw bs=64k conv=notrunc
    dd if=/tmp/spool/104857600 seek=1600 of=/mnt/dc02.raw bs=64k conv=notrunc
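The seek arithmetic (filename = byte offset, seek = offset / block size) can be sanity-checked on toy data; the /tmp/spool-demo paths and the 5-byte 'change' here are illustrative:

```shell
# Each spool file is named for its byte offset; dividing by the 64 KiB
# block size gives the dd seek value for writing it back into place.
bs=65536
mkdir -p /tmp/spool-demo
printf 'hello' > /tmp/spool-demo/131072        # a change at offset 131072
# A small sparse 'disk' of four 64 KiB blocks
dd if=/dev/zero of=/tmp/demo-disk.raw bs=$bs seek=3 count=1 2>/dev/null
for f in /tmp/spool-demo/*; do
  off=$(basename "$f")
  # conv=notrunc so dd patches the middle of the image without shortening it
  dd if="$f" of=/tmp/demo-disk.raw bs=$bs seek=$((off / bs)) conv=notrunc 2>/dev/null
done
dd if=/tmp/demo-disk.raw bs=1 skip=131072 count=5 2>/dev/null   # prints 'hello'
```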
    

    Step Five: Repeat as needed

    Repeat all of the above steps every 15 minutes, or as often as you wish. The only thing of particular note is that back in Step One when you retrieved the value of $thischange, you must ensure that the NEXT time you take a snapshot, you pass that value into the QueryChangedDiskAreas function instead of "*", otherwise you will be syncing the entire disk each time!

    Step Six: Invoke disaster recovery

    Ah yes. That nice unified CLI I mentioned earlier? It's not quite completely unified yet, so the older EC2 API tools are still necessary :/
    I hope they will be incorporated into the unified AWS CLI soon - I logged a support case / feature request about that, since the console output of the ec2-import-instance command is an utter fright - look:

    # ec2-import-instance -O AKIAxxxxxxxxxxxxxx  -o AKIAxxxxxxxxxxxxxxx -W cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxW -w cxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxW -b gdh-vmimport --region eu-west-1 -f RAW /mnt/dc02.raw
     
    Requesting volume size: 40 GB
    TaskType        IMPORTINSTANCE  TaskId  import-i-fh6yum0c       ExpirationTime  2014-02-16T22:36:58Z    Status  active  StatusMessage   Pending     InstanceID      i-760d4437
    DISKIMAGE       DiskImageFormat RAW     DiskImageSize   42949672960     VolumeSize      40      AvailabilityZone        eu-west-1c      ApproximateBytesConverted   0       Status  active  StatusMessage   Pending
    Creating new manifest at gdh-vmimport/41f1f52b-62e8-45d4-a3f9-9d3de98b5749/image.rawmanifest.xml
    
    Uploading the manifest file
    Uploading 42949672960 bytes across 4096 parts
    0% |--------------------------------------------------| 100%
       |===========
    
    The bucket name 'gdh-vmimport' is somewhere for the ec2-import-instance to store a manifest file which details the S3 location of all of the 4096 10MB chunks of data which AWS will use to 'Amazonify' your disk image.
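The chunking arithmetic checks out against the upload output above:

```shell
# 42949672960 bytes split across 4096 parts is exactly 10 MiB per part
echo $((42949672960 / 4096))   # prints 10485760
```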
    This will take ages to upload - the 40GiB took approx 45 minutes on an m3.medium. After that, Amazon will boot the instance behind the scenes, install EC2Config and the Citrix PV SCSI and Network drivers.

    This takes approximately 15 minutes. That brings the minimum achievable 'Recovery Time Objective' to no less than one hour, which is a bit sucky.

    Instance i-760d4437 (mentioned in the output from ec2-import-instance) will be ready to start - you did remember to open port 3389 in the Windows host firewall, didn't you? ;)

    All comments welcome - this is a Proof of Concept piece of work and while it works, you'll want to add a wrapper for monitoring, reporting, alerting and configuration.


    Code reference

Here's a patch of the changes I made to VMware's vixDiskLibSample application. I am not a C++ programmer!

     --- vixDiskLibSample.cpp.orig  2014-02-08 01:45:58.289800491 +0000  
     +++ vixDiskLibSample.cpp    2014-02-08 16:04:53.599361218 +0000  
     @@ -17,6 +17,7 @@  
      #include <sys/time.h>  
      #endif  
       
     +#include <fstream>  
      #include <time.h>  
      #include <stdio.h>  
      #include <stdlib.h>  
     @@ -1300,19 +1301,37 @@  
      DoDump(void)  
      {  
        VixDisk disk(appGlobals.connection, appGlobals.diskPath, appGlobals.openFlags);  
     -  uint8 buf[VIXDISKLIB_SECTOR_SIZE];  
     +  uint8 buf[65536];  
        VixDiskLibSectorType i;  
     +  std::ifstream infile("thefile.txt");  
       
     -  for (i = 0; i < appGlobals.numSectors; i++) {  
     -    VixError vixError = VixDiskLib_Read(disk.Handle(),  
     -                      appGlobals.startSector + i,  
     -                      1, buf);  
     -    CHECK_AND_THROW(vixError);  
     -    DumpBytes(buf, sizeof buf, 16);  
     +  using namespace std;  
     +  
     +  long long a, b;  
     +  while (infile >> a >> b)  
     +  {  
     +    appGlobals.startSector = a / 512;  
     +    appGlobals.numSectors = b / 65536;  
     +    char outfilename[40];  
     +    int n;  
     +  
     +    n=sprintf (outfilename, "delta/%llu", a);  
     +    std::ofstream outputBufferHere(outfilename, std::ios::binary|std::ios::out);  
     +  
     +    for (i = 0; i < appGlobals.numSectors; i++) {  
     +      VixError vixError = VixDiskLib_Read(disk.Handle(),  
     +                      appGlobals.startSector + (i * 128),  
     +                      128, buf);  
     +  
     +      CHECK_AND_THROW(vixError);  
     +//      DumpBytes(buf, sizeof buf, 16);  
     +      outputBufferHere.write((const char*)buf, 65536);  
     +    }  
     +    outputBufferHere.close();  
        }  
     +  infile.close();  
      }  
       
     -  
      /*  
      *--------------------------------------------------------------------------  
      *  
    

    Friday, 17 January 2014

    Masterless Puppet on Windows in AWS using Elastic Beanstalk

    Sam Bashton gave a great talk at PuppetConf 2013 about how Bashton, Ltd. deploy Linux-based EC2 instances and configure them using Puppet where all the modules and manifests are 'baked in.'

    In the world of Puppet this is a very convenient way to handle the problem of running a puppet-master and its requisite SSL key signing. Puppet's SSL signing is suitable for running a house of pets, but falls some way short when dealing with a farm of cattle.

    During the Christmas break I tried to reimplement an existing customer's AWS Elastic Beanstalk deployment with Puppet using Sam Bashton's talk as the inspiration. I've now been prompted to write it up after reading Masterzen's great article this morning.


    Current State


    The customer needs to run more than a single application on each EC2 instance - the cost to deploy a separate Beanstalk for each tier is simply too high for them, and their code is not yet ready to consume a single 'service layer.'

    Hence I made use of .ebextensions config files in Elastic Beanstalk to configure multiple IIS websites on each instance. I'll save you from the full travesty of this file (e.g. abusing the HOSTS file so the services hostname resolves to 127.0.0.1) and provide this potted version of the 'beanstalk.config':

    sources:
      "c:/scripts/ipfudge": http://admin01/scripts/ipfudge.zip
      "c:/program files/amazon/AWSCLI": http://admin01/scripts/AWSCLI.zip
      "c:/inetpub/newwebservices": http://admin01/sites/custname.newwebservices.zip

    files:
      "c:/scripts/s3-mobile-appdata-sync.cmd":
        content: |
          c:
          cd \inetpub\mobile\App_Data
          "C:\program files\amazon\awscli\aws" --region eu-west-1 s3 sync s3://cust-mobile-appdata/ . --delete >%TEMP%\s3sync4.log 2>&1

    container_commands:
      newwebservices01:
        cwd: c:\windows\system32\inetsrv
        command: appcmd add apppool /name:"CustomerName.NewWebServices"
        ignoreErrors: true
        waitAfterCompletion: 0
      newwebservices02:
        cwd: c:\windows\system32\inetsrv
        command: appcmd add site /name:"CustomerName.NewWebServices" /id:2 /bindings:http://web.customername.com:80 /physicalPath:"c:\inetpub\newwebservices"
        ignoreErrors: true
        waitAfterCompletion: 0
      newwebservices03:
        cwd: c:\windows\system32\inetsrv
        command: appcmd set app "CustomerName.NewWebServices/" /applicationPool:"CustomerName.NewWebServices"
        ignoreErrors: true
        waitAfterCompletion: 0
      schtasks01:
        cwd: c:\windows\system32
        command: schtasks /create /ru SYSTEM /TN Sync_S3_App_Data /TR "cmd /c c:\scripts\s3-appdata-sync.cmd >%TEMP%\s3sync1.log 2>&1" /SC MINUTE /MO 30 /ST 00:18 /F
        ignoreErrors: true
        waitAfterCompletion: 0
      zzfinal01:
        cwd: c:\windows\system32\inetsrv
        command: appcmd set config /section:urlCompression /doDynamicCompression:True
        ignoreErrors: true
        waitAfterCompletion: 0
      zzfinal02:
        cwd: c:\windows\system32\inetsrv
        command: cmd /c c:\scripts\ipfudge\ipfudge.cmd >%TEMP%\ipfudge.log 2>&1
        ignoreErrors: true
        waitAfterCompletion: 0


    Problems


    There's a whole ton of pain going on there. Every time new code is deployed through Beanstalk (the customer uses the AWS Toolkit for Visual Studio to publish directly from their development workstations) every command in this Beanstalk configuration will be executed.

    That means at least the following things happen:

    • The c:\inetpub directory containing the 'main' application at 'Default Web Site' will be emptied and populated according to whatever was pushed from Visual Studio
    • The 'sources' section in the beanstalk.config will remove the destination directory, download the ZIP file from the provided URL and unpack it at the destination directory
    • All of the 'container_commands' will be executed in lexicographical order (hence 'zzfinalxx' to ensure they are run last)
    • The commands in 'ipfudge.cmd' are run each time (a long series of 'appcmd set config' commands to allow/deny access to the NewWebServices site based on IP address)
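The lexicographical ordering of container_commands is easy to demonstrate (which is why the 'zzfinalxx' prefix works as a run-last trick):

```shell
# container_commands run in string sort order, not declaration order,
# so 'zzfinal01' sorts after everything else
printf '%s\n' zzfinal01 newwebservices02 schtasks01 newwebservices01 | sort
```

Note that string order differs from numeric order ('10' sorts before '2'), so padded numeric suffixes are the safe choice.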

    The repeated use of 'ignoreErrors: true' makes the associated output in 'cfn-init.log' very noisy, e.g.:

    2014-01-17 09:15:55,444 [ERROR] Command newwebservices01 (appcmd add apppool /name:"CustomerName.NewWebServices") failed
    2014-01-17 09:15:55,444 [INFO] ignoreErrors set to true, continuing build
    2014-01-17 09:15:56,490 [ERROR] Command newwebservices02 (appcmd add site /name:"CustomerName.NewWebServices" /id:2 /bindings:http://web.customername.com:80 /physicalPath:"c:\inetpub\newwebservices") failed
    2014-01-17 09:15:56,490 [INFO] ignoreErrors set to true, continuing build
    2014-01-17 09:15:57,648 [INFO] Command newwebservices03 succeeded


    Desire

    I would far rather define this in terms of a Puppet manifest - that way only deviations from the described state will be changed, rather than churning through every change every time.

    I had something like this in mind:



    Bootstrap


First we need to take the Windows Server 2012 AMI used with Elastic Beanstalk and install Puppet and other tools. It's common to bake your own AMI; however, I'm quite keen on avoiding work, so I prefer to configure the existing Windows Server 2012 AMI (ami-396a784d at the time of writing).

    So, first we need a hugely simplified beanstalk.config:

    packages:
      msi:
        Puppet: https://downloads.puppetlabs.com/windows/puppet-3.4.0.msi

    sources:
      "c:/scripts/gopuppet": http://admin01/scripts/gopuppet.zip
      "c:/programdata/puppetlabs/puppet/etc/modules": http://admin01/scripts/puppetmodules.zip

    container_commands:
      gopuppet:
        cwd: c:\windows\system32
        command: cmd /c powershell -executionpolicy remotesigned -file c:\scripts\gopuppet\gopuppet.ps1 >c:\progra~1\amazon\elasticbeanstalk\logs\puppet.log 2>&1
        ignoreErrors: true
        waitAfterCompletion: 0


    This will download 'gopuppet.zip' and puppetmodules.zip from an existing server in the environment called 'admin01' - the source could just as easily be on S3.

    Redirecting the output for the 'gopuppet' command allows us to see what's going on by using Beanstalk's 'Snapshot Logs' command in the web UI.


    What Am I?


gopuppet.zip contains a single PowerShell script, gopuppet.ps1:

    import-module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"
    Set-DefaultAWSRegion eu-west-1

    $my_instance=(New-Object System.Net.WebClient).DownloadString("http://169.254.169.254/latest/meta-data/instance-id")

    #i-beec85f2

    $my_instance2=New-Object 'collections.generic.list[string]'
    $my_instance2.add($my_instance)

    $filter= New-Object Amazon.EC2.Model.Filter -Property @{Name = 'resource-id'; Value = $my_instance2}

    #Name                                                        Values
    #----                                                        ------
    #resource-id                                                 {i-beec85f2}

    $my_tags= Get-EC2Tag -filter $filter
    # | where { $_.Key -eq "elasticbeanstalk:environment-name" }
    # the piped where clause works on my home laptop but not here - maybe AWS Tools 3.5 versus 3.0 on the AMI?

    #ResourceId                    ResourceType                  Key                           Value
    #----------                    ------------                  ---                           -----
    #i-beec85f2                    instance                      elasticbeanstalk:environme... customername-taupe

    foreach ($tag in $my_tags)
        {
        if ($tag.Key -eq "elasticbeanstalk:environment-name")
            {
            $envname= $tag.Value
            }
        }

    #customername-taupe

    # Now we know who we are and what environment we're part of, we need to grab the current config for this environment

    $tempfile = Read-S3Object -BucketName cust-deployment-data -Key "$envname.pp" -File "$env:temp\$envname.pp"

    # Check that the file was written less than 10 million ticks ago (1 tick = 100ns, so that's one second)

    if ((((Get-Date) - $tempfile.LastWriteTime).ticks) -lt 10000000)
     {
     write "file $tempfile was written to recently"

     # Well this is a terrible fudge and it's the only way I can make it work at first-run
     # Using the windows_path module in Puppet doesn't work until the *second* Puppet run
     $env:Path += ";C:\Program Files (x86)\Git\cmd"
     # Finally run puppet and apply this config
     & "C:\Program Files (x86)\Puppet Labs\Puppet\bin\puppet" apply --debug $tempfile

     if ($LASTEXITCODE -ne 0)
        {
        write "Execution failed"
        }
     }
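As a sanity check on the tick arithmetic used above (a .NET tick is 100 nanoseconds):

```shell
# 10,000,000 ticks x 100 ns/tick = 1,000,000,000 ns = 1 second
ticks=10000000
echo $((ticks * 100))                 # nanoseconds: 1000000000
echo $((ticks * 100 / 1000000000))   # seconds: 1
```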


    Modules


    The puppetmodules.zip contains many off-the-shelf modules from the Puppet Forge:


    I had to hack the download_file module around to fix assumptions made about the location of temporary directories, and also to include a new download_unpack_file function which uses the Explorer Shell to unzip files without needing 7-Zip or other binary dependencies.


    Make It So


    Now we're really getting somewhere and we just need the manifest itself. Here's a simplified version, since the real one is an epic 400 lines.

    # Install Windows features Start
      dism { 'IIS-ASPNET':
        ensure => present,
        require => Dism['IIS-ISAPIFilter'],
      }
      dism { 'IIS-ISAPIFilter':
        ensure => present,
        require => Dism['IIS-ISAPIExtensions'],
      }
      dism { 'IIS-ISAPIExtensions':
        ensure => present,
        require => Dism['IIS-NetFxExtensibility'],
      }
      dism { 'IIS-NetFxExtensibility':
        ensure => present,
        require => Dism['NetFx4Extended-ASPNET45'],
      }
      dism { 'NetFx4Extended-ASPNET45':
        ensure => present,
      }
      file {'c:/scripts':
        ensure          => directory,
    require         => Iis_vdir["CustName.NewWebServices/"]
      }

    # Install Windows features End

    # NewWebServices Start
      file {'c:/inetpub/newwebservices':
        ensure          => directory,
      }

      download_unpack_file { 'custname.newwebservices.zip':
        url => 'http://admin01/custname.newwebservices.zip',
        destination_directory => 'c:\inetpub\newwebservices',
        temp_directory => 'c:\windows\temp',
        require => File['c:/inetpub/newwebservices'],
      }

      iis_apppool {'CustName.NewWebServices':
        ensure                => present,
        managedpipelinemode   => 'Integrated',
        managedruntimeversion => 'v4.0',
      }

      iis_site {'CustName.NewWebServices':
        ensure          => present,
        bindings        => ["http/*:80:web.custname.com"],
      }

      iis_app {'CustName.NewWebServices/':
        ensure          => present,
        applicationpool => 'CustName.NewWebServices',
      }

      iis_vdir {'CustName.NewWebServices/':
        ensure          => present,
        iis_app         => 'CustName.NewWebServices/',
        physicalpath    => 'c:\inetpub\newwebservices',
        require         => Download_unpack_file["custname.newwebservices.zip"]
      }
    # NewWebServices End

    # S3 Mobile Sync Start
      file {'c:/scripts/s3-mobile-appdata-sync.cmd':
        content =>
    'c:
    cd \inetpub\mobile\App_Data
    "C:\program files\amazon\awscli\aws" --region eu-west-1 s3 sync s3://cust-mobile-appdata/ . --delete >%TEMP%\s3sync4.log 2>&1
    ',
        require => File["c:/scripts"]
      }

     scheduled_task { 'Sync_S3_Mobile_App_Data':
         ensure    => present,
         enabled   => true,
         command   => 'c:\scripts\s3-mobile-appdata-sync.cmd',
         trigger   => {
           schedule   => daily,
           start_time => '00:17'
         },
         require   => File["c:/scripts/s3-mobile-appdata-sync.cmd"]
       }



    # This is a weak fudge because Puppet's scheduled_task support 
    # does not include 'every N minutes'


      exec { 'Sync_S3_Mobile_App_Data':
        command => 'c:\windows\system32\schtasks.exe /change /TN "Sync_S3_Mobile_App_Data" /RI 30',
        subscribe   => Scheduled_task["Sync_S3_Mobile_App_Data"],
        refreshonly => true
      }

    # S3 Mobile Sync End

    # AWS CLI Tools Start
      download_unpack_file { 'AWSCLI.zip':
        url => 'http://admin01/scripts/AWSCLI.zip',
        destination_directory => 'c:\program files\amazon\AWSCLI',
        temp_directory => 'c:\windows\temp',
        require => File['c:/program files/amazon/AWSCLI'],
      }

      file {'c:/program files/amazon/AWSCLI':
        ensure          => directory,
      }


    # AWS CLI Tools End 


    # IP Security Fudge Start
      dism { 'IIS-IPSecurity':    
        ensure => present,
      }

      exec { 'install-ipfudge':

        command   => "c:\\windows\\temp\\ipfudge.cmd",
        logoutput => true,
        refreshonly => true,
        subscribe => Download_file['ipfudge'],
    require => Dism['IIS-IPSecurity']
      }

      download_file { 'ipfudge':
        url => 'http://admin01/scripts/ipfudge.cmd',
        destination_directory => 'c:\windows\temp',
        temp_directory => 'c:\windows\temp',
        require => File['c:/scripts'],
      }

    # IP Security Fudge End

    # More weakness since there is no IIS module which can manipulate this data, so we break out to a dirty exec()
      exec { 'Dynamic_Compression':
        command     => 'c:\windows\system32\inetsrv\appcmd.exe set config /section:urlCompression /doDynamicCompression:True',
        refreshonly => true,
      }


    Reflection

    This was an interesting technical challenge and not one I would put into production for the following reasons:

    • Deployment time was far greater during the 'puppet apply' run. This was typically over three minutes for each new run compared with one minute for the 'do it all' beanstalk.config approach.
    • Initial EC2 instance build was far slower since the Puppet MSI took several minutes (I appreciate a pre-baked MSI would help here)
    • Puppet's support for Windows is still sketchy:
      • No scheduled_task 'every N minutes'
      • No manipulation of the IIS config structure
      • Much more verbose definition of IIS sites cf. three 'appcmd' lines
      • Most of the modules on the Forge will not work on Windows
    • No obvious way to get a signal to re-run 'puppet apply' other than uploading or redeploying existing code through Beanstalk. 
      • The full config included several Git-based vcsrepo sources - I wanted to be able to 'bump' the config rather than do a full environment update.
    Comments welcome! :)