Thursday 22 August 2013

Shrinking a Windows boot drive in AWS

I recently deployed a couple of Windows domain controllers from a CloudFormation template, and only after I'd done a fair amount of work on them did I realise that the boot drives were 100GB rather than a more sensible 40GB.

Now, that additional 60GB isn't exactly going to break the bank (it works out at about £50 per year), but I'm committed to provisioning the correct size.

I investigated a few different options for shrinking a Windows boot drive in AWS, and this is the one I found to be the clearest.

In this walkthrough I'll be working on the Windows machine in question, hostname DC01.

AWS Console

  • Take a snapshot of DC01 (this is the safety net if anything goes wrong)
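    • If you prefer the command line, the same snapshot can be taken with the AWS CLI - a sketch only, assuming the CLI is installed and configured, with vol-xxxxxxxx standing in for DC01's root volume ID:
      aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "DC01 root volume before shrink"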

Windows

  • Start -> Administrative Tools -> Computer Management -> Storage -> Disk Management
  • Right-click on Volume C: -> Shrink Volume and set the 'Total size after shrink' to no more than 39000 MB, so that the useful data sits comfortably inside the new 40GB volume



  • Start -> Shut down. 

AWS Console


  • Determine which Availability Zone DC01 is in. In this case it's eu-west-1a.

  • Create a new Amazon Linux EC2 instance in the same zone.
    • I chose an EBS-optimised m1.large to improve the disk speed
  • Detach the 100GB root volume from DC01
  • Attach the 100GB root volume to the new Amazon Linux EC2 instance as device /dev/sdf (the default as of writing)
  • Create a new 40GB standard EBS volume in the same zone and attach it to the Amazon Linux EC2 instance as device /dev/sdg (be careful - the attach dialog defaults to /dev/sdf, which is already taken); these console steps are also sketched with the AWS CLI after this list
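
For reference, a rough AWS CLI equivalent of the console steps above - a sketch only, with placeholder IDs (i-dc01 for the Windows instance, i-linux for the worker, vol-100gb and vol-40gb for the two volumes) that you would swap for your own:

    # Confirm which Availability Zone DC01 is in (look for Placement in the output)
    aws ec2 describe-instances --instance-ids i-dc01
    # Move the 100GB root volume from DC01 to the Linux worker as /dev/sdf
    aws ec2 detach-volume --volume-id vol-100gb
    aws ec2 attach-volume --volume-id vol-100gb --instance-id i-linux --device /dev/sdf
    # Create the new 40GB standard volume in the same zone and attach it as /dev/sdg
    aws ec2 create-volume --size 40 --volume-type standard --availability-zone eu-west-1a
    aws ec2 attach-volume --volume-id vol-40gb --instance-id i-linux --device /dev/sdg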

Amazon Linux EC2 instance

  • Check that the devices are attached correctly with 'fdisk -l':

  • Yes, /dev/sdf is our 100G source drive and /dev/sdg is the new 40G destination
  • Use 'dd' to copy the contents of the source to the destination
    • sudo dd if=/dev/sdf of=/dev/sdg bs=1M
    • The 'bs=1M' tells dd to use a block size of one megabyte - this is not only faster but also reduces the number of I/O operations. AWS charges per I/O operation :)
    • BONUS: because every block of the new volume gets written, this also acts as a full pre-warm of the EBS volume, maximising its performance
  • The 'no space left on device' message at the end is perfectly normal - the full 100GB device will not fit onto 40GB - the important part is that the useful data (thanks to the shrink step earlier) occupies less than 40GB; see the sketch after this list
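
Pulling the Linux steps together, a minimal session looks something like this (on some kernels the devices appear as /dev/xvdf and /dev/xvdg rather than /dev/sdf and /dev/sdg, so check fdisk's output before copying):

    # List the attached disks and confirm which is the 100GB source and which the 40GB destination
    sudo fdisk -l
    # Block-copy the shrunk source onto the smaller destination volume
    sudo dd if=/dev/sdf of=/dev/sdg bs=1M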

AWS Console

  • Stop the Amazon Linux EC2 instance
  • Detach both the 100G and 40G volumes
  • Attach the 40G volume to DC01 as device /dev/sda1
  • Start DC01
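
The console steps above map onto the AWS CLI roughly as follows - again only a sketch, with the same placeholder IDs:

    # Stop the Linux worker, free up both volumes, then give DC01 its new 40GB root
    aws ec2 stop-instances --instance-ids i-linux
    aws ec2 detach-volume --volume-id vol-100gb
    aws ec2 detach-volume --volume-id vol-40gb
    aws ec2 attach-volume --volume-id vol-40gb --instance-id i-dc01 --device /dev/sda1
    aws ec2 start-instances --instance-ids i-dc01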

Windows

  • Verify that the instance comes up and that the root device is 40GB as expected.
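
The new size can also be double-checked from the API side - a sketch, with i-dc01 once more standing in for DC01's instance ID:

    # List the volumes attached to DC01; the size should now read 40
    aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=i-dc01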


AWS Console

  • Delete the 100GB snapshot taken at the start
  • Delete the original 100GB volume that came from DC01
  • Terminate the Amazon Linux EC2 instance
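
Or, as an AWS CLI sketch (placeholder IDs again, including snap-xxxxxxxx for the snapshot taken at the start):

    aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxx
    aws ec2 delete-volume --volume-id vol-100gb
    aws ec2 terminate-instances --instance-ids i-linux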

Tada!

    Comments:

    1. Hello Gavin, thanks a lot for your post, very simple to follow.
      In appreciation for your generous post I will humbly add a few things that came up when I followed your instructions:
      * Some custom Linux kernels do not use names like "/dev/sdf" for the volumes, so if you enter those names in the EC2 Management Console, the Linux box will instead use names like "/dev/xvdf". Run a simple "mount" command on the Linux box to see which naming scheme your kernel uses when attaching the volumes (so you know what to expect when following Gavin's instructions). This issue is documented in the EC2 User Guide in the Storage chapter.
      * The "dd" command may take a very long time to copy the volume, and you will probably want a way to check the progress of the copy operation. A simple way is the command:
      % sudo kill -USR1 <pid of dd>
      from another Linux shell. Get the process ID with a command like:
      % ps -ef | grep dd
      This won't kill the dd process; it just sends it a signal. A few seconds after the signal, dd will report its progress in a format like:
      0+14 records in
      0+14 records out
      204 bytes (204 B) copied, 24.92 seconds, 0.0 kB/s
      I used it after waiting 3 hours for the copy to complete - obviously I was worried about the success of the copy operation.
      I hope this helps and makes your post even more useful.
      Regards,
      Eduardo Yánez.

      Replies
      1. Hey Eduardo,

        Many thanks for your kind words about the blog post and also for your follow-up improvements. You're right about the device letters - certainly the classic 'sd' device names in the Amazon Linux AMI are just symlinks to the real block devices on /dev/xvdN:

        [root@ip-10-0-0-181 ~]# ls -l /dev/sda1
        lrwxrwxrwx 1 root root 5 Oct 10 12:08 /dev/sda1 -> xvda1
        [root@ip-10-0-0-181 ~]# ls -l /dev/xvda1
        brw-rw---- 1 root disk 202, 1 Oct 10 12:08 /dev/xvda1


        On the topic of 'dd' progress, I often open another PuTTY window and just run 'vmstat 5' - the 'bi' and 'bo' columns show how quickly the transfer is going.

        The great thing about Linux (and any OS, really..) is there are so many different ways to solve the same problem! :)

        Cheers,
        Gavin.

    2. For the non-Linux guys out there, there is a way to do it and stay within Windows. Please see the technical article on my Evernote account here: https://www.evernote.com/shard/s2/sh/2801b360-994c-4b6e-8aab-3743f9662fe9/7403918691bb67b82fc20310a71bab0a

    3. I had been looking for a method for shrinking Windows root volumes in AWS for quite a while and yours works like a charm. Thank you Gavin!

    4. You need to run the dd command using sudo.

      Replies
      1. Also to get the status, run this command in another shell prompt and view the result in the first shell prompt:

        sudo kill -USR1 $(pgrep ^dd)

      2. or even (to get status updates every 60 seconds):
        until [ ! $(pgrep ^dd) ]; do sudo kill -USR1 $(pgrep ^dd) && sleep 60; done

    5. Hi Gavin,

      Thanks so much for the walk-through. Very easy to follow and worked like a charm!

      An observation, though. Contrary to the ~3hrs reported by Eduardo Yánez, it took less than 30 minutes to shrink a 200GB Windows instance to 50GB using the m1.large you suggested. (Amazon might be much faster nowadays.)

      Thanks once again for saving me hundreds of bucks!


      Regards,


      ADEBISI Foluso.

    6. Great! Thanks for the detailed instructions. It works.

    7. dd is wonderful as it will do block-level copies, etc. However, as noted here, it will run until there is no more disk space to write to on the destination (the 40GB disk). Fine. But what if there is data on the original 100GB volume located AFTER the 40GB mark? You will be in a load of hurt (missing/corrupt files). Defragmenting the disk and making sure all the data sits within the first 39GB of the original 100GB drive would be necessary.

      Replies
      1. Hi Charles,

        Thanks for the comments - if you have a look at the article again, you'll see there is a dedicated step (the second screenshot) for shrinking the volume size below 40GB. The 'dd' is just to copy that remaining portion.

        Cheers,
        Gavin.

      2. YES!
        Thanks for pointing that out again - that is something I overlooked.

        Cheers

    8. It worked great! Thanks much for the steps.

    9. This post is perfect! Just what I was looking for. Thanks!

    10. Great post! Despite being a little bit old, it still works like a charm! Thank you very much.
