Automate a ScaleIO lab setup with Vagrant and VirtualBox

If you’ve read the other blog posts on ScaleIO, you might be interested in running it yourself. You might not have your own hardware lab to run it on, but you do have a laptop or desktop, right? Awesome! That’s all you need, and we’ll go through how to get it up and running using some really smart tools.

If you just want to see how it runs without installing anything, here’s the entire automated setup captured in asciinema, one of my new favourite tools:

http://asciinema.org/a/6543

The first tool we’ll use is VirtualBox, a freely available and open source virtualization solution (yes, no money needed to get it, but please contribute to the development!) for Windows, OS X, Linux and Solaris. Download it, install it, and that’s it. No configuration is needed unless you want to change any of the defaults we’ll be working with. It’s a really good virtualization solution, and I’ve been using it for years next to my VMware Workstation and VMware Fusion installations.

Next up is Vagrant, an awesome tool for automating the creation and configuration of VMs running in VirtualBox, VMware Fusion, AWS and others. It runs on Windows, OS X and Linux as well, so no matter how you spell your favourite OS you’ll be able to use it. Download it, install it and you’re ready to go. No configuration is needed there either, as all the settings we’ll use with Vagrant will live in a so-called Vagrantfile.

If you want to try Vagrant and VirtualBox before we get to the ScaleIO deployment, you can create a folder called “vagrant”, open your terminal or command window in that folder, and run the following commands to install and start a recent Ubuntu distribution automatically:

vagrant box add saucy http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box
vagrant init saucy
vagrant up

Vagrant will now download a “Cloud Image” of Ubuntu 13.10, initialize the directory with a Vagrantfile pointing at the “saucy” box, and start the VM. After it’s booted, you can SSH to it with the following command:

vagrant ssh

That’s it! Now you have a fully installed Ubuntu 13.10 VM where you can do whatever you want, and all you’ve done is issue a few commands :)
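For the curious: the Vagrantfile that “vagrant init saucy” generates is mostly comments. Stripped down, it boils down to a couple of lines telling Vagrant which box to boot:

Vagrant.configure("2") do |config|
  config.vm.box = "saucy"
end

Everything else, such as networking and provisioning, is opt-in, and that’s exactly what the ScaleIO Vagrantfile below takes advantage of.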

Ok, now that you’ve become somewhat comfortable with VirtualBox and Vagrant, let’s move on to the ScaleIO lab setup. All you need for this is the ScaleIO installation package, which you can find on support.emc.com. Unpack it and you’ll find a folder called CentOS_6.X containing a file called “ECS-1.20-0.357.el6-install“, the most recent version at the time of this writing.

Create a new directory called “scaleio” somewhere on your computer and copy the installation file there. As you saw in the example above, you will also need a Vagrantfile to actually get your VMs up and running, and instead of leaving you to figure that out by yourself, I am providing such a Vagrantfile for your use here. It comes with no warranty, and I’m not responsible if your computer breaks in any way :)

When you have all that, your “scaleio” folder should look like this:

$ ls
ECS-1.20-0.357.el6-install Vagrantfile

That’s all you need! Crazy, I know. But if you look at the Vagrantfile, you’ll see that we are in fact doing a lot of things in there. We’re defining three VMs (three nodes is the minimum for a ScaleIO environment), setting static IPs on them, and running a really long shell command on each node that automatically installs and configures ScaleIO, using a truncated (sparse) 100GB file as each node’s SDS device, and creates an 8GB volume on top. There are no clients defined outside the ScaleIO environment; I’m leaving that as an exercise for you, dear reader.
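To give you an idea of the structure without pasting the whole thing, here’s a rough sketch of what such a Vagrantfile looks like. The node names and the mdm1 IP match what you’ll see later in this post; the base box name, the other two IPs and the abbreviated provisioning step are placeholders, so treat this as an outline rather than the real file:

Vagrant.configure("2") do |config|
  config.vm.box = "centos64"  # placeholder: any CentOS 6.x base box

  # Two MDMs and a tie-breaker; three nodes is the ScaleIO minimum
  nodes = {
    "mdm1" => "192.168.50.10",
    "mdm2" => "192.168.50.11",  # placeholder IP
    "tb"   => "192.168.50.12"   # placeholder IP
  }

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provision "shell", inline: <<-SHELL
        # create the sparse 100GB file that backs this node's SDS device
        truncate -s 100GB /home/vagrant/scaleio1
        # ...followed by the long ScaleIO install/config command from the real file
      SHELL
    end
  end
end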

One thing that needs to be changed in the Vagrantfile is the placeholder string YOURLICENSEHERE in the long command string at the bottom of the file. Add your own ScaleIO license there and you’re done.
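If you’d rather not open an editor, a quick in-place substitution does the trick (GNU sed shown, as on Linux; on OS X, sed -i needs an empty string argument after the flag):

sed -i 's/YOURLICENSEHERE/your-actual-license-key/' Vagrantfile

With the license in place, run the following command to bring up the entire ScaleIO environment: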

vagrant up

This will take a while, so go grab a coffee and relax. I highly recommend using an SSD drive for this; if you don’t have one already, isn’t it time you got one? Anyway, after the environment has been set up and is running, you can do the following to connect to the first MDM:

vagrant ssh mdm1

Then issue this command to verify that the installation completed correctly:

sudo scli --query_all --mdm_ip=192.168.50.10

You should see output similar to this:

[vagrant@mdm1 ~]$ sudo scli --query_all --mdm_ip=192.168.50.10
ScaleIO ECS Version: R1_20.0.357
Customer ID: XXXXXX
Installation ID: XXXXXXXXXXXXX
The system was activated 0 days ago
Rebuild network data copy is unlimited
Rebalance network data copy is unlimited
Query all returned 1 protection domains

Protection domain pdomain has 1 storage pool, 3 SDS nodes, 1 volumes and 112 GB (114688 MB) available for volume allocation
Rebuild/Rebalance parallelism is set to 3
Storage pool Default has 1 volumes and 112 GB (114688 MB) available for volume allocation

SDS Summary:
3 SDS nodes have Cluster-state UP
3 SDS nodes have Connection-state CONNECTED
3 SDS nodes have Remove-state NONE
3 SDS nodes have Device-state NORMAL
276.3 GB (283026 MB) total capacity
229.7 GB (235268 MB) unused capacity
0 Bytes snapshots capacity
16 GB (16384 MB) in-use capacity
16 GB (16384 MB) protected capacity
0 Bytes failed capacity
0 Bytes degraded-failed capacity
0 Bytes degraded-healthy capacity
0 Bytes active-source-back-rebuild capacity
0 Bytes pending-source-back-rebuild capacity
0 Bytes active-destination-back-rebuild capacity
0 Bytes pending-destination-back-rebuild capacity
0 Bytes pending-rebalance-moving-in capacity
0 Bytes pending-fwd-rebuild-moving-in capacity
0 Bytes pending-moving-in capacity
0 Bytes active-rebalance-moving-in capacity
0 Bytes active-fwd-rebuild-moving-in capacity
0 Bytes active-moving-in capacity
0 Bytes rebalance-moving-in capacity
0 Bytes fwd-rebuild-moving-in capacity
0 Bytes moving-in capacity
0 Bytes pending-rebalance-moving-out capacity
0 Bytes pending-fwd-rebuild-moving-out capacity
0 Bytes pending-moving-out capacity
0 Bytes active-rebalance-moving-out capacity
0 Bytes active-fwd-rebuild-moving-out capacity
0 Bytes active-moving-out capacity
0 Bytes rebalance-moving-out capacity
0 Bytes fwd-rebuild-moving-out capacity
0 Bytes moving-out capacity
16 GB (16384 MB) at-rest capacity
8 GB (8192 MB) primary capacity
8 GB (8192 MB) secondary capacity
Primary-reads:                        0 IOPS 0 Bytes per-second
Primary-writes:                       0 IOPS 0 Bytes per-second
Secondary-reads:                      0 IOPS 0 Bytes per-second
Secondary-writes:                     0 IOPS 0 Bytes per-second
Backward-rebuild-reads:               0 IOPS 0 Bytes per-second
Backward-rebuild-writes:              0 IOPS 0 Bytes per-second
Forward-rebuild-reads:                0 IOPS 0 Bytes per-second
Forward-rebuild-writes:               0 IOPS 0 Bytes per-second
Rebalance-reads:                      0 IOPS 0 Bytes per-second
Rebalance-writes:                     0 IOPS 0 Bytes per-second

Volume Summary:
1 volume. Total size: 8 GB (8192 MB)

I would also recommend pointing the ScaleIO dashboard, found on mdm1 and mdm2 at /opt/scaleio/ecs/mdm/bin/dashboard.jar, at your new cluster.
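One way to get the jar onto your desktop is to let Vagrant hand you the SSH details and use plain scp (this assumes you have Java installed locally to run the jar):

vagrant ssh-config mdm1 > ssh-config.tmp
scp -F ssh-config.tmp mdm1:/opt/scaleio/ecs/mdm/bin/dashboard.jar .
java -jar dashboard.jar

If you haven’t changed the IP addresses set in the Vagrantfile, point it at 192.168.50.10 and you’ll get the following dashboard view: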

[Screenshot: the ScaleIO dashboard showing the three-node lab cluster]

And there you go: you now have a complete three-node ScaleIO cluster up and running on your own computer, where you can write data, read data, fail nodes and so on. Play around with it, comment on any improvements you would like to see, and if you’re editing or adding functionality, please let me know. Enjoy!
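For example, you can simulate a node failure by halting one of the VMs and bringing it back a little later, keeping the dashboard open while the cluster rebuilds:

vagrant halt mdm2
vagrant up mdm2

Since ScaleIO keeps two copies of every piece of data, the cluster should stay online with one node down, and you’ll see the degraded and rebuild counters from the query output above change while it recovers.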


11 Responses to Automate a ScaleIO lab setup with Vagrant and VirtualBox

  1. ***Performance from a cluster living on a single disk on a single laptop should not be relied upon for mission-critical workloads.

    LOL. Cool stuff, Jonas.

  2. Having never used vagrant before, do you need to do “vagrant init saucy”?

  3. Pingback: Free ScaleIO licenses for EMC Elect! | pureVirtual

  4. dbastorage says:

    Hi Jonas, I finally got some time to look at this again today. One question: having mounted vol1 on mdm1, can you help me understand why the root volume (/) uses up space as I add files to the ScaleIO vol1 filesystem? Having reconfigured vol1 to be 32GB, ScaleIO stops working once the root volume fills, as it is only 8GB. Any ideas?

    • dbastorage says:

      [vagrant@tb ~]$ ls -l /home/vagrant
      total 6401992
      -rw-r--r-- 1 root root 100000000000 Jul 20 21:46 scaleio1

      It’s the “fake” device from the Vagrantfile:
      # fake device
      device = "/home/vagrant/scaleio1"

      HTH

      • Jonas Rosland says:

        First off, this setup is not meant for production or real storage for anything other than a demo.
        If you want to make the ScaleIO environment set up by the Vagrant script use another device, such as /vagrant/device(id), you can do that by changing the parameters in the Vagrantfile. That way the storage would be located outside the VirtualBox VM, and you wouldn’t have the problem you’re describing. Just make sure you have a large enough drive to store the files on; you don’t want to run out of physical space :)

  5. dbastorage says:

    Have you tried using /vagrant for the device files? It’s not working for me; the add_sds steps fail with “Error: MDM failed command. Status: Error opening SDS device”… I have tried ensuring the files and dir are permissioned as root:root, but no luck! It’s a shame; I want to encourage folks to work with this, but need an automated method. Any way to increase the box-disk1.vmdk to 40GB?

    • Jonas Rosland says:

      The easiest way of doing this would be to add a new virtual disk to each of the nodes and use those as the SDS devices. Expanding a VMDK is currently not supported in Vagrant, as far as I know.

      But again, this is only meant for demo purposes, nothing else. Feel free to add more functionality to it though, and create a pull request when you’re done :)
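      Something along these lines inside each node’s define block should do it. This is just a sketch: the storage controller name differs between base boxes, so check yours with VBoxManage showvminfo first:

      node.vm.provider "virtualbox" do |vb|
        disk = "#{name}_sds.vdi"
        # createhd fails if the file already exists, so guard it
        vb.customize ["createhd", "--filename", disk,
                      "--size", 40 * 1024] unless File.exist?(disk)
        vb.customize ["storageattach", :id,
                      "--storagectl", "SATA Controller", "--port", 1,
                      "--device", 0, "--type", "hdd", "--medium", disk]
      end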
