EMC ScaleIO – The new kid on the Elastic Converged Storage Block

ScaleIO delivers something that’s all the rage right now: a highly scalable storage solution for things like web-scale apps, databases and virtual desktops. ScaleIO ECS (Elastic Converged Storage) is software that uses application servers to create an elastic, scalable and resilient virtual SAN at a fraction of the cost and complexity of traditional storage solutions.

ScaleIO is really easy to implement, as it can be installed on both new and existing servers – not only dedicated storage servers, but regular servers that also run databases, hypervisors or other applications. These servers then aggregate their disk capacity and performance into a large pool of elastic storage. New storage capacity and compute resources can be added on the fly, without any downtime.

ScaleIO can scale out to thousands of servers, making it possible to have a wall-to-wall resilient storage solution. Storage capacity and compute power grow incrementally. When necessary, you can add a single device for increased storage or a multitude of servers for business growth.

ScaleIO also provides:

  • Writeable snapshots, where instant copies can be created with full performance
  • Multi-tenancy, with segregation of data using data at rest encryption
  • Multi-tiering, using PCIe Flash, SSD & HDD to create multiple tiers, and auto-rebalance the tiers to optimise performance and resource usage when new capacity is added
  • Quality of Service, with the possibility of limiting IOPS per application
  • Mesh mirroring, protecting the stored data against node failure

To handle a storage system at this scale, ScaleIO also has massive I/O parallelism: parts of an application’s data are spread out across all nodes in a ScaleIO cluster, giving a huge performance increase compared to a single storage array. All servers participate in the I/O workload, generating massively parallel performance with high bandwidth. When you add servers and storage to the system, performance increases, and ScaleIO rapidly rebuilds and rebalances the data.
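
To make the idea concrete, here’s a toy sketch in Python of how chunk-level striping with two-way mesh mirroring could look. This only illustrates the concept – it is not ScaleIO’s actual placement algorithm, and all names and sizes are made up:

import itertools
import random
from collections import Counter

# Toy model only (NOT ScaleIO's real placement logic): a volume is split
# into small chunks, each chunk is stored on two different nodes, and
# every node ends up serving a roughly equal share of the I/O.
NODES = ["node-%02d" % i for i in range(1, 21)]  # a 20-node cluster
CHUNK_MB = 1                                     # assumed chunk size

def place_volume(volume_gb):
    """Spread a volume's chunks across all nodes, two copies of each."""
    layout = {}
    for chunk in range(volume_gb * 1024 // CHUNK_MB):
        primary, mirror = random.sample(NODES, 2)  # two distinct nodes
        layout[chunk] = (primary, mirror)
    return layout

layout = place_volume(volume_gb=100)
load = Counter(itertools.chain.from_iterable(layout.values()))
print(load.most_common(3))  # the busiest nodes hold barely more than the rest

Lose a node and only the chunks it held need re-mirroring, which helps explain how rebuilds can run in parallel across the whole cluster.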

ScaleIO can be used in several different setups, one being a two-layer deployment with client software on the application servers and storage software on dedicated storage nodes.

What’s really cool though is when you blend them together, letting all your application servers also act as storage servers:

Just imagine the possibilities of scale in your already existing application server environment!

ScaleIO is hardware agnostic to the server and storage resources. It can be installed in physical or virtual environments and supports CentOS, Red Hat, ESX, SUSE, and Xen. Version 1.20 was recently released and can be used in your production environment today!

To download the ScaleIO code and documentation, please visit https://support.emc.com/products/33925_ScaleIO.

Please also join the ScaleIO Community, your central location for information and discussions on ScaleIO, on http://one.emc.com/clearspace/community/active/scaleio.


VMworld 2013 San Francisco and Barcelona – all in one post!


This is the first year I’ve attended both VMworld conferences, and my first in the US. There are a lot of similarities, and a lot of differences. I’ve gotten the question “Which one is better?” many times over the last couple of weeks, so I’d like to offer my view on them both, and perhaps you can decide which one you’ll attend next year.


Let’s start with VMworld San Francisco. It’s huge. And by huge, I mean really huge. The number of people inside the three buildings is just crazy: 22,500+ attended this year. That attendance of course draws in more partners and vendors to show off their goods and services, and I spent a good deal of the week talking to people around the Solutions Exchange. I even got to try out some very rare gear, the Google Glass, which was very cool 🙂


Lots of great partners, vendors and startups were in the Solutions Exchange, so make sure you walk around for a few hours to learn what everyone has to offer (and for lots of chances to win tablets, phones, laptops and other gear). It was great to see so many NSX partners and automation tool vendors out there, and their booths were filled pretty much all the time.


The keynotes were great. By now I think you all know what was said, but basically it was all around the SDDC and how to build it, vCloud Hybrid Service and automation. One thing I really liked was that they spent a maximum of two minutes on what’s new in vSphere 5.5. It really shows that the hypervisor is stable and secure, and that what’s been added mostly makes it stronger and better. Features? Sure. Let’s handle them in a breakout session, we have more interesting things to cover. Love it.

San Francisco is a great city to be in, with lots to see and do when out of the conference. Getting around after the conference though, around 6 PM, is a nightmare. Prepare to be stuck in traffic, so plan ahead and book your hotel early, preferably within walking distance of the conference.


The party in San Francisco was just nuts. As this was the 10th year of VMworld they went all out. An entire stadium filled to the brim with virtualization aficionados listening to two(!) bands, Imagine Dragons and Train. It was beautiful 🙂


I also took the time to visit the VMware facilities in Palo Alto on the last day of VMworld. It was very empty, but beautiful. Lots of recycled material, new buildings being built, and I got to hug the VMware sign 🙂

Let’s move over to Barcelona. This was the sixth year that VMworld has been held in Europe: the first two were in Cannes, the third and fourth in Copenhagen, and the last two in Barcelona. Where will we end up next year? Who knows?


VMworld Barcelona was larger than any earlier VMworld in Europe, with approximately 8,500 attendees! I think it’s amazing that so many people from so many countries across Europe, the Middle East and Africa take the time to get to one of the largest events of the year to network, contribute and learn. Great stuff 🙂


EMC of course had a large presence, with a presentation area, a Genius Bar, and a Hands-On Labs area (“Interactive Demos” is the politically correct name) with lots of demos that you could play around with. Since I’ve now moved to the Office of the CTO at EMC, I was responsible for showing some of the innovative solutions we’ve been involved in, such as automated NSX bandwidth scaling for VPLEX and the Hadoop Starter Kit with Hadoop, VMware Big Data Extensions (Serengeti) and Isilon.

In Barcelona there were not as many startups as in San Francisco, but I did find a very interesting VMware R&D booth (perhaps it was present in SF as well, but in that case I must’ve missed it). Four really cool solutions were showcased there, two of which really piqued my interest.


The first one handled automated application scaling (cool) using Hyperic to get application metrics (cooler) together with machine learning (OK, now we’re moving towards Skynet territory). The machine learning part measures the application metrics fed by Hyperic, decides whether the choices made earlier improved or worsened the situation, and learns from its mistakes and successes so it knows what to do the next time a similar situation arises. Just amazing!
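
As a rough illustration of the concept (this is in no way the actual VMware R&D implementation, just a minimal epsilon-greedy learner sketched in Python), the core loop could look something like this:

import random

ACTIONS = ["scale_out", "scale_in", "do_nothing"]
value = {a: 0.0 for a in ACTIONS}   # learned value of each action
counts = {a: 0 for a in ACTIONS}

def choose_action(epsilon=0.1):
    """Mostly pick the best-known action, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

def learn(action, latency_before, latency_after):
    """Reward actions that lowered the metric (e.g. latency from Hyperic)."""
    reward = latency_before - latency_after
    counts[action] += 1
    # incremental average: remember how well this action has worked so far
    value[action] += (reward - value[action]) / counts[action]

Each pass through the loop, the scaler acts, watches what the metrics do, and updates its beliefs, so over time it stops repeating the scaling decisions that made things worse.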


The second one was around using graph databases (my favourite type of database) to map virtual infrastructure relationships properly, instead of relying on massive amounts of CSV files and Excel hacking. With this you can easily show that an ESXi host currently has access to 80+ VLANs, get the CPU count of all VMs that have 20GB+ of storage, or list all the virtual networks configured with Jumbo Frames on VLAN 65. Very powerful stuff, and I was just thinking “Why hasn’t this been done before?!”. Can’t wait to see this incorporated into a product somewhere. Oh, and if you want to see some live examples of what a graph database is, have a look here.
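
To show what such queries could look like, here’s a minimal Python sketch using networkx as a stand-in for a real graph database. Everything in it (names, numbers, attributes) is made up for illustration:

import networkx as nx

g = nx.Graph()
g.add_node("esx-01", kind="host")
g.add_node("vm-web", kind="vm", cpus=4, storage_gb=40)
g.add_node("vm-db", kind="vm", cpus=8, storage_gb=10)
g.add_node("vlan-65", kind="vlan", jumbo_frames=True)
g.add_edges_from([("esx-01", "vm-web"), ("esx-01", "vm-db"),
                  ("esx-01", "vlan-65"), ("vm-web", "vlan-65")])

# Which VLANs does esx-01 have access to?
vlans = [n for n in g.neighbors("esx-01") if g.nodes[n]["kind"] == "vlan"]

# Total CPU count on all VMs with 20GB+ storage:
cpus = sum(d["cpus"] for n, d in g.nodes(data=True)
           if d["kind"] == "vm" and d.get("storage_gb", 0) >= 20)

# All networks configured with jumbo frames:
jumbo = [n for n, d in g.nodes(data=True)
         if d["kind"] == "vlan" and d.get("jumbo_frames")]

The point is that relationships (host to VM, VM to network) are first-class data, so questions that would take hours of Excel work become one-line traversals.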


During VMworld I was also able to get a great EUC and automation discussion going with the ByteLife team, who created the automated VMware View disaster recovery engine I’ve written about earlier, together with EUC experts @erikzandboer, @theSANzone and @Jon_2vcps.


Also, who can forget the labs? In Barcelona over 35,000 VMs were created over the course of four days (crazy!), with lots of people choosing the BYOD area for relaxed lab-taking, and I’ve heard one of the most popular labs was around VMware NSX, the network virtualization product launched in San Francisco. It really shows the interest the attendees have in this new part of the Software-Defined Data Center.


Last but not least, it was also the first year Magnus Nilsson (@swevm) and I got to present at VMworld. We did a session called “VCM5472 – vCO say hi to Razor and Software-Defined Storage”, where we showed how to use vCO’s workflow engine to tie together PuppetLabs Razor for provisioning compute resources and EMC’s ViPR for provisioning storage resources, all in one complete workflow started from the vSphere WebGUI. It was a lot of fun, and lots of great questions and discussions were had during and after the session.

So which one should you attend? I’d say both, as they are so much fun. But if you can only visit one, I’d say take the one that gets you least jetlagged, as you want to be up early to connect, learn and discuss as much as possible. Both are really good conferences, and you don’t really miss out if you’re not going to both.

Finally, thank you so much to everyone who attended our session, had a discussion with me or hung out with me at a bar. You made VMworld in both San Francisco and Barcelona incredible events, and I hope to meet you all again next year!


New role – New location

As most of you readers know, I’ve been part of the vSpecialist team at EMC for almost exactly three full years now. It’s been an amazing ride: lots of interesting challenges and changes in the industry, I’ve met extremely smart people (one actual rocket scientist!) and had more fun than I could ever imagine when I first reached out to Chad Sakac after his blog post about the team getting together. It was especially the last part of that post that got me hooked, and it still rings true today for the EMC presales teams:

If you are passionate about these technologies, good in front of people, like working hard when it’s something you believe in, and feel like we’re at the cusp of a wave of technological change – I want to know you.

So I sent him an email, and a few months later I joined the ranks as a vSpecialist covering EMEA North. Thanks Chad, Wade O’Harrow, and Bertrand LaLanne for building an awesome tribe!

And now it’s time for something new and exciting again 🙂

As a few of you already know, I will soon transition to a new role as part of the Open Innovations Lab team at the Office of the CTO. At OIL, I’ll be engaging customers, partners and EMC business units to create first-in-kind solutions that we see a need for in the market, solutions that will then end up in our customers’ environments to solve their challenging problems. In the team I’ll be joining former vSpecialists Ed Walsh and Jim Ruddy, among others. Ed did a really nice writeup on what the OIL team does and how you can utilize us if you have a problem but no solution: http://veddiew.typepad.com/blog/2013/08/oilintro.html

With this role transition I will also relocate, together with my wife, to Boston. Some of you will have me close by, and for some of you I will be further away. If you live in or happen to be near the Boston area, be sure to let me know all the places you’d recommend for eating/coffee/cinema/outings/astronomy/arcade/etc 🙂

I’m looking forward to letting everyone reading this know of all the cool solutions we’re building at the Open Innovations Lab, and be sure to reach out if you think we can help you.


Help! There’s a yellow elephant in my server room!

That could be what’s on an admin’s mind during their first attempt to deploy Hadoop. It’s not necessarily that hard to install, but to understand how to scale it and how to work with it, you need to put some proper time into it.

How about we try to make Hadoop easier for everyone to understand and use? That’s what the team in the Open Innovations Lab at EMC thought, and they’ve now released a full whitepaper called “EMC Hadoop Starter Kit – EMC Isilon and VMware Big Data Extensions for Hadoop”. Now you might wonder what Isilon and VMware have to do with Hadoop, and I’ll come to that in just a bit.

Hadoop + Serengeti + Isilon = AWESOME

First, let’s look at what type of Hadoop distribution we’re talking about deploying here. There are different distributions (or versions) of the lovely elephant Hadoop out in the wild. The most notable ones are Pivotal HD, Hortonworks, Cloudera and of course the original open source Apache one. For the purpose of this whitepaper, the Open Innovations Lab team has decided to start with the Apache Hadoop distribution.

Now what about VMware and Hadoop? We’re actually talking about virtualizing Hadoop here, something that’s usually a big “heck no” in Hadoop circles. But for most companies that have an existing VMware virtualization environment, you’re sure to find a lot of resources just sitting there idle and ready to use. Why not use them for Hadoop and help your organization get some good, real information out of all that data you’re already storing? Other benefits of virtualizing Hadoop are:

  • Rapid provisioning – quickly creating a new cluster or node when needed
  • High availability – Protecting the Single Points Of Failure like the NameNode with the help of VMware HA
  • Elasticity – Scale your Hadoop cluster to the size you want it to be with resources still shared with other applications in your virtualized environment
  • Multi-tenancy – Run multiple Hadoop clusters in the same environment, dividing up data but centralizing management
  • Portability – Use and mix any of the popular Hadoop distributions (Apache, Pivotal HD, Cloudera, Hortonworks) with no data migration

Some of you might now wonder how we can achieve zero data migration, as data is usually tied to a Hadoop cluster through HDFS. Well, that’s been taken care of as well, thanks to the inclusion of EMC Isilon in the whitepaper. Isilon is the only scale-out NAS platform with HDFS natively integrated, meaning we can create and mount HDFS filesystems to any new cluster or node that’s created.
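
To make that concrete: pointing a compute cluster at Isilon essentially comes down to the HDFS URI in core-site.xml. Here’s a sketch – the hostname is a placeholder for your own Isilon SmartConnect name, and 8020 is the customary NameNode RPC port:

<property>
  <name>fs.default.name</name>
  <value>hdfs://isilon-smartconnect.example.com:8020</value>
</property>

Any compute cluster configured with that same URI sees the same data, which is exactly why clusters can come and go without the data ever moving.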

By separating compute and data, we achieve elasticity in both. Want more compute? Scale up your VMs. Need more data? Scale up your storage. This gives you an unprecedented ability to start your Journey to Big Data in a more cost-effective and efficient manner. So, how do we piece it all together? With vSphere Big Data Extensions, powered by Project Serengeti (the Serengeti is a large area in Africa, home to large animals like elephants, get the reference? :)), which gives you as an administrator an easy-to-use interface to create, manage, scale and decommission Hadoop clusters in your environment.

For the full whitepaper including all the step-by-step instructions on how to get your own Hadoop Starter Kit going, have a look here:

http://community.emc.com/docs/DOC-26892


LeapMotion unboxing and quick review

Today I got a really nice surprise when I arrived at the office: our LeapMotion devices had been delivered! Yes, I ordered eight of them and no, not all of them are for me 🙂


LeapMotion is a small and incredibly smart device that creates a virtual airspace where it can detect your hand and finger movements and translate them into controlling apps on your PC or Mac. A really nice intro video is on their website here.

So I thought I’d share with you a few pictures of the unboxing of the LeapMotion devices and a quick review after using it for a few hours.


The LeapMotion comes in Apple-esque packaging, very neat and thought through. You get the device itself and two USB cables: one short, for what I would guess is mostly laptop use, and one long, to make it easy to use even if your computer is located far from your screen. The device itself is very small but feels very rigid, and it has a rubber bottom which makes sure it stays put on pretty much any surface.

To start using it, you install the software downloaded from their website and plug the device in. An introduction will start automatically and walk you through some of the features, such as where you can use your hands, what the device sees, and how you click and draw using gestures.

When my colleague Mattias Söderberg tried it out, the on-screen visualization made it easy to see the level of detail in the hands that the device detects.

After the introduction is done, you’ll create an account for the Airspace Store, where you can easily find LeapMotion-enabled apps and purchase them. Some are free, and the rest are pretty cheap. I would highly recommend the free app “Touchless” for both Mac and Windows, which lets you control your mouse pointer and scroll wheel using normal hand gestures, and the app “Exoplanet”, which guides you around the universe and gives you a great look at all the planets and exoplanets we have discovered so far. It’s beautiful!

[Screenshot: The Airspace Store]

The feel of the device is astounding. It’s very easy to use and you get used to it pretty quickly. Scrolling, zooming and clicking might feel a bit awkward when you first start using it, but apps designed with LeapMotion support are a charm to use.

Right now I’m hoping for more applications to come out with LeapMotion support, and if you’d like one yourself just head to their website and order one 🙂


Installing Rasplex on a Raspberry Pi – SCART and HDMI

This is something quite different from the normal content here at pureVirtual, but I thought I’d share my success story on how to get Rasplex on a Raspberry Pi working over both SCART and HDMI.

First you might ask, what’s Rasplex? It’s a port of the Plex Home Theater media player, made to work on the Raspberry Pi. Pretty cool!
Ok, so what’s a Raspberry Pi, you might ask. It’s an amazingly small and cheap computer that can be used for a multitude of things, such as learning programming, creating multi-node clustered supercomputers, MAME gaming and media playing.

I bought my Raspberry Pi a few months ago, played around with it for a week getting different distributions running, played some games on it and then forgot about it for a while. But last weekend I thought “Hey, why not install Plex on this thing for our old TV?”. So let’s get started with how I did it.


First, to install anything on a Pi you need an SD card. I bought an 8GB SDHC card (class 10); you don’t really need a card that big, but I thought I might use it for other purposes later, so why not. To get Rasplex onto the SD card, you can use the installers available at the Rasplex website, or just use “dd” or a similar command to write the contents of the downloaded image to the card. Here’s how I did it on my Mac using the very fast “raw disk” or “rdisk” method. Of course, use your own disk numbering here.

  1. Open Disk Utility, choose your SD card and erase it. You’ll see a new partition called “UNTITLED” created.
  2. List the partitions on your Mac to find the disk ID of the SD card, using the command-line tool diskutil: "diskutil list"
  3. Unmount the "UNTITLED" partition without unmounting the entire disk: "diskutil unmount /dev/disk2s1"
  4. Now that we have the raw disk with no partitions mounted, we can write the Rasplex image to the SD card very quickly using the following command: "sudo dd if=rasplex-0.2.1.img of=/dev/rdisk2 bs=1m"
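
A small tip: while dd is running you can press Ctrl+T to make it print its progress (macOS sends the process a SIGINFO signal), and once it finishes it’s a good habit to eject the card cleanly before pulling it out:

diskutil eject /dev/disk2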

Done! You now have Rasplex on your card and we’re getting ready to set up the hardware.

The TV I wanted to hook the Raspberry Pi up to was an old “fat” TV, with nothing but SCART inputs for devices like VHS and DVD players. The Pi has an RCA output for video and a 3.5mm jack for audio. So how do I connect these to my TV?


Ah, the joy of shopping at the local electronics shop. I bought an RCA-to-RCA cable for video, a 3.5mm-to-2xRCA cable for audio and an RCA-to-SCART converter. I connected them all together and plugged it into the Pi and to the TV. Let’s boot it up!


Unfortunately, I got no picture, just a blank screen saying there was no signal. Luckily, I found out that the Rasplex install forces HDMI output even when no HDMI cable is connected, and that this can be changed.

So plug your SD card into your computer again, and open the file “config.txt” on the card. You should find two lines like this:

hdmi_force_hotplug=1
hdmi_drive=2

Comment them out or remove them completely, and then add sdtv_mode and sdtv_aspect lines that match your TV. Here’s the reference (choose based on your own standard, PAL/NTSC/etc):

sdtv_mode defines the TV standard for composite output (default=0)
sdtv_mode=0    Normal NTSC
sdtv_mode=1    Japanese version of NTSC – no pedestal
sdtv_mode=2    Normal PAL
sdtv_mode=3    Brazilian version of PAL – 525/60 rather than 625/50, different subcarrier
sdtv_aspect defines the aspect ratio for composite output (default=1)
sdtv_aspect=1  4:3
sdtv_aspect=2  14:9
sdtv_aspect=3  16:9

As the TV I was installing Rasplex for was a PAL 4:3 TV, I put in these lines:

sdtv_mode=2
sdtv_aspect=1

Now when I booted it up again it worked! I got picture and sound, and streaming media worked like a charm. After a while we wanted to try it out on another TV that used HDMI, and to get it working there all I needed to do was revert my changes to config.txt, and it was up and running again.

All in all, the Raspberry Pi is definitely a worthy component in a modern home theater setup, it’s incredibly easy to use and very powerful even if it’s tiny. Try it out!

For those wondering about the other gear that was used in building this, of course there’s also a media center remote, a power supply and a wireless D-Link DWA-121 adapter that works great with the Raspberry Pi.


There are a few tweaks I’d like to add to this post as well, which have helped me in streaming media of different quality across multiple rooms with sometimes spotty wifi.

First, overclocking the Pi. When overclocked, the Rasplex GUI runs smoother and media streaming works better. Of course, overclocking should always be done in moderation, and you can find out more about it here. These are my overclocking settings, which have proven to be very stable:

arm_freq=900
core_freq=333
sdram_freq=450
over_voltage=2

Secondly, I’d recommend increasing the cache in Rasplex. It defaults to 5%, but I’ve increased it to 30% (one of the benefits of a larger and faster SD card). You can find the cache setting under Preferences->System->Cache. If you’re having issues (like I had) with your remote not working properly when using the up/down/left/right arrows, try enabling “Remote control sends keyboard presses” under Preferences->System->Input Devices.

Now go buy yourself a Raspberry Pi and get to it 🙂

Sources:

http://forums.plexapp.com/index.php/topic/62956-composite-output/
http://forums.plexapp.com/index.php/topic/72022-rasplex-overclock/

http://www.kjell.com/sortiment/ljud-bild/kablar-adaptrar/rca-video/rca-kablar/1x-rca-kabel-p39048
http://www.kjell.com/sortiment/ljud-bild/kablar-adaptrar/rca-audio/rca-till-3-5-mm/anslutningskabel-svart-1-5-m-p39209
http://www.kjell.com/sortiment/ljud-bild/kablar-adaptrar/scart/scart-till-rca/scart-adapter-komposit-p37004

 


VMware View Failover Automation – Solved!

Last week I spent some time talking to a partner, ByteLife, about a solution they’ve created for a customer. The customer needed an automated failover solution for their VMware View environment, and this is the story of how they solved it.


First, some background on the problem as the customer saw it. Customers using VMware View sometimes face issues handling fault tolerance in case of a site disaster. Even though VMware has a solution for failing over virtual machines to a secondary site, if you are lucky enough to have one, it does not support virtual desktop infrastructures (see here for the Compatibility Matrix: no SRM support for View).

As a result, VMware View is often installed either separately in both datacenters, guaranteeing that at least half of the desktops would survive a site failure, or even as a single environment that could break down entirely during a large outage.

Customers, whose businesses and workers usually don’t like losing access to their applications and/or desktops during a site failure, can choose a more complex setup and run specific manual failover tasks during the failure. The good thing is that this is possible using solutions such as this from VMware or this from EMC. On the other hand, during a site failover IT personnel are already under tremendous load and pressure to bring the site or the services back online, and any additional service to worry about just adds unnecessary complexity to the crisis. An automated failover that can be initiated with a few clicks in the remaining datacenter frees up the IT staff’s time when they need it the most.

So for this specific customer, ByteLife has developed a solution called “VMware View Failover Automation” with the following key functionalities:

  • Fail over desktop pools and virtual machines in case of a site crash
  • Migrate desktop pools and virtual machines during maintenance, tests and load rebalancing between sites, or as failback after a disaster
  • Restore storage synchronization between datacenters after the outage
  • Integration with vSphere WebClient

But wait, there’s more!

For this, all you need is vCenter Orchestrator, no SRM. Yes, you read that right: no SRM. What’s even cooler is that you can actually use this for several sites; you’re not limited to just two! Imagine that, being able to fail over any VMware View site, without SRM, within minutes.

Failover of the VMware View environment takes only minutes, depending on the number and nature of the desktops and components being failed over. It’s been proven that the first users can restart their work in the new site less than 5 minutes after the failover is initiated, which I find pretty amazing compared to the other solutions I’ve seen on this subject. So how does this work? What makes it so fast?

[Diagram: View Pools]

Looking at the diagram above, let’s assume your current linked-clone pool is called *-A-Pool and the pool that’ll be used in a failover scenario is called *-A-Pool-Recovery. The pools are exactly the same, use the same VM as their base image, and some VMs are already pre-provisioned. So when failing over, all that’s done is registering the users to the *-A-Pool-Recovery pool and removing them from *-A-Pool, and then they can reconnect. Same desktop Pool ID, same everything, so it’s fully transparent to the users. Some other settings are automated as well, like the maximum number of desktops per pool. All pools are enabled all the time, to make sure it’s possible to make changes and do things like recompose on all pools, keeping a consistent image version across the entire environment. All automated, and seeing it live is really impressive.
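
In heavily simplified pseudocode, the pool swap boils down to something like this. Note that this is my own sketch of the idea, not ByteLife’s actual workflow code, and the helper calls are hypothetical:

def failover_linked_clone_pool(view_site_a, view_site_b, pool):
    # who is entitled to the pool in the failed site (hypothetical helpers)
    users = view_site_a.get_entitlements(pool + "-A-Pool")
    # register them on the matching recovery pool in the surviving site
    view_site_b.add_entitlements(pool + "-A-Pool-Recovery", users)
    # remove them from the failed pool so reconnects land on the recovery pool
    view_site_a.remove_entitlements(pool + "-A-Pool", users)
    # same Pool ID and settings on both sides, so the swap is invisible to users

Because the recovery pool is pre-provisioned and identical, the expensive part (building desktops) has already happened before the disaster does.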

But what about the manual pools? Well, they’re handled a bit differently. In case of a failure, the vCO workflow shuts down all manual VMs (if they are still reachable and running), removes them from the vCenter inventory and dismounts their datastores; the replication flow of the datastores is then switched, the now-primary datastores are attached to the secondary site, the VMs are re-added to the inventory and powered on, and finally the manual pool is modified in the AD LDS database to move it from A to B. And of course, all user assignments are preserved. All automated, frickin’ awesome IMHO!

[Diagram: VMware View vCenter Orchestrator Failover Workflow]

As this is based on vCO workflows, there’s no hardcoded input for pools or available sites; everything is collected using the Status Report, Migration, Failover and Restore Synchronization workflows. The vCO workflows only list the pools and sites that actually have entitlements and are active. Everything else is hidden, meaning you can focus on getting your stuff up and running quickly instead of having to trawl through all the possible environments that *might* be used.

So, this can be used for failover, but also for planned migration of VMs from one site to another, for instance if you want to balance the workload between sites.

Another cool feature that came up during the discussion is that you could actually use this for recomposing large environments with very little downtime. Let’s say you’re currently using *-A-Pool as in our previous example: you could recompose the virtual desktops in *-A-Pool-Recovery and just migrate your users over there. Instead of recomposing all existing VMs, you’d move your users to already-recomposed images with fresh patches and everything installed. How cool is that?!

I found it very refreshing to see a totally new take on the failover methods for VMware View environments, and I’m certain it would benefit your environment.

And lastly, some technical info:
The solution is based on VMware vCenter Orchestrator workflows. The current version of VMware View Failover Automation is supported with VMware View 5.1 and up, and with EMC VNX with MirrorView. The network latency between the two sites must not exceed 5 ms.

Contact info for the solution:
Alar Kuuda (Project Manager) – alar.kuuda@bytelife.com
+372 5097873


EMC World 2013 – Best one yet?


I’ve spent the last week in Vegas, meeting incredible people and learning a lot at this year’s amazing EMC World. Amazing, just like every year, but always a bit different. Time for an EMC World 2013 wrap-up don’t you think?

Let’s start with some interesting numbers:

For the event, 93 trucks arrived holding 604,335 lbs of stuff. All that stuff filled 2.25 million sq ft of space, which also included 3.4 miles of network cabling. The 15,000 attendees booked 42,618 hotel nights, drank 174,480 cups of coffee, walked on average 5,523 steps (2.3 miles) a day, looked at 8,768 PowerPoint slides and listened to the keynote sessions through a 190,000-watt PA system. There were over 500 breakout sessions covering everything that EMC and its partners can deliver, and at the same time there were 31 Hands-On Labs where attendees could have a closer look at all the cool tech announced throughout the week.


And of course, some social media badge hacking was done as well 🙂


During the first keynote sessions, Joe Tucci welcomed and thanked us all (customers, partners and employees) and then laid out the strategy for EMC going forward.

This strategy is very focused on enabling customers to run their own Software-Defined DataCenter (SDDC) by leveraging the intelligence built into today’s and tomorrow’s smarter hardware, abstracting the management layer to more easily create services that consume virtual resources.


One of the biggest announcements at EMC World this year was definitely around Software-Defined Storage and a product called ViPR that makes it all possible. It abstracts the heterogeneous management of different storage functions like block, file and object, and ties it all together into a single API with integrations to VMware, Microsoft and OpenStack (and yes, a GUI is of course available too). We’ve already done two blog posts on ViPR, here and here, so please go there for more information.


Another interesting “announcement” was the view of EMC’s four brands. At the top level is EMC², with EMC II, VMware, RSA and Pivotal as separate brands that are free to execute their own missions but are still strategically aligned. I think it’s great to see that this strategy is working well for all four brands, and that it remains the strategy going forward.


As you might have noticed, EMC has a new member of the family: Pivotal. Announced in late April, Pivotal’s mission is to take customers on their journey to a new platform for their Big Data, Fast Data and New Apps with full cloud independence. The last part is crucial, as Pivotal wants to make sure customers can use any cloud available or already in use, be it a VMware private cloud, AWS, OpenStack or something else. Pivotal is owned by EMC, VMware and GE, where GE sees Pivotal as a crucial part of the telemetry data collection for all their future products like jet engines, trains, PACS and so on. Now THAT is really cool!


In parallel with EMC World there was also a conference for SEs, including partner SEs. All in all, we were approximately 3,000 people at the SE Conference, of which 500+ were from partners! During this part of the conference, SEs from all of EMC’s partners were invited to join us in separate keynotes, where they got the opportunity to participate in fully transparent discussions with our top-level management, and in technical breakouts (some of them NDA). More info on what you could see this year can be found in Chad Sakac’s blog post here. If you’re an EMC/VMware/RSA/Pivotal partner, you should definitely be there next year!

Outside the keynotes and breakout sessions there was also a large Solutions Pavilion, where the superhero theme was very prominent (I LOVE superheroes, so I thought it was really cool :)). Among the figures you could find there were Captain Scaleout (Isilon Hero!) and the X-Men, and there were a lot of games and contests where you could win everything from iPads to Syncplicity-branded sneakers. A great number of our partners were there to show off their solutions and how they integrate with the four brands of EMC. Very cool to see that it wasn’t all focused on the storage part of EMC, but rather on the SDDC.


Another area where a lot of interesting things happened was over at the (HUGE!!!) EMC Square social media space. EMC TV was livestreaming interviews with a ton of interesting people, there were performances by magicians and gymnasts, places where you could get your photo taken as a “DataCenter Hero”, and more.


One place where I spent a lot of time was the Blogger’s Space, where there was a dedicated area for all the EMC Elect (fancy!). The EMC Elect attending EMC World were also invited on a tour of the SuperNAP in Las Vegas, the world’s largest and most powerful datacenter, so if you needed another reason to get yourself listed as an EMC Elect, there you go 🙂 Want to enlist? Add yourself or nominate someone you think should be on the list here.
So, what else happens in the Blogger’s Space during events like this? Mostly extremely interesting discussions with focused people; to mention a few I’d like to point out @CommsNinja, @mjbrender, @colinmcnamara and @VirtualChappy. There are more, but I won’t make this into an #FF post 🙂

Other stuff that happened in the EMC Square social media space included several recordings of #EngineersUnplugged, and of course a ton of whiteboarding between customers, partners, bloggers and employees. It was pure awesome 🙂

To end this great EMC World 2013, we had Bruno Mars rockin’ a special private performance for all of us on Wednesday night, and MAN, that was a great show. The whole band really filled the room and everyone in it with a ton of energy, and everyone pitched in with singing, dancing and jumping around 🙂


Lastly, I’d like to extend a thank you to all the attending customers, partners and employees, you made it all an awesome experience for me and my colleagues and friends. So THANK YOU!



EMC ViPR gives your SDDC superpowers

It’s no news that IT operations are being transformed with the help of software. VMware talks about the Software-Defined Datacenter and how everything is becoming software-defined. But what does this mean in a storage context, and how does EMC ViPR integrate with the SDDC to extend its capabilities further? One of the questions being discussed a lot today is whether storage is being turned into software-based appliances. With EMC ViPR we can deliver the best of both worlds and adapt as the market transforms.

When adding physical storage to your SDDC today, a number of questions need to be answered before the storage resource ends up being consumed by your SDDC. Many of those questions require interaction with several different teams, which in the end means you lose valuable time before the ordered resources can be utilized.

[Diagram: the manual storage provisioning process]

If we break down the process into storage-related tasks, one quickly understands that provisioning is not as easy as it may sound. The administrator of the SDDC orders storage resources and then has to answer a few questions sent back from the storage team and architects. When that’s done, several different configuration tasks need to happen across different physical hardware: switches, arrays, etc.

[Diagram: what makes storage provisioning so complex]

A substantial part of this interaction between different teams and personnel can be automated with EMC ViPR. Operations staff are relieved of repetitive manual tasks, which are instead automated by software, pushing both speed and quality of delivery up substantially while standardizing storage operations, reporting and more.

With the ViPR vCenter Orchestrator plugin, we can deliver full storage lifecycle management into your own SDDC. Instead of opening a storage provisioning request, waiting, answering questions, waiting, likely answering more questions and waiting again, we can integrate the storage provisioning process directly into the vCenter WebUI, with approval processes and automatic provisioning.
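
Under the covers, this kind of integration boils down to the workflow engine calling ViPR’s REST API instead of a human filing a ticket. Here’s a hedged sketch in Python – the endpoint path, port and payload fields are illustrative assumptions, not copied from the ViPR REST reference:

import requests

VIPR = "https://vipr.example.com:4443"       # assumed management endpoint
session = requests.Session()
session.verify = False                       # lab with self-signed certs
session.headers["X-SDS-AUTH-TOKEN"] = "<token from the login endpoint>"

payload = {
    "name": "homecluster-datastore-01",
    "size": "500GB",
    "varray": "virtual-array-1",   # virtual storage array (see step 3 below)
    "vpool": "gold-block",         # virtual pool / class of service
}
resp = session.post(VIPR + "/block/volumes", json=payload)  # assumed path
resp.raise_for_status()

Every step the storage team used to perform by hand becomes one authenticated API call that a vCO workflow can make, wait on, and report back into the vCenter WebUI.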

Below is an example of how this could look from a VM admin’s perspective when an additional datastore needs to be added to the cluster named “homecluster”.

Step 1.

[Screenshot: Add physical datastore, step 1]

We click on “Add physical datastore” and follow the vCO workflow wizard.

Step 2.

[Screenshot: Add physical datastore, step 2]

Here we choose the storage type (FC or NFS) and decide on the name and size of the datastore.

Step 3.

[Screenshot: Add physical datastore, step 3]

Next we choose which virtual storage array and pool to consume resources from.

Step 4.

Click finish and wait for your ordered storage resource to appear in your vSphere cluster. Depending on whether an approval process is attached to this resource in ViPR, it may take more or less time before you can start consuming the new resource.

Below is what the workflow looks like. I will add another post later covering the details behind the workflow, as well as how additional functionality can be added to extend your SDDC even further.

[Screenshot: the ViPR vCO workflow]


Today, ViPR is Bo(u)rn(e) – Virtualize Everything, Compromise Nothing

Today is the first day of EMC World here in Las Vegas, and just a few moments ago, during the keynote, a new product called EMC ViPR was announced. Some of you might have heard about a project called Bourne (one of our worst-kept secrets, apparently), and this project now has not only a proper name but also a really cool logo 🙂



So just what is this ViPR thing? I’ll explain in just a bit, but let’s start by looking at the current state of Software-Defined Data Center (SDDC) functionality across the board. The past decade and a half has seen virtualization technology transform applications, servers and networks into software abstractions that let data center and IT managers build adaptive and agile data centers. The rise of the SDDC promises to build on that progress by completely abstracting every component of the data center from its underlying hardware, so that IT can truly deliver IT resources as customizable, on-demand services. That is the transformative potential. The reality, however, is that storage hasn’t really transformed into an easily managed entity for a truly virtual data center. Unlike applications, servers and networking, storage and its valuable data are still too often tied to proprietary hardware. And that sucks.

Storage hardware and operating systems still vary much more than current server, client or network platforms. Storage platforms are incredibly diverse – even different arrays from the same vendor can feature different operating systems, proprietary APIs and unique feature sets. Every new IT endeavor might require a new storage array – be it block, file or object-based – optimized for that purpose. Out of necessity, storage administrators have become storage managers who spend most of their time managing arrays rather than optimizing information storage for the business. If enterprises and service providers are going to break from this pattern and be part of the evolution to the SDDC, they need to fundamentally rethink storage.

And this is where ViPR comes in, to disrupt the status quo. EMC ViPR brings the same virtualization benefits enjoyed by the compute and network elements of SDDC to storage. EMC ViPR is a revolutionary approach to storage automation and management that transforms existing and new heterogeneous physical storage environments into a simple, extensible and open virtual storage platform. The value proposition of the SDDC and cloud computing – easily consumed IT services, simple API access, and single management view – is now finally available for storage. This also means that your management of storage, no matter if you’re using VMware, Hyper-V or OpenStack will look the same, behave the same and provide the same functionality.

[Diagram: Software-Defined Storage integrates with VMware, Hyper-V and OpenStack]

At its core, EMC ViPR is a storage virtualization software platform that abstracts storage from physical arrays – whether file, block or object-based – into a pool of virtual shared storage resources, enabling an easy storage consumption model across physical arrays and the delivery of applications and innovative data services. ViPR abstracts the storage control path from the underlying hardware arrays, so that access and management of multi-vendor storage infrastructures can be centrally executed in software.


Of course, this is not just for EMC storage. I really hope you weren’t thinking that, as one of the coolest parts about ViPR is that it makes a multi-vendor storage environment look like one big virtual array. ViPR uses software adapters that connect to the underlying arrays, and it also exposes its APIs so any vendor, partner or customer can build new adapters to add new arrays. This creates an extensible “plug and play” storage environment that can automatically connect to, discover and map arrays, hosts and SAN fabrics.
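
Conceptually, an adapter is just a driver that fulfils a small contract. Here’s a hypothetical sketch in Python of what such a contract could look like (ViPR’s actual southbound interfaces differ; this is only to illustrate the plug-in idea):

from abc import ABC, abstractmethod

class ArrayAdapter(ABC):
    """Anything that can answer these calls can join the virtual array."""

    @abstractmethod
    def discover(self) -> dict:
        """Return the array's pools, ports and capabilities."""

    @abstractmethod
    def create_volume(self, pool: str, size_gb: int) -> str:
        """Carve a volume out of a pool and return its identifier."""

    @abstractmethod
    def export_volume(self, volume_id: str, host: str) -> None:
        """Present the volume to a host (masking/zoning)."""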


And, because ViPR is software-defined, it can easily extend to support non-EMC arrays and integrate with cloud stacks. ViPR is the first truly open software-defined storage platform. Through de-facto industry standard APIs including Amazon S3, OpenStack Swift and EMC Atmos, ViPR frees data and applications from storage dependencies and enables IT to meet new workloads and use cases with existing infrastructure.
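
For example, since ViPR speaks the S3 API, a stock S3 client such as boto can talk to it simply by being pointed at the ViPR data endpoint. The hostname and credentials below are placeholders for your own environment:

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id="<vipr object user>",
    aws_secret_access_key="<vipr secret key>",
    host="vipr-data.example.com",            # placeholder data endpoint
    calling_format=OrdinaryCallingFormat(),  # path-style URLs, no DNS buckets
)
bucket = conn.create_bucket("backups")
key = bucket.new_key("hello.txt")
key.set_contents_from_string("stored on whatever array ViPR chose")

The application neither knows nor cares which physical array ends up holding the object.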


So to sum it up, ViPR is:

  • Simple – providing easy storage provisioning, delivery and management across all arrays
  • Extensible – maintaining the unique capabilities of underlying storage arrays (EMC and third-party) making it possible to seamlessly migrate application data across private, public and hybrid cloud environments
  • Open – Everyone’s free to participate in the ViPR community, to help us and you to deliver the best services possible to your business

Stay tuned for more info on this awesome solution in the near future 🙂
