Wednesday, October 10, 2012

vSphere 5.1 review




I recently took some time out to review the new features of vSphere 5.1, and this is what I found:


  • So we can have larger virtual machines

Virtual machines can grow two times larger than in any previous release to support even the most advanced applications. Virtual machines can now have
up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM).

Why would we want VMs of that size? I am not sure, and I have never seen a virtual machine that will utilise 64 vCPUs, but large applications that run in Oracle farms may just have this requirement, and it is another reason why you would choose VMware over the other competitors - XenServer or Hyper-V, for example - to run your mission-critical, resource-intensive applications on the VMware hypervisor.

Usefulness: 5/10


  • A new virtual machine format

New features in the virtual machine format (version 9) in vSphere 5.1 include support for larger virtual machines, CPU performance counters and virtual shared graphics acceleration designed for enhanced performance.

Well, surely this goes hand in hand with the first feature, but notice we get a nice shared graphics acceleration feature. vSphere 5.1 adds vSGA (Virtual Shared Graphics Acceleration), which allows a physical graphics processing unit (GPU) in the underlying host to be presented to virtual desktop guests. By virtualizing the physical GPU, its resources can be allocated and shared across several virtual desktop instances.
This provides several benefits. Using the physical GPU and its video RAM frees the underlying host's CPU and memory for other tasks. Using a GPU for hardware-accelerated graphics also allows customers to provide a richer and more interactive graphical experience across an even broader set of use cases, especially implementations of VMware View.

 
Usefulness: 6/10
 
  • Storage enhancements

Flexible, space-efficient storage for virtual desktop infrastructure (VDI). A new disk format enables the correct balance between space efficiency and I/O throughput for the virtual desktop.


  • vSphere Distributed Switch enhancements

Enhancements such as Network Health Check, Configuration Backup and Restore, Rollback and Recovery, and Link Aggregation Control Protocol (LACP) support deliver more enterprise-class networking functionality and a more robust foundation for cloud computing.

Anything distributed switch related is good and useful - it is a great part of the product and really helps you define network policies more efficiently, especially when you have large numbers of hosts.

Usefulness: 7/10

  • Single-root I/O virtualization (SR-IOV) support

Support for SR-IOV optimizes performance for sophisticated applications. SR-IOV is a specification that allows a single PCIe device to appear to be multiple separate physical PCIe devices, with resources assigned to each particular function. This all helps with overall performance and, together with point 1 and larger machines, feeds back into the fact that VMware ESXi can handle bigger workloads.

Usefulness: 7/10


  • Availability: vSphere vMotion enhancements

Leverage the advantages of vMotion (zero-downtime migration) without the need for shared
storage configurations. This new vMotion capability applies to the entire network.

This is great - it means you can migrate virtual machines live without needing shared storage. In other words, you can vMotion virtual machines between ESXi hosts that have only local storage.

Usefulness: 10/10

  • vSphere Data Protection changes

Simple and cost effective backup and recovery for virtual machines. vSphere Data Protection is a newly architected solution based on EMC Avamar technology
that allows admins to back up virtual machine data to disk without the need for agents and with built-in deduplication.

This feature replaces the vSphere Data Recovery product available with previous releases of vSphere.

A great white paper from VMware regarding this: http://www.vmware.com/files/pdf/techpaper/Introduction-to-Data-Protection.pdf

Usefulness: 9/10


  • vSphere Replication

vSphere Replication enables efficient, array-agnostic replication of virtual machine data over the LAN or WAN. vSphere Replication simplifies management by enabling replication at the virtual machine level, and enables RPOs as low as 15 minutes.

I like this one. Again, another feature-rich offering from VMware.

Usefulness: 9/10


  • Reduced downtime upgrade for VMware Tools

After you upgrade to the VMware Tools available with version 5.1, reboots
have been reduced or eliminated for subsequent VMware Tools upgrades on Windows.

This has been a while coming, due to the challenges of upgrading locked files within an operating system that is probably not the best or easiest to work within when you are VMware. We hear, and expect, that more enhancements are coming in later versions.

Usefulness: 8/10

  • Additional security enhancements

VMware vShield Endpoint delivers a proven endpoint security solution to any workload with an approach that is simplified, efficient, and cloud-aware. vShield Endpoint enables third-party endpoint security solutions to eliminate the agent footprint from the virtual machines, offload intelligence to a security virtual appliance, and run scans with minimal impact.

This was once Blue Lane technology, and it is now bundled into the product. It makes a lot of sense when you are running multiple instances of security software, as you can now limit the overhead.

Usefulness: 8/10

  • vSphere Storage DRS and Profile-Driven Storage

New integration with VMware vCloud® Director™ enables further storage efficiencies and automation in a private cloud environment.

This is cool - a feature that allows us to apply DRS to storage I/O. I have not tested it in anger, but it is a great performance tool in an already feature-rich product.

Usefulness: 9/10

  • vSphere Auto Deploy

Two new methods for deploying new vSphere hosts to an environment make the Auto Deploy process more highly available than ever before.

This is my favourite. Stateless and stateful ESXi deployments. It has got to be worth a bucket of comfort knowing that, whatever happens, the ESXi host will always boot up.

Usefulness: 10/10

  • VMware vCenter™ Operations Manager Foundation

This enables you to leverage comprehensive views into the health, risk and efficiency scores of your vSphere environment infrastructure. Quickly drill down to see what's causing current workload conditions, pinpoint potential problems in the future and identify areas with inefficient use of resources.

  • vCenter Orchestrator

Orchestrator simplifies installation and configuration of the powerful workflow engine in vCenter Server. Newly designed workflows enhance ease of use, and can also be launched directly from the new vSphere Web Client.

Always a good thing

Usefulness: 7/10
 
  • Management using vSphere Web Client

The vSphere Web Client is now the core administrative interface for vSphere. This new flexible, robust interface simplifies vSphere control through shortcut navigation, custom tagging, enhanced scalability, and the ability to manage from anywhere with Internet Explorer or Firefox-enabled devices.

This is a big winner for me. We still need a Windows installer and an operating system to run it on, but it is a big move in the right direction.

Usefulness: 10/10


  • vCenter Single Sign-On

Dramatically simplify vSphere administration by allowing users to log in once to access all instances or layers of vCenter without the need for further authentication. This is key for the new vSphere Web Client and other components such as the Inventory Service. I need to understand how permissions work across resources and how granular you can be with SSO, or whether it is just used as a single password across the VMware layers.

Usefulness: 5/10
 

VMware are so far ahead of the game. XenServer is catching up and has a release later in the year, called the Augusta release, which has features like dom0 disaggregation.


Overall

Well, it's a 9/10 for me. Keep up the good work, VMware - you really are miles ahead of the rest.

Wednesday, September 19, 2012

HP Exchange technology notes

I popped along to the HP Exchange technology conference in London and went to a talk on the future of networks. Here are some notes from the talk.

Networking Future with Mike Witkowski - ISS tech exchange


Discussion and presentation - where will networking be going over the next 4-10 years? What is the future?

What will the next gen DC require?

> technology used
> speeds and feeds
> overlay networks
> SDN - software defined networking

Pressures and demands on networking are:

    memcache

Applications will start to use more memcache-like technologies. How can the network deal with this?

    Hadoop and Gluster storage devices

Parallel storage instead of serial (SAS) technologies. Groups of nodes serving up data, which will include data mirroring functionality.

    Web 2.0 apps

More and more mobile devices. How will applications be able to handle the demand for more intensive streaming?

    RDMA

How will RDMA technology affect the stress on networks? Windows Server 2008 already ships with RDMA support.

RDMA - direct memory-based access for storage blocks; addressable storage in memory. In today's environments, 10GbE will still bottleneck on current storage devices, be they SAS or SATA. SSD and memcache will improve storage speed and access. There will be more persistent memory technologies coming out: fewer disks and more memory.

There are also pressures on current L2 technologies. More and more people are extending layer 2 networks globally, and more and more people are talking about VLAN limitations. vMotion is currently only available across layer 2.

So what do we currently use that can be considered a capable technology to deal with these challenges?

FABRIC could deal with

    Converged networks
    having a pool of storage
    pool of fabric
    pool of servers

These all typically run on converged networks. Fabric is a network component with high connection density and cross-sectional bandwidth - perfect for iSCSI and layer 2 networks, and also used with InfiniBand and RDMA-based networks.

Another issue and admin task of current concern is the OSR - the oversubscription ratio.

The aim is for Virtual Connect to be treated as a single management domain and to scale linearly across a set of racks. But for this to happen, the price of optical fibre needs to come down; the companies who make optical fibre need to make the solution more efficient.

40GbE is dreaming for now, but it is required to reduce the demands on infrastructure, so do not be short-sighted. In fact, faster pipes will mean less compute, which means fewer servers. It all comes back to disks being the bottleneck for most networks.

There was a big discussion around the comparison of a hierarchical topology versus a CLOS topology.

Here is some light reading

hierarchical - http://en.wikipedia.org/wiki/Hierarchical_internetworking_model

CLOS - http://en.wikipedia.org/wiki/Clos_network

Pros and cons of each were discussed.

Good technologies out there include QFabric and Quantum, which both use SDN.

Overlay networks


More information was requested, but overlay networks deal with 4k VLANs and above, typically using Q-in-Q trunking. Service providers will start to use this more, and companies like VMware are looking at ways to bring this into their management layer, i.e. vCenter.

VXLAN is a good example, as is STT, which is a Nicira protocol; both are now owned by VMware. Where will these technologies go in the future?

Sunday, July 22, 2012

Git examples





I have been working on GitLab recently and needed to write down some git examples, so here is what I came up with:


 
 
 
To roll back files from your last commit

If the commit ID is ce4c1645cad94cc69221607201bd86c8f33b6cc0, run the following command


git reset --hard ce4c1645cad94cc69221607201bd86c8f33b6cc0

git reset without the --hard option resets the commit history but not the files; with the --hard option, the files in the working tree are reset as well.
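
For reference, here is a minimal sketch of the three common reset modes (the commit ID is just an example - use git log to find yours):

git log --oneline                                            # find the commit ID you want to reset to
git reset --soft ce4c1645cad94cc69221607201bd86c8f33b6cc0    # move HEAD only; keep index and working tree
git reset ce4c1645cad94cc69221607201bd86c8f33b6cc0           # default (--mixed): reset index, keep working tree
git reset --hard ce4c1645cad94cc69221607201bd86c8f33b6cc0    # reset index and working tree files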

To then push these files back to the master branch
 


git commit --all                  # only needed if you still have local changes to commit
git push origin master --force    # --force is required because the reset rewrote history

To recover one file from a previous commit
To find out what files have been deleted in previous commits, run


git log --diff-filter=D --summary

This should give you information about which commit the file was deleted in.
Once the commit ID is found, run the following command:


git checkout ce4c1645cad94cc69221607201bd86c8f33b6cc0 -- version_one_file_2

where ce4c1645cad94cc69221607201bd86c8f33b6cc0 is the commit ID and version_one_file_2 is the file (the -- separates the commit from the file path)

then run the following commands:


git commit
git push


Adding a FILE to a git repo

This is a very simple example but an important concept. The commands can vary depending on what you are trying to do.

create a new file


touch this_is_a_new_file

add the file to the git repo


git add this_is_a_new_file

this adds the file to the git index (the staging area) in the current working directory


git commit this_is_a_new_file

This commits the file to a git snapshot


git push origin master

this pushes the files to the master branch
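
At any point during this workflow, a couple of read-only commands are handy for checking where you are:

git status            # show staged and unstaged files
git log --oneline -3  # show the last three commits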


I deleted a file 3 commits ago. How do I recover that file?

How to reproduce the example

1 - clone git repo


see above :)

2 - create a new file A


touch A
git add A

3 - commit and push


git commit A 
git push origin master

4 - create a new file B and delete A


touch B 
git add B 
git rm A

5 - commit and push


git commit
git push origin master

6 - create a new file C


touch C
git add C

7 - commit and push


git commit
git push origin master

8 - try to find file A

run this command


git log --diff-filter=D --summary

it will show you what files have been deleted.

You need to check out the file from the commit before the deletion
You then need to make sure the file you are recovering is the one you want
You then need to check you are still on the master branch
You then need to add the file back to the master branch
You then need to commit the recovered file
You then need to push everything to the repo


so..


git log --diff-filter=D --summary

git checkout cdeb3d757f3adcc346da2ab171a825c113bdb50b~1 A

# note the ~1 rolls back to the previous commit. ~2 would go back 2 commits etc.. 

this just grabs that file in that commit and does not change branches 

git branch 

# check what branch you are on 

git add A

# to add A back to the master branch 

git commit A 
git push origin master


Create a branch, adding file that does not conflict and adding the files to the master branch

Run git branch to show which branch you are using


git branch
* master

This shows you are using the master branch

To add a branch, run the following command


git branch new_branch
git branch 

* master
  new_branch

the new_branch branch has been created.
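
As an aside, you can create a branch and switch to it in one step; in this walkthrough we stick with the two-step approach:

git checkout -b new_branch   # create new_branch and switch to it in one command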

You have not checked out the new branch yet. First, the following command shows the files in the master branch.


git ls-files

INSTALL
README
config.xml


To switch to the new branch and to add a new file, run the following commands


git checkout new_branch 

Switched to branch 'new_branch'

git branch

master
* new_branch

Notice the * is now next to new_branch, showing which branch you are working on.

You can check the files are the same as the master branch by running


git ls-files 

INSTALL
README
config.xml

Add a new file to the new_branch


touch file_added_to_branch
git add file_added_to_branch
git commit file_added_to_branch
git push origin new_branch

run the following to list the files in the new_branch


git ls-files 

INSTALL
README 
config.xml 
file_added_to_branch

switch to the master branch and list the git repo


git checkout master
git ls-files
INSTALL 
README 
config.xml

Notice that the file file_added_to_branch is not in the master branch. To add this file, you can merge new_branch into master by running the following command


git merge new_branch 

Updating 76a5cab..30c41cc
Fast-forward
 0 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 file_added_to_branch

Note: you have to be on the master branch to merge into it; we changed to the master branch before running the above git merge command.

push the files to the master branch on the git server

git push origin master
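
If you want to confirm the merge landed on master, a quick check:

git log --oneline --graph -5   # the commits fast-forwarded from new_branch should now appear on master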

The following shows the branch files and how to delete any branches


git ls-files 

INSTALL
README
config.xml 
file_added_to_branch 

git branch 

* master 
new_branch

git branch -d new_branch 

Deleted branch new_branch (was 30c41cc).

git branch

* master

Delete the branch on the gitlab server


git push origin :new_branch

To git@puppet-gitlab:oliver.leach/puppet-cressex-lab.git 
 [deleted] new_branch
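
As a side note, newer versions of git also offer a more readable equivalent of the colon syntax for deleting a remote branch (I believe this arrived around git 1.7.0 - treat the exact version as an assumption):

git push origin --delete new_branch   # same effect as git push origin :new_branch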

Wednesday, July 4, 2012

NetApp SnapMirror




NetApp SnapMirror is a pretty easy utility to use, but it offers great power. SnapMirror replicates a volume (here, one containing a LUN) to a given destination, either on demand or on a schedule. You can set this up via the UI or you can run commands via SSH. In the following example, we will be setting up SnapMirror on a volume containing a LUN.

However, there is one key concept here: you can dedicate different interfaces to different roles. So if your production data is accessible on, say, e0a, then consider putting your snapshot traffic on e0b. This way you do not flood the interface or the upstream network components, and you spread the load.

Here's a quick overview of some key concepts you need to consider:



We have a volume
ontap> vol status cloud01

Volume  State
cloud01 online


We have a lun

ontap> lun show

/vol/cloud01/cloud01-na-lun01    100g (107374182400)  (r/w,online,mapped)

And this is the status of SnapMirror

ontap> snapmirror status
Snapmirror is on.
ontap> snapmirror destinations
snapmirror destination: no known destinations


We need to edit the /etc/snapmirror.conf file and add some information to configure a SnapMirror job. Here is a good article on what options you can use: http://www.wafl.co.uk/tag/snapmirrorconf/

The following snapmirror.conf entry indicates that filer ontap1's volume cloud01_mirror will mirror volume cloud01 via the ontap0-gig interface. The ontap0-gig interface is whatever IP address ontap1 can resolve that name to; in this case, it might be a gigabit ethernet link on filer ontap0. The mirror is updated at 9:30 a.m., 1:30 p.m., and 7:30 p.m., Monday through Friday. The asterisk means that the data replication schedule is not affected by the day of month; it is the same as entering numbers 1 through 31 (comma-separated) in that space. It is actually similar to a cron format.

ontap0-gig:cloud01 ontap1:cloud01_mirror - 30 9,13,19 * 1,2,3,4,5

The important part here is that you can tell snapmirror.conf which interface to send the SnapMirror replication traffic down.

You may also need to set up /etc/snapmirror.allow to permit SnapMirror connections, otherwise you may see a connection denied error.

Here are the commands you need to look at

ontap0> wrfile /etc/snapmirror.allow
 10.10.10.2
 ontap1

 


This configures ontap0 to allow SnapMirror connections from ontap1. As long as name resolution is set up correctly, the traffic will traverse the intended interface, as generic IP routing will take care of the traffic flow.
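
One thing the conf file does not do for you is the initial baseline transfer. A minimal sketch, assuming the volume names above and a destination volume that has been restricted first, run from the destination filer:

ontap1> vol restrict cloud01_mirror
ontap1> snapmirror initialize -S ontap0-gig:cloud01 ontap1:cloud01_mirror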

The other important thing you can consider is bandwidth shaping. The dash (-) in the entry above, which sits at the arguments field location, indicates that both the kbs and restart arguments are set to their defaults.

So what is the kbs?

Taken from the link above (I said it was good) is this extract:

The value for this argument specifies the maximum speed (in kilobytes per second) at which SnapMirror data is transferred over the network. The kbs setting is used to throttle network bandwidth consumed, disk I/O, and CPU usage. By default, the filer transfers the data as fast as it can. The throttle value is not used while synchronously mirroring.  

You can set it something like this:

kbs=2000

This means the transfer speed is capped at a maximum rate of 2,000 kilobytes per second.
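
Putting that together with the earlier entry, a sketch of a throttled scheduled mirror (same volumes as above) would look like this:

ontap0-gig:cloud01 ontap1:cloud01_mirror kbs=2000 30 9,13,19 * 1,2,3,4,5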

Async, sync or semi-sync

In async mode, snapshot copies of the volume are created periodically on the source. Only blocks that have changed or have been newly created since the last replication cycle are transferred to the target, making this method very efficient in terms of storage system overhead and network bandwidth.

Sync mode sends updates from the source to the destination as they occur, rather than according to a predetermined schedule. This helps data written on the source system to be protected on the destination even if the entire source system fails. NVLOG forwarding and consistency point (CP) forwarding are used to keep the target completely up to date. NVLOG forwarding enables data from the write log that is normally cached in NVRAM on NetApp storage to be synchronized with the target. Consistency point forwarding keeps the on-disk file system images synchronized.

Semi-sync mode differs from sync mode in two ways. Writes to the source aren't required to wait for acknowledgement from the target before they are committed and acknowledged, and NVLOG forwarding is not used. These two changes speed up application response with only a very small hit in terms of achievable recovery point objective (RPO).
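
In snapmirror.conf, the mode is chosen by replacing the cron-style schedule with a keyword. A sketch, again assuming the volumes above (you would pick one of these for a given destination):

ontap0-gig:cloud01 ontap1:cloud01_mirror - sync
ontap0-gig:cloud01 ontap1:cloud01_mirror - semi-sync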


SnapMirror status?

We can see the status through both the CLI and the UI. Here is the CLI:


ontap0> snapmirror status
Snapmirror is on.
Source         Destination            State          Lag       Status
ontap0:cloud01 ontap1:cloud01_mirror  Snapmirrored   00:00:29  Idle


Here you can see the command snapmirror status shows the output of the status of the SnapMirror replication. Pretty simple right?
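
You can also kick off a transfer on demand from the destination filer; a minimal sketch:

ontap1> snapmirror update ontap1:cloud01_mirror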

The UI shows a little more information than the CLI. Nice images :( but you get the drift.

There is lots more to this, and maybe I will add to this blog at a later date, but you can see how SnapMirror can provide real-time data protection by using synchronous mirroring for your volumes.

Also, using traffic shaping and IP routing, you can send your SnapMirror volumes over a specific interface. Lots of good options. Enjoy!

Tuesday, July 3, 2012

Installing python and django

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#   Depending on whether you run as root, you may need to use sudo.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


sudo yum install gcc tcl tk sqlite-devel readline-devel gdbm-devel -y
sudo yum install tkinter ncurses-devel libdbi-devel tk-devel zlib-devel -y
sudo yum install openssl-devel bzip2-devel -y
sudo yum install httpd httpd-devel -y
sudo yum install mysql mysql-devel -y
wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz
tar zxvf Python-2.7.3.tgz
cd Python-2.7.3
sudo ./configure --prefix=/opt/python2.7 --with-threads --enable-shared
make
sudo make install
ln -s /opt/python2.7/bin/python /usr/bin/python2.7
echo '/opt/python2.7/lib'>> /etc/ld.so.conf.d/opt-python2.7.conf
ldconfig

echo "alias python='/opt/python2.7/bin/python'" >> /etc/bashrc
echo "alias python2.7='/opt/python2.7/bin/python'" >> /etc/bashrc

***Log out and log back in at this point. This is to ensure your bash profile is updated with the new python location.***
wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
sh setuptools-0.6c11-py2.7.egg
wget http://modwsgi.googlecode.com/files/mod_wsgi-3.3.tar.gz
tar zxvf mod_wsgi-3.3.tar.gz
cd mod_wsgi-3.3
./configure --with-python=/opt/python2.7/bin/python
make
make install
curl http://python-distribute.org/distribute_setup.py | python
curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python
pip install Django
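
To sanity-check the result, something like the following should confirm the versions (a sketch - output will depend on what actually got installed):

python --version                                         # should report Python 2.7.3
python -c "import django; print django.get_version()"    # prints the installed Django version

You will also need Apache to load the freshly built module, typically via a LoadModule wsgi_module modules/mod_wsgi.so line in httpd.conf, before wiring up a Django WSGI application.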

Tuesday, June 5, 2012

Puppet integration with NetApp




I was looking at the Puppet Forge the other day and noticed that NetApp integrates with Puppet, which made me smile. Take a look here:

https://forge.puppetlabs.com/fatmcgav/netapp/0.1.0

I have copied some of the readme notes. Here is an extract

NetApp operations

As part of this module, there is a defined type called ‘netapp::vqe’, which can be used to create a volume, add a qtree and create an NFS export. An example of this is:

netapp::vqe { 'volume_name':
  ensure         => present,
  size           => '1t',
  aggr           => 'aggr2',
  spaceres       => 'volume',
  snapresv       => 20,
  autoincrement  => true,
  persistent     => true
}


This will create a NetApp volume called ‘v_volume_name’ with a qtree called ‘q_volume_name’. The volume will have an initial size of 1 terabyte in aggregate aggr2. The space reservation mode will be set to volume, and the snapshot space reserve will be set to 20%. The volume will be able to auto-increment, and the NFS export will be persistent. To be honest, that is awesome if you need to automate, say, infrastructure deployments.


I have used many auto-deployment tools in my time, but I have never seen such great adoption as I have with Puppet, and this just proves it. Why is this? Well, I think the main reasons are that Puppet is open source and it doesn't require you to lock yourself in with a particular product - it runs on CentOS, Windows, Ubuntu, etc. It is also very flexible, and you can code in a way that doesn't require integration on the target node. So in the example, the NetApp filer is completely unaware of Puppet and needs no agent, because Puppet can interact with the NetApp Manageability SDK Ruby libraries. How cool is that? A great example of how good open source can really be.

Hoorah to Puppet and NetApp!