Wednesday, September 19, 2012

HP Exchange technology notes

I popped along to the HP Exchange technology conference in London and went to a future of networks talk. Here are some notes on the talk.

Networking Future with Mike Witkowski - ISS tech exchange


Discussion and presentation - where will networking be going over the next 4-10 years? What is the future?

What will the next gen DC require?

> technology used
> speeds and feeds
> overlay networks
> SDN - software defined networking

Pressures and demands on networking are:

    memcache

Applications will start to have more memcache-like technologies. How can the network deal with this?

    Hadoop and Gluster storage devices

Parallel storage instead of serialised (SAS) technologies. Groups of nodes serving up data, which will include data mirroring functionality.

    Web 2.0 apps

More and more mobile devices. How will applications be able to handle the demand for more intensive streaming?

    RDMA

How will RDMA technology affect the stress on networks? Windows Server 2008 already ships with RDMA.

RDMA - remote direct memory access for storage blocks. Addressable storage in memory. In today's environments, even 10Gb will bottleneck on current storage devices, be it SAS or SATA. SSD and memcache will improve storage speed and access. There will be more persistent memory technologies coming out. Fewer disks and more memory.

There are also pressures on current L2 technologies. More and more people are extending layer 2 networks globally, and more and more people are talking about VLAN limitations. vMotion is currently only available across layer 2.

So what do we currently use which can be considered as capable technologies to deal with these challenges?

Fabric could deal with:

    Converged networks
    having a pool of storage
    pool of fabric
    pool of servers

These all typically run on converged networks. A fabric is a network component with high connection counts and cross-sectional bandwidth, perfect for iSCSI and layer 2 networks. It is also used with InfiniBand and RDMA based networks.

Another issue and admin task of current concern is OSR - the oversubscription ratio.

The aim is for Virtual Connect to be treated as a single management domain and to scale linearly across a set of racks. But for this to happen, the price of optical fiber needs to come down, and the companies who make optical fiber need to make the solution more efficient.

40Gb may sound like dreaming, but it is required to reduce the demands on infrastructure. Do not be short-sighted. In fact, faster pipes will mean less compute, which means fewer servers. It all comes back to disks being the bottleneck for most networks.

There was a big discussion around the comparison of a hierarchical topology versus a CLOS topology.

Here is some light reading

hierarchical - http://en.wikipedia.org/wiki/Hierarchical_internetworking_model

CLOS - http://en.wikipedia.org/wiki/Clos_network

Pros and cons of each topology were discussed.

Good technologies out there are QFabric and Quantum, which both use SDN.

Overlay networks


More information was requested, but overlay networks deal with scaling beyond the 4K VLAN limit, typically using Q-in-Q trunking. Service providers will start to use this more, and companies like VMware are looking at ways to bring this into their management tools, i.e. vCenter.

VXLAN is a good example, as is STT, which is Nicira's protocol; both are now owned by VMware. Where will these technologies go in the future?

Sunday, July 22, 2012

Git examples

I have been working on GitLab recently and needed to write down some git examples, so here is what I came up with.

To roll back files from your last commit

If the commit ID is ce4c1645cad94cc69221607201bd86c8f33b6cc0, run the following command


git reset --hard ce4c1645cad94cc69221607201bd86c8f33b6cc0

git reset without the --hard option resets the commit history (moves the branch pointer) but leaves your files alone. With the --hard option, the files in the working tree are also reset.
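
As a side note, and reusing the same example commit ID, here is a quick sketch of the three reset modes:

git reset --soft ce4c1645cad94cc69221607201bd86c8f33b6cc0

# --soft moves the branch pointer only and leaves your changes staged

git reset ce4c1645cad94cc69221607201bd86c8f33b6cc0

# the default (--mixed) moves the branch pointer and unstages changes, but keeps the files on disk

git reset --hard ce4c1645cad94cc69221607201bd86c8f33b6cc0

# --hard moves the branch pointer and overwrites the working tree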

To then push this back to the master branch (--force is needed because the reset has rewound the branch history):


git commit --all
git push origin master --force 

To recover one file from a previous commit
To find out what files have been deleted in previous commits, run:


git log --diff-filter=D --summary

This should give you information about which commit the file was deleted in.
Once the commit ID is found, run the following command:


git checkout ce4c1645cad94cc69221607201bd86c8f33b6cc0 -- version_one_file_2

where ce4c1645cad94cc69221607201bd86c8f33b6cc0 is the commit ID and version_one_file_2 is the file (the -- separates the commit from the file path)

then run the following commands:


git commit
git push


Adding a FILE to a git repo

This is a very simple example but an important concept. The exact commands can vary depending on what you are trying to do.

create a new file


touch this_is_a_new_file

add the file to the git repo


git add this_is_a_new_file

this stages the file in the git index, ready to be committed


git commit this_is_a_new_file

This commits the file to a git snapshot


git push origin master

this pushes the files to the master branch


I deleted a file 3 commits ago. How do I recover that file?

How to reproduce the example

1 - clone git repo


see above :)

2 - create a new file A


touch A
git add A

3 - commit and push


git commit A 
git push origin master

4 - create a new file B and delete A


touch B 
git add B 
git rm A

5 - commit and push


git commit
git push origin master

6 - create a new file C


touch C
git add C

7 - commit and push


git commit
git push origin master

8 - try to find file A

run this command


git log --diff-filter=D --summary

it will show you what files have been deleted.

You need to check out the file from the commit before the deletion
You then need to make sure the file you are recovering is the one you want
You then need to make sure you are still on the master branch
You then need to add the file back to the master branch
You then need to commit the recovered file
You then need to push everything to the repo


so..


git log --diff-filter=D --summary
git checkout cdeb3d757f3adcc346da2ab171a825c113bdb50b~1 A

# note the ~1 rolls back to the previous commit. ~2 would go back 2 commits etc.. 

this just grabs that file in that commit and does not change branches 

git branch 

# check what branch you are on 

git add A

# to add A back to the master branch 

git commit A 
git push origin master


Create a branch, adding file that does not conflict and adding the files to the master branch

Run git branch to show which branch you are using


git branch
* master

This shows you are using the master branch

To add a branch, run the following command


git branch new_branch
git branch 

* master
  new_branch

the new_branch branch has been created.
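
As an aside, git branch new_branch only creates the branch; you can create and switch to it in one step with:

git checkout -b new_branch

For this walkthrough, though, we will check out the new branch explicitly.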

You have not checked out the new branch yet. First, the following command shows the files in the master branch.


git ls-files

INSTALL
README
config.xml


To switch to the new branch and to add a new file, run the following commands


git checkout new_branch 

Switched to branch 'new_branch'

git branch

master
* new_branch

Notice the * is now next to new_branch. This shows which branch you are working on.

You can check the files are the same as the master branch by running


git ls-files 

INSTALL
README
config.xml

Add a new file to the new_branch


touch file_added_to_branch
git add file_added_to_branch
git commit file_added_to_branch
git push origin new_branch

run the following to list the files in the new_branch


git ls-files 

INSTALL
README 
config.xml 
file_added_to_branch

Switch back to the master branch with git checkout master and list the git repo


git ls-files 
INSTALL 
README 
config.xml

Notice that the file file_added_to_branch is not in the master repo. To add this file, you can merge the new_branch to the master repo by running the following command


git merge new_branch 

Updating 76a5cab..30c41cc
Fast-forward
 0 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 new_branch_file

Note: You have to be on the master branch to merge to the master. We changed to the master branch before running the above git merge command

push the files to the master branch on the git server

git push origin master

The following shows the branch files and how to delete any branches


git ls-files 

INSTALL
README
config.xml 
new_branch_file 

git branch 

* master 
new_branch

git branch -d new_branch 

Deleted branch new_branch (was 30c41cc).

git branch

* master

Delete the branch on the gitlab server


git push origin :new_branch

To git@puppet-gitlab:oliver.leach/puppet-cressex-lab.git 
 [deleted] new_branch

Wednesday, July 4, 2012

NetApp SnapMirror

NetApp SnapMirror is a pretty easy utility to use, but offers great power. SnapMirror replicates a volume or LUN to a given destination, either on demand or on a schedule. You can set this up via the UI or you can run commands via SSH. In the following example, we will be setting up SnapMirror for a volume containing a LUN.

However, there is one key concept here. You can dedicate different interfaces to handle different roles. So if you have your production data accessible on say e0a, then consider setting up your snapshot traffic on e0b. This way you do not flood the interface or the upstream network components. Also you are spreading the load.

Here's a quick overview of some key concepts you need to consider:



We have a volume
ontap> vol status cloud01

Volume  State
cloud01 online


We have a lun

ontap> lun show

/vol/cloud01/cloud01-na-lun01    100g (107374182400)  (r/w,online,mapped)

And this is the status of SnapMirror:

ontap> snapmirror status
Snapmirror is on.
ontap> snapmirror destinations
snapmirror destination: no known destinations


We need to edit the /etc/snapmirror.conf file and place some information in it to configure a SnapMirror job. Here is a good article on what options you can use: http://www.wafl.co.uk/tag/snapmirrorconf/

The following snapmirror.conf entry indicates that filer ontap1's volume cloud01_mirror will mirror volume cloud01 via the ontap0-gig interface. The ontap0-gig interface is whatever IP address ontap1 can resolve that name to. In this case, it might be a gigabit ethernet link on filer ontap0. The mirror is updated at 9:30 a.m., 1:30 p.m., and 7:30 p.m., Monday through Friday. The asterisk means that the data replication schedule is not affected by the day of the month; it is the same as entering numbers 1 through 31 (comma-separated) in that space. It is actually similar to a cron format.

ontap0-gig:cloud01 ontap1:cloud01_mirror - 30 9,13,19 * 1,2,3,4,5

The important part here is you can tell snapmirror.conf the interface name to send the snapmirror replication traffic down.

You may also need to set up /etc/snapmirror.allow to allow SnapMirror connections; otherwise you may see a connection denied issue.

Here are the commands you need to look at:

ontap0> wrfile /etc/snapmirror.allow
 10.10.10.2
 ontap1

This will configure ontap0 to allow SnapMirror connections from ontap1. As long as name resolution is set up correctly, the replication traffic will traverse the intended interface, as generic IP routing will take care of the traffic flow.
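
If you are not using DNS for the interface names, one option (the IP addresses here are just examples) is to append entries to /etc/hosts on each filer with wrfile -a, so that ontap1 can resolve ontap0-gig and ontap0 can resolve ontap1:

ontap0> wrfile -a /etc/hosts 10.10.10.2 ontap1
ontap1> wrfile -a /etc/hosts 10.10.10.3 ontap0-gig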

The other important thing you can consider is bandwidth shaping. The dash (-) in the entry above, which sits at the arguments field location, indicates that both the kbs and restart arguments are set to their defaults.

So what is kbs?

Taken from the link above (I said it was good) is this extract:

The value for this argument specifies the maximum speed (in kilobytes per second) at which SnapMirror data is transferred over the network. The kbs setting is used to throttle network bandwidth consumed, disk I/O, and CPU usage. By default, the filer transfers the data as fast as it can. The throttle value is not used while synchronously mirroring.  

You can set it something like this:

kbs=2000

This means the transfer speed is capped at a maximum rate of 2,000 kilobytes per second.
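
Putting that together with the earlier entry, a hedged example (same volumes and schedule, throttled to 2,000 KB/s and leaving restart at its default) might look like this:

ontap0-gig:cloud01 ontap1:cloud01_mirror kbs=2000 30 9,13,19 * 1,2,3,4,5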

Async, sync or semi-sync

In async mode, snapshot copies of the volume are created periodically on the source. Only blocks that have changed or have been newly created since the last replication cycle are transferred to the target, making this method very efficient in terms of storage system overhead and network bandwidth.

Sync mode sends updates from the source to the destination as they occur, rather than according to a predetermined schedule. This helps data written on the source system to be protected on the destination even if the entire source system fails. NVLOG forwarding and consistency point (CP) forwarding are used to keep the target completely up to date. NVLOG forwarding enables data from the write log that is normally cached in NVRAM on NetApp storage to be synchronized with the target. Consistency point forwarding keeps the on-disk file system images synchronized.

Semi-sync mode differs from sync mode in two ways. Writes to the source aren't required to wait for acknowledgement from the target before they are committed and acknowledged, and NVLOG forwarding is not used. These two changes speed up application response with only a very small hit in terms of achievable recovery point objective (RPO).
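
As far as I recall, the mode is chosen in snapmirror.conf by replacing the cron-style schedule with a keyword; a rough sketch using the same volumes as above (treat this as an assumption and check the documentation for your ONTAP release):

ontap0-gig:cloud01 ontap1:cloud01_mirror - sync
ontap0-gig:cloud01 ontap1:cloud01_mirror - semi-sync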


SnapMirror status?

We can see the status through both the CLI and the UI. Here is the CLI:


ontap0> snapmirror status
Snapmirror is on.
Source         Destination            State          Lag       Status
ontap0:cloud01 ontap1:cloud01_mirror  Snapmirrored   00:00:29  Idle


Here you can see the command snapmirror status shows the output of the status of the SnapMirror replication. Pretty simple right?

The UI shows a little more information than the CLI output above, but you get the drift.

There's lots more to this, and maybe I will add to this blog at a later date, but you can see how SnapMirror can provide real-time data protection by using synchronous mirroring for your volumes.

Also, using methods of traffic shaping and IP routing, you can send your SnapMirror traffic over a specific interface. Lots of good options. Enjoy!

Tuesday, July 3, 2012

Installing python and django

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#   Depending on whether you run as root, you may need to use sudo.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


sudo yum install gcc tcl tk sqlite-devel readline-devel gdbm-devel -y
sudo yum install tkinter ncurses-devel libdbi-devel tk-devel zlib-devel -y
sudo yum install openssl-devel bzip2-devel -y
sudo yum install httpd httpd-devel -y
sudo yum install mysql mysql-devel -y
wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz
tar zxvf Python-2.7.3.tgz
cd Python-2.7.3
sudo ./configure --prefix=/opt/python2.7 --with-threads --enable-shared
make
sudo make install
ln -s /opt/python2.7/bin/python /usr/bin/python2.7
echo '/opt/python2.7/lib'>> /etc/ld.so.conf.d/opt-python2.7.conf
ldconfig

echo "alias python='/opt/python2.7/bin/python'" >> /etc/bashrc
echo "alias python2.7='/opt/python2.7/bin/python'" >> /etc/bashrc

***log out and log back in at this point. This is to ensure your bash profile is updated with the new python location***
wget http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
sh setuptools-0.6c11-py2.7.egg
wget http://modwsgi.googlecode.com/files/mod_wsgi-3.3.tar.gz
tar zxvf mod_wsgi-3.3.tar.gz
cd mod_wsgi-3.3
./configure --with-python=/opt/python2.7/bin/python
make
make install
curl http://python-distribute.org/distribute_setup.py | python
curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python
pip install Django
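
As a quick sanity check after all of that (assuming the alias above has been picked up by your shell), confirm the Python and Django versions:

python2.7 --version
python2.7 -c "import django; print(django.get_version())"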

Tuesday, June 5, 2012

Puppet integration with NetApp




I was looking at the Puppet Forge the other day and noticed that NetApp integrates with Puppet, which made me smile. Take a look here:

https://forge.puppetlabs.com/fatmcgav/netapp/0.1.0

I have copied some of the readme notes. Here is an extract

NetApp operations

As part of this module, there is a defined type called ‘netapp::vqe’, which can be used to create a volume, add a qtree and create an NFS export. An example of this is:

netapp::vqe { 'volume_name':
  ensure         => present,
  size           => '1t',
  aggr           => 'aggr2',
  spaceres       => 'volume',
  snapresv       => 20,
  autoincrement  => true,
  persistent     => true
}


This will create a NetApp volume called ‘v_volume_name’ with a qtree called ‘q_volume_name’. The volume will have an initial size of 1 terabyte in aggregate aggr2. The space reservation mode will be set to volume, and the snapshot space reserve will be set to 20%. The volume will be able to auto increment, and the NFS export will be persistent. To be honest, that is awesome if you need to build up and automate, say, infrastructure deployments.


I have used many auto deployment tools in my time, but I have never seen such great adoption as I have with Puppet, and this just proves it. Why is this? Well, I think the main reasons are that Puppet is open source and it doesn't require you to lock yourself in with a particular product, i.e. it runs on CentOS, Windows, Ubuntu etc. It is also very flexible, and you can code it in a way that doesn't require integration on the target node. So in this example, the NetApp is completely unaware of Puppet and needs no agent installed; Puppet interacts with the NetApp Manageability SDK Ruby libraries instead. How cool is that? A great example of how good open source can really be.
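
I have not tried the module myself yet, but if it follows the usual Puppet network-device pattern, the filer would be declared in /etc/puppet/device.conf and managed with the puppet device command. A rough, unverified sketch (the hostname, credentials and type name are assumptions on my part):

[filer01.example.com]
type netapp
url https://root:password@filer01.example.com/

Then run puppet device --verbose on the proxy host to have Puppet apply the catalogue to the filer.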

Hoorah to Puppet and NetApp!


Tuesday, April 17, 2012

NetApp storage solutions


I have worked with many different storage solutions since the beginning of my journey with VMware, which was at the end of 2005 with VMware ESX 2.5 and no VirtualCenter - imagine that! The first SAN I worked with still stays close to my heart as one of the best products on the market, DataCore SAN Symphony. It was a great storage virtualisation product where you plugged vendor-neutral storage arrays into a physical server running Windows Server and the DataCore SAN Symphony management software, which in turn managed those arrays. It offered both iSCSI and fabric connectivity and, at the time, it was a really great product to have got involved with. I have used many other products, including EMC's CX-120, the DS series from IBM and, mainly over the last few years, the Dell EqualLogic PS series. I've also built my own SAN using Solaris ZFS, and using plain old CentOS with iSCSI targets and tgtadm running off big disks on a bunch of Supermicro servers. I also use the Nexenta community edition in a lab which, out of the box, is pretty darn good. However, my other favourite SAN technology that I have used is NetApp. It just oozes quality, and what I like is that it seems to have taken all the good features from all of the other SANs I have used and put them together.

I was given a chance to use NetApp when I was working for a large service provider down under; however, it didn't make the cut due to the price point and also because it was too difficult to migrate from the existing SAN that was running our cloud offering at the time. So we ended up persevering with our existing solution and carried on ploughing our investment into what was a flaky product at the time (no names mentioned).

One of the ways you can get your hands dirty with NetApp is to run the evaluation product. A great colleague of mine, @wadeis, taught me all the tricks there are to know with the NetApps. Here are a couple of key offerings the NetApp solution has:

The NetApp Data ONTAP 8 Operating System

This is the operating system for your storage devices. You can run it in two modes, cluster mode and 7-mode. It runs on FreeBSD, like products from most providers, Dell, IBM etc. (Juniper also runs FreeBSD on its network devices), so it's a popular choice. It also has a great command line utility, which seems to be very popular. To manage ONTAP, you run System Manager, which runs on both Windows and Linux and has a nice UI. You connect to your ONTAP device and then have all the management tools available to you, depending on the licence you have. Here is a screenshot of System Manager running on Windows managing an ONTAP device.

It's pretty nicely laid out. There are a few key concepts with ONTAP which I want to run through:

Aggregates:

NetApp aggregates are a way of grouping your disks together to provide raw usable disk space; you then create logical volumes on top of this raw space. An aggregate can hold multiple disks from different disk shelves, but it is highly recommended to use the same type of disk. I.e., don't mix SAS with SATA, or 300GB with 144GB disks, for example. Keep them the same; the key is that you can have many spindles making up an aggregate. There are some limitations you need to consider, for example the maximum aggregate size.

Here are some concepts you'll need to understand when managing your aggregates (a rough CLI sketch follows the list):

    Benefits of keeping your RAID groups homogeneous for disk size and speed
    What types of disks can be used together (FC-AL and SAS, SATA and ATA)
    How to add disks to aggregates from a heterogeneous disk pool
    The requirement to add disks owned by the same system and pool
    Best practices for providing hot spares
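
As a rough 7-Mode CLI sketch (the aggregate name, RAID type and disk count here are purely illustrative):

ontap> aggr create aggr2 -t raid_dp 16
ontap> aggr status -r aggr2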


Qtrees:

Qtree stands for quota based tree. It is a concept that is confusing to start with, but you need not overthink how qtrees work: you create qtrees within volumes and use them to slice up quotas for the resources created under the volume. It's a way of partitioning volumes into quotas for various purposes.

For example, you may create a volume, vol01, and share it out using CIFS. Then you create two qtrees, one called qtree_one and the other qtree_two. You can set quotas on both qtrees but still share out the volume. If you lock a user down to only being able to use a certain amount of space in qtree_one, they can't add more than their set quota, and qtree_two can have a different quota to qtree_one. You would need to have the ONTAP device connected to your domain so it can pick up the domain groups and users to apply the quotas to.

Another way of using quotas on a volume is to control how large your LUNs can be. For example, you might want to assign volume space to various database administrators and allow them to create and manage their own LUNs. You can organize the volume into qtrees with quotas and enable the individual database administrators to manage the space they have been allocated.

If you organize your LUNs in qtrees with quotas, make sure the quota limit can accommodate the sizes of the LUNs you want to create. Data ONTAP does not allow you to create a LUN in a qtree with a quota if the LUN size exceeds the quota.
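
As a rough 7-Mode sketch of carving out a qtree and applying a 10GB tree quota to it (the volume and qtree names are illustrative):

ontap> qtree create /vol/vol01/qtree_one
ontap> wrfile -a /etc/quotas /vol/vol01/qtree_one tree 10G
ontap> quota on vol01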


I created a volume with 100GB of space and then created a qtree under the volume with a quota limit of 10GB. I then tried to create a 10GB LUN and got an error.

It rounded down the LUN size to 9.96GB, but it shows how you can manage space allocation for LUNs using qtrees.

WAFL


WAFL is the file layout used by the NetApp filers. It stands for Write Anywhere File Layout. Apparently it is not classified as a file system, but it does act like one. WAFL supports all different kinds of storage protocols. For example, it can handle CIFS for Windows shares, NFS for UNIX shares, and block based storage for iSCSI and FC, so it needs to handle a few different types of files. WAFL is best thought of as a tree of blocks, and at the root of the tree is the root inode. The root inode describes the inode file, and the inode file describes the rest of the files in the file system, including the block-map and inode-map files. When WAFL loads, it needs to locate the root inode, so this needs to be in a fixed location, which is the only exception to the write anywhere rule.

Here is a diagram that shows how a root inode tree can be made up.

Now, one of the greatest benefits of using this type of technology is how snapshots work. NetApp can snapshot LUNs instantaneously, and it does this by copying the root inode. If a snapshot or clone changes a block, it remaps the new block in the inode tree, as if it were a new branch.

So this gives NetApp some really great flexibility when it comes to snapshot and cloning techniques.

So it's now time to discuss some other tools that NetApp uses to help with DR and backups.

NetApp SnapShot

NetApp SnapShot software enables you to protect your data with no performance impact and minimal consumption of storage space.

NetApp Snapshot technology enables you to create point-in-time copies of file systems, which you can use to protect data—from a single file to a complete disaster recovery solution.


SnapShot key points (a quick CLI sketch follows the list):

  •     Perform instant backups by copying the inode root tree
  •     You can have up to 255 snapshot copies per volume
  •     You can combine other technologies such as SnapMirror to build a data protection solution
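
A quick CLI sketch of working with snapshots on the cloud01 volume used earlier (the snapshot name is illustrative, and snap restore assumes the snaprestore licence is installed):

ontap> snap create cloud01 pre_change
ontap> snap list cloud01
ontap> snap restore -t vol -s pre_change cloud01
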
SyncMirror

NetApp SyncMirror ensures your data is available and up-to-date at all times. By maintaining two copies of data online, SyncMirror protects your data against all types of hardware outages. This is similar to SAN Symphony's mirrored volumes.


It is best to have two arrays, with the blocks of data destined for the volume copied to both, so you can lose a whole array and the ONTAP cluster will seamlessly fail over to the other array without any intervention. This is the same methodology as SAN Symphony. The only catch is that you are duplicating your data set, as you are effectively mirroring the volumes. You can, by all means, mirror on the same ONTAP device.

SyncMirror allows you to split the data copies so that the mirrored data can be used by another application. This allows you to do backups, application testing, or data mining using up-to-date production data on the passive mirrored volumes, and you can perform these background tasks without affecting your production environment.

SnapMirror

You can schedule snapshots in NetApp at regular intervals, which again is handy for a DR strategy. The snapshots can then be replicated incrementally to a destination system, either asynchronously or synchronously.

FlexClone

Again, using WAFL's inode tree structure, FlexClone is able to create instant clones. You can use these clones for many different purposes, for example setting up a dev environment, mimicking production data, building a test environment or supporting a DR strategy. You cannot commit the changes of the clone back into the production LUN though, so once it is cloned, you can only branch the dataset.
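
A minimal 7-Mode sketch of cloning a volume with FlexClone (the names are illustrative and the flex_clone licence is assumed to be installed):

ontap> vol clone create cloud01_dev -s none -b cloud01
ontap> vol status cloud01_dev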


Here is some more functionality of NetApp, which also explains the benefits of using their technologies.

NetApp Data Compression

- Transparent inline data compression for data reduction
- Reduces the amount of storage you need to purchase and maintain

NetApp Deduplication


- General-purpose deduplication for removal of redundant data objects
- Reduces the amount of storage you need to purchase and maintain

FlexCache

- Caches NFS volumes for accelerated file access in remote offices and for server compute farms
- Improves your system’s performance, response times, and data availability for particular workloads

FlexClone

- Instantaneously creates file, LUN, and volume clones without requiring additional storage
- Enables you to save time in testing and development and increases your storage capacity

FlexShare

- Prioritizes storage resource allocation to highest value workloads on a heavily loaded system
- Provides you with better performance for designated high-priority applications

FlexVol

- Creates flexibly sized LUNs and volumes across a large pool of disks and one or more RAID groups
- Enables your storage systems to be used at maximum efficiency and reduces your hardware investment

MetroCluster

- An integrated high-availability/disaster recovery solution for campus and metro-area deployments
- Enables you to have immediate data availability if a site fails

MultiStore

- Securely partitions a storage system into multiple virtual storage appliances
- Allows you to consolidate multiple domains and file servers

Operations Manager

- Manages multiple NetApp systems from a single administrative console
- Simplifies your NetApp deployment and allows you to consolidate management of multiple NetApp systems

Protection Manager

- Backup and replication management software for NetApp disk-to-disk environments
- Lets you automate data protection, enabling you to have mistake-free backup

System Manager

- Provides setup, provisioning, and configuration management of a Data ONTAP storage system
- Simplifies out-of-box setup and device management using an intuitive Windows-based interface

SnapDrive

- Provides host-based data management of NetApp storage from Windows, UNIX, and Linux servers
- Allows you to initiate error-free system restores should servers fail

SnapManager

- Provides host-based data management of NetApp storage for databases and business applications
- Lets you automate error-free data restores and provides you with application-aware disaster recovery

SnapMirror

- Enables automatic, incremental data replication between systems: synchronous or asynchronous
- Provides you with flexibility and efficiency when mirroring for data distribution and disaster recovery

SnapMover

- Enables rapid reassignment of disks between controllers within a system, without disruption
- Lets you load balance an active-active controller system with no disruption to data flow

SnapRestore

- Rapidly restores single files, directories, or entire LUNs and volumes from any Snapshot copy backup

- Instantaneously recovers your files, databases, and complete volumes from your backup

Snapshot

- Makes incremental, data-in-place, point-in-time copies of a LUN or volume with minimal performance impact
- Enables you to create frequent, space-efficient backups with no disruption to data traffic

SnapValidator

- Maximizes data integrity for Oracle databases
- Allows you to enhance the resiliency of Oracle databases so they comply with the Oracle HARD initiative

SnapVault

- Exports Snapshot copies to another NetApp system, providing an incremental block-level backup solution

- Provides you with cost-effective, long-term backups of disk-based data

SyncMirror

- Maintains two online copies of data with RAID-DP protection on each side of the mirror
- Protects your system from all types of hardware outages, including triple disk failure



The summary for me is that NetApp is way up there for offering performance with functionality, but it does come at a cost. The product reminds me of SAN Symphony, which also has a fairly hefty price, but based on data size.

I had real fun working with the NetApps and would stick my hand up anytime if offered another opportunity. 

Have fun!

Monday, March 12, 2012

How to enable DNS on a NetApp running ONTAP

I had to enable DNS on a NetApp ONTAP device the other day, and as I am in the thick of learning NetApp, I thought I would write this one down. It is pretty easy, but you need to do this through the CLI.

At first, DNS is not enabled. You can see this in the System Manager UI.

You have to drop to the command line and edit a few files to enable DNS. The cut-down FreeBSD CLI is, yep, cut down. However, once you know your way around, it is pretty easy. When you need to write and read a file, there are two key commands:

wrfile
rdfile

wrfile writes to a file. I don't like this, but you need to press CTRL-C to quit out of the file, and you get this message when you do so:

read: error reading standard input: Interrupted system call

However you can also use wrfile -a which appends to the file. It's not vi, that's for sure.

However, back to the point. Below shows how one can set up DNS and a sneaky gotcha you need to be aware of.

If you just try to enable DNS from the command line with options dns.enable on, you might get this message:

Setting option dns.enable to 'on' conflicts with /etc/rc that sets it to 'off'

There is an rc file that loads services on boot, and this is where DNS is set, which by default is off, as you can see:

ontap> rdfile /etc/rc
hostname ontap
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 mtusize 1500
route add default 10.10.10.1 1
routed on
options dns.enable off

options nis.enable off
savecore


You can see it states dns.enable off. This means that whilst you can start DNS by running options dns.enable on, with the rc file set this way the change is not persistent. So first you need to update the /etc/rc file and set DNS to be enabled.

Hint: you can rdfile /etc/rc, then copy and paste the contents back in appropriately when you run wrfile /etc/rc. You'll get the drift when you have a go yourself. So here goes:

ontap> wrfile /etc/rc
hostname ontap
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 mtusize 1500
route add default 10.10.10.1 1
routed on
options dns.enable on
options nis.enable off
read: error reading standard input: Interrupted system call


Here you can see the read: error reading standard input: Interrupted system call message again. This is because you have to CTRL-C out of the wrfile command to save your changes. If anyone knows a way around this, please send a comment. However, man wrfile doesn't suggest a way.

So now you have the /etc/rc file set up with DNS enabled, you need to change /etc/resolv.conf. Here you can use the wrfile -a command. Just append your DNS nameserver like so:

ontap> wrfile -a /etc/resolv.conf nameserver 10.10.10.10

Lastly, you need to run the following command to turn on DNS:

ontap> options dns.enable on

And there you have it. To prove DNS is now running, the UI will show your changes.

Now I can continue setting up CIFS and adding the ONTAP device to AD. Pretty straightforward once you know how.