I had to enable DNS on a NetApp ONTAP device the other day and, as I am in the thick of learning NetApp, I thought I would write this one down. It is pretty easy, but you need to do it through the CLI.
At first, DNS is not enabled. You can see this in the System Manager UI:
You have to drop to the command line and edit a few files to enable DNS. The cut-down FreeBSD CLI is, yep, cut down. However, once you know your way around, it is pretty easy. When you need to write and read a file, there are two key commands:
wrfile
rdfile
wrfile writes to a file. One thing I don't like: you need to press CTRL-C to quit out of the file, and you get this message when you do so:
read: error reading standard input: Interrupted system call
However, you can also use wrfile -a, which appends to the file. It's not vi, that's for sure.
However, back to the point. Below is how to set up DNS, plus a sneaky gotcha you need to be aware of.
If you just try to enable DNS with the command options dns.enable on, you might get this message:
Setting option dns.enable to 'on' conflicts with /etc/rc that sets it to 'off'
There is an rc file that loads services on boot, and this is where DNS is set; by default it is off, as you can see:
ontap> rdfile /etc/rc
hostname ontap
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 mtusize 1500
route add default 10.10.10.1 1
routed on
options dns.enable off
options nis.enable off
savecore
You can see it states dns.enable off. This means that, whilst you can start DNS by running options dns.enable on, with the rc file set this way the setting is not persistent across a reboot. So first you need to update the /etc/rc file and set DNS to be enabled.
Hint: you can rdfile /etc/rc, then copy and paste the contents in appropriately when you run wrfile /etc/rc. You'll get the drift when you have a go yourself. So here goes:
ontap> wrfile /etc/rc
hostname ontap
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 mtusize 1500
route add default 10.10.10.1 1
routed on
options dns.enable on
options nis.enable off
savecore
read: error reading standard input: Interrupted system call
Here you can see the read: error reading standard input: Interrupted system call message again. This is because you have to CTRL-C out of the wrfile command to save your changes. If ANYONE knows a way around this, please send a comment. However, a man wrfile doesn't suggest one.
So now that you have the /etc/rc file set up with DNS enabled, you need to change /etc/resolv.conf. Here you can use the wrfile -a command. Just append your DNS nameserver like so:
ontap> wrfile -a /etc/resolv.conf nameserver 10.10.10.10
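If you want to double-check the change, you can read the file back; your output may contain more lines if resolv.conf already had entries:
ontap> rdfile /etc/resolv.conf
nameserver 10.10.10.10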
Lastly, you need to run the following command to turn on DNS:
ontap> options dns.enable on
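You can also confirm this from the CLI; running options with just the option name should echo back the current value (the exact output and spacing may vary by ONTAP version):
ontap> options dns.enable
dns.enable                   on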
And there you have it. To prove DNS is now running, the UI will show your changes:
Now I can continue setting up CIFS and adding the ONTAP device to AD. Pretty straightforward once you know how.
I've recently been working on a project to enhance a testing strategy, which involved revisiting the CloudStack API.
I dove headfirst into the deep end, which made me realise just how much
you can do with it. I’d like to share some of my experiences, in the
hope of getting some feedback from people who have used CloudStack or CloudPlatform or
are thinking about it and want to find out more. After plunging into the
depths of the API, this is what I discovered.
First, it’s good! Everything you can do in our compute platform can
be done via the API and it’s actually quite simple once you get the hang
of things. I wrote my test scripts in Python, but there are a few
different languages you can use and I know some guys
have dived in using PHP and .NET. There is also a Java library available from jclouds, so you can make your choice depending on your comfort zone.
Second, many of the API commands are asynchronous, depending on what you're trying to do. This means you can call an API command and move on to the next one or, like me, wait for the async command response.
This way you can do one task, wait for the response and grab certain
criteria for the next command. You can then build up a fairly
comprehensive set of commands. For example, you can deploy a virtual machine (VM) using the deployVirtualMachine command together with the following parameters (a sketch of the full call follows the list):
- serviceofferingid: relates to the instance size you wish to use. You can get a list of available service offerings by using the listServiceOfferings command.
- templateid: refers to the ID of either the templates we offer to customers, or one you have preconfigured. These values can be obtained by using the listTemplates command.
- zoneid: signifies the zone in which you deploy the VM. You can get a list of zones by running the listZones command.
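To make this concrete, here is a minimal Python sketch of what a signed deployVirtualMachine call could look like. The endpoint, keys and the "..." IDs are placeholders, and the signing helper follows the standard CloudStack scheme (sorted, URL-encoded, lowercased query string, HMAC-SHA1 with the secret key, base64-encoded):

import base64
import hashlib
import hmac
import urllib.parse

import requests

ENDPOINT = "https://cloud.example.com/client/api"  # hypothetical endpoint
API_KEY = "your-api-key"        # placeholder
SECRET_KEY = "your-secret-key"  # placeholder

def sign_request(params):
    # Add the API key, sort the parameters, URL-encode the values,
    # lowercase the whole query string, then HMAC-SHA1 it with the
    # secret key and base64-encode the digest.
    params = dict(params, apikey=API_KEY)
    query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                     for k, v in sorted(params.items()))
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return params

deploy = requests.get(ENDPOINT, params=sign_request({
    "command": "deployVirtualMachine",
    "response": "json",
    "serviceofferingid": "...",  # from listServiceOfferings
    "templateid": "...",         # from listTemplates
    "zoneid": "...",             # from listZones
})).json()["deployvirtualmachineresponse"]
print(deploy["jobid"])  # the async job ID to poll

A client library will normally handle the signing for you; I've spelt it out here so you can see there's no magic involved.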
The good news is that once you have these values, you can use them
over and over again. Or you can list all the template IDs and service
offerings and pass a value from a collection if you want different size
instances or want to use different templates.
Once you have built up your deployVirtualMachine API command string,
you’re ready to move on to the next command – and this is just the
beginning. As the deployVirtualMachine is an async command, what you
really need before you can move on is the ID of the VM you have
deployed.
Then you can do other things like attach disks if required,
assign an IP or enable a static NAT, as these commands need a VM ID to
work. This is where the queryAsyncJobResult command comes into play.
Once you run your command to deploy a VM, it responds with a jobid and
jobstatus. The jobid is the asyncjobid number, which you can query using
the queryAsyncJobResult command. Once the deployVirtualMachine job
finishes, the jobstatus changes to 1, which means you can find out all
sorts of information about the VM that has been deployed. One key piece
of information is the VM ID, which is required for most commands to
work. Once you have this, the world is your oyster.
What I did was create a class in Python that I could reuse to find out the status of an async job. Once I had it working, I could call the method on the class every time I wanted to check on a job.
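As a rough illustration, here is a cut-down sketch of that kind of helper (a simple function rather than my full class), reusing the ENDPOINT and sign_request names from the earlier sketch; the polling interval and timeout are arbitrary values:

import time

def wait_for_job(jobid, interval=5, timeout=600):
    # Poll queryAsyncJobResult until the job leaves the pending state.
    # jobstatus: 0 = still pending, 1 = succeeded, 2 = failed.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(ENDPOINT, params=sign_request({
            "command": "queryAsyncJobResult",
            "response": "json",
            "jobid": jobid,
        })).json()["queryasyncjobresultresponse"]
        if result["jobstatus"] == 1:
            return result["jobresult"]  # holds the created object's details
        if result["jobstatus"] == 2:
            raise RuntimeError("job %s failed: %s" % (jobid, result.get("jobresult")))
        time.sleep(interval)
    raise TimeoutError("job %s did not finish within %s seconds" % (jobid, timeout))

For a deployVirtualMachine job, the returned jobresult contains the virtualmachine object, so wait_for_job(jobid)["virtualmachine"]["id"] gives you the VM ID for subsequent commands.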
So, let’s quickly look at attaching a disk to a virtual machine.
First, you need to create a disk using createVolume and, again, apply
the queryAsyncJobResult to find out the ID of the volume you have
created. You will also use the diskofferingid to create a disk of a
certain size (I created a 50GB disk). Once you have this information and
the queryAsyncJobResult comes back, you are ready to attach the disk to
the VM. Here are the parameters the attachVolume command requires:
- virtualmachineid: the ID of the VM, which you can get from the queryAsyncJobResult of the deployVirtualMachine job.
- id: the ID of the disk volume, which you can obtain from the queryAsyncJobResult of the createVolume job.
Once you have built up your createVolume and have the corresponding
async job results, you're ready to use the attachVolume command. Hey
presto! You’ve now deployed a VM and attached a disk. And so it
continues…
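Pulling those pieces together, here is a hedged sketch of the createVolume and attachVolume flow, again reusing sign_request and wait_for_job from the sketches above; the volume name and the "..." IDs are placeholders, and vm_id is assumed to hold the VM ID recovered from the deploy job:

# Create the volume, then wait for the async job to hand back its ID.
create = requests.get(ENDPOINT, params=sign_request({
    "command": "createVolume",
    "response": "json",
    "name": "data-disk-01",   # hypothetical volume name
    "diskofferingid": "...",  # from listDiskOfferings (e.g. a 50GB offering)
    "zoneid": "...",          # same zone as the VM
})).json()["createvolumeresponse"]
volume = wait_for_job(create["jobid"])["volume"]

# Attach the new volume to the VM deployed earlier.
attach = requests.get(ENDPOINT, params=sign_request({
    "command": "attachVolume",
    "response": "json",
    "id": volume["id"],
    "virtualmachineid": vm_id,  # from the deployVirtualMachine job result
})).json()["attachvolumeresponse"]
wait_for_job(attach["jobid"])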
This isn’t a tutorial or documentation explaining how to use our API
but merely a blog on how I did it. I’ve just finished testing all our
API commands and yes, there are many ways to skin a cat, but the point
is that it can be done and you can achieve a lot with just the click of a
button. It’s not all plain sailing – it never is – but once you get
involved, you can easily work these things out and it does become quite
simple.
I'm keen to hear from people who have used our API and what programming language they used, or from people who are thinking about using the API and want to know more about its capabilities. I'm happy to share some of the framework
of the Python scripts I’ve put together, so get in touch if you’d like a
hand getting started with our compute API.
Having been deeply involved in the cloud revolution, I thought it would be good to actually take a step back and unpick some of the cloud computing jargon I've heard over the last twelve months. What does it all actually mean?
I like to keep things simple, so let me try to simplify the tangle of definitions currently floating out there.
IaaS, PaaS, SaaS
Well, here's a confusing one to begin with! Why IaaS over PaaS? Why SaaS over IaaS? Do we all understand the difference? Maybe, maybe not. Here's my interpretation, for what it's worth.
IaaS is 'Infrastructure as a Service'. It means service providers provide an infrastructure to you, the customer, as an ongoing service and not as a one-off hardware purchase. OPEX versus CAPEX for you accountants out there. That infrastructure includes data storage, CPU, memory and networking. However, what a self-service cloud means is that this is all available through a user interface where the customer has instant access to provision and configure the infrastructure.
PaaS is 'Platform as a Service'. This is similar to, but not quite the same as, IaaS, as it is still about providing virtual cloud computing infrastructure. PaaS differs by having a software layer on top of the IaaS as an intermediary between the customer and the infrastructure itself. Therefore, although the software may make some tasks easier, it may lack the flexibility that comes with direct access.
SaaS is 'Software as a Service'. This is a further step above IaaS and PaaS where you use software that happens to be powered by the cloud and is accessible online. Salesforce is a great example of SaaS.
Public, Private and Hybrid clouds
Public cloud computing is where you run a virtual infrastructure entirely through an external cloud provider. This would be a typical scenario for startups or people who have fully migrated their data centres into the cloud.
Private cloud is where a cloud computing environment is run internally on your own hardware infrastructure. This can often be as simple as running KVM, VMware or Citrix XenServer to configure your infrastructure to offer cloud flexibility to your internal IT department.
A hybrid cloud leverages both private and public cloud.
As customer accounts are kept entirely separate and all virtual resources are dedicated (i.e. not impacted by the activities of other users), there is often very little benefit to a private cloud but plenty of additional hardware expense. We might call it public cloud computing, but it's still virtually (pun intended) as private as your traditional server infrastructure.
Online Storage versus Compute Storage
Online and compute storage are two very different service offerings. Compute (your virtual servers) obviously needs storage to run the operating system, and that storage is bundled as part of the offering. However, what happens when you just want to use offsite storage - to back up or archive your files, for example - without the expense of running virtual servers to access the necessary storage? Would you buy a new computer just because you needed more hard drive space?
True public cloud storage can be used completely independently of compute and costs a lot less as a result. By connecting either your public, private or hybrid cloud to online cloud storage, you can simply extend your local storage setup to a safe and secure offsite location. With an API, you can also do lots of cool things with it, such as storing all your website assets (images, video, etc) in cloud storage instead of driving more expensive compute server requests every time someone visits one of your web pages.
Horizontal and vertical cloud compute
This really can be simplified as the difference between the number of cloud servers you're running compared to how powerful those servers are. If you have ten web servers and you need to accommodate a period of heavy traffic, then you may want to add another ten servers to your server farm to deal with the increased demand. This is horizontal scaling. However, if you decide to scale the size - and therefore the memory and CPU of those servers - from 2GB to 4GB, then this is vertical scaling. Most providers enable you to do both. Of course, you can also scale down the resources you use once demand dies down. You could shut down the ten VMs and keep them on standby, or decrease the vCPUs and memory on each server. This can be done via the API or the UI.
Cloud bursting
Cloud bursting is a term often thrown around that I particularly like. This is really the jewel in the crown of cloud computing, in my opinion.
Remember you only pay for what you use. Imagine this: it's nearly Christmas and you sell customised Christmas cards. Your busy period is unlikely to be January, but December is likely to be a mad rush. You need compute resources to deal with increased customer demand in the weeks up to Christmas, but for the other ten months of the year you won't need anywhere near the same level of resources. A more efficient way is to burst into the cloud. Cloud bursting enables you to spin up cloud compute resources, manually or automatically, to cover your busy period, and then once this period is over simply power down or completely remove the compute instances.
You can spin up on demand and only be charged for the period you ran the servers. It can be as little as an hour or as much as a year.
Cloud Compute versus VPS
VPS stands for virtual private server. A VPS runs on compute hardware with allocated resources, shared or dedicated. Sometimes a VPS can run on one dedicated physical server. It can be configured for high availability, but doesn't always allow you to easily manage your compute resources. So a VPS is similar to cloud computing, but not as scalable or flexible.
Cloud compute pools a large number of resources - compute, network, storage and so on - and presents them to the end user so they can leverage the entire service to scale and provision quickly, using either a user interface or an API. Cloud compute runs on large and powerful clusters, configured with redundancy and high availability as standard, enabling virtualisation of compute assets on demand over the internet.
Cloud storming
This is similar to cloud bursting, but is really for the dedicated cloud user or cloud junkie. Cloud storming is when you leverage a number of different cloud service offerings for your own compute environment.
Why would one do this? For benefits such as redundancy, reduced latency across different geographical locations, or operating in a relevant time zone.
These are just some of the most common terms and I'm sure some of you may disagree on my definitions. Let me know what bits of cloud jargon I've missed or offer your alternative definitions in the comments below.