Friday, May 31, 2013

MySQL master and slave replication considerations and failover

I recently saw an interesting question on a forum asking how best to set up a MySQL master/slave environment. The answer was, 'take a look at this link!'. The link details some really interesting information about how best to set up your master/slave environment, and also about how to scale your application in a MySQL master/slave setup.

Here is an interesting diagram to look at first.

[Diagram: web clients sending writes to a MySQL master, which replicates to Slave 1, Slave 2, and Slave 3]
This is pretty simple stuff. A web app, in this case, requests a write to the MySQL master, which replicates the information to each of the slaves. 10/10!

You need to run your slaves with the --log-bin option and without --log-slave-updates. What does this mean exactly? Well, these are options you can set when starting mysqld.

In Linux flavours, you can edit the file /etc/my.cnf and set these values there; they get loaded when MySQL starts. You can also set some values from a mysql client by connecting to the database and specifying the values required. I would recommend using /etc/my.cnf as a one-stop shop.

In Windows, MySQL programs read startup options from the following files, in the following order:

[Table from the MySQL manual listing the Windows option file locations in order of precedence]
First of all, what do we need to set?

For --log-bin, this needs a file name value. In the manual, the type is a file name and the default is off. If we need to set up replication, the /etc/my.cnf file will look like this:

[mysqld]

log-bin=mysql-bin

For --log-slave-updates, well, this is slightly easier. Why? It is a boolean option and the default is false, so unless the config has explicitly set it to true, we don't need to do anything. For good measure:

[mysqld]

log-slave-updates=false
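
Putting the two options together, a minimal sketch of the option files for this topology might look like the following. The server-id values are example choices I've added; every server in a replication setup needs a unique one:

```ini
# /etc/my.cnf on the master
[mysqld]
server-id=1
log-bin=mysql-bin

# /etc/my.cnf on each slave (give every slave its own server-id)
[mysqld]
server-id=2
log-bin=mysql-bin
# log-slave-updates defaults to false, so it is simply left unset
```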


However, what happens in a failure here?  Each MySQL Slave (Slave 1, Slave 2, and Slave 3) is a slave running with --log-bin and without --log-slave-updates. Because updates received by a slave from the master are not logged in the binary log unless --log-slave-updates is specified, the binary log on each slave is empty initially. If for some reason MySQL Master becomes unavailable, you can pick one of the slaves to become the new master. For example, if you pick Slave 1, all Web Clients should be redirected to Slave 1, which will log updates to its binary log. Slave 2 and Slave 3 should then replicate from Slave 1.

The reason for running the slave without --log-slave-updates is to prevent slaves from receiving updates twice in case you cause one of the slaves to become the new master. Suppose that Slave 1 has --log-slave-updates enabled. Then it will write updates that it receives from Master to its own binary log. When Slave 2 changes from Master to Slave 1 as its master, it may receive updates from Slave 1 that it has already received from Master.


However, there is some intervention required to do all of this. It's not automated. Here is what you would need to do in the case of a MySQL master failure:

Make sure that all slaves have processed any statements in their relay log. On each slave, issue STOP SLAVE IO_THREAD, then check the output of SHOW PROCESSLIST until you see Has read all relay log. When this is true for all slaves, they can be reconfigured to the new setup. On Slave 1, the slave being promoted to master, issue STOP SLAVE and RESET MASTER.

On the other slaves Slave 2 and Slave 3, use STOP SLAVE and CHANGE MASTER TO MASTER_HOST='Slave1' (where 'Slave1' represents the real host name of Slave 1). To use CHANGE MASTER TO, add all information about how to connect to Slave 1 from Slave 2 or Slave 3 (user, password, port). In CHANGE MASTER TO, there is no need to specify the name of the Slave 1 binary log file or log position to read from: We know it is the first binary log file and position 4, which are the defaults for CHANGE MASTER TO. Finally, use START SLAVE on Slave 2 and Slave 3. 
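
The promotion steps above can be sketched as the following statements. The host name, replication user, and password in CHANGE MASTER TO are placeholders for your own environment:

```sql
-- On every slave: let the relay log drain
STOP SLAVE IO_THREAD;
SHOW PROCESSLIST;  -- repeat until you see "Has read all relay log"

-- On Slave 1, the new master:
STOP SLAVE;
RESET MASTER;

-- On Slave 2 and Slave 3: point them at Slave 1
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='slave1.example.com',  -- placeholder: Slave 1's real host name
  MASTER_USER='repl',                -- placeholder: your replication user
  MASTER_PASSWORD='repl_password';   -- placeholder
START SLAVE;
```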

There is an interesting scaling solution you could consider which I like. Take a look at this diagram:



In this scenario, the web clients are directed to slaves for read purposes only, and any writes are directed to the MySQL master server. This takes the read load off the MySQL master server. Reads generally generate more requests than writes. However, you need to make the application aware of the slave setup. This is generally done by creating MySQL connectors that are aware of read-only functions. These connectors would then utilise a slave server rather than the master server. For more info, see this link https://dev.mysql.com/doc/refman/5.7/en/replication-solutions-scaleout.html.
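
To illustrate the connector idea, here is a minimal sketch of a read/write router in Python. The host names are made up, and a real connector would hand back actual MySQL connections rather than host strings:

```python
import itertools

class ReadWriteRouter:
    """Route writes to the master and spread reads across slaves round-robin."""

    def __init__(self, master, slaves):
        self.master = master
        self._slave_cycle = itertools.cycle(slaves)

    def host_for(self, query):
        # Treat anything that is not a plain SELECT as a write.
        if query.lstrip().upper().startswith("SELECT"):
            return next(self._slave_cycle)  # read: next slave in rotation
        return self.master                  # write: always the master

router = ReadWriteRouter("master.example.com",
                         ["slave1.example.com", "slave2.example.com", "slave3.example.com"])
print(router.host_for("SELECT * FROM users"))        # goes to a slave
print(router.host_for("UPDATE users SET active=1"))  # goes to the master
```

In practice most connector libraries offer a hook like this, so the application code just marks a statement as read-only and the routing happens underneath.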

Happy playing!

Wednesday, May 29, 2013

Cloudstack db.properties password encryption

I was recently asked to update the db password in the db.properties files in CloudStack. I kind of knew the guy who asked me was still in CloudStack 2.2 land, where passwords were in plain old text, but as of CloudStack 3.x, the passwords in the db.properties file (and other passwords, for that matter) are encrypted. Let's have a closer look at the db.properties file.



The following file on each CloudStack and CloudPlatform management server contains the db connection details:
 
/etc/cloud/management/db.properties

In this file there is information about the db username and password details.
 
# CloudStack database tuning parameters
db.cloud.password=ENC(57nNfvJaN9X54lbJi0pjugc9YylyRo8c)
db.cloud.username=cloudadmin
db.cloud.host=10.0.26.16
db.cloud.validationQuery=SELECT 1
db.cloud.testOnBorrow=true

In CloudPlatform and CloudStack version 3+, the db.cloud.password is encrypted. To change it, run the following command to work out the encrypted password:
 
java -classpath /usr/share/java/cloud-jasypt-1.8.jar \
  org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI \
  encrypt.sh input="Password12" \
  password="$(cat /etc/cloud/management/key)" verbose=true

Where
 
input=Password12

is the password you wish to encrypt.

The output should look something like this.
 
$ java -classpath /usr/share/java/cloud-jasypt-1.8.jar \
  org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI \
  encrypt.sh input="Password12" \
  password="$(cat /etc/cloud/management/key)" verbose=true
 
----ENVIRONMENT-----------------

Runtime: Sun Microsystems Inc. OpenJDK 64-Bit Server VM 20.0-b12



----ARGUMENTS-------------------

verbose: true
input: Password12
password: password



----OUTPUT----------------------

57nNfvJaN9X54lbJi0pjugc9YylyRo8c

57nNfvJaN9X54lbJi0pjugc9YylyRo8c is the encrypted password for Password12. Use this, wrapped in ENC(), as the new password in the db.properties file.
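
The edit itself is a one-liner; here is a sketch using sed. It works on a scratch copy so you can try it safely (the path and the encrypted strings are the examples from this post; on a real server you would edit /etc/cloud/management/db.properties directly, after backing it up):

```shell
# Work on a scratch copy; fall back to a sample file if the real one is absent.
cp /etc/cloud/management/db.properties /tmp/db.properties 2>/dev/null || \
  printf 'db.cloud.password=ENC(oldEncryptedValue)\ndb.cloud.username=cloudadmin\n' > /tmp/db.properties

# The new encrypted string produced by the encrypt command above.
NEW_ENC='57nNfvJaN9X54lbJi0pjugc9YylyRo8c'

# Swap the old ENC(...) value for the new one.
sed -i "s|^db.cloud.password=ENC(.*)$|db.cloud.password=ENC(${NEW_ENC})|" /tmp/db.properties
grep '^db.cloud.password=' /tmp/db.properties
```

Remember to restart the management service afterwards so it picks up the new value.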

If you wish to decrypt the password, run the following command:
 
java -classpath /usr/share/java/cloud-jasypt-1.8.jar \
  org.jasypt.intf.cli.JasyptPBEStringDecryptionCLI \
  decrypt.sh input="57nNfvJaN9X54lbJi0pjugc9YylyRo8c" \
  password="$(cat /etc/cloud/management/key)" verbose=false

The password is then displayed, in this case Password12.

Friday, May 10, 2013

Summary of VMUG




VMUG in London was good - these VMUGs always are. In fact, I must say: if you get the chance to go, do it.

Two talks stood out for me, and here is my review:



1 – Software-Defined Control – Transforming IT Operations, Shmuel Kliger, VMTurbo

Shmuel Kliger, the founder, gave a talk about VMTurbo and the benefits of using it in your environment. It's a tool that gathers thousands (well, tens of thousands) of metrics from all component resources and calculates the optimum state for your environment. It's good for the corporate IT world, and a bit like the Vizioncore products that were bought out by Quest Software. It's pretty much useless in a cloud environment if you ask me, unless you like looking at charts and graphs and saying, 'ooh, that looks good, but now what?'. Having said that, it can integrate with CloudPortal Business Manager, and if it works well, it may be worth offering VMTurbo to customers as a service bundle.

2 – Don't Do Virtual Networking, Do Cloud Networking - Greg Ferro, packetpushers.net

This was by far the best talk, presented by a guy who was to the point, had lots of knowledge about networks, and held opinionated views. A very high-level summary: the discussion was about how traditional networks will turn into overlay networks, and how SDN technology will glue overlay networks together using encapsulation, managed by SDN controllers with OpenFlow as the management protocol. In fact, this was also discussed at the HP technology conference last year, so there seems to be a consistent message from these events. The opinion is that SDN is still not ready but should start to progress in a few years' time, circa 2015. The feeling was that Cisco hasn't yet made a move in the SDN space and there are no real standards. The only real standards are Open vSwitch, which seems to be the choice for the virtualization network stack (but that's not news), and OpenFlow. I feel the technology is all over the shop a bit at the moment, but it is gaining pace and needs some standardisation.


I didn't make vBeers, but would have if I'd had the chance. It was great meeting some of the guys there and talking about VMware and Puppet. Overall, a good day out.