I’ve been using Dropbox for my personal files and at work for quite a while now. I’ve also been using Google Storage to store all my photos, I have iCloud for my iPhone backups, CloudApp on my Mac, and a few other “cloud” based storage solutions.
I have to admit that Dropbox has the cleanest integration into Mac OS X, with its neat menu bar icon. I also like the integration with Gmail, which I use for personal and work email, so I can easily send links to large files by browsing my Dropbox directories.
But all this comes at a cost.
Dropbox gives you 2GB free to start, expandable up to 18GB through referrals (500MB per referral, so 32 referrals to earn the extra 16GB). If you need more, it’s $100 per year for 100GB.
CloudApp no longer has a free option; they charge $5 per month for unlimited files per day, a 250MB file size limit, and the option to use your own domain.
Google recently changed their storage options. Long ago, I upgraded my Google Storage to 20GB for $5/year, which was a great deal at the time. At the most recent Google I/O conference, they announced that everyone gets 15GB to share across Gmail, Google+, and Drive. So now that same $5/year effectively buys me only 5GB of extra storage. Not such a great deal anymore.
So I began thinking about consolidating all the data I have spread across various cloud storage solutions and external hard drives at home. I have a 1TB external drive plugged into my Mac with about 400GB free, I have Verizon FiOS service at 50Mbps down/25Mbps up, and I have experience setting up servers.
Owncloud to the rescue. I’ll admit this solution isn’t as elegant as Dropbox, but I’m going to try to rebuild all the integration points I’ve come to rely on.
If you’ve had experience setting up a simple IIS or MAMP server, you’ll be able to navigate your way through setting up an instance of Owncloud. You could also set up an instance of Owncloud on Amazon AWS using a BitNami package for a turnkey solution, but you’ll end up paying for the AWS hosting.
I was able to download the Owncloud source code and set it up on my instance of MAMP in under 5 minutes. I also purchased a domain name and pointed it at my server.
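For the curious, the MAMP setup amounts to little more than unpacking Owncloud into the web root. This is a rough sketch, not the exact steps I took: the MAMP path assumes a stock install, and the download URL is an assumption, so grab the current tarball from owncloud.org if it has moved.

```shell
# Unpack Owncloud into MAMP's default document root (path assumed from a
# stock MAMP install; download URL is an assumption - verify on owncloud.org).
cd /Applications/MAMP/htdocs
curl -L -o owncloud.tar.bz2 https://download.owncloud.org/community/owncloud-latest.tar.bz2
tar xjf owncloud.tar.bz2
# Then browse to http://localhost:8888/owncloud to run the web installer.
```

From there the web installer walks you through creating the admin account and data directory.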
I’ll be posting a Part 2 to this post to highlight some of the challenges I ran into when setting up Owncloud to fit my personal needs.
In my previous post, I walked you through setting up WordPress on Amazon Web Services (AWS), using an EC2 micro instance with Ubuntu and NGINX with a separate RDS micro instance running MySQL.
Recall the AWS Architecture image I created that showed how we were going to setup the WordPress site?
Each zone consists of an EC2 instance, an RDS instance, and an S3 bucket. So let’s create an S3 bucket in the Oregon region to complete the package. Create a new bucket and call it whatever you want, but note that bucket names are globally unique, so it needs to be a name no one else has already used.
Now, there are multiple ways to utilize S3 with WordPress, but we’re going to use the W3 Total Cache plugin to handle several things at once. Before we can use the plugin, we have to make sure everything on the server is set up to support it.
Before we begin, though, let’s take a quick break and run a quick load test on the server. We can’t load test a blank WordPress site, so let’s get some test blog posts first. I used this website’s test posts and supplemented them with lots of images; you can download my test post file here. After you have imported the test posts and images, head over to http://loadimpact.com and run a test on the server. When the Load Impact test is finished, you can also run a test on http://blitz.io.
What do these tests tell us? The first one from Load Impact shows us that even with 50 concurrent users on the site, the load time of the site remains consistently at around 400ms. The Blitz IO test shows us the WordPress site can handle approximately 127k hits/day or about 1 user/sec.
Pingdom (http://pingdom.com) is also a very useful site. In addition to their alerts, they have a great waterfall tool that shows all the elements on a page and how long each one takes to load.
Results for the WordPress site straight out of the box aren’t that bad. We’ll run these tests again after we make some modifications to the site and see how they improve.
Step 1 – W3 Total Cache Plugin for WordPress
Log in to your WordPress site by going to http://YOUR_EC2_ELASTIC_IP/wp-admin and click on Plugins in the left navigation. Then click “Add New” at the upper left of the main screen, and in the plugin search box, type in W3 Total Cache. It should be the first search result; go ahead and install the plugin and activate it.
Now, you’ll notice a new “Performance” section at the bottom of the left-hand navigation. This is the W3 Total Cache settings tab, so click on it and let’s configure the plugin.
1) Page Cache – click on “enable” and select APC since we’ve enabled it on our servers.
2) Minify – click on “enable” and then for Minify mode, select Manual so we can specify the S3 bucket for a CDN. For the Minify Cache Method, select APC once again since our server is configured for it. For HTML minifier, leave it at default and choose default for JS and CSS.
3) Database Cache – click on “enable” and select APC.
4) Object Cache – same as above.
5) Browser Cache – same as above.
6) CDN – click on “enable” and select “Amazon Simple Storage Service (S3)”
7) Under Miscellaneous – leave everything at the defaults, but you might want to take advantage of the Google Page Speed dashboard widget. You’ll have to enable it on your Google account: click on the APIs Console link from the plugin page and you’ll be taken to your Google account; find the Google Page Speed API and enable it. Then copy the API key and paste it into the field.
When you hit save all settings, the page will refresh and you’ll see notices at the top.
The red one says that you must provide the Access Key, Secret Key, and Bucket for the CDN, so let’s go back to the AWS Console to get that information. In the AWS Console, click on your name in the upper right and select Security Credentials. Scroll down a little and you’ll see your Access Credentials.
Back on your WordPress blog, click on the CDN link under the Performance section of the navigation. Now fill in the Access Key ID, Secret Key, and Bucket Name. I’ve opted to always use HTTP to avoid SSL latency. Now click “Test S3 upload” and you should see a “Test Passed” message.
Click on Save all Settings and now let’s clean it up.
At the top of your W3 Total Cache page, you’ll see a bunch of notices. Let’s take a look at these.
1) Export the Media Library – this will move all the images and media files you uploaded to the new S3 bucket. Click on it and a new window pops up showing how many files are in your media library. I had 8.
2) WP-Includes – go ahead and click on that button to open a new pop-up window. I had 298 files to move to the S3 bucket.
3) Click on Theme files and export those. Click on Custom files, but you shouldn’t have any of those yet. After you are done, you can hide the message.
4) Deploy – Click on this to make all the changes live on your production site. All links to images and external files in your posts will be modified to reflect the S3 location.
Let’s do a sanity check by loading your website and viewing the properties of an image file.
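If you prefer the command line for this sanity check, a quick header request does the same job. The bucket name and file path below are placeholders of my own, so substitute a real file from your media library:

```shell
# Confirm an uploaded image is now served from S3 rather than the EC2 instance.
# Replace the bucket name and path with a real file from your media library.
curl -sI http://YOUR_BUCKET.s3.amazonaws.com/wp-content/uploads/sample.jpg | head -n 1
# An "HTTP/1.1 200 OK" status line means the CDN rewrite is working.
```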
Now, let’s re-run those 3 tests.
1) http://loadimpact.com – the updated report shows that the W3 Total Cache settings improved the response time of the site under a load of 50 concurrent users: it dropped from around 400ms after the initial install to around 100ms. A very good improvement for the effort put in.
2) http://blitz.io – the updated report from Blitz.io shows the site is a bit more responsive and is able to handle about 157k hits/day, up from 127k hits/day before installing and configuring W3 Total Cache.
3) http://pingdom.com – the updated waterfall report shows that the page load time went from 1.16s to 2.01s. This is due to the latency of the initial S3 requests for the image and media files. There is a trade-off here between raw page speed for a single visitor and resilience under load.
So by installing a simple WordPress plugin, we were able to optimize the site to handle more traffic and be a bit more resilient. Next up, let’s take a look at enhancing the WordPress dashboard to make monitoring the site easier.
I’ll be walking you through getting started with Amazon Web Services. Before we begin, I’m going to assume that you have some experience with setting up websites, even if they are simple static ones. In this particular example I’ll be setting up a WordPress site using services from AWS.
While there are many different ways you can host WordPress, I chose AWS because:
1) AWS allows me to start and stop servers based on my traffic needs. Scalability is a big selling point with AWS: they offer everything from micro instances that can support a few hundred concurrent visitors to large cluster environments for high-throughput applications. http://aws.amazon.com/ec2/instance-types/
2) The AWS suite of services complement each other and are easy to integrate into an environment. For instance, you can spin up multiple micro instances in different regions and put them behind a load balancer in a few seconds.
3) The free tier of services allows developers to experiment building out environments without having to pay setup fees for bulky equipment or services. http://aws.amazon.com/free/
4) More and more businesses are realizing the benefits of moving their data centers and infrastructure into the cloud. Long gone are the days when it took a data center a few days to build a traditional virtual server; in AWS you can spin up a new instance in a matter of seconds.
Step 1 – Sign up for an AWS account
Go to http://aws.amazon.com and sign up for an account. You’ll have to provide credit card details, but the example I’ll walk you through qualifies for their free services, so you shouldn’t get charged. I hold no responsibility if you do!
Step 2 – Overview of AWS Services
For the WordPress proof of concept (POC), we’ll be using the following AWS services that I have highlighted in the above image.
EC2 – The computer/server that will be used to host the WordPress files. You’ll have the choice of operating system (OS) and instance size. There are many different flavors of OS that you can use with AWS; some are standard images, and 3rd parties also provide images with software pre-installed for typical stacks.
S3 – Is Amazon’s simple storage service. Strictly speaking it is object storage rather than a full content distribution network (CDN) like Akamai, but it can serve static files in much the same way. One benefit of offloading static files is that it frees up the main server’s resources to handle requests. A true CDN like Akamai also replicates files across its network, so if you wanted to download a huge file from a site, you would download it from the server closest to you instead of a server on the other side of the country. We’ll be using S3 to serve images and static files for our WordPress site.
RDS – Is Amazon’s database service. Typically, databases are housed on a separate server because of the resources they require; in data-rich applications, they are housed on clusters of multiple servers. For the WordPress site, we’ll be using RDS to host our MySQL database.
Load Balancers – This Amazon service is bundled with EC2. For a typical WordPress site you won’t need a load balancer, but for high-traffic sites, load balancers route traffic so the user experience is unaffected. The best way to utilize one is to set up smaller instances behind it and let the load balancer spread traffic across them; this way, you pay less for smaller instances. It also helps avoid outages: if you need to keep your site up for advertising or other reasons, having 2 smaller instances behind a load balancer instead of 1 larger instance will be both cheaper and more resilient.
Step 3 – Launch EC2 Instance #1
Let’s launch the first server for the WordPress installation. Jump into your AWS console by going to http://console.aws.amazon.com and click on EC2. I just want to point out a few things on this screen. First, I’ve selected Oregon as the region in the upper right of the screen. When we setup the second instance, we’ll select Virginia.
Click on Launch Instance, and in the window that pops up, select Quick Launch Wizard and fill out the following fields:
Name of instance – this is a friendly name and can have spaces. I chose WordPress-Oregon.
Choose a key pair – this generates a file that you have to download to your computer. I chose wordpress.
Launch configuration – This is where you choose your operating system. I’m going to be building an Ubuntu server LTS (long term support) 64 bit that runs NGINX web server. Alternatively, you can choose the standard Amazon Linux AMI (Amazon Machine Image) and use Apache. I’ll have a comparison of both environments in another post.
The next window will allow you to edit the details of the server, but I’ll just use the defaults.
You’ll then get a confirmation window stating it could take a few minutes for the instance to be available. Click on “Instances” from the Navigation pane and you should see the instance you just created. Click on the check box to the left of it and the details will populate the bottom half of your screen.
Click on “Security Groups” under Network & Security and check “quicklaunch-1” to see the details. In the lower section of the screen, click on Inbound, so you can open up ports to allow for incoming connections to the server. Create the following inbound rules and remember to click on “Apply Rule Changes”.
Port Range: 20-21 (FTP)
Port Range: 80 (HTTP)
Port Range: 22 (SSH) should have already been there. Ports 20-21 will be used for FTP access (used by WordPress auto-installs) and port 80 is used by the web server to serve content to web visitors.
Let’s generate an Elastic IP and assign it to the instance. From the left Navigation, click on “Elastic IPs” and click on “Allocate New Address”. Now, click on the IP address and assign it to the instance we just created.
Step 4 – Connect to EC2 via SSH
SSH can be a little complicated, but I’m going to give you the basics so you can get started. If you want to learn more about it, Google it!
I’m using a Mac, so I can SSH by opening the Terminal app. If you are on a PC, you’ll want an SSH client such as PuTTY, since the Windows Command prompt doesn’t include one. Alternatively, you can download graphical clients that let you SSH into the server. On the Mac, I’m using Cyberduck (http://cyberduck.ch) because it has a minimal interface and lets me open an SSH terminal directly from the app, so I don’t have to remember the login.
Remember that Key Pair you downloaded when you were starting the instance? It should be in your download folder somewhere. I called mine “wordpress”, so the file is in my download folder and it is called “wordpress.pem”.
Open Terminal and type in “cd ~/Downloads” to go to the Downloads folder where the “wordpress.pem” file is located.
These two commands will get you logged into your Ubuntu instance and switched to the root account. As root, you can install new software and change the server’s configuration.
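The commands themselves didn’t survive this post’s formatting; they were presumably along these lines (the Elastic IP is a placeholder, and “ubuntu” is the default login user on the Ubuntu AMI):

```shell
# ssh refuses key files that are world-readable, so lock down permissions first
chmod 400 wordpress.pem
# Log in as the AMI's default "ubuntu" user (replace with your Elastic IP)
ssh -i wordpress.pem ubuntu@YOUR_ELASTIC_IP
# Once connected, switch to the root account
sudo su -
```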
Now let’s install a firewall to protect the server from unwanted incoming connections.
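The original firewall commands were lost from this post; on Ubuntu, a typical approach (and my assumption of what was used here) is UFW, opening only the same ports we allowed in the EC2 security group:

```shell
# Install UFW (Uncomplicated Firewall) and allow only SSH, FTP, and HTTP -
# the same ports opened in the EC2 security group earlier.
sudo apt-get install -y ufw
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 20:21/tcp # FTP, for WordPress auto-installs
sudo ufw allow 80/tcp    # HTTP, for the web server
sudo ufw enable          # turn the firewall on
```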
Let’s configure PHP to use APC (Alternative PHP Cache), because we’ll be able to take advantage of it with a WordPress plugin called W3 Total Cache. We’ll use the “nano” editor to open the /etc/php5/fpm/php.ini file and add the lines below at the bottom.
Now you’ll see an editor window with a row of command keys along the bottom. I used “Control+V” to page down to the end of the file, found the last entry, and typed in the new APC settings. Then I used “Control+X” to close the file, and when prompted to save, I said yes and overwrote the existing “php.ini” file. The lines to add at the bottom are:
apc.write_lock = 1
apc.slam_defense = 0
Next, let’s edit the “www.conf” file so php-fpm runs as the NGINX user. Open the “/etc/php5/fpm/pool.d/www.conf” file using nano.
Find the lines that say:
user = www-data
group = www-data
And replace them with:
user = nginx
group = nginx
Next, find the line that says:
listen = 127.0.0.1:9000
And replace it with this line:
listen = /dev/shm/php-fpm-www.sock
Add these 3 lines right after the “listen” parameter.
What we did was make a directory (“mkdir”) called “www”, give NGINX ownership of it, and change the read/write permissions.
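The snippet itself didn’t survive this post’s formatting. Based on the description above, it was presumably something like the following; treat the exact path as an assumption (it may have been the /var/www web root used below, or a socket directory under /dev/shm):

```shell
# Assumed reconstruction of the lost commands: create the "www" directory,
# hand ownership to the nginx user, and set read/write permissions.
mkdir /var/www
chown nginx:nginx /var/www
chmod 755 /var/www
```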
Now, let’s actually download WordPress and install it. First fetch the latest tarball with “wget http://wordpress.org/latest.tar.gz”, then run:
tar zxvf latest.tar.gz
mv wordpress/* /var/www/
chown -R nginx:nginx /var/www/
cp /var/www/wp-config-sample.php /var/www/wp-config.php
chown nginx:nginx /var/www/wp-config.php
Let’s generate salt keys using the WordPress API by going to http://api.wordpress.org/secret-key/1.1/salt/ and copying all the text to your clipboard; we’ll be editing the “/var/www/wp-config.php” file for WordPress and will need to paste in the new keys.
I found the first line to replace and used “Control+K” to cut out the old lines, then used “Command+V” to paste in the salt keys that were generated. You can “Control+X” and overwrite the file for now.
We should also be entering our DB information, but we haven’t spun up the Amazon RDS MySQL instance yet. So, let’s do that now.
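If you’d rather stay in the terminal on the server, the same endpoint can be fetched with curl and the output pasted straight into the file:

```shell
# Print a fresh set of WordPress authentication salts to the terminal,
# ready to paste into wp-config.php.
curl -s https://api.wordpress.org/secret-key/1.1/salt/
```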
Step 5 – Spin up an RDS instance and install MySQL
Let’s go back to the AWS Console, go into RDS, and Launch a DB Instance. We’re going to install MySQL Community Edition on a micro instance with Multi-AZ deployment set to No.
Within the instance, I’ve created a new database named “wordpress”.
Next, we’ll have to update the security on the RDS instance to allow incoming connections from our EC2 instance we created earlier.
From the left Navigation, click on “DB Security Groups”. From the drop-down under Connection Type, select “EC2 Security Group” and authorize the security group our EC2 instance uses (quicklaunch-1).
Now, that we’ve created the RDS instance with MySQL and the initial WordPress database, we can go back to the terminal window that is connected via SSH to the instance.
nano -w /var/www/wp-config.php
I updated the MySQL settings area of the file with the database name, user, password, and host.
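The concrete values didn’t survive this post’s formatting. The relevant block of wp-config.php ends up looking something like the following; the user, password, and endpoint hostname are placeholders for your own RDS settings (grab the real endpoint from the RDS console):

```php
/** MySQL settings in /var/www/wp-config.php */
define('DB_NAME', 'wordpress');                // the database created on RDS
define('DB_USER', 'your_master_username');     // placeholder
define('DB_PASSWORD', 'your_master_password'); // placeholder
// The RDS endpoint from the AWS console, e.g. xxxx.us-west-2.rds.amazonaws.com
define('DB_HOST', 'your-instance.us-west-2.rds.amazonaws.com');
```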
I initially set up this blog just to post about random technology, but I’ve decided to use it to write informational posts about simple proofs of concept I’ve built in the past that led to full projects and products with a life cycle.
I’ll be using screencasts to provide live demos and walkthroughs, so stay tuned!