Saturday, 14 May 2011
The Cloud and you
Two things I can say for sure:
The cloud will fail.
The cloud will get better.
How much either of these two things means to you depends on how willingly, and how completely, you embrace the cloud.
Obligatory history lesson starts here:
In 1979 I built my first computer. I soldered the chip-carriers (2, I think) to the tiny PCB, and I soldered all the discretes in their rightful places and I attached power and it had life. A little while later it died (faulty character generator, if you must know).
From then on I had various computers (PET, TRS-80, AMSTRAD 464/664/PCW9512, CoCo, Amiga, Atari, Jupiter Ace, Sinclair Spectrum, IBM XT, IBM AT, a multitude of clones and home-brews, and lately Apple Macs).
All have failed in some way or another.
Lessons learnt from the above history? Technology will fail you (usually when you need it most) and technology improves if it lasts long enough.
Obligatory history lesson ends here.
Therefore, the cloud will fail, but it will improve.
If you are going to use the cloud, decide the level of failure you can tolerate, then use the cloud up to that level, and no more.
One example I know of:
A small business was having server problems - capacity and hardware were below par. The owner, a fairly tech-savvy guy, crunched his numbers and came up with a solution using Amazon. Unfortunately, like a lot of small businesses, his risk analysis was pretty much non-existent, and Disaster Recovery and Business Continuity were words heard once in a management seminar. But hey, it's Amazon - what could go wrong? Well, Google "Amazon Cloud outage" and you will have a good idea. His business is still going, but some of his customers use it less these days, and some are still in "negotiations" over goods supplied late.
Amazon are doing a lot to make sure this doesn't happen again, and so is the small business owner.
Between them (with lots of hard work), they should be able to put this incident to rest. Out of it will come an improved cloud service, and a chastened but wiser small business owner.
Monday, 2 May 2011
Why I use Open Source Software
Don't get me wrong - I use and advocate the use and support of closed source software any time I feel it is appropriate to do so. But it's not that often these days.
But I don't think the following scenario would have been possible using closed source equivalents, and I am damn sure that the cost of using them would have blown my budget out of the water!
I run a network. It has over 400 users, many of whom are mobile, work abroad for extended periods of time, and work all the hours God sends. Planned downtime is a rarity. Unplanned downtime is happening more frequently, but due to outside problems (power outages, internet congestion etc.) rather than internal problems - although we have our fair share of those too!
I use Open Source software wherever possible and I do so because it is generally a better "fit" for the network tasks I have than some proprietary software. And I can usually bend it to fit what I want - I can't do that with closed source.
So when I get an OS solution that works, I generally leave it alone. Oh, I apply security patches, but I rarely update anything that's working unless I need the new feature(s) or they come with a security update.
That's why you can find installations of Apache 1.3 still working on intranet machines, why I still have working Slackware 11 installations and why some un-maintained programs are still doing the business on the network - they work and they are on internal machines with no security implications.
So when a power outage along with a faulty UPS takes out a machine that has been working steadily for the last 5 years as a dhcp server, a nat box, a wireless sign-on web page, a transparent proxy and a router for several private IP ranges, I take the opportunity to upgrade the hardware and software with thanks. When it happens on the Friday of a long weekend ( Friday through to Tuesday ), I am even more thankful for the opportunity to work on it uninterrupted.
Here is the setup:
Hardware: 4-disk rack-mount 1U box with dual Athlon processors and 2 gigabytes of RAM (a bit light these days, but it should be enough), with only 2 disks installed.
Software: Slackware64 13.1, standard full install. Main packages are Squid, Apache, dhcpd and dnsmasq, plus some custom start-up scripts for adding addresses to the ethernet cards and starting iptables with the nat table entries and port redirects for the transparent proxy.
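For the curious, those nat table entries and port redirects boil down to two rules. This is only a sketch in iptables-restore format - the interface names (eth0 = WAN, eth1 = LAN), the 192.168.0.0/16 range and the proxy port (3128) are placeholders of mine, not the actual production values:

```
# Sketch of the nat table, iptables-restore format. Interface names,
# address range and proxy port are illustrative assumptions only.
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# Redirect outbound web traffic from the LAN to the local transparent proxy
-A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
# NAT everything from the private ranges out through the WAN interface
-A POSTROUTING -o eth0 -s 192.168.0.0/16 -j MASQUERADE
COMMIT
```

A start-up script can load this with `iptables-restore` at boot, or issue the equivalent `iptables -t nat -A ...` commands one by one.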
The process went something like this:
Install Slackware. ( 30 Minutes )
Get dnsmasq working as a DNS server only.
Get dhcpd (installed version) working. (15 Minutes )
Get Apache in default mode working then configure for my defaults. ( 15 Minutes )
Get Squid. Get the SlackBuild script for Squid. Compile Squid. Install Squid. ( 45 Minutes )
Read Squid documentation (BIG package, lots of changes since I last used Squid in anger!) (4 Hours )
Implement necessary changes to the Squid configuration, test, and repeat. ( 12 Hours, including internet searches, reading blogs, wikis etc. )
( Transparent proxying was a hack in Squid 2 that has been promoted to "built-in" in Squid 3 - but judging by the blog pages and wikis, it is problematic in Squid 3... )
Curse Squid (5 Minutes)
Get a copy of Squid 2, try to compile in 64 bits on target box. (2 hours, failed )
Curse Slackware ( 30 Seconds )
Find, install, configure and test an alternative to Squid for transparent proxying (TinyProxy). ( 1 Hour )
Install, test, debug and eventually modify the PHP pages for the wireless sign-on page. ( 3 Hours )
Test all functions from various areas of the building ( 4 Hours )
Total time taken: ~ 28 hours on the software, spread over 2 days.
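Incidentally, getting dnsmasq to act as a DNS server only (step two above) takes almost no configuration - if you give it no dhcp-range, its DHCP server simply stays off and dhcpd can handle the leases. A sketch, with the upstream server and interface name as placeholders of mine rather than the real values:

```
# /etc/dnsmasq.conf - DNS only. No dhcp-range lines, so the built-in
# DHCP server never starts and dhcpd handles leases instead.
domain-needed        # never forward plain hostnames upstream
bogus-priv           # never forward reverse lookups for private ranges
no-resolv            # ignore /etc/resolv.conf; use the servers below
server=8.8.8.8       # upstream DNS forwarder - placeholder address
interface=eth1       # listen on the LAN side only - assumed name
```

That's the whole file; everything else can stay at its default.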
All the software was available, free and easily downloadable - no feature-crippled demos, no limits on the number of connections/users/CPUs, nobody upselling, nobody bombarding me with phone calls/emails about stuff I don't want, don't need, and am quite capable of finding for myself if or when I do, and no expiry date where they get a chance to do it all again in 12 months' time.
And that is why I will put up with the occasional failure (looking at you, Squid*) in the Open Source model - they don't market this stuff, they just make it useful!
(* By the way, I am quite happy for Squid users to prove me wrong - it is a BIG package, with over 170 options, so there is every chance that I screwed up and not Squid - but TinyProxy went in, I did a minimal config, and it just worked...)
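And "minimal config" really does mean minimal - little more than a port and an access rule, using the directives from the stock tinyproxy.conf. The addresses and port below are placeholders, not my actual values, and note that TinyProxy's transparent mode has to be compiled in with --enable-transparent:

```
# tinyproxy.conf - minimal sketch; addresses and port are placeholders.
# (Transparent mode requires TinyProxy built with --enable-transparent.)
User nobody
Group nogroup
Port 8888                  # the port the iptables REDIRECT rule targets
Timeout 600
Allow 192.168.0.0/16       # only the internal ranges may use the proxy
MaxClients 100
```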