In the past you had to have a specific piece of equipment for each individual task. In the IT sector, however, this is rapidly changing. I can still remember my surprise when, about 15 years ago, I discovered that the small version of my WatchGuard firewall had exactly the same external dimensions as the large version I eventually needed as the company expanded. The only difference between them, it turned out, was a hexadecimal digit and the license key.

My immediate reaction was to feel cheated, but then it dawned on me that this was simply the company's business model. It was probably cheaper to manufacture a single device, with the same chips, for the entire series and then downgrade it in software so that five different models could be marketed.

Back then it was also unthinkable that I would run anything other than Apple software on my Apple laptop. Nowadays I quite happily run various Linux systems and even Windows, virtual or otherwise. I can even run Windows programs alongside OS X programs.

New dividing lines

The dividing lines no longer follow the hardware. This has become quite normal. Most of the latest upgrades for our smartphones bring new features with them. Even with car engines it is common knowledge that a little piece of software can conjure up a few extra horsepower, and even extra functionality. I do find it odd, though, that the car industry doesn't have a business model for selling after-sales upgrades.

This is now the norm in the IT world, and it started with virtualization. A VM no longer depends on the underlying hardware, only on its hypervisor (which is software). The same VM can easily be moved to another piece of hardware at any time.

For years now I've used a different definition of virtualization: virtualization is the technology with which we sever the link between the application and the hardware, by means of software.

Nowadays VMware has gone much further. By my definition it's not just the VMs that are virtualized, but also the networking, storage, switching and so on.

Which is actually no bad thing. After all, a firewall or a router is really just a piece of hardware on which software runs. The functionality has been provided by software for some time; the difference today is that the software has become independent of the underlying hardware.

Long live Moore!

I believe this is a direct result of Moore's Law, which states that computing power doubles approximately every two years. Naturally the law invites a lot of discussion: it cannot be applied evenly to everything, and people argue about whether the doubling happens every 18 or every 24 months. What is clear, though, is that computing power continues to grow without costing more.
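
To put the 18-versus-24-month discussion into perspective, here is a back-of-the-envelope sketch (plain Python, purely illustrative numbers) of how much difference the doubling period makes over time:

```python
# Back-of-the-envelope illustration of the "18 vs 24 months" debate:
# relative computing power after a number of years, starting from 1.0.

def moores_law_growth(years: float, doubling_months: float) -> float:
    """Relative computing power after `years`, assuming one doubling per period."""
    return 2 ** (years * 12 / doubling_months)

for years in (2, 5, 10):
    fast = moores_law_growth(years, 18)   # optimistic reading: doubling every 18 months
    slow = moores_law_growth(years, 24)   # conservative reading: doubling every 24 months
    print(f"after {years:2d} years: x{slow:6.1f} (24 mo) vs x{fast:6.1f} (18 mo)")
```

Whichever period you pick, the curve is exponential, which is exactly why the argument over the precise figure matters less than the trend itself.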

I've programmed PLCs with just 4 KB of memory, and when I see code my reflex is still to ask whether it could have been done with less. But why? Does a gigabyte more or less really matter today? It's only logical, then, that we keep expecting more and more from software.

And so we arrive at a point where hardware limitations can be bypassed with software.

Nimble Storage is an example

Where storage systems are concerned, it is still quite normal to meet growing demand for IOPS with hardware: more spindles mean greater throughput, and higher IOPS are bought with faster hardware such as SAS drives and SSDs.
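
As a rough illustration of that hardware-first approach, the sketch below estimates how many drives you would need to reach a target IOPS figure. The per-drive numbers are ballpark assumptions for random IO, not vendor specifications, and RAID write penalties are ignored:

```python
import math

# Ballpark random-IOPS figures per drive type (assumptions, not vendor specs).
IOPS_PER_DRIVE = {
    "7.2k NL-SAS": 80,
    "10k SAS": 140,
    "15k SAS": 180,
    "SSD": 5000,
}

def spindles_needed(target_iops: int, drive_type: str) -> int:
    """Number of drives of this type needed to reach the target IOPS."""
    return math.ceil(target_iops / IOPS_PER_DRIVE[drive_type])

target = 20_000  # e.g. a busy virtualization cluster
for drive in IOPS_PER_DRIVE:
    print(f"{drive:>11}: {spindles_needed(target, drive):4d} drives for {target} IOPS")
```

Scaling like this works, but every extra step in IOPS means buying, racking and powering more hardware.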

Following Moore's Law, you would expect storage either to hit its hardware limits or at least to become cheaper through the growing use of intelligent software. That software in turn requires more CPU and memory, but so what? The price of those has dropped dramatically.

I believe Nimble has put a lot of thought into this, in particular into how data flows move from one location to another. Starting from a clean slate, they came up with an entirely new file system that uses software to shift work from the storage hardware to the CPU.

Example of software intelligence

Allow me to give an example. You will find more examples from Nimble themselves.
What slows down hard disks the most? Correct: moving the read/write heads. Any IO benchmark shows completely different figures for sequential workloads versus random ones. But what if sufficiently large buffers and intelligent software ensure that the heads are used as little as possible? A lot of time can be saved by classifying the data blocks intelligently.
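
As a toy model of that principle (my own simplified sketch, not Nimble's actual file system), the snippet below counts head seeks for the same set of random writes, once written in place and once buffered and flushed to a contiguous, log-style region:

```python
import random

def seeks(physical_positions):
    """Count head seeks: every jump to a non-consecutive physical block."""
    count = 0
    prev = None
    for pos in physical_positions:
        if prev is not None and pos != prev + 1:
            count += 1
        prev = pos
    return count

random.seed(42)
# 10,000 writes to random logical block addresses on a 1,000,000-block disk.
logical_writes = [random.randrange(1_000_000) for _ in range(10_000)]

# In-place layout: the head has to chase every logical address.
in_place = seeks(logical_writes)

# Buffered, log-style layout: writes are collected in memory and flushed to one
# contiguous region, so the physical positions are simply consecutive. The
# logical-to-physical mapping lives in an index, which is exactly the work
# that moves from the disk heads to CPU and memory.
log_start = 2_000_000
log_structured = seeks(range(log_start, log_start + len(logical_writes)))

print(f"seeks with in-place writes:     {in_place}")
print(f"seeks with buffered log writes: {log_structured}")
```

The exact numbers are irrelevant; the point is that almost all head movement disappears once the software decides where the blocks land.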

At Nucleus we're always open to new technology, provided it has already proved itself. Nimble's slideware was interesting, and on paper the choices all seemed logical to us. The next step? Putting the thing in the rack. In practice the new storage system proved that it can deliver many more IOPS for the same money.

Another interesting aspect is the scalability of the product. Adding more IOPS is a matter of minutes, not of replacing an entire storage system. A real godsend for hosting providers like Nucleus.

We also see a lot of extra functionality, both now and in upcoming software releases, and our preference has become much clearer since the beginning of this year: from now on we'll be building our cloud systems with Nimble as our central storage.

The entire cloud is changing

This example shows that there is still a lot of change ahead in our sector. There will always be new opportunities, and as an infrastructure provider it's our job to identify where improvements can be made.

We’re looking ahead to some really exciting times over the next few years.
