What is cloud computing? Everything you need to know about the cloud explained

Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.

Rather than owning their own computing infrastructure or data centres, companies can rent access to anything from applications to storage from a cloud service provider.

One benefit of using cloud-computing services is that firms can avoid the upfront cost and complexity of owning and maintaining their own IT infrastructure, and instead simply pay for what they use, when they use it.

In turn, providers of cloud-computing services can benefit from significant economies of scale by delivering the same services to a wide range of customers.

Cloud-computing services cover a vast range of options now, from the basics of storage, networking and processing power, through to natural language processing and artificial intelligence as well as standard office applications. Pretty much any service that doesn’t require you to be physically close to the computer hardware that you are using can now be delivered via the cloud – even quantum computing.

Cloud computing underpins a vast number of services. That includes consumer services like Gmail or the cloud backup of the photos on your smartphone, through to the services that allow large enterprises to host all their data and run all of their applications in the cloud. For example, Netflix relies on cloud-computing services to run its video-streaming service and its other business systems, too.

Cloud computing is becoming the default option for many apps: software vendors are increasingly offering their applications as services over the internet rather than standalone products as they try to switch to a subscription model. However, there are potential downsides to cloud computing, in that it can also introduce new costs and new risks for companies using it.

A fundamental concept behind cloud computing is that the location of the service, and many of the details such as the hardware or operating system on which it is running, are largely irrelevant to the user. It’s with this in mind that the metaphor of the cloud was borrowed from old telecoms network schematics, in which the public telephone network (and later the internet) was often represented as a cloud to denote that the location didn’t matter – it was just a cloud of stuff. This is an over-simplification of course; for many customers, location of their services and data remains a key issue.

Cloud computing as a term has been around since the early 2000s, but the concept of computing as a service has been around for much, much longer – as far back as the 1960s, when computer bureaus would allow companies to rent time on a mainframe, rather than have to buy one themselves.

These ‘time-sharing’ services were largely overtaken by the rise of the PC, which made owning a computer much more affordable, and then in turn by the rise of corporate data centres where companies would store vast amounts of data.

But the concept of renting access to computing power has resurfaced again and again – in the application service providers, utility computing, and grid computing of the late 1990s and early 2000s. This was followed by cloud computing, which really took hold with the emergence of software as a service and hyperscale cloud-computing providers such as Amazon Web Services.

Building the infrastructure to support cloud computing now accounts for a significant chunk of all IT spending, while spending on traditional, in-house IT slides as computing workloads continue to move to the cloud, whether that is public cloud services offered by vendors or private clouds built by enterprises themselves.

Indeed, it’s increasingly clear that when it comes to enterprise computing platforms, like it or not, the cloud has won.

Tech analyst Gartner predicts that as much as half of spending across application software, infrastructure software, business process services and system infrastructure markets will have shifted to the cloud by 2025, up from 41% in 2022. It estimates that almost two-thirds of spending on application software will be via cloud computing, up from 57.7% in 2022.

Image: Gartner

That’s a shift that only gained momentum in 2020 and 2021 as businesses accelerated their digital transformation plans during the pandemic. The lockdowns throughout the pandemic showed companies how important it was to be able to access their computing infrastructure, applications and data from wherever their staff were working – and not just from an office.

Gartner said that demand for integration capabilities, agile work processes and composable architecture will drive the continued shift to the cloud.

The scale of cloud spending continues to rise. For the full year 2021, tech analyst IDC expects cloud infrastructure spending to have grown 8.3% compared to 2020 to $71.8 billion, while non-cloud infrastructure is expected to grow just 1.9% to $58.4 billion. Long term, the analyst expects spending on compute and storage cloud infrastructure to see a compound annual growth rate of 12.4% over the 2020-2025 period, reaching $118.8 billion in 2025, and it will account for 67.0% of total compute and storage infrastructure spend. Spending on non-cloud infrastructure will be relatively flat in comparison and reach $58.6 billion in 2025.

All predictions around cloud-computing spending are pointing in the same direction, even if the details are slightly different. The momentum they are describing is the same: tech analyst Canalys reports that worldwide cloud infrastructure services expenditure topped $50 billion in a quarter for the first time in Q4 2021. For the full year, it puts cloud infrastructure services spending at $191.7 billion, up 35%.

Image: Canalys

Canalys argues that there is already a new growth opportunity for cloud on the horizon, in the form of augmented and virtual reality and the metaverse. “This will be a significant driver for both cloud services spend and infrastructure deployment over the next decade. In many ways, the metaverse will resemble the internet today, with enhanced capabilities and an amplified compute consumption rate,” the analyst said.

Cloud computing can be broken down into a number of different constituent elements, focusing on different parts of the technology stack and different use cases. Let’s take a look at some of the best known in a bit more detail.

Infrastructure as a Service (IaaS) refers to the fundamental building blocks of computing that can be rented: physical or virtual servers, storage and networking. This is attractive to companies that want to build applications from the very ground up and want to control nearly all the elements themselves, but it does require firms to have the technical skills to be able to orchestrate services at that level. 

Platform as a Service (PaaS) is the next layer up – as well as the underlying storage, networking, and virtual servers, this layer also includes the tools and software that developers need to build applications on top, which could include middleware, database management, operating systems, and development tools.

Software as a Service (SaaS) is the delivery of applications as a service, probably the version of cloud computing that most people are used to on a day-to-day basis. The underlying hardware and operating system is irrelevant to the end user, who will access the service via a web browser or app; it is often bought on a per-seat or per-user basis.
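That per-seat model makes SaaS costs easy to reason about: the bill scales with headcount rather than with hardware. A minimal sketch, with entirely hypothetical prices:

```python
# Toy illustration of per-seat SaaS pricing. All figures are invented
# for illustration, not real vendor prices.
def annual_saas_cost(seats: int, price_per_seat_per_month: float) -> float:
    """Annual subscription cost for a per-seat SaaS licence."""
    return seats * price_per_seat_per_month * 12

# A hypothetical 50-person firm paying $12.50 per user per month:
print(annual_saas_cost(50, 12.50))  # 7500.0
```

Adding or removing staff changes the bill directly, which is precisely the opex behaviour described above.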

SaaS is the largest chunk of cloud spending simply because the variety of applications delivered via SaaS is huge, from CRM such as Salesforce, through to Microsoft’s Office 365. And while the whole market is growing at a furious rate, it’s the IaaS and PaaS segments that have consistently grown at much faster rates, according to analyst IDC: “This highlights the increasing reliance of enterprises on a cloud foundation built on cloud infrastructure, software-defined data, compute and governance solutions as a Service, and cloud-native platforms for application deployment for enterprise IT internal applications.” IDC predicts that IaaS and PaaS will continue growing at a higher rate than the overall cloud market “as resilience, flexibility, and agility guide IT platform decisions”.

Image: IDC

While the big cloud vendors would be very happy to provide all the computing needs of their enterprise customers, increasingly businesses are looking to spread the load across a number of suppliers. All of this has led to the rise of multi-cloud. Part of this approach is to avoid being locked in to just one vendor (which can lead to the sort of high costs and inflexibility that the cloud is often claimed to avoid), and part of it is to find the best mix of technologies across the industry.

That means being able to connect and integrate cloud services from multiple vendors is going to be a new and increasing challenge for business. Problems here include skills shortages (a lack of workers with expertise across multiple clouds) and workflow differences between cloud environments. Customers will also want to manage all their different cloud infrastructure from one place, make it easy to build applications and services and then move them, and ensure that security tools can work across multiple clouds – none of which is especially easy right now.

The exact benefits will vary according to the type of cloud service being used but, fundamentally, using cloud services means companies not having to buy or maintain their own computing infrastructure.

No more buying servers, updating applications or operating systems, or decommissioning and disposing of hardware or software when it is out of date, as it is all taken care of by the supplier. For commodity applications, such as email, it can make sense to switch to a cloud provider, rather than rely on in-house skills. A company that specializes in running and securing these services is likely to have better skills and more experienced staff than a small business could afford to hire, so cloud services may be able to deliver a more secure and efficient service to end users.

Using cloud services means companies can move faster on projects and test out concepts without lengthy procurement and big upfront costs, because firms only pay for the resources they consume. This concept of business agility is often mentioned by cloud advocates as a key benefit. The ability to spin up new services without the time and effort associated with traditional IT procurement should mean that it is easier to get going with new applications faster. And if a new application turns out to be wildly popular, the elastic nature of the cloud means it is easier to scale it up fast.

For a company with an application that has big peaks in usage, such as one that is only used at a particular time of the week or year, it might make financial sense to have it hosted in the cloud, rather than have dedicated hardware and software lying idle for much of the time. Moving to a cloud-hosted application for services like email or CRM could remove a burden on internal IT staff, and if such applications don't generate much competitive advantage, there will be little other impact. Moving to a services model also moves spending from capital expenditure (capex) to operational expenditure (opex), which may be useful for some companies.

Cloud computing is not necessarily cheaper than other forms of computing, just as renting is not always cheaper than buying in the long term. If an application has a regular and predictable requirement for computing services it may be more economical to provide that service in-house.
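That rent-versus-buy trade-off can be sketched as a simple break-even comparison. All the prices below are made up for illustration:

```python
# Hypothetical break-even comparison: rented cloud capacity vs owned
# hardware. Every number here is invented, not a real vendor price.

CLOUD_RATE_PER_HOUR = 0.50   # on-demand cost of one server-equivalent
OWNED_ANNUAL_COST = 3000.0   # purchase amortisation + power + maintenance

def cloud_annual_cost(hours_used_per_year: float) -> float:
    """With cloud, you pay only for the hours actually consumed."""
    return hours_used_per_year * CLOUD_RATE_PER_HOUR

# A peaky workload that only runs 1,000 hours a year is far cheaper rented:
print(cloud_annual_cost(1_000))  # 500.0
# A steady 24/7 workload (8,760 hours) costs more than owning:
print(cloud_annual_cost(8_760))  # 4380.0
```

The peaky workload pays a fraction of the owned-hardware cost, while the predictable round-the-clock workload crosses the break-even line, which is the point the paragraph above is making.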

Some companies may be reluctant to host sensitive data in a service that is also used by rivals. Moving to a SaaS application may also mean you are using the same applications as a rival, which might make it hard to create any competitive advantage if that application is core to your business.

While it may be easy to start using a new cloud application, migrating existing data or apps to the cloud might be much more complicated and expensive. And it seems there is now something of a shortage in cloud skills, with staff with DevOps and multi-cloud monitoring and management knowledge in particularly short supply.

In one report, a significant proportion of experienced cloud users said they thought upfront migration costs ultimately outweigh the long-term savings created by IaaS.

And of course, you can only access your applications if you have an internet connection.

Cloud computing tends to shift spending from capex to opex, as companies buy computing as a service rather than in the form of physical servers. This may allow companies to avoid large increases in IT spending which would traditionally be seen with new projects; using the cloud to make room in the budget might be easier than going to the CFO and looking for more money.

Of course, this doesn’t mean that cloud computing is always or necessarily cheaper than keeping applications in-house; for applications with a predictable and stable demand for computing power, it might be cheaper (from a processing power point of view at least) to keep them in-house.

To build a business case for moving systems to the cloud, you first need to understand what your existing infrastructure actually costs. There’s a lot to factor in: obvious things like the cost of running data centres, extras such as leased lines, and the cost of physical hardware – servers, details of specifications like CPUs, cores and RAM, plus the cost of storage. You’ll also need to calculate the cost of applications, whether you plan to dump them, re-host them in the cloud unchanged, completely rebuild them for the cloud, or buy an entirely new SaaS package. Each of these options will have different cost implications. The cloud business case also needs to include people costs (often second only to the infrastructure costs) and more nebulous concepts like the benefit of being able to provide new services faster. Any cloud business case should also factor in the potential downsides, including the risk of being locked into one vendor for your tech infrastructure (see multi-cloud, above).
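The shape of such a business case can be sketched as a simple cost model. Every figure below is a placeholder to be replaced with your own audited numbers; the point is the structure, not the values:

```python
# Sketch of a cloud business case: compare in-house vs cloud totals over
# a planning horizon. All figures are hypothetical placeholders.

YEARS = 3

in_house = {
    "data_centre_running": 40_000,  # per year: power, cooling, space
    "leased_lines": 6_000,          # per year
    "hardware_amortised": 25_000,   # per year: servers and storage
    "people": 60_000,               # per year, often second only to infra
}

cloud = {
    "service_fees": 70_000,         # per year: IaaS/PaaS/SaaS subscriptions
    "migration_one_off": 50_000,    # rewriting/re-hosting apps, paid once
    "people": 35_000,               # per year, assuming a smaller ops burden
}

in_house_total = YEARS * sum(in_house.values())
cloud_total = cloud["migration_one_off"] + YEARS * (
    cloud["service_fees"] + cloud["people"]
)

print(in_house_total)  # 393000
print(cloud_total)     # 365000
```

Note how the one-off migration cost is front-loaded: with these made-up numbers the cloud option only pulls ahead over the full three-year horizon, which is why underestimating migration costs can sink the whole case.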

Analysts argue that as the cloud now underpins most new technological disruptions in everything from mobile banking to healthcare, usage is only going to grow. It’s hard to see many new technology projects being delivered that don’t harness the cloud in some way. Gartner says that more than 85% of organizations will embrace a cloud-first principle by 2025 and will not be able to fully execute on their digital strategies without it. The analyst says new workloads deployed in a cloud-native environment will be pervasive, not just popular, and anything non-cloud will be considered legacy. By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021.

And if that sounds unrealistic, it may be that figures on adoption of cloud depend on who you talk to inside an organisation. Not all cloud spending will be driven centrally by the CIO: cloud services are relatively easy to sign up for, so business managers can start using them, and pay out of their own budget, without needing to inform the IT department. This can enable businesses to move faster, but also can create security risks if the use of apps is not managed.

Adoption will also vary by application: cloud-based email is much easier to adopt than a new finance system, for example. And for systems such as supply chain management, that are working efficiently as they are, there will be less short-term pressure to do a potentially costly and risky shift to the cloud.

Many companies remain concerned about the security of cloud services, although breaches of security are rare. How secure you consider cloud computing to be will largely depend on how secure your existing systems are. In-house systems managed by a team with many other things to worry about are likely to be more leaky than systems monitored by a cloud provider’s engineers dedicated to protecting that infrastructure.

However, concerns do remain about security, especially for companies moving their data between many cloud services, which has led to growth in cloud security tools, which monitor data moving to and from the cloud and between cloud platforms. These tools can identify fraudulent use of data in the cloud, unauthorised downloads, and malware. There is a financial and performance impact, however: these tools can reduce the return on investment of the cloud by 5% to 10%, and impact performance by 5% to 15%. The country of origin of cloud services is also worrying some organisations (see ‘Is geography irrelevant when it comes to cloud computing?’ below).

Public cloud is the classic cloud-computing model, where users can access a large pool of computing power over the internet (whether that is IaaS, PaaS, or SaaS). One of the significant benefits here is the ability to rapidly scale a service. The cloud-computing suppliers have vast amounts of computing power, which they share out between a large number of customers – the ‘multi-tenant’ architecture. Their huge scale means they have enough spare capacity that they can easily cope if any particular customer needs more resources, which is why it is often used for less-sensitive applications that demand a varying amount of resources.
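The elasticity described above can be illustrated with a toy autoscaler: because the provider’s multi-tenant pool is so large, one tenant scaling out is trivial. The capacity figure below is invented purely for illustration:

```python
# Toy autoscaler illustrating public-cloud elasticity: size the number of
# instances to absorb the current request rate. Numbers are invented.

REQUESTS_PER_INSTANCE = 100  # hypothetical capacity of one instance

def instances_needed(request_rate: float) -> int:
    """Smallest number of instances that can absorb the current load."""
    # Ceiling division, with a floor of one always-on instance.
    return max(1, -(-int(request_rate) // REQUESTS_PER_INSTANCE))

# Quiet period: one instance is enough, and that's all you pay for.
print(instances_needed(40))   # 1
# A traffic spike: the pool scales out tenfold, drawing on the provider's
# spare capacity, and the tenant pays only for the extra instance-hours.
print(instances_needed(950))  # 10
```

A private data centre would have to own capacity for the spike permanently; the public cloud’s shared spare capacity is what makes this pay-per-use scaling economical.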

Image: IDC

Private cloud allows organizations to benefit from some of the advantages of public cloud – but without the concerns about relinquishing control over data and services, because it is tucked away behind the corporate firewall. Companies can control exactly where their data is being held and can build the infrastructure in a way they want – largely for IaaS or PaaS projects – to give developers access to a pool of computing power that scales on-demand without putting security at risk. However, that additional security comes at a cost, as few companies will have the scale of AWS, Microsoft or Google, which means they will not be able to create the same economies of scale. Still, for companies that require additional security, private cloud might be a useful stepping stone, helping them to understand cloud services or rebuild internal applications for the cloud, before shifting them into the public cloud.

Hybrid cloud is perhaps where everyone is in reality: a bit of this, a bit of that. Some data in the public cloud, some projects in private cloud, multiple vendors and different levels of cloud usage. 

For startups that plan to run all their systems in the cloud, getting started is pretty simple. But for the majority of companies, it is not so simple: with existing applications and data, they need to work out which systems are best left running as they are, and which to start moving to cloud infrastructure. This is a potentially risky and expensive move, and migrating to the cloud could cost companies more if they underestimate the scale of such projects.

A survey of 500 businesses that were early cloud adopters found that the need to rewrite applications to optimise them for the cloud was one of the biggest costs, especially if the apps were complex or customised. A third of those surveyed cited high fees for passing data between systems as a challenge in moving their mission-critical applications. The skills required for migration are both difficult and expensive to find – and even when organisations could find the right people, they risked them being stolen away by cloud-computing vendors with deep pockets. 

Beyond this, the majority also remained worried about the performance of critical apps, and one in three cited this as a reason for not moving some critical applications.

Actually, it turns out that geography is far from irrelevant: geopolitics is forcing significant changes on cloud-computing users and vendors. Firstly, there is the issue of latency: if the application is coming from a data centre on the other side of the planet, or on the other side of a congested network, then you might find it sluggish compared to a local connection.

Secondly, there is the issue of data sovereignty. Many companies, particularly in Europe, have to worry about where their data is being processed and stored. European companies are worried that, for example, if their customer data is being stored in data centres in the US (or owned by US companies), it could be accessed by US law enforcement. As a result, the big cloud vendors have been building out a regional data centre network so that organizations can keep their data in their own region.

Some have gone further, effectively detaching some of those data centres from their main business to make it much harder for US authorities – and others – to demand access to the customer data stored there. The customer data in the data centres is under the control of an independent company, which acts as a “data trustee”, and the US parent companies cannot access data at the sites without the permission of customers or the data trustee. Expect to see cloud vendors opening more data centres around the world to cater to customers with requirements to keep data in specific locations.

Cloud security is another issue; the UK government’s cyber security agency has warned that government agencies need to consider the country of origin when it comes to adding cloud services into their supply chains. While it was warning about antivirus software in particular, the issue is the same for other types of services too.

Cloud-computing services are operated from giant datacenters around the world. AWS divides this up by ‘regions’ and ‘availability zones’. Each AWS region is a separate geographic area, like EU (London) or US West (Oregon), which AWS then further subdivides into what it calls availability zones (AZs). An AZ is composed of one or more datacenters that are far enough apart that in theory a single disaster won’t take both offline, but close enough together for business continuity applications that require rapid failover. Each AZ has multiple internet connections and power connections to multiple grids: AWS has over 80 AZs.
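The point of spreading replicas across availability zones can be shown with a toy failure-domain check. The zone names below are illustrative placeholders, not real AWS identifiers:

```python
# Toy model of availability zones as failure domains: a deployment survives
# the loss of any single zone only if its replicas span at least two zones.
# Zone names are made up for illustration.

def survives_single_zone_failure(replica_zones: list[str]) -> bool:
    """True if no single-zone outage can take down every replica."""
    return len(set(replica_zones)) >= 2

# All replicas in one zone: one disaster kills the whole service.
print(survives_single_zone_failure(["zone-a", "zone-a"]))            # False
# Replicas spread across zones in the same region: rapid failover possible.
print(survives_single_zone_failure(["zone-a", "zone-b", "zone-c"]))  # True
```

This is why the zones are engineered to be far enough apart to fail independently, yet close enough that replication between them is fast.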

Google uses a similar model, dividing its cloud-computing resources into regions that are then subdivided into zones, which include one or more datacenters from which customers can run their services. It currently has over eight zones, and Google recommends customers deploy applications across multiple zones and regions to help protect against unexpected failures.

Microsoft Azure divides its resources slightly differently. It offers regions that it describes as a “set of datacentres deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network”. It also offers ‘geographies’, typically containing two or more regions, that can be used by customers with specific data-residency and compliance needs “to keep their data and apps close”. It also offers availability zones made up of one or more data centres equipped with independent power, cooling and networking.

Those data centres are also sucking up a huge amount of power: for example, Microsoft struck a deal with GE to buy all of the output from its new 37-megawatt wind farm in Ireland for the next 15 years in order to power its cloud data centres. Ireland said it now expects data centres to account for 15% of total energy demand by 2026, up from less than 2% back in 2015.

When it comes to IaaS and PaaS, there are really only a few giant cloud providers. Leading the way is Amazon Web Services, and then the following pack of Microsoft’s Azure, Google, and IBM. According to data from Synergy Research, Amazon, Microsoft and Google continue to attract well over half of worldwide cloud spending, with Q3 market shares of 33%, 20% and 10% respectively. And with growth rates that are higher than the overall market, their share of worldwide revenues continues to grow. However, that still leaves plenty of revenue for the chasing pack of companies – about $17 billion. “Clearly there are challenges with the big three companies lurking in the background, so the name of the game is not competing with them head on,” said the analyst.

Image: Synergy Research

The big three cloud companies all have their own strengths. AWS is the most established player and was behind Amazon’s ability to support huge seasonal swings in demand from consumers. Being first out to market with cloud services and pushing hard to gain market share has made it the market leader, and it continues to innovate. Microsoft’s Azure has become an absolutely core part of Microsoft’s strategy, and the company has the enterprise history and products to support businesses as they switch to the cloud. Google Cloud is the smallest of the big three players, but clearly has the might of the advertising-to-Android giant behind it.

Beyond the big three there are others, such as Alibaba Cloud, IBM, Dell and Hewlett Packard Enterprise, that all want to be part of the enterprise cloud project. And of course, from giants like Salesforce down to tiny startups, pretty much every software company is a SaaS company now.  

There are and will continue to be cloud outages. Those outages might happen at a local level because your internet is disrupted either by physical means (a digger cuts your broadband) or because of cyberattacks. But the big vendors have outages too, and because we are all increasingly reliant on their services, when the cloud stops, work stops. Few companies have backup systems to turn to in this situation. So long as cloud vendors keep outages to a minimum, then users will probably consider that using the cloud is more reliable than home-grown apps. But if outages become widespread, that opinion might change.

Cloud computing is reaching the point where it is likely to account for more of enterprise tech spending than the traditional forms of delivering applications and services in-house that have been around for decades. However, use of the cloud is only likely to climb as organisations get more comfortable with the idea of their data being somewhere other than a server in the basement. And now cloud-computing vendors are increasingly pushing cloud computing as an agent of digital transformation instead of focusing simply on cost. Moving to the cloud can help companies rethink business processes and accelerate business change, goes the argument, by helping to break down data and organisational silos. Some companies that need to boost momentum around their digital transformation programmes might find this argument appealing; others may find enthusiasm for the cloud waning as the costs of making the switch add up.


