What is compute?

In cloud computing, the term “compute” describes concepts and objects related to software computation. It is a generic term for the processing power, memory, networking, storage, and other resources required to run any program successfully.

For example, applications that run machine learning algorithms or 3D graphics rendering functions require many gigabytes of RAM and multiple CPUs to run successfully. In this case, the CPUs, RAM, and graphics processing units (GPUs) required are called compute resources, and the applications are compute-intensive applications.

Let’s look at some compute FAQs to understand the term in the context of modern computing.

What are compute resources?

Compute resources are measurable quantities of compute power that can be requested, allocated, and consumed for computing activities. Some examples of compute resources include:

CPU

The central processing unit (CPU) is the brain of any computer. CPU allocation is commonly measured in units called millicores (thousandths of a CPU core). Application developers can specify how much CPU their application needs to run and process data.

Memory

Memory is measured in bytes. Applications can request the amount of memory they need to run efficiently.
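For illustration, container platforms such as Kubernetes express these requests as a resource specification. The sketch below is a hypothetical example of that structure written as a Python dictionary; the values follow the common convention of millicores for CPU and byte-based units for memory.

```python
# Illustrative only: a Kubernetes-style resource request expressed as a Python dict.
# "500m" means 500 millicores (half a CPU core); "256Mi" means 256 mebibytes of memory.
resource_request = {
    "resources": {
        "requests": {"cpu": "500m", "memory": "256Mi"},   # minimum guaranteed to the app
        "limits":   {"cpu": "1000m", "memory": "512Mi"},  # maximum the app may consume
    }
}

# A scheduler compares requests like these against the compute resources
# available on each machine before placing the workload.
print(resource_request["resources"]["requests"])
```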

If applications are running on a single physical device, they have limited access to the compute resources of that device. But if applications run on the cloud, they can simultaneously access more processing resources from many physical devices. Let’s take a closer look at this.

What is cloud computing?

Cloud computing is a generic term for anything that involves delivering computing resources over the internet. Users can access large amounts of computing power on demand. They can purchase it by the minute or the hour and use as much or as little as required for their computing activities.

The history of cloud computing

Traditionally, web applications were hosted on fixed physical servers. The computing power available to an application was limited to the server on which it ran. Website owners could purchase multiple servers or space within a single server, but they had to pay a fixed price irrespective of usage, and they had to manage and maintain their own server infrastructure.

Today, applications can use the computing resources of many different physical devices in a flexible manner. Cloud providers maintain vast pools of compute resources across servers in physical data centers, and enterprises can host applications that draw on these pools. The provider fully manages and optimizes the resources, giving users flexibility, scalability, and efficiency; users only have to maintain their applications, while the cloud provider handles compute infrastructure management.

Accounting software: an example of cloud computing

Let’s look at the example of accounting software. A few decades ago, businesses had to purchase and install software on their internal physical hardware. Software capabilities were limited to the underlying hardware. Over time, the system would run out of memory or require updates, and hardware failure could lose critical business data.

Today, accounting software developers can run software on the cloud. Users can access it in the same way they access any website, and they can purchase tiered plans according to their needs. The software applications use computing power from the cloud, and the software itself can scale as needed. There is also no fear of customers losing their data.

What are microservices?

Microservices, also known as microservices architecture, is an architectural style that structures a software application as a collection of small, independently deployable services.

Enterprise applications are typically built in three main parts:

  1. A client-side user interface (an app or HTML page that runs on an end-user device)
  2. A database (that stores the application data)
  3. A server-side application (that defines the functional and logical working of the operation)

When these three parts are built and deployed as a single unit, the result is a monolithic architecture, which is inflexible and hard to modify. Small changes in one part of the system require the whole monolith to be rebuilt and redeployed.

On the other hand, microservices architecture designs the application as a collection of functional services with decentralized management. A microservice attempts to address a single concern, such as a data search, logging function, or web service function. These microservices can be deployed independently, use their own compute resources, and communicate with each other.

For example, a news application could be composed of independent microservices for sports, crime, world news, and so on. Each microservice would manage its own retrieval and sorting of content, and each could be scaled up or down as required. During a major sporting event, the sports microservice would use additional processing power and then scale back down after the event ends.
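As a rough sketch of the idea (not tied to any particular framework), the hypothetical sports microservice below exposes a single HTTP endpoint using only the Python standard library; the headline data is a placeholder.

```python
# A minimal, illustrative "sports" microservice: one service, one concern.
# It could be deployed and scaled independently of the other services
# (crime, world news, ...) in the news application.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

HEADLINES = ["Cup final goes to extra time", "Marathon record broken"]  # placeholder data

class SportsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sports/headlines":
            body = json.dumps({"headlines": HEADLINES}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The other microservices would run as separate processes on their own
    # compute resources, typically reached through a shared API gateway.
    HTTPServer(("0.0.0.0", 8080), SportsHandler).serve_forever()
```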

What are virtual machines?

Virtual machines are the basic building blocks of compute in the cloud. A virtual machine is a digital version of a physical computer: software that can run applications and operating systems, store data, connect to networks, and perform other computing functions just as a physical machine does. Like physical machines, virtual machines may also require maintenance such as updates and system monitoring.

Multiple virtual machines can run on shared underlying physical hardware, and an application can draw on many virtual machines at once, giving it the computing power it needs to run successfully. Cloud computing gives users the ability to deploy and scale virtual machines on demand, offering guaranteed computational resources in a secure, isolated environment without the overhead of managing physical servers.

What is an Amazon EC2 instance?

In AWS Compute services, virtual machines are called instances. Amazon EC2 provides various instance types with different configurations of CPU, memory, storage, and networking resources, so users can tailor their compute resources to the needs of their application.

There are five types of instances:

General purpose instances

General purpose instances provide a balance of compute, memory, and networking resources and can be used for a variety of workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories.

Compute optimized instances

Compute optimized instances are used to run high-performance computing applications that require fast network performance, high availability, and high input/output (I/O) operations per second. Scientific and financial modeling and simulation, big data, enterprise data warehousing, and business intelligence are examples of this type of application.

Accelerated computing instances

Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs.

Memory optimized instances

Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. These instances are ideal for applications that require more memory and less CPU power, such as open-source databases and real-time big data analytics.

Storage optimized instances

Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
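In practice, the instance family is simply a parameter chosen when an instance is launched. The sketch below uses boto3, the AWS SDK for Python, and assumes AWS credentials are already configured; the AMI ID is a placeholder.

```python
# Illustrative sketch using boto3 (the AWS SDK for Python).
# Assumes AWS credentials are configured; the AMI ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single compute optimized instance. Swapping the InstanceType value
# (for example, to a memory or storage optimized type) is how the instance
# families described above are selected in practice.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c5.large",          # a compute optimized instance type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```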

What is a container?

Before software is released, it must be tested, packaged, and installed. Software deployment refers to the process of preparing an application to run on a computer system or device.

Docker is a tool developers use to deploy software. It provides a standard way to package an application’s code together with its dependencies into an image; a running instance of that image is called a container, and it behaves the same on any platform that runs the Docker Engine. Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers and lets you easily run applications on a managed cluster of Amazon EC2 instances. This ensures quick, reliable, and consistent deployments, regardless of the environment.

A hospital booking application: an example of Docker

For example, a hospital wants to build an appointment booking application. End users may access it on Android, iOS, Windows, or macOS devices, or via the hospital’s website. If the code were deployed separately for each platform, it would be challenging to maintain. Instead, Docker could be used to package the booking application into a single, universal container. This container can run anywhere, including on computing platforms like AWS.
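As a sketch of how such a container might then be run on Amazon ECS with boto3, assuming the cluster and task definition below (both hypothetical names) were created beforehand from the application’s Docker image:

```python
# Illustrative sketch: running a containerized application on Amazon ECS with boto3.
# The cluster and task definition names are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="hospital-booking-cluster",  # hypothetical ECS cluster
    taskDefinition="booking-app:1",      # hypothetical task definition built from the Docker image
    count=1,
    launchType="EC2",                    # run on the cluster's managed EC2 instances
)
print(response["tasks"][0]["taskArn"])
```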

What is serverless computing with AWS cloud?

Serverless computing refers to building and running applications without managing the underlying server infrastructure, which is provisioned and managed by the cloud provider. Serverless services, like AWS Lambda, come with automatic scaling, built-in high availability, and a pay-for-value billing model.

Serverless computing is a way to describe the services, practices, and strategies that enable software development companies to innovate and respond faster to change. Teams can release applications quickly, get feedback, and improve their software by eliminating operational overheads.

For example, a tech start-up creates an application to search and filter university courses. To launch, the company can go serverless and focus on refining its user experience and systems. By relying on fully managed infrastructure, it can invest in marketing instead.
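A minimal sketch of what the serverless code itself might look like: AWS Lambda invokes a Python handler function with an event payload and a context object, and the developer never provisions a server. The course data and query parameter here are placeholders.

```python
# A minimal AWS Lambda handler in Python. Lambda calls the function named in the
# handler setting (here, lambda_handler) with an event payload and a context object.
# The course list and the "q" query parameter are placeholders for this sketch.
import json

COURSES = ["Computer Science", "Economics", "Graphic Design"]  # placeholder data

def lambda_handler(event, context):
    # With an API Gateway proxy integration, query parameters arrive under
    # "queryStringParameters", e.g. ?q=eco
    query = (event.get("queryStringParameters") or {}).get("q", "").lower()
    matches = [course for course in COURSES if query in course.lower()]
    return {"statusCode": 200, "body": json.dumps({"courses": matches})}
```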

 

What is elastic load balancing of compute resources?

Load balancing is the process of evenly distributing computing resources and workload in a cloud computing environment. This is done to reduce lag and maintain processing time, even when the application is in high demand. Load balancers can intelligently distribute client requests across multiple application servers that are running in a cloud environment.

Elastic Load Balancing enables users to maximize application performance and reliability. It can automatically distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, AWS Lambda functions, and virtual servers. It can handle the varying load of application traffic, reduce cost, and efficiently scale the application up or down to match demand.

E-commerce: an example of elastic load balancing

For example, an online e-commerce store runs an application for sorting the best deals of the day. As a compute-intensive application, it uses cloud compute and load balancing to manage demand. It automatically draws on additional processing resources at weekends, at Christmas, and during other seasonal peaks when demand spikes, and scales compute back down on days when demand slows. Without this elasticity, the store would have to pay for peak capacity even on slow days, reducing profit margins.
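As a rough sketch of how such a setup might be wired together with boto3 (all IDs and names below are placeholders, and the subnets, VPC, and instances are assumed to exist already):

```python
# Illustrative sketch using boto3: an Application Load Balancer distributing
# traffic across EC2 instances running the deals application.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create the load balancer in existing subnets (placeholder IDs).
lb = elbv2.create_load_balancer(
    Name="deals-app-alb",
    Subnets=["subnet-0123abcd", "subnet-4567efgh"],
    Type="application",
)

# Create a target group in the application's VPC (placeholder ID).
tg = elbv2.create_target_group(
    Name="deals-app-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123abcd",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the EC2 instances that run the application; the load balancer
# spreads incoming requests across them.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)

# Listen for HTTP traffic and forward it to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```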

What are compute services?

Compute services are also known as Infrastructure as a Service (IaaS). Compute platforms, such as AWS Compute, supply virtual server instances, storage, and APIs that let users migrate workloads to virtual machines. Users have allocated compute power and can start, stop, access, and configure their compute resources as desired.

How to choose between different AWS Compute Services

Choosing the best AWS infrastructure depends on your application requirements, lifecycle, code size, demand, and computing needs. Take a look at these three examples:

  1. If you want to deploy a selection of on-demand instances offering a wide array of performance characteristics within your AWS environment, you could use Amazon Elastic Compute Cloud (Amazon EC2).
  2. If you want to run Docker-enabled applications packaged as containers across a cluster of EC2 instances, you could use Amazon Elastic Container Service (Amazon ECS).
  3. If you want to run your own code using only milliseconds of compute resource in response to event-driven triggers in a serverless environment, you could use AWS Lambda.

What are the benefits of AWS compute services?

AWS Compute services offer the broadest and deepest functionality for compute. Key benefits of using AWS Compute include:

Right compute for your workloads

Amazon EC2 (Amazon Elastic Compute Cloud) offers granular control for managing application infrastructure with a choice of processors, storage, and networking. Amazon Elastic Container Service (Amazon ECS) offers choice and flexibility for running containers.

Built-in security

AWS offers significantly more security, compliance, and governance services and key features than the next largest cloud provider. The AWS Nitro System has security built in at the chip level to continuously monitor, protect, and verify the instance hardware.

Cost optimization

With AWS Compute, you pay only for the instances or resources you need, for as long as you use them, without long-term contracts or complex licensing.

Flexibility

AWS provides multiple ways to build, deploy, and get applications to market quickly. For example, Amazon Lightsail is an easy-to-use service that offers you everything you need to build an application or website.

To determine which AWS Compute service is best suited to grow your business, don’t hesitate to get in touch with our team of experts or sign up for a free AWS account today.
