Zscaler Cloud Platform

Machine Identity in the Cloud - Bypassing All Security Controls

Modern public cloud environments provide great flexibility, agility, and benefits to companies of all sizes. In addition to operational benefits and cost reductions, the public cloud offers great security benefits, if managed properly. If not, the scale and flexibility of the public cloud can cause a severe security risk. Management of cloud identities, both human and non-human, is a key area of focus for organizations looking to have their cloud deployments act as a security asset rather than a liability.

In this blog, we’ll review the concept of machine identity - what it is, why it is important, and the security risks and benefits associated with it. 

 

Machine identity - why do we need it?

In multi-cloud environments, machine identities, such as virtual machines, serverless functions, roles, containers, applications, and scripts, play a pivotal role in driving digital transformation. They help enterprises scale up workloads, run scripts, patch holes, and complete repetitive tasks effectively, increasing productivity at the speed of agile DevOps with lower cost and fewer errors. With these capabilities, machine identities are often tasked with advanced responsibilities, making decisions on behalf of human identities as part of autonomous and automated processes. 

The recent increase in machine identities requires new ways of managing risk. The average enterprise today has more than 1,000 applications in use, often supporting tens or hundreds of thousands of machine identities—each with varying access requirements that change constantly based on business needs. 

This is a lot to keep track of for a fast-moving enterprise; pair it with numerous human identities and a complex multi-cloud environment, and the security challenge is considerable. Traditional methods of tracking, managing, and protecting identities no longer work for machine identities. 

According to Gartner, "Managing machine identities is becoming a critical security capability" 

Source: Smarter With Gartner, “Top 8 Security and Risk Trends We’re Watching”, 15 November 2021.

According to Gartner, “It is impossible to keep pace with this change, and therefore manual methods for determining least-privilege access are neither feasible nor scalable. To address this adequately, organizations need a more identity-centric view of their cloud infrastructure entitlements. Furthermore, as organizations begin to understand appropriate access, the ability to efficiently remove unneeded entitlements and adjust access policies is essential.” 

Source: Gartner, “Managing Privileged Access in Cloud Infrastructure”, Paul Mezzera, Refreshed 7 December 2021, Published 9 June 2020. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

It is essential for enterprises to recognize where and how machine identities are used in multi-cloud environments and place necessary systems and processes to manage them properly. Failure to do so increases attack surface, exposes critical resources to security and compliance risk, and jeopardizes business operations.

Enterprises using multiple cloud platforms sometimes rely on the cloud-native security tools provided by each cloud service provider to manage and control human and machine identities, access, and permissions across infrastructure, applications, and data. But native tools alone cannot provide a single unified view of access or support a global set of cloud access policies. As a result, enterprises may struggle with challenges specific to machine identities, such as fine-grained visibility, control, and consistent enforcement of least-privilege policies across multi-cloud environments. 

Why is it critical to securely manage machine identities in the public cloud? Watch the demo video below:

 

 

Consider a simple web application hosted in the public cloud (e.g. AWS). 

It is likely to have a web interface hosted on a web server (e.g. Nginx installed on a virtual machine/EC2 instance) that must communicate with a database (e.g. DynamoDB). 

When the web server is trying to access DynamoDB, the request must include valid AWS credentials. One way to accomplish this is to hard code the AWS credentials in the application code, environment variable, text file, etc. 

While this is easy to do during initial application deployment, it creates an operational challenge: someone has to take care of credential rotation across every instance of the application server. 
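In practice, this anti-pattern often looks like a credentials file shipped alongside the application. A hypothetical example is shown below (the key values are AWS's official documentation placeholders, not real credentials):

```
# .env deployed with the application (anti-pattern)
# These long-term keys never rotate unless someone remembers to do it
# on every running instance.
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFIMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_DEFAULT_REGION=us-east-1
```

Anyone who obtains a copy of this file, from a code repository, a backup, or a compromised instance, holds valid AWS credentials until they are manually revoked.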

It also contradicts the AWS Preventative Security Best Practices, which state:

You should not store AWS credentials directly in the application or EC2 instance. These are long-term credentials that are not automatically rotated, and therefore could have significant business impact if they are compromised.

AWS also provides a recommendation in the same document:

An IAM role enables you to obtain temporary access keys that can be used to access AWS services and resources.

Cloud platforms allow us to associate an identity with different entities, like virtual machines, cloud functions (e.g. AWS Lambda), and other workloads. Returning to our web application example, instead of hard coding the API credentials, we can simply specify that our web server has access to specific DynamoDB tables.
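Concretely, the permissions attached to the web server's IAM role might look like the following policy sketch (the table name, region, and account ID are hypothetical placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/WebAppTable"
    }
  ]
}
```

The role grants read access to one specific table and nothing else, keeping the identity aligned with least privilege.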

AWS will then automatically handle the required authentication, allowing our developers to focus on delivering business value, instead of handling infrastructure issues.

 

What is the catch?

As described above, leveraging machine identities in a public cloud environment can greatly improve security posture and minimize the risks associated with long-term credentials. 

The catch is that one should be extremely prudent when creating and allocating IAM permissions to non-human entities, as deviating from the least-privilege principle can cause severe damage. After all, IAM permissions can bypass most, if not all, traditional security controls.

Let us consider an example of a simple AWS account implementing several security best practices: 

  1. Different Virtual Private Clouds (VPCs) for production and R&D sandboxing. 
  2. Confidential data stored in S3 buckets which explicitly deny public access.
  3. Production workloads that are allowed to communicate with the S3 buckets only via VPC endpoint.

The account setup is depicted below:

 

The main idea behind the setup illustrated above is that while we allow our developers freedom to provision any workloads in the R&D Test VPC, we protect our confidential data stored in the S3 buckets by preventing any access to it from outside of the production VPC.

Let us review the AWS setup from the AWS console: 

Review results:

 

  1. All public access is explicitly denied:
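Under the hood, the console setting shown in the screenshot corresponds roughly to the bucket's public access block configuration, with all four settings enabled (shown here in the shape used by the S3 `PublicAccessBlockConfiguration` API type):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```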

 

 

  2. S3 bucket policy allows read access only from a specific VPC endpoint:
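A bucket policy of this shape would implement that restriction (the VPC endpoint ID below is a hypothetical placeholder). Note that an Allow statement scoped to a VPC endpoint grants access through that endpoint, but it does not by itself override permissions granted directly to an IAM identity:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadViaVpcEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::financeconfidential/*",
      "Condition": {
        "StringEquals": { "aws:sourceVpce": "vpce-0a1b2c3d" }
      }
    }
  ]
}
```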

 


 

Security assessment with AWS Security Hub 

As security professionals, we cannot rely on manual verification of our setup, so we use cloud provider native tools like AWS Security Hub to make sure that our S3 buckets—and the data contained in them—are secure.

Let us check if there are any security findings related to one of our sensitive S3 buckets:

 

We can see that there are no high or critical severity issues, so we can assume that our data is reasonably safe. 
 

How poorly-configured machine identities bypass security controls

Poorly configured and weakly secured machine identities can introduce critical risks. 

In our example (refer to the diagram above), a developer deployed a popular server image for web application hosting, the LAMP (Linux, Apache, MySQL & PHP) stack, in the R&D sandboxing environment. 

As there are no strict security requirements for the sandboxing environment, the server contained a PHP vulnerability, which allowed an attacker to inject custom PHP code into the server. This code would allow a remote attacker to execute shell commands by wrapping them in a standard HTTP GET request. 
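The original flaw is in PHP, but the class of bug is easy to sketch. Below is a minimal, self-contained Python simulation of the same pattern (the URL, host name, and `cmd` parameter name are hypothetical, and this is an illustration of the vulnerability class, not the actual exploited code): user input from a GET request flows straight into a shell.

```python
import subprocess
from urllib.parse import urlparse, parse_qs

def handle_request(url: str) -> str:
    """Toy request handler that naively executes the 'cmd' query parameter.

    This mirrors the injection flaw described above: attacker-controlled
    input is passed to a shell with no validation whatsoever.
    """
    params = parse_qs(urlparse(url).query)
    cmd = params.get("cmd", [""])[0]
    # VULNERABLE: user input executed directly as a shell command
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

# An attacker wraps an arbitrary shell command in an ordinary GET request:
output = handle_request("http://sandbox-host/index.php?cmd=echo pwned")
```

Any command the attacker supplies runs with the privileges of the web server process, and therefore with whatever cloud permissions the server's machine identity carries.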

Considering the security controls outlined above, the impact of such an exploit should be limited to the sandboxing environment, which is strictly isolated from the rest of our cloud estate. 

We can start by verifying that the exploit works by running an ‘ls’ command to list the directory contents: 

Now we can try and see what S3 buckets the server has access to:

Note the financeconfidential bucket highlighted above. Let’s try to access its content:

As you can see, by exploiting a vulnerable server in an isolated sandboxing environment we managed to easily obtain access to an S3 bucket containing confidential information, despite implementing several best practices (different VPCs, VPC endpoint, blocking public access to S3 & bucket policy) and performing a security audit using AWS Security Hub.

How did it happen? What went wrong?

Further analysis identified the culprit - an overly permissive IAM role was associated with the EC2 instance hosting the test web server:
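The actual role is shown in the console screenshot, but an overly permissive policy of this kind typically contains a statement like the following (illustrative, not the exact policy from the demo):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

Because identity-based permissions like these are evaluated independently of network-level controls, the VPC isolation and the bucket policy's endpoint condition did nothing to stop the request.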

This role allocation allowed the EC2 instance to bypass all access restrictions and exposed confidential data.

Conclusion

Cloud provider native security solutions (e.g. AWS Security Hub) provide great value by identifying basic misconfigurations. Unfortunately, they lack two critical components - context (i.e. what is important vs. what is less important) and identity awareness.

In our next blog post, we’ll discuss how the Zscaler cloud protection offering can help secure your cloud environment from the risk of excessive permissions that lead to data exposure. 