Data Is Important – But Not Everything
Updated: Feb 16
What is one of the worst things that can happen to your company?
What you're thinking about is probably related to data. Am I right?
In dozens of conversations with CISOs and other security professionals we've often gotten a question similar to this: "Cool, but can you also identify what types of data we have in the cloud?"
Our answer is: "No. But data isn't the only thing you need to focus on. We tell you about all the potential ways `in` to your environment."
What do we mean by this?
Common Attack Vectors In The Cloud
In the cloud it's simple to "end up on the internet". One wrong or unintentional click and you expose a Virtual Machine / Database / Storage Account to the whole world.
Of course, it is important to make sure your customers' data is not exposed to people who should not have access to it. But focusing on a single issue keeps us blind to the other issues we might have in our environment, issues that other people might be able to find and use as a way in.
It is important to understand which attack vectors threat actors commonly use in the cloud.
At first glance you'd be forgiven for thinking that a production server needs more protection than a development server.
What if I told you that both servers are often accessible from employee workstations via the internal network? This means the development server is just as valid a path into your environment as the production server, and often an even easier one, because organisations choose not to focus as much on non-production environments and are more lax about non-production security settings.
If following a "DevOps" approach (can highly recommend The DevOps Handbook ) then it is important to also understand that non-production and production are really just different stages in the release cycle and should always look the same (sizing / scale might be different). So they should also from a security point of view be treated the same.
Especially in larger environments, where many teams have their own dedicated environments and where many security tools require system administrators or developers to know how to onboard their systems, overall coverage tends to be low, because users skip onboarding for all sorts of reasons.
An automated security service that discovers new systems is really the only way to get full visibility.
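The value of automated discovery can be sketched as a simple set difference: anything a cloud API reports that has not been onboarded to your security tooling is a blind spot. This is an illustrative sketch, not a real product API; the resource identifiers and function name are made up for the example.

```python
# Sketch: why automated discovery matters. Given the resources a cloud API
# reports and the resources already onboarded to a security tool, everything
# in the difference is unmonitored. Identifiers below are illustrative.

def unmonitored_resources(discovered, onboarded):
    """Return resource IDs that exist in the cloud but are not onboarded."""
    return sorted(set(discovered) - set(onboarded))

discovered = ["vm-prod-1", "vm-dev-7", "db-staging-2"]
onboarded = ["vm-prod-1"]
print(unmonitored_resources(discovered, onboarded))  # ['db-staging-2', 'vm-dev-7']
```

Run continuously against the provider's inventory APIs, a check like this surfaces new systems the moment they appear, instead of waiting for someone to remember to onboard them.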
Insecure by default
The cloud should be treated as insecure by default, not because it inherently is, but because it can so easily become insecure.
Cloud providers also want to make it easy for users to consume their services, and each provider strikes that balance differently for different services.
Let's take a look at one of the most common cloud services used by organisations, AWS S3.
Creating an S3 bucket via the AWS Console makes that bucket private: nobody can access it, and the console even warns the user that changing this setting can be a security issue. Good job, AWS. More likely than not, though, users will create these resources through automation, and if we check AWS's documentation on how to create an S3 bucket, the simple call shown there creates the bucket without those guardrails applied.
That's not great, and often overlooked.
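One way to catch this gap is to verify a bucket's Block Public Access settings after creation. The sketch below checks a configuration dict shaped like the `PublicAccessBlockConfiguration` that S3's `GetPublicAccessBlock` API returns; a bucket created with a bare `CreateBucket` call has no such configuration at all, which the check treats as a failure.

```python
# Sketch: is an S3 public-access-block configuration fully locked down?
# The dict shape mirrors PublicAccessBlockConfiguration from S3's
# GetPublicAccessBlock API; a freshly created bucket may have none at all.

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_public_access_blocked(config):
    """Return True only if every Block Public Access flag is enabled."""
    if config is None:  # no configuration at all, e.g. a bare CreateBucket call
        return False
    return all(config.get(flag, False) for flag in REQUIRED_FLAGS)

# A bucket created with a plain CreateBucket call:
print(is_public_access_blocked(None))  # False
# A bucket with all four safeguards enabled:
print(is_public_access_blocked({flag: True for flag in REQUIRED_FLAGS}))  # True
```

Wiring a check like this into the pipeline that creates buckets makes the insecure default visible immediately, rather than whenever someone next audits the account.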
It's not just AWS though, this is a common theme across all cloud providers where a compromise needs to be made between usability and security, especially for users that are just starting out.
One similar example on the Azure cloud is the creation of a Storage Account, explained here and here. This doesn't directly result in a publicly accessible Storage Account container, but certain properties default to less-than-ideal values: the resource is really only one additional step away from being made fully public, and the latest TLS version is not enforced.
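Those Azure defaults can also be checked programmatically. The sketch below inspects a Storage Account's ARM `properties` object (such as the one returned by `az storage account show`), using the `allowBlobPublicAccess` and `minimumTlsVersion` property names from the `Microsoft.Storage/storageAccounts` resource schema; the fallback default values assumed here reflect the permissive behaviour described above.

```python
# Sketch: flag risky defaults on an Azure Storage Account, given its ARM
# "properties" object. Property names follow the Microsoft.Storage/
# storageAccounts schema; the assumed defaults model the permissive ones.

def risky_storage_settings(properties):
    """Return a list of human-readable findings for the given account."""
    findings = []
    # When left permissive, the account is one step from a public container.
    if properties.get("allowBlobPublicAccess", True):
        findings.append("blob public access is not disabled")
    # The latest TLS version is not enforced unless set explicitly.
    if properties.get("minimumTlsVersion", "TLS1_0") != "TLS1_2":
        findings.append("minimum TLS version is below 1.2")
    return findings

print(risky_storage_settings({}))  # both findings fire on bare defaults
print(risky_storage_settings({"allowBlobPublicAccess": False,
                              "minimumTlsVersion": "TLS1_2"}))  # []
```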
These examples are not intended to bash the cloud providers, but to highlight the need for full security coverage of an organisation's cloud resources.
In today's cloud world it is so simple to make mistakes that these configuration toggles ("make something public", "turn off encryption", and so on) must be monitored in real time, and ideally a process should exist to revert or remediate the mistakes that humans make.
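The detect-and-revert idea can be sketched as a diff between a desired security baseline and a resource's current settings. The setting names and function below are illustrative, not any provider's API; a real implementation would feed the drift report into an automated remediation step.

```python
# Sketch: a minimal drift-detection step for security toggles. Given a
# desired baseline and a resource's current settings, report every toggle
# that drifted and the value to restore. Names here are illustrative.

def drifted_settings(baseline, current):
    """Return {setting: (current_value, desired_value)} for drifted toggles."""
    return {
        key: (current.get(key), desired)
        for key, desired in baseline.items()
        if current.get(key) != desired
    }

baseline = {"public_access": False, "encryption_enabled": True}
current = {"public_access": True, "encryption_enabled": True}
print(drifted_settings(baseline, current))  # {'public_access': (True, False)}
```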
Whether or not a system stores important data cannot be the only factor in assessing its security posture in the cloud. It is a lot more complicated than that, and we believe the question "what type of data was stored on the breached environment?" really only matters in a post-incident exercise.
So make sure you always have a full view of everything that is happening in all your cloud environments so that you can confidently say "I'm in control of my cloud security."