If you have been following the post-RSA Conference buzz this year, you probably noticed a recurring theme: cloud security is still broken. Not because the technology is immature. Not because we lack tools. It is broken because of misconfiguration, and the numbers back that up. Study after study confirms that cloud misconfiguration is the single biggest cause of data breaches in cloud environments.
The frustrating part? Most of these misconfigurations are entirely preventable. A public S3 bucket here, a default password there, an IAM role that gives way too much access. These are not sophisticated attacks. They are unlocked doors.
The Misconfigurations That Keep Showing Up
Let's start with the usual suspects. If you run any workloads in AWS, Azure, or GCP, these are the misconfigurations that security teams find over and over again.
Public Storage Buckets
This one has been making headlines for years, and it still happens constantly. An S3 bucket, Azure Blob container, or GCS bucket gets created with public read access, either by accident or because a developer needed quick access during testing and never locked it down. The result? Sensitive data sitting on the open internet, waiting to be discovered by anyone running a simple enumeration script.
We have seen breaches where hundreds of millions of records were exposed this way. Customer data, internal documents, database backups, credentials files. The scale can be staggering, and the root cause is almost always the same: someone toggled a setting and forgot to toggle it back.
Overly Permissive IAM Roles
IAM is hard. Everyone knows it, and most teams take shortcuts. The most common shortcut is granting *:* permissions or attaching AdministratorAccess to roles that only need a handful of specific actions. This means that if any single service or user account using that role gets compromised, the attacker has the keys to everything.
One breach that made the rounds involved an attacker who gained initial access through a minor vulnerability in a web application. Under normal circumstances, the blast radius would have been small. But the application's IAM role had full admin permissions across the entire AWS account. The attacker pivoted from a simple web exploit to complete account takeover in minutes.
Default Credentials and Exposed Secrets
Default passwords on databases, admin panels, and management consoles are still shockingly common. But the more modern version of this problem is secrets leaking through environment variables. Teams store API keys, database connection strings, and service account credentials in .env files or hardcode them into container configurations. Those secrets end up in version control, in CI/CD logs, or in container images pushed to public registries.
One particularly painful pattern: a company rotates all its production credentials after a breach, but the old credentials are still sitting in a Git commit history that nobody cleaned up. The next attacker just digs through the history.
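One quick way to check for this is a grep over the full commit history. This is a rough sketch, not a replacement for a real secret scanner, and it assumes a Unix shell plus the well-known AKIA prefix that AWS access key IDs start with:

```shell
# Search every commit in every branch for strings shaped like AWS access
# key IDs (the "AKIA" prefix followed by 16 uppercase alphanumerics).
# Run from the root of the repository you want to audit.
git log --all -p | grep -oE 'AKIA[0-9A-Z]{16}' | sort -u
```

Anything this prints is in your history permanently until you rewrite it, so treat every hit as a credential that needs rotation, not just deletion.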
Security Groups and Network ACLs Wide Open
Allowing inbound traffic from 0.0.0.0/0 on ports like 22 (SSH), 3389 (RDP), or database ports is another classic. It happens most often in development environments that somehow make it to production, or in "temporary" rules that become permanent because nobody tracks them.
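Finding these rules takes one command. A sketch, assuming a configured AWS CLI; the JMESPath query filters for any security group with at least one inbound rule open to the entire internet:

```shell
# List security groups that have any inbound rule whose source range
# is 0.0.0.0/0, printing the group ID and name for follow-up review.
aws ec2 describe-security-groups \
  --query "SecurityGroups[?IpPermissions[?IpRanges[?CidrIp=='0.0.0.0/0']]].[GroupId,GroupName]" \
  --output table
```

Cross-reference the output against your approved exceptions list; anything not on it should be tightened or removed.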
How to Scan for Public-Facing Resources
The good news is that every major cloud provider gives you CLI tools to check for these problems. You do not need expensive third-party software to get started. Here are the commands you should be running regularly.
AWS: Find Public S3 Buckets
Use the AWS CLI to list all buckets and check their public access settings:
aws s3api list-buckets --query "Buckets[].Name" --output text
Then for each bucket:
aws s3api get-public-access-block --bucket YOUR_BUCKET_NAME
If the output shows any of the four block settings as false, that bucket may be publicly accessible. You can also use:
aws s3api get-bucket-acl --bucket YOUR_BUCKET_NAME
Look for grants to AllUsers or AuthenticatedUsers. Those are red flags.
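You can combine those two steps into a single loop. A sketch, assuming a configured AWS CLI with permission to read bucket settings:

```shell
# Loop over every bucket in the account and flag any bucket that either
# has no public access block configured at all, or has one of the four
# block settings disabled.
for bucket in $(aws s3api list-buckets --query "Buckets[].Name" --output text); do
  # get-public-access-block errors out when no configuration exists,
  # which is itself worth flagging.
  if ! conf=$(aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null); then
    echo "NO PUBLIC ACCESS BLOCK: $bucket"
  elif echo "$conf" | grep -q 'false'; then
    echo "PUBLIC ACCESS POSSIBLE: $bucket"
  fi
done
```

Run this from a read-only audit role; it makes no changes, only reports.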
Azure: Check for Public Blob Containers
List all storage accounts, then check container access levels:
az storage account list --query "[].name" -o tsv
az storage container list --account-name YOUR_ACCOUNT --query "[?properties.publicAccess!='none']"
Any container that returns a result has some level of public access enabled.
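The per-account check can be wrapped into a subscription-wide loop. A sketch, assuming a logged-in Azure CLI and an identity with the RBAC rights to list containers (hence --auth-mode login):

```shell
# For every storage account in the current subscription, print the names
# of any containers whose public access level is not 'none'.
for account in $(az storage account list --query "[].name" -o tsv); do
  public=$(az storage container list --account-name "$account" --auth-mode login \
    --query "[?properties.publicAccess!='none'].name" -o tsv)
  [ -n "$public" ] && echo "$account: $public"
done
```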
GCP: Find Public Cloud Storage Buckets
gsutil iam get gs://YOUR_BUCKET_NAME
Check for bindings that include allUsers or allAuthenticatedUsers. You can also run:
gcloud asset search-all-iam-policies --query="policy:allUsers" --scope=projects/YOUR_PROJECT
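To sweep every bucket in the current project at once, a sketch assuming an authenticated gsutil:

```shell
# gsutil ls with no arguments lists all buckets in the active project;
# check each bucket's IAM policy for grants to all users.
for bucket in $(gsutil ls); do
  if gsutil iam get "$bucket" | grep -qE 'allUsers|allAuthenticatedUsers'; then
    echo "PUBLIC: $bucket"
  fi
done
```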
These commands take a few minutes to run across your environment, and they will surface problems that may have been lurking for months.
The 10-Item Cloud Configuration Checklist
Print this out. Tape it to your monitor. Run through it monthly. These ten items cover the configurations that cause the vast majority of cloud breaches.
- Block public access on all storage buckets. Enable the account-level public access block in AWS. Set equivalent policies in Azure and GCP. There should be zero exceptions unless you have a documented, reviewed business reason.
- Enforce least-privilege IAM policies. No role should have *:* permissions. Use IAM Access Analyzer (AWS), Azure AD access reviews, or GCP IAM Recommender to identify and trim overly permissive roles.
- Enable encryption at rest for all data stores. This includes databases, object storage, EBS volumes, and backups. Use customer-managed keys where possible so you control rotation and revocation.
- Enable encryption in transit. Enforce TLS on all endpoints. Redirect HTTP to HTTPS. Disable older TLS versions (1.0 and 1.1).
- Turn on cloud-native logging. Enable CloudTrail (AWS), Activity Log (Azure), or Cloud Audit Logs (GCP). Make sure logs are written to a separate, tamper-resistant storage location.
- Restrict security group and firewall rules. No inbound rules should allow 0.0.0.0/0 on management ports (SSH, RDP) or database ports. Review all rules quarterly at minimum.
- Rotate all credentials and access keys. Set a 90-day maximum lifetime for access keys. Use temporary credentials (STS, managed identities) wherever possible instead of long-lived keys.
- Enable MFA on all human accounts. Every user who can log into your cloud console should have multi-factor authentication enabled. No exceptions for executives, developers, or "temporary" accounts.
- Audit network ACLs and VPC configurations. Check for default VPCs still in use, overly broad subnet configurations, and missing network segmentation between production and non-production environments.
- Scan for secrets in code and configs. Use tools like git-secrets, truffleHog, or your CI/CD platform's built-in secret scanning to catch credentials before they hit your repository.
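For that last item, git-secrets covers the common case with three commands. A sketch, assuming git-secrets is installed and you are in the repository you want to protect:

```shell
# Register the built-in AWS credential patterns, install the pre-commit
# hook so future commits are checked, and scan the existing history.
git secrets --register-aws
git secrets --install
git secrets --scan-history
```

The hook blocks new leaks at commit time; the history scan tells you what already needs rotating.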
Set Up Automated Alerts for Public Resource Creation
Checklists are great, but they only work when someone remembers to run them. The better approach is to get alerted automatically whenever someone creates a public-facing resource in your cloud environment.
AWS: CloudTrail + EventBridge
Create an EventBridge rule that watches for PutBucketAcl and PutBucketPolicy API calls in CloudTrail. Filter for calls that set public access. Route matching events to an SNS topic that sends email or Slack notifications to your security team.
You can also enable AWS Config rules like s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited. These evaluate your buckets continuously and flag violations in near real-time.
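The rule itself is two CLI calls. A sketch, assuming CloudTrail is already delivering management events; the SNS topic ARN is a placeholder you would replace with your own:

```shell
# Create an EventBridge rule matching S3 bucket ACL/policy changes
# recorded by CloudTrail.
aws events put-rule \
  --name s3-public-access-change \
  --event-pattern '{
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
      "eventSource": ["s3.amazonaws.com"],
      "eventName": ["PutBucketAcl", "PutBucketPolicy", "PutPublicAccessBlock"]
    }
  }'

# Route matching events to an SNS topic (placeholder ARN) that notifies
# the security team by email or Slack.
aws events put-targets \
  --rule s3-public-access-change \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:security-alerts"
```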
Azure: Activity Log + Alerts
In Azure Monitor, create an alert rule on the Activity Log that triggers when a storage container's public access level changes. You can also use Azure Policy to prevent public blob access entirely, which is even better than alerting on it after the fact.
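If you choose prevention over alerting, the storage-account property can be flipped directly. A sketch, assuming a logged-in Azure CLI with write access to the accounts:

```shell
# Disable public blob access at the account level for every storage
# account in the subscription. Containers under these accounts can no
# longer be made public regardless of their individual access settings.
for account in $(az storage account list --query "[].name" -o tsv); do
  az storage account update --name "$account" --allow-blob-public-access false
done
```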
GCP: Cloud Audit Logs + Cloud Functions
Create a log-based metric in Cloud Logging that watches for storage.setIamPermissions calls granting access to allUsers. Trigger a Cloud Function that sends a notification or, better yet, automatically reverts the change.
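Creating the log-based metric looks roughly like this. A sketch, assuming an authenticated gcloud; the filter fields are an assumption based on the Cloud Audit Logs format and may need tuning for your environment:

```shell
# Create a log-based metric counting bucket IAM changes that grant
# access to allUsers. Alerting policies can then fire on this metric.
gcloud logging metrics create public-bucket-iam-grants \
  --description="Bucket IAM changes granting access to allUsers" \
  --log-filter='resource.type="gcs_bucket" AND protoPayload.methodName="storage.setIamPermissions" AND protoPayload.serviceData.policyDelta.bindingDeltas.member="allUsers"'
```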
The goal is to make it impossible for a public resource to exist in your environment without someone knowing about it within minutes.
Your Monthly 30-Minute Audit Routine
You do not need a full-day workshop to stay on top of cloud security. Here is a 30-minute routine you can run on the first Monday of every month.
Minutes 1 to 5: Storage audit. Run the CLI commands above to check for public buckets across all your accounts. Fix anything that should not be public.
Minutes 6 to 10: IAM review. Pull your IAM Access Analyzer findings (or equivalent). Look for unused roles, unused permissions, and any new roles created in the last 30 days. Question anything with broad access.
Minutes 11 to 15: Security group review. Check for any rules allowing 0.0.0.0/0 inbound. Cross-reference with your approved exceptions list. Remove anything unauthorized.
Minutes 16 to 20: Logging and alerting check. Verify that CloudTrail, Activity Log, or Audit Logs are still enabled and writing to the correct destination. Check that your automated alerts fired at least one test event in the last 30 days (if they did not, your alerting pipeline might be broken).
Minutes 21 to 25: Secrets scan. Run a secrets scan against your primary repositories. Check CI/CD logs for any credential exposure. Verify that recently rotated credentials are actually rotated and the old ones revoked.
Minutes 26 to 30: Documentation. Log what you found, what you fixed, and any follow-up items. This takes five minutes and gives you an audit trail that proves due diligence if you ever need it.
Thirty minutes. Once a month. It will not catch everything, but it will catch the misconfigurations that cause the vast majority of cloud breaches.
RSA Conference Takeaway: The Basics Still Matter
Every year at RSA, the expo floor is packed with vendors selling the next generation of cloud security platforms. AI-powered posture management. Automated remediation engines. Real-time threat graphs. And some of those tools are genuinely valuable.
But the breaches that actually happen, the ones that make the news, the ones that cost millions in incident response and regulatory fines, are almost never caused by a sophisticated zero-day. They are caused by a storage bucket that should not have been public, an IAM role that should not have had admin access, or a credential that should not have been in a Git repo.
"The most expensive cloud security tools in the world will not save you if your S3 buckets are public and your IAM roles have admin access. Start with the basics."
Before you invest in the shiny new platform, make sure you have the fundamentals locked down. Run through the checklist. Set up the alerts. Do the monthly audit. Those three things will do more for your security posture than any tool you can buy.