Last Updated: January 10, 2026
Every application has secrets. Database passwords, API keys, encryption keys, OAuth tokens, SSH credentials. Mishandling them can turn a minor oversight into a catastrophic breach.
Your code might be bulletproof, but if an attacker gets your secrets, none of that matters.
In this chapter, we will cover what secrets are and why they differ from ordinary configuration, the anti-patterns behind most leaks, how secrets management practices have evolved, how modern secrets managers work, the patterns applications use to retrieve secrets, dynamic secrets, the major tools available today, and the best practices to adopt.
A secret is any piece of sensitive information that grants access to protected resources or enables cryptographic operations. Unlike regular configuration data, secrets have real consequences when exposed.
Consider the difference: if someone discovers your application runs on port 8080, nothing bad happens. If they discover your database password, they can download your entire user table, modify records, or delete everything.
That asymmetry is what makes secrets fundamentally different from other configuration.
Common types of secrets include:
| Type | Examples | Risk if Exposed |
|---|---|---|
| Credentials | Database passwords, service account passwords | Direct access to data stores |
| API Keys | Stripe keys, AWS access keys, third-party service tokens | Unauthorized API calls, financial fraud |
| Encryption Keys | AES keys, TLS private keys | Data decryption, man-in-the-middle attacks |
| Certificates | SSL/TLS certificates, code signing certs | Impersonation, malicious code distribution |
| Tokens | OAuth tokens, JWTs, session tokens | Account takeover, privilege escalation |
| SSH Keys | Private keys for server access | Full server compromise |
| Connection Strings | Database URIs with embedded credentials | Database access |
The table above shows just how varied secrets can be, and each type requires careful handling. An encryption key might protect data at rest, while an API key guards access to a third-party service. Different secrets, same fundamental problem: if they leak, you lose control.
Managing secrets sounds straightforward. Keep them safe, right? But in practice, several factors make this surprisingly difficult.
A typical microservices architecture might have dozens of services, each needing credentials to talk to databases, message queues, caches, and external APIs. Each service runs in multiple environments (development, staging, production). Each environment might have multiple instances.
Multiply it out: 20 services × 5 secrets each × 3 environments = 300 secrets to manage. And that is a small application.
A larger organization might have thousands. Keeping track of what exists, where it is stored, who has access, and when it was last rotated becomes a significant operational challenge.
Secrets need to be accessed by both humans and machines, and each access pattern creates its own risks. Developers need secrets on their laptops, CI/CD pipelines need them to build and deploy, and applications need them at runtime.
Each access point is a potential leak. A developer's laptop gets stolen. A CI/CD log accidentally prints an environment variable. An application crash dump includes the database connection string. The more places a secret exists, the more opportunities for it to escape.
Secrets are not static. They need to be created, distributed, rotated, revoked, and audited, and each phase has its own challenges.
Doing this manually does not scale. With hundreds of secrets and frequent rotations, human processes break down. Someone forgets to update a service after rotation, and now production is down at 3 AM.
This is why secrets management requires dedicated tooling and thoughtful processes. The problem is too large and too consequential to handle ad hoc.
Before discussing solutions, let us examine the anti-patterns. These are mistakes that show up repeatedly in security breaches. Understanding what goes wrong helps you avoid the same traps.
This is the cardinal sin of secrets management. A developer writes:
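For example (a hypothetical snippet; every value here is a placeholder):

```python
# Anti-pattern: credentials embedded directly in source code.
# All values are placeholders for illustration.
DB_PASSWORD = "hunter2"
STRIPE_API_KEY = "sk_live_placeholder"

# The password is now baked into every clone of the repository.
DATABASE_URL = f"postgres://app_user:{DB_PASSWORD}@db.internal:5432/myapp"
```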
The code gets committed to Git. Even if you delete it later, it lives forever in the commit history. Git repositories get cloned, backed up, and shared.
One developer's laptop gets stolen, and now your production database password is in the hands of an attacker. A contractor leaves the company with a copy of the repo, and they still have every secret that was ever committed.
Security researchers regularly scan GitHub for exposed credentials. In 2023, GitGuardian detected over 10 million hardcoded secrets in public repositories.
These are not obscure test projects. They include credentials for production databases, cloud provider accounts, and payment processors.
Environment variables are often presented as the solution to hardcoded secrets:
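A typical pattern, sketched in Python (the variable name is illustrative):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required configuration value from the environment, failing fast if absent."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"required environment variable {name} is not set")
    return value

# At startup (assuming the deploy system exported DB_PASSWORD):
# db_password = require_env("DB_PASSWORD")
```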
Environment variables are better than hardcoding, but they have serious limitations that many developers do not understand:
- **Visibility to other processes**: On Linux, a process's environment can be read by other processes running as the same user (or as root) via /proc. A vulnerability in one application can expose secrets from another.
- **Leakage into logs**: Crash reporters, debuggers, and startup scripts routinely dump the full environment, so secrets end up in log aggregators and crash dumps.
- **Inheritance**: Child processes inherit the parent's environment by default, handing secrets to code you may not control.
- **No rotation**: Changing a value requires restarting or redeploying the process.

Environment variables are fine for non-sensitive configuration like feature flags or service URLs. For actual secrets, you need something more robust.
**.env Files**

Developers often store secrets in `.env` files for local development:
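A typical `.env` file looks something like this (placeholder values):

```
# .env — loaded into the environment at startup, never meant to be committed
DATABASE_URL=postgres://app_user:supersecret@localhost:5432/myapp
STRIPE_API_KEY=sk_test_placeholder
JWT_SECRET=change-me
```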
The intention is good: keep secrets out of code. The problem comes when .gitignore is not set up correctly, or a developer commits the file by accident. Once it is in the repository, it is exposed. And because .env files often contain multiple secrets in one place, a single mistake can leak everything at once.
Even worse, developers sometimes commit .env.example files with real credentials instead of placeholders. Or they commit .env.production thinking it will be ignored because .env is in .gitignore.
"Hey, can you send me the production database password?" "Sure, it is in Slack."
Secrets shared via email, Slack, or text messages persist indefinitely. Chat logs are searchable. Email gets forwarded. Screenshots get taken. These channels are designed for convenience, not security.
Slack messages are stored on Slack's servers. Email is often unencrypted in transit and at rest. Text messages can be intercepted or recovered from backups. None of these channels provide the confidentiality, access control, or audit logging that secrets require.
Many teams set a database password once and never change it. This creates compounding risk: the longer a credential lives, the more people, laptops, logs, and backups have touched it, and every one of those is a potential exposure that never expires.
Yet rotation is often skipped because it is painful. Changing a database password means updating every service that uses it, coordinating the rollout, and testing that nothing breaks. Without automation, this is a multi-hour operation with significant risk of downtime. So teams avoid it, and the security debt accumulates.
How teams handle secrets has evolved significantly over the years, each generation solving some problems while creating others. Understanding this history helps explain why modern solutions look the way they do.
The earliest approach was storing secrets in configuration files deployed alongside code:
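A config file from this era might look like the following (an illustrative INI fragment):

```ini
# /etc/myapp/config.ini — secrets stored in plain text on disk
[database]
host = db.internal
user = app_user
password = supersecret
```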
Operations teams would manually edit these files on servers, or deploy them through separate channels from the application code. This required careful access control on config directories and manual updates when secrets changed.
The problems were obvious: secrets lived on disk in plain text, rotation required manual file edits across multiple servers, and there was no audit trail of who accessed what. It did not scale beyond a handful of servers.
The twelve-factor app methodology popularized injecting configuration through environment variables. CI/CD platforms added "secret" features to store sensitive values:
The diagram above shows the typical flow: CI/CD systems store encrypted secrets and inject them as environment variables when deploying containers. The application reads from its environment at startup.
This improved over config files by centralizing secret storage and removing secrets from the deployment artifact. But it still had the environment variable limitations we discussed: process visibility, logging leaks, and no rotation without redeployment.
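The injection step can be sketched as a GitHub Actions workflow fragment (assuming a repository secret named DB_PASSWORD has been configured in the platform's settings):

```yaml
# .github/workflows/deploy.yml (sketch) — the platform decrypts the stored
# secret and injects it as an environment variable at run time.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh
        env:
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
```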
Tools like git-crypt, SOPS, and Sealed Secrets emerged to let teams store encrypted secrets in Git:
This flow lets you version control your secrets alongside your code. Developers encrypt secrets before committing, and only authorized systems can decrypt them during deployment.
The secret itself is encrypted, so even if the repository is exposed, the plaintext value remains protected. You get version history and code review for secret changes. However, key management becomes the new challenge. Who holds the decryption key? How do you rotate it? How do you revoke access when someone leaves?
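For example, a SOPS-encrypted file keeps key names readable while values are ciphertext (the shape below is illustrative, with the ciphertext elided):

```yaml
# secrets.enc.yaml — safe to commit; values are encrypted
db_password: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
stripe_api_key: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
```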
Modern secrets management systems provide centralized, purpose-built infrastructure for handling secrets. Instead of storing secrets in files, environment variables, or encrypted blobs, applications fetch secrets at runtime from a secure API:
The architecture diagram shows the key components: all clients authenticate to the secrets manager, which then checks authorization policies before allowing access to encrypted secrets. Every access is logged for audit purposes.
This is the current best practice for most organizations. Secrets live in one place with strong access controls, automatic rotation, and comprehensive audit logs. The complexity shifts from managing secrets to managing the secrets manager itself.
A secrets management system is essentially a highly secure key-value store with strong authentication, authorization, and audit capabilities.
Let us break down each component and understand why it matters.
Before retrieving secrets, clients must prove their identity. This is the first line of defense: even if an attacker knows a secret exists, they cannot retrieve it without valid credentials.
Modern secrets managers support multiple authentication methods for different use cases: cloud IAM roles and instance identities for workloads, Kubernetes service account tokens for pods, TLS client certificates for machines outside the cloud, and SSO with MFA for human operators.
The key insight is that machines should authenticate as machines, not with shared human credentials. An application running on AWS should use its IAM role, not a static API key that a developer created. IAM roles are automatically rotated, cannot be accidentally committed to Git, and are tied to specific compute resources.
The diagram illustrates how different clients use different authentication methods, but all end up with a verified identity that the secrets manager can use for authorization decisions.
Once authenticated, the system checks what the client is permitted to do. Authentication tells you who someone is; authorization tells you what they can do. These are separate concerns, and conflating them is a common security mistake.
Authorization is typically expressed through policies that map identities to permissions:
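In Vault's policy language, for instance, least-privilege access for a payments service might look like this (paths are illustrative):

```hcl
# Read-only access to the payments service's production secrets
path "secret/data/production/payments-service/*" {
  capabilities = ["read"]
}
```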
Good secrets managers follow the principle of least privilege. Each application should only access the specific secrets it needs, nothing more. The payments service can read payment credentials, but not user authentication secrets or admin keys. If the payments service is compromised, the blast radius is limited.
This granularity is one of the biggest advantages over environment variables or config files. With those approaches, any process that can read the environment or filesystem gets all secrets. With a secrets manager, each secret has its own access policy.
Secrets are encrypted at rest using strong algorithms (typically AES-256-GCM). But encryption alone is not enough. The encryption keys themselves must be protected, which leads to a hierarchy of keys:
This diagram shows a typical key hierarchy. The master key encrypts data encryption keys, which in turn encrypt individual secrets. This design allows key rotation without re-encrypting all secrets, and limits the impact if a single encryption key is compromised.
The master key is often protected by a Hardware Security Module (HSM) or cloud KMS. An HSM is a dedicated cryptographic processor that never exposes the raw key material. Even if the entire secrets manager storage is compromised, the data remains encrypted because the master key never leaves the HSM.
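The wrap/unwrap mechanics can be sketched with a toy cipher (XOR stands in for AES-256-GCM purely for illustration; never use this for real encryption):

```python
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the key — a stand-in for AES-256-GCM, NOT real encryption
    return bytes(d ^ k for d, k in zip(data, key))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The master key (held in the HSM/KMS) wraps data encryption keys (DEKs);
# each DEK encrypts individual secrets.
master_key = secrets.token_bytes(32)
dek = secrets.token_bytes(32)

secret_value = b"db-password-123"
encrypted_secret = toy_encrypt(dek, secret_value)
wrapped_dek = toy_encrypt(master_key, dek)  # stored next to the ciphertext

# To read: unwrap the DEK with the master key, then decrypt the secret.
recovered = toy_decrypt(toy_decrypt(master_key, wrapped_dek), encrypted_secret)
assert recovered == secret_value
```

Rotating the master key means re-wrapping the DEKs, not re-encrypting every secret, which is exactly the property the hierarchy is designed for.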
Every access is logged: who accessed what secret, when, and from where. This creates an audit trail for security investigations, compliance requirements, and anomaly detection:
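A typical audit log might contain entries like these (field names and values are illustrative):

```json
[
  {"time": "2026-01-10T14:02:11Z", "identity": "payments-service", "action": "read", "secret": "production/payments-service/stripe-api-key", "source_ip": "10.0.3.14", "result": "allowed"},
  {"time": "2026-01-10T14:05:37Z", "identity": "user-service", "action": "read", "secret": "production/user-service/db-password", "source_ip": "10.0.4.22", "result": "allowed"},
  {"time": "2026-01-10T14:06:02Z", "identity": "analytics-service", "action": "read", "secret": "production/payments-service/stripe-api-key", "source_ip": "10.0.5.8", "result": "denied"}
]
```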
Notice that third entry: an unauthorized access attempt is logged even when denied. This helps detect compromised applications trying to access secrets beyond their permissions, or malicious insiders probing for valuable credentials. Without audit logs, you would never know the attempt occurred.
Audit logs also answer forensic questions after a breach: which secrets did the attacker access? When did the unauthorized access start? What IP addresses were involved? This information is essential for incident response and determining the scope of compromise.
How do applications actually get secrets from a secrets manager? There are several patterns, each with trade-offs. The right choice depends on your infrastructure, security requirements, and operational constraints.
The application calls the secrets manager API at startup or when needed:
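A minimal sketch in Python using only the standard library (the endpoint shape follows Vault's KV v2 HTTP API; the address, path, and token handling are assumptions):

```python
import json
import urllib.request

def parse_kv2_response(body: dict) -> dict:
    """Extract the key/value pairs from a Vault KV v2 read response."""
    # KV v2 nests the stored values under data.data
    return body["data"]["data"]

def fetch_secret(vault_addr: str, token: str, path: str) -> dict:
    """Read one secret, e.g. fetch_secret(addr, token, 'production/payments-service')."""
    req = urllib.request.Request(
        f"{vault_addr}/v1/secret/data/{path}",
        headers={"X-Vault-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_kv2_response(json.load(resp))
```

The application would call `fetch_secret` once at startup and keep the values in memory, re-fetching on a schedule if rotation is enabled.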
This is the most straightforward approach: your application has a direct relationship with the secrets manager.
A sidecar container or process retrieves secrets and makes them available to the application:
The sidecar writes secrets to a shared volume (often a tmpfs mount in memory) that the application reads. This pattern is common in Kubernetes with tools like Vault Agent or External Secrets Operator. The sidecar handles authentication and renewal, keeping the application code simple.
An initialization container fetches secrets before the main application starts:
The init container runs to completion, writing secrets to a shared volume. Then the main application container starts and reads from that volume. This ensures secrets are available before the application needs them.
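In Kubernetes, the pattern looks roughly like this (tool-specific details vary; the images below are hypothetical):

```yaml
# Pod spec sketch: init container writes secrets to a shared in-memory volume
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: secrets
      emptyDir:
        medium: Memory   # tmpfs: secrets never touch disk
  initContainers:
    - name: fetch-secrets
      image: example/fetch-secrets:latest   # hypothetical image
      volumeMounts:
        - name: secrets
          mountPath: /secrets
  containers:
    - name: app
      image: example/myapp:latest
      volumeMounts:
        - name: secrets
          mountPath: /secrets
          readOnly: true
```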
The orchestration platform injects secrets as environment variables at container start:
Tools like External Secrets Operator, AWS Secrets Manager integration, or Doppler synchronize secrets from your secrets manager into Kubernetes Secrets or environment variables. The application reads configuration from its environment as usual, unaware that the values came from a secrets manager.
One of the most powerful features of modern secrets managers is dynamic secrets: credentials generated on-demand with automatic expiration. This is a fundamental shift from traditional static credentials.
Instead of storing a static database password that lives forever, the secrets manager generates unique credentials for each request:
The sequence diagram shows the lifecycle: the application requests credentials, the secrets manager creates a temporary database user with a random password, returns those credentials to the application, and automatically revokes them when the lease expires. No human intervention required.
Automatic rotation: Every credential is unique and short-lived. There is no password to rotate because each one expires automatically. If your credentials have a 1-hour TTL, you effectively rotate every hour without any manual process.
Blast radius reduction: If a credential is compromised, it expires within minutes or hours. The attacker has a narrow window to use it, and cannot maintain persistent access. Compare this to a static password that might be valid for years.
Attribution: Each application instance gets unique credentials. If you see suspicious database activity from app_xyz123, you know exactly which application instance is responsible. With shared credentials, you cannot distinguish between legitimate access and an attacker who obtained the same password.
No shared secrets: Different application instances never share the same credential. A compromise of one instance does not give access to others.
HashiCorp Vault supports dynamic secrets for many backend systems: relational databases (PostgreSQL, MySQL, and others), cloud providers (AWS, Azure, Google Cloud), RabbitMQ, SSH, and PKI certificates.
For example, requesting AWS credentials:
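With Vault's AWS secrets engine enabled, the request might look like this (the role name and output values are illustrative, with the generated credentials elided):

```
$ vault read aws/creds/deploy-role
Key                Value
---                -----
lease_id           aws/creds/deploy-role/4Uhw...
lease_duration     1h
lease_renewable    true
access_key         AKIA...EXAMPLE
secret_key         wJalr...EXAMPLE
```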
These credentials are valid for one hour. When the lease expires, Vault automatically revokes the IAM user. If your deployment script runs for 30 minutes, the credentials are only valid for an hour total, not indefinitely.
The operational complexity is higher than static secrets. Your applications need to handle credential renewal, and you need monitoring to catch lease expiration issues. But the security improvement is substantial, especially for high-value credentials like database access or cloud provider permissions.
Let us compare the major options available today. Each has different strengths, and the right choice depends on your infrastructure, team expertise, and requirements.
Overview: The most feature-rich open-source secrets manager. Vault is a Swiss Army knife for secrets, supporting static secrets, dynamic secrets, encryption as a service, and PKI management.
Best for: Organizations with complex requirements, multi-cloud environments, or those needing dynamic secrets. If you need to generate database credentials on-demand or issue short-lived certificates, Vault is the go-to choice.
Key features:

- Static key-value storage plus dynamic secrets engines for databases, cloud providers, SSH, and more
- Encryption as a service via the Transit engine
- Built-in PKI for issuing short-lived certificates
- A wide range of authentication methods (cloud IAM, Kubernetes, TLS certificates, tokens)
- Detailed audit logging of every request
Considerations: Operational complexity is the main challenge. Running Vault in production requires understanding unsealing, cluster management, and failure modes. Many teams underestimate the operational burden. HashiCorp Cloud Platform (HCP) offers a managed version that reduces this complexity.
Overview: Fully managed secrets service from AWS with tight integration into the AWS ecosystem.
Best for: AWS-native applications and teams that want managed infrastructure without operational overhead.
Key features:

- Automatic rotation using Lambda functions, with built-in support for RDS databases
- Encryption at rest via AWS KMS
- Fine-grained access control through IAM policies
- Cross-region replication for disaster recovery
- Native integration with ECS, Lambda, and other AWS services
Considerations: Vendor lock-in is the obvious concern. If you run workloads on multiple clouds or on-premises, AWS Secrets Manager only covers part of your infrastructure. Pricing can also add up with many secrets and frequent API calls.
Overview: Google Cloud's managed secrets service, designed for simplicity and GCP integration.
Best for: GCP-native applications, especially those running on GKE or Cloud Run.
Key features:

- Secret versioning, with aliases such as latest
- Access control through Cloud IAM
- Automatic or user-managed replication across regions
- Optional customer-managed encryption keys (CMEK) via Cloud KMS
- Access logging through Cloud Audit Logs
Considerations: GCP lock-in, similar to AWS. Limited dynamic secret support compared to Vault. Best suited for teams already invested in the GCP ecosystem.
Overview: Microsoft's cloud secrets and key management service, deeply integrated with Azure and the Microsoft ecosystem.
Best for: Azure-native applications and enterprises already using Azure AD and Microsoft 365.
Key features:

- Stores secrets, cryptographic keys, and certificates in a single service
- Access control via Azure AD and RBAC
- HSM-backed key protection in the Premium tier
- Certificate lifecycle management, including auto-renewal
- Integration with Azure services through managed identities
Considerations: Azure lock-in. The learning curve is steeper if you are not already familiar with Azure AD and RBAC. Works best when you are fully committed to the Azure ecosystem.
| Feature | Vault | AWS Secrets Manager | GCP Secret Manager | Azure Key Vault |
|---|---|---|---|---|
| Deployment | Self-hosted or HCP | Managed | Managed | Managed |
| Dynamic Secrets | Yes (comprehensive) | Limited (RDS only) | No | No |
| Multi-cloud | Yes | No | No | No |
| Encryption as a Service | Yes (Transit) | Via KMS | Via Cloud KMS | Yes |
| PKI Management | Yes (built-in) | Via ACM | Via CAS | Yes |
| Pricing | Free (OSS) or HCP | Per secret + API calls | Per secret version + API calls | Per operation |
| Operational Complexity | High | Low | Low | Medium |
If you are all-in on a single cloud provider, their native secrets manager is usually the path of least resistance. If you need multi-cloud support, advanced features like dynamic secrets, or want to avoid vendor lock-in, Vault is the most capable option despite its operational complexity.
Let us consolidate everything into actionable practices that you can implement today.
This bears repeating because it remains the most common mistake. No secrets in source code, ever. Use pre-commit hooks to catch accidental commits before they happen:
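One option is a `.pre-commit-config.yaml` that runs detect-secrets on every commit (pin `rev` to a release you have vetted):

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ["--baseline", ".secrets.baseline"]
```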
Tools like git-secrets, detect-secrets, or GitGuardian can scan commits for patterns that look like credentials. They are not perfect, but they catch many accidental commits before they reach the repository.
Each application should only access the secrets it needs. A frontend service should not have access to database credentials. An analytics service should not have access to payment processor keys.
Structure your secrets hierarchically to enable fine-grained policies:
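For example (service and key names are illustrative):

```
secrets/
  production/
    payments-service/
      stripe-api-key
      db-password
    user-service/
      db-password
      jwt-signing-key
  staging/
    payments-service/
      stripe-api-key
      db-password
```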
With this structure, you can grant the payments service access to secrets/production/payments-service/* and nothing else. A compromise of the payments service does not expose user service credentials.
Establish rotation schedules based on sensitivity and risk:
| Secret Type | Rotation Frequency |
|---|---|
| API keys | 90 days |
| Database passwords | 30-90 days |
| Encryption keys | Annually (with re-encryption) |
| Service account tokens | On compromise or personnel change |
Dynamic secrets make rotation automatic. For static secrets, automation is essential. Manual rotation does not happen consistently, and inconsistent rotation is almost as bad as no rotation.
Enable comprehensive logging for all secret access. You should be able to answer these questions after any incident: Which secrets were accessed, and by whom? When did the access start? Where did the requests come from? Were any access attempts denied?
Audit logs are also valuable for routine security reviews. If the payments service suddenly starts accessing secrets it never accessed before, that is worth investigating.
Assume secrets will eventually leak. Design your systems to limit blast radius when they do: scope each credential to the minimum permissions required, prefer short-lived dynamic credentials over static ones, separate secrets by environment so a staging leak cannot touch production, and keep a tested runbook for emergency rotation.
The secrets manager is now your highest-value target. If an attacker compromises your secrets manager, they get everything. Protect it accordingly: restrict network access to it, require MFA for administrative operations, run it highly available with tested backups, keep it patched, and monitor its own audit logs for anomalies.
Applications should authenticate using platform-native identities whenever possible: IAM roles on AWS, service accounts and workload identity on Google Cloud, managed identities on Azure, and service account tokens on Kubernetes.
These identities are automatically rotated by the platform, cannot be accidentally committed to Git (because they are not files or strings), and are tied to specific compute resources. If a VM is terminated, its IAM role credentials are automatically invalid. This is much safer than static API keys that live forever.