Kubernetes Blueprints Left on the Doorstep
The scope of the assessment covered the external attack surface of a media streaming company's cloud infrastructure: public-facing services, API endpoints, and any management interfaces reachable from outside the corporate network. The company ran its production workloads on a managed Kubernetes cluster spread across multiple cloud regions. Deployments were managed through a GitOps workflow.
GitOps tooling is infrastructure that manages infrastructure. It is granted the permissions necessary to create, update, and delete Kubernetes resources across clusters. When such tooling is externally reachable without authentication, the immediate concern is not just information leakage — it is that the organization has placed its deployment control plane on the internet.
The ArgoCD instance was on the first page of subdomain enumeration results.
Discovery
Subdomain enumeration against the primary domain returned approximately 340 hostnames. Standard practice is to resolve each to an IP address, probe for open ports, and fetch the HTTP responses from ports 80 and 443. Hostnames that resolve to the same internal CDN or load balancer are grouped and set aside. Hostnames with unusual response signatures receive closer attention.
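The triage step described above can be sketched in a few lines. The hostnames, probe results, and group-size threshold below are hypothetical; the point is the grouping logic, not a full scanner:

```python
from collections import defaultdict

def group_by_signature(results):
    """Group probe results by (status, content type) so hostnames behind
    the same CDN or load balancer cluster together and outliers stand out."""
    groups = defaultdict(list)
    for hostname, status, content_type in results:
        groups[(status, content_type)].append(hostname)
    return dict(groups)

def outliers(groups, max_group_size=2):
    """Signatures shared by only a few hostnames are the interesting ones."""
    return {sig: hosts for sig, hosts in groups.items()
            if len(hosts) <= max_group_size}

# Hypothetical probe output: (hostname, HTTP status, Content-Type)
probes = [
    ("www.example.com", 200, "text/html"),
    ("cdn1.example.com", 200, "text/html"),
    ("cdn2.example.com", 200, "text/html"),
    ("deploy.example.com", 200, "application/json"),  # JSON on / is unusual
]
rare = outliers(group_by_signature(probes))
print(rare)  # only the lone JSON responder remains
```

Grouping before inspection is what makes a 340-hostname result set tractable: the bulk of identical CDN responses collapses into one bucket, and the single JSON responder surfaces on its own.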
One hostname returned a 200 response on port 443 with a content type of application/json and a response body that began:
```json
{
  "Version": "v2.8.4",
  "BuildDate": "...",
  "GitCommit": "...",
  "GoVersion": "...",
  "Platform": "linux/amd64"
}
```

This is the ArgoCD API server's version endpoint, which responds without authentication. The endpoint exists for health checks and compatibility verification. Its presence confirmed an ArgoCD instance was listening on a public IP address.
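A response of this shape can be fingerprinted programmatically when triaging many hosts. The heuristic below is our own, not an official detection signature, and the sample body is the one shown above:

```python
import json

def looks_like_argocd_version(body: str) -> bool:
    """Heuristic: ArgoCD's version endpoint returns a JSON object with a
    v-prefixed Version string and a Platform field."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False
    version = data.get("Version", "")
    return isinstance(version, str) and version.startswith("v") and "Platform" in data

sample = '{"Version": "v2.8.4", "Platform": "linux/amd64"}'
print(looks_like_argocd_version(sample))  # True
```

A check like this is deliberately loose: at fingerprinting time the goal is to flag candidates for manual follow-up, not to classify with certainty.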
The next step was straightforward: attempt the unauthenticated API endpoints.
What the API Returned
ArgoCD's REST API is well-documented. When anonymous access is enabled, or when no authentication provider is configured and local session validation is absent, API requests without an Authorization header succeed rather than returning 401.
The applications list endpoint:
GET /api/v1/applications
Returned the full inventory of managed applications — 47 in total. Each entry in the response contained the application name, the source repository URL with branch and directory path, the destination cluster endpoint, and the target namespace. A condensed example:
```json
{
  "items": [
    {
      "metadata": {
        "name": "media-transcoder-prod",
        "namespace": "argocd"
      },
      "spec": {
        "source": {
          "repoURL": "https://git.internal.example.com/platform/k8s-manifests",
          "targetRevision": "main",
          "path": "apps/transcoder/prod"
        },
        "destination": {
          "server": "https://10.0.1.100:6443",
          "namespace": "transcoding"
        }
      }
    }
  ]
}
```

Forty-seven applications. Each with its repository path, namespace, and the internal Kubernetes API server address of the cluster it deployed into.
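Condensing a response of this shape into an inventory takes only a few lines. The sketch below assumes the field layout shown above and uses the single sample entry as input:

```python
def summarize(payload):
    """Flatten an ArgoCD /api/v1/applications response into
    (name, repo, path, cluster, namespace) rows."""
    rows = []
    for item in payload.get("items", []):
        spec = item.get("spec", {})
        src = spec.get("source", {})
        dst = spec.get("destination", {})
        rows.append((
            item.get("metadata", {}).get("name"),
            src.get("repoURL"),
            src.get("path"),
            dst.get("server"),
            dst.get("namespace"),
        ))
    return rows

payload = {"items": [{
    "metadata": {"name": "media-transcoder-prod", "namespace": "argocd"},
    "spec": {
        "source": {"repoURL": "https://git.internal.example.com/platform/k8s-manifests",
                   "targetRevision": "main",
                   "path": "apps/transcoder/prod"},
        "destination": {"server": "https://10.0.1.100:6443",
                        "namespace": "transcoding"}}}]}

for name, repo, path, server, ns in summarize(payload):
    print(f"{name}  {ns}@{server}  {repo}:{path}")
```

Run against the full 47-item response, a loop like this produces the architecture map discussed in the next section in seconds.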
The Scope of the Disclosure
Reading through the full application list produced a nearly complete map of the company's production architecture.
Service inventory. The application names — following a consistent naming scheme of {service}-{environment} — revealed every service running in production: video ingestion pipelines, transcoding workers, content delivery caches, user authentication services, billing systems, internal analytics, and monitoring infrastructure. Services that are not publicly documented or customer-facing were listed alongside customer-facing services with no distinction.
Repository structure. Every application's source repository path was included. The company used a monorepo for Kubernetes manifests, and the directory paths revealed how the codebase was organized: separate directories per service, per environment, with configuration overrides. The internal GitLab hostname was exposed in each repository URL.
Cluster topology. The destination server field for each application contained the internal API server address of the Kubernetes cluster receiving deployments. The company ran separate clusters per region, plus a dedicated cluster for internal tooling. The API server addresses were internal RFC 1918 addresses, but their enumeration confirmed the cluster count, the regional distribution, and that the internal tooling cluster ran on a different address range from production.
Namespace organization. Each application's target namespace was explicit. Namespaces often group related services, and the naming convention revealed how the organization segmented its workloads: transcoding, delivery, auth, billing, platform-infra, and several others.
Helm values. For applications using Helm, the detail endpoint for individual applications included the values passed to the chart:
GET /api/v1/applications/billing-prod
The Helm values for the billing service contained references to secrets management paths, database hostnames, and environment-specific configuration keys. While the secrets themselves were stored in an external secrets manager and not present in the values, the paths and key names were sufficient to understand the secrets structure.
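The reconnaissance value of such references can be illustrated with a short walker. The values structure and the `secret/` path convention below are hypothetical, not the client's actual configuration:

```python
def find_secret_refs(values, prefix=""):
    """Recursively collect keys whose string values look like external
    secrets-manager paths (here: anything starting with 'secret/')."""
    refs = []
    if isinstance(values, dict):
        for key, val in values.items():
            refs.extend(find_secret_refs(val, f"{prefix}{key}."))
    elif isinstance(values, str) and values.startswith("secret/"):
        refs.append((prefix.rstrip("."), values))
    return refs

# Hypothetical Helm values resembling what a detail endpoint might return
values = {
    "database": {"host": "billing-db.internal",
                 "passwordRef": "secret/prod/billing/db"},
    "stripe": {"apiKeyRef": "secret/prod/billing/stripe"},
}
print(find_secret_refs(values))
```

The output is exactly the "secrets map" described below: no secret material, but a precise list of where each credential lives.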
What This Enables
The immediate impact of this exposure was information disclosure. No data was exfiltrated, no systems were modified. But the value of this reconnaissance to a motivated attacker is significant.
Targeting. An attacker with a separate foothold — an exploited public-facing service, a phishing-obtained credential, or a cloud credential from a metadata service — would typically spend time discovering what else exists in the network before pivoting. The ArgoCD API eliminated that step entirely. The service names, their namespaces, their cluster addresses, and their organization were already enumerated.
Supply chain leverage. The exposed repository URLs pointed to an internal GitLab instance. An attacker who obtained access to that GitLab — through stolen credentials, a GitLab vulnerability, or a compromised developer machine — could modify the Kubernetes manifests that ArgoCD deploys. ArgoCD would sync the changed manifests to production automatically, deploying attacker-controlled workloads without any additional action. The ArgoCD exposure made this attack path visible.
Secrets map. The Helm values referencing external secrets paths were not secrets themselves, but they were a map to where secrets lived. Combined with cloud IAM misconfiguration or a stolen service account token, this information provides precise targeting for credential extraction rather than requiring broad enumeration of a secrets management system.
Configuration That Created the Exposure
ArgoCD's anonymous access mode is controlled by a single flag in its ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  server.anonymous.enabled: "true"
```

When this key is set to true, unauthenticated requests to the ArgoCD API are processed with the permissions of the built-in anonymous user. By default, the anonymous user has read access to all applications, repositories, and cluster configurations.
In this case, the flag had been set during initial setup to allow the operations team to verify connectivity to the ArgoCD UI before configuring the identity provider integration. The identity provider was subsequently configured, but the anonymous access flag remained set. The UI appeared to require login — the identity provider integration worked as intended — but the API endpoints bypassed the authentication flow entirely.
This is a consistent pattern: authentication configuration for the web interface and authentication configuration for the API are sometimes treated as equivalent when they are not.
Remediation
The fix required two changes:
Remove anonymous access. Set server.anonymous.enabled: "false" in the ArgoCD ConfigMap, or remove the key entirely since false is the default:
```yaml
data:
  # server.anonymous.enabled: "true"  # Remove or set to "false"
```

Restart the ArgoCD server after the ConfigMap change for it to take effect.
Restrict network access. The ArgoCD API server should not be reachable from the public internet. Access should be limited to internal networks or to clients connected via VPN. This is a defense-in-depth measure: even if authentication fails or is misconfigured in the future, external attackers cannot reach the endpoint. Network policy and load balancer configuration changes were applied to remove the public route.
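One way to express this restriction inside the cluster is a NetworkPolicy on the ArgoCD server pods. The label selector below is the default ArgoCD server label, and the CIDR is illustrative; the actual remediation also involved load balancer changes outside the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: argocd-server-internal-only
  namespace: argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: argocd-server   # default ArgoCD server label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8   # internal/VPN ranges only (illustrative)
```

Note that a NetworkPolicy is enforced only if the cluster's CNI plugin supports it, and it does not help if the public load balancer itself terminates traffic and forwards from an in-range address, which is why the load balancer route was removed as well.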
As a verification step, the unauthenticated applications list endpoint was tested from an external IP address after the changes. The response was a 401 with an authentication challenge — the API no longer processed unauthenticated requests.
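That verification can be automated so it runs after every configuration change. In the sketch below the HTTP call is injected as a function, so the check itself is testable without network access; the endpoint path is the applications list used earlier, and the stub response body is invented:

```python
def anonymous_access_closed(fetch, base_url):
    """Return True if the applications list rejects unauthenticated
    requests (401/403) instead of serving data."""
    status, _body = fetch(f"{base_url}/api/v1/applications")
    return status in (401, 403)

# Stubbed fetcher standing in for a real HTTP client
def fake_fetch(url):
    return 401, '{"error": "no session information"}'

print(anonymous_access_closed(fake_fetch, "https://deploy.example.com"))  # True
```

Wiring the same check into monitoring, with a real HTTP client substituted for the stub and run from an external vantage point, turns a one-time verification into a regression alarm for the misconfiguration recurring.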
Treating Delivery Infrastructure as Attack Surface
Continuous delivery tooling occupies a privileged position in any cloud environment. It has write access to production clusters by design. It reads source repositories. It manages secrets references. Compromising it is equivalent to compromising the deployment pipeline itself.
This finding is a reminder that internal tooling requires the same security controls as production-facing services. Network accessibility, authentication requirements, and access control configuration all apply. The fact that a service is labeled "internal" or "operations tooling" is not a security control. Subdomains are publicly enumerable. Cloud load balancers can be misconfigured. Services migrate between environments and leave DNS records pointing to their previous locations.
External enumeration of management interfaces is a routine part of infrastructure assessment. When a continuous delivery dashboard appears in those results, the potential impact is rarely limited to information disclosure.
For a technical overview of information disclosure vulnerabilities across web infrastructure, see the source map exposure knowledge article.