A Case Study in Responsible Disclosure: From Exposed API Keys to "Pure Coincidence"
Introduction
Recently, a developer with a notable YouTube following showcased a new multi-agent AI application. In their video, they emphasized the project's "enterprise level" security and the challenges of solo development. Curious about the architecture, I took a brief look, and that look revealed two critical, yet common, security vulnerabilities.
This post is a case study in the responsible disclosure process that followed. It is intended to be an educational tool for developers, highlighting the importance of fundamental security practices and the right (and wrong) ways to respond when someone reports a flaw in your work.
The Vulnerabilities
The issues discovered were fundamental in nature and exposed the project to significant financial and operational risk.
1. Client-Side Exposure of a Google Gemini API Key
The application's backend served a frontend that included a Google Gemini API key (AIza...) directly in the client-side code. This key was sent with requests from the user's browser.
- The Risk: Anyone with a web browser could inspect the network traffic, find this key, and use it to make their own Gemini API calls. This could result in massive, unexpected bills for the key's owner and a denial of service for the legitimate application once rate limits were exhausted. (The sketch below shows just how trivial this is.)
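A minimal sketch of that abuse, assuming Python with the requests library. The endpoint follows Google's public Generative Language REST API; the model name is illustrative, and the key is a placeholder, not the real one.

```python
# Illustrative only: replaying a leaked key against the public
# Generative Language REST API. Key and model name are placeholders.
import requests

LEAKED_KEY = "AIza..."  # copied straight out of the browser's network tab

resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent",
    params={"key": LEAKED_KEY},
    json={"contents": [{"parts": [{"text": "Hello on someone else's bill"}]}]},
    timeout=30,
)
print(resp.status_code, resp.json())
```

No special tooling and no exploit code: just an ordinary HTTP request that bills the key's owner.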
2. Unsecured Administrative Endpoints
The FastAPI backend's documentation was publicly exposed at api.example-app.com/docs. This Swagger UI revealed administrative endpoints for managing the AI agents.
- The Risk: These endpoints had no authentication. Any user could add, delete, or modify the agents at the core of the application's functionality, effectively allowing vandalism and disruption of the service. (A sketch of a minimal fix follows.)
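Assuming the backend is a standard FastAPI app, the sketch below shows two defenses: taking the interactive docs off the public internet and gating admin routes behind a server-side token. The X-Admin-Token header, ADMIN_TOKEN variable, and /agents route are names invented for illustration, not the real app's.

```python
# Minimal hardening sketch for a FastAPI backend (illustrative names).
import os
import secrets

from fastapi import Depends, FastAPI, HTTPException, Security
from fastapi.security import APIKeyHeader

# Disable the public Swagger UI, ReDoc, and the OpenAPI schema itself.
app = FastAPI(docs_url=None, redoc_url=None, openapi_url=None)

admin_header = APIKeyHeader(name="X-Admin-Token")

def require_admin(token: str = Security(admin_header)) -> None:
    # Constant-time comparison against a secret that lives only on the server.
    if not secrets.compare_digest(token, os.environ["ADMIN_TOKEN"]):
        raise HTTPException(status_code=403, detail="Forbidden")

@app.delete("/agents/{agent_id}", dependencies=[Depends(require_admin)])
def delete_agent(agent_id: str):
    ...  # the actual agent-management logic goes here
```

A shared header token is the bare minimum, and real admin surfaces deserve per-user authentication, but even this sketch would have stopped the drive-by vandalism described above.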
The Disclosure Process: A Journey Through Denial
Following best practices, I immediately reached out to the developer privately via email to report my findings. What followed was a masterclass in deflection and denial.
- The Initial Response: The developer downplayed the risks, stating the project wasn't "live" yet and that he would implement security measures like a "VPC" later. This demonstrated a fundamental misunderstanding: the infrastructure was already live and exposed, and a VPC would not fix an application-level flaw in which the server hands a secret key to the client.
- The Live Call: To clarify the issue, I offered to demonstrate the vulnerability on a live call. During the call, the developer repeatedly deflected and denied both its existence and its severity. The excuses evolved through several stages: the key was merely a "token," then a fake or random string, then an unimportant "test" key. He eventually admitted the key belonged to a third-party company, which raised further concerns about credential handling. The conversation concluded with the astronomically improbable claim that the valid key had been generated by "pure coincidence."
This pattern of denial demonstrated a profound unwillingness to take responsibility for a critical security failure.
Key Takeaways for Developers
This experience serves as a powerful reminder of several core principles:
- Never, Ever Expose Keys on the Client-Side: Secret keys must always remain on the server. Your backend should act as a proxy that makes authenticated API calls on the user's behalf (see the sketch after this list).
- Security is Not an Afterthought: Claiming you will “add security later” is a recipe for disaster. Secure architecture should be the foundation of your project, not a final coat of paint.
- Listen During a Disclosure: When a security researcher reports a vulnerability, listen with an open mind. Their goal is to help you, not to attack you. Arguing and making excuses only makes you look incompetent and unprofessional.
- Know What You Don’t Know: Claiming “enterprise security” expertise while making fundamental mistakes is a dangerous bluff. Be honest about your skill level and seek help when you are out of your depth.
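For the proxy pattern in the first takeaway, here is a minimal sketch, again assuming FastAPI plus the requests library. The /chat route, GEMINI_API_KEY variable, and model name are illustrative; the point is that the secret only ever appears in server-side code.

```python
# Proxy sketch: the browser calls /chat on YOUR server; only the server
# holds the Gemini key. Names are illustrative.
import os

import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

GEMINI_KEY = os.environ["GEMINI_API_KEY"]  # lives only in the server's env
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent"
)

class ChatRequest(BaseModel):
    prompt: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # The secret is attached server-side; the browser never sees it.
    upstream = requests.post(
        GEMINI_URL,
        params={"key": GEMINI_KEY},
        json={"contents": [{"parts": [{"text": req.prompt}]}]},
        timeout=30,
    )
    upstream.raise_for_status()
    return upstream.json()
```

In production you would also put your own user authentication and rate limiting in front of /chat, since an open proxy merely moves the abuse problem one hop back.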
Conclusion
After a long and arduous process, I could do little more than strongly urge the developer to fix the issues. The goal of this post is not to shame, but to educate. The mistakes made here are common; the response to them was uniquely poor. As developers, we have a responsibility to build secure software and to respond to security reports with humility and urgency.