What Goes Into a Security Assessment: From Reconnaissance to Remediation
Most organizations have run a vulnerability scan at some point. A tool runs for a few hours, produces a PDF with color-coded findings, and someone files it away. Maybe a few things get patched. The scan runs again next quarter.
That is not a security assessment.
A real security assessment is manual, methodical, and focused on finding the vulnerabilities that matter — the ones that lead to account takeovers, data exfiltration, or privilege escalation. The ones that automated scanners cannot find because they require understanding how the application is supposed to work before you can identify where it breaks.
This is how we do it.
Why This Matters More Than Ever
Code is being written faster than ever. Frameworks generate boilerplate. Templates scaffold entire applications. The pace of shipping has accelerated dramatically.
But security vulnerabilities don't come from syntax errors. They come from broken assumptions — an API endpoint that trusts a header it shouldn't, an authorization check that validates the role but not the resource, a password reset flow that leaks tokens through redirect manipulation.
These are architectural and logical flaws. No scanner has a signature for "this business logic doesn't match what the developer intended." Finding them requires a human who understands the application, its context, and the ways it can be abused.
Phase 1: Reconnaissance
Before we touch a single endpoint, we need to understand what we're looking at.
Scope definition comes first. What's in scope — the main application? APIs? Subdomains? Mobile endpoints? Infrastructure? We define boundaries clearly so both sides know what to expect.
Passive reconnaissance maps the target's digital footprint without sending a single request. DNS records, subdomain enumeration, technology fingerprinting, public code repositories, historical data from web archives. We're building a picture of the attack surface before we start testing.
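Certificate transparency logs are a common passive source for subdomain enumeration. As a minimal sketch, assuming crt.sh-style JSON records with a newline-separated "name_value" field and using example.com as a stand-in target, harvested records can be deduplicated into a subdomain list like this:

```python
def extract_subdomains(crtsh_records, domain):
    """Collect unique subdomains from crt.sh-style certificate records."""
    found = set()
    for record in crtsh_records:
        # name_value may hold several newline-separated SAN entries
        for name in record.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()
            if name == domain or name.endswith("." + domain):
                found.add(name)
    return sorted(found)

# Sample records shaped like crt.sh's JSON output (illustrative only)
records = [
    {"name_value": "api.example.com\nstaging.example.com"},
    {"name_value": "*.example.com"},
    {"name_value": "api.example.com"},
]
print(extract_subdomains(records, "example.com"))
# ['api.example.com', 'example.com', 'staging.example.com']
```

The same deduplication step applies to any passive source: DNS records, archive crawls, and code-repository mentions all feed the same attack surface picture.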
Active reconnaissance probes the target's responses. Port scanning, service identification, endpoint discovery. We map out every entry point — web routes, API endpoints, WebSocket connections, GraphQL schemas, authentication providers.
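Raw discovery results need triage before they become a testing plan. A small sketch, with illustrative paths and status codes, that separates live or protected endpoints (which reveal surface worth testing) from dead ones:

```python
def triage_probe_results(results):
    """Split discovery probes into interesting vs. dead endpoints.

    `results` maps a probed path to its HTTP status code. Codes like
    401/403 are kept: they reveal protected surface, not dead ends.
    """
    interesting, dead = [], []
    for path, status in sorted(results.items()):
        bucket = interesting if status in (200, 204, 301, 302, 401, 403) else dead
        bucket.append((path, status))
    return interesting, dead

# Hypothetical probe results for a target
probes = {"/admin": 403, "/api/v1/users": 401, "/old-backup": 404, "/login": 200}
interesting, dead = triage_probe_results(probes)
print(interesting)
# [('/admin', 403), ('/api/v1/users', 401), ('/login', 200)]
```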
The goal of reconnaissance is simple: don't test blind. Understand what exists, how it connects, and where the interesting attack surface lives before investing time in exploitation.
Phase 2: Attack Surface Mapping
Reconnaissance tells you what exists. Attack surface mapping tells you what matters.
We prioritize based on risk:
- Authentication flows — login, registration, password reset, SSO, OAuth, session management. These are where account takeover chains begin.
- Authorization boundaries — role-based access, resource ownership, privilege escalation paths. Can a regular user access admin endpoints? Can user A modify user B's data?
- Data input points — any place the application accepts user-controlled data. Form fields, headers, query parameters, file uploads, API request bodies.
- Third-party integrations — payment processors, email services, cloud storage, SSO providers. Misconfigurations at integration boundaries are common and high-impact.
- Business logic — the application-specific workflows that define what users can and cannot do. Coupon stacking, order manipulation, race conditions in financial operations.
We document everything in a structured format: endpoint, method, parameters, authentication requirements, and initial risk assessment. This becomes the testing roadmap.
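The structured format can be as simple as a typed record per endpoint, sorted so high-risk surface gets tested first. A minimal sketch, with hypothetical entries:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    path: str
    method: str
    parameters: list
    auth_required: bool
    initial_risk: str  # "high", "medium", or "low"

# Hypothetical inventory entries from attack surface mapping
roadmap = [
    Endpoint("/api/v1/profile", "GET", ["id"], True, "medium"),
    Endpoint("/api/v1/password-reset", "POST", ["email"], False, "high"),
]

# Order the roadmap so the riskiest surface is tested first
rank = {"high": 0, "medium": 1, "low": 2}
roadmap.sort(key=lambda e: rank[e.initial_risk])
print([e.path for e in roadmap])
# ['/api/v1/password-reset', '/api/v1/profile']
```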
Phase 3: Manual Testing
This is where the real findings are.
Automated scanners are excellent at finding known vulnerability patterns — missing headers, outdated libraries, basic injection points with predictable signatures. But they cannot:
- Chain vulnerabilities — A low-severity header injection combined with an email template vulnerability becomes a critical account takeover. Each finding alone looks minor. Together, they're devastating. Scanners test endpoints in isolation.
- Test business logic — Can a user apply a discount code twice? Can they modify the price parameter in a checkout flow? Can they access another user's invoice by incrementing an ID? These are application-specific flaws with no generic signature.
- Understand context — An open redirect on a marketing page is informational. An open redirect in an OAuth callback is critical. The vulnerability is identical; the impact depends entirely on context.
- Navigate complex auth flows — Multi-step authentication, CSRF-protected state changes, flows that require specific session state. Manual testing follows the application's actual behavior.
We test methodically, covering OWASP categories but going deeper into the application-specific areas identified during attack surface mapping. We maintain detailed notes — what we tested, what we found, and what we ruled out.
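The invoice-by-incrementing-an-ID case above is the classic IDOR check: fetch a resource as a session that doesn't own it and see whether the application objects. A minimal sketch with a simulated endpoint (the buggy handler below checks only that the caller is logged in, not who owns the invoice):

```python
def check_idor(fetch_as, owner_resource_id, other_session):
    """Flag a potential IDOR: can `other_session` read a resource it doesn't own?

    `fetch_as(session, resource_id)` returns an HTTP status code. Anything
    other than 403/404 on someone else's resource deserves manual review.
    """
    status = fetch_as(other_session, owner_resource_id)
    return status not in (403, 404)

# Simulated buggy endpoint: invoice 1001 belongs to user A, but the
# handler never checks ownership, only authentication.
def fetch_invoice(session, invoice_id):
    return 200 if session["authenticated"] else 401

user_b = {"authenticated": True}
print(check_idor(fetch_invoice, 1001, user_b))  # True: potential IDOR
```

In a real engagement the `fetch_as` callable would issue authenticated HTTP requests with user B's session; the point is that the check requires two identities and knowledge of who owns what, which is exactly what scanners lack.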
Finding Chains: The Real Value
The most impactful findings are rarely single vulnerabilities. They're chains.
Here's an anonymized example from a recent assessment:
- Header injection — The target application trusted the X-Forwarded-Host header from proxied requests, using it to generate URLs in server responses.
- Password reset poisoning — The same URL generation logic was used in password reset emails. Submitting a reset request with a manipulated header caused the email to contain a link pointing to an attacker-controlled domain.
- Account takeover — A victim clicking the legitimate-looking password reset email (sent from the real application, passing all email authentication checks) would send their reset token to the attacker. Full account compromise.
Each individual finding had limited severity. The chain — which required understanding how the application's URL generation, email templating, and authentication recovery interacted — was critical.
No scanner would find this. It requires understanding the application's architecture and manually tracing data flows across components.
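The core of that chain can be sketched in a few lines. The vulnerable pattern is a URL builder that trusts the forwarded host header; the domains below are stand-ins:

```python
def build_reset_link(headers, token):
    """Vulnerable URL builder: trusts X-Forwarded-Host from the request."""
    host = headers.get("X-Forwarded-Host", headers.get("Host", "app.example.com"))
    return f"https://{host}/reset?token={token}"

# Legitimate reset request: the link points at the real application
print(build_reset_link({"Host": "app.example.com"}, "abc123"))
# https://app.example.com/reset?token=abc123

# Attacker submits the victim's reset with a forged header; the emailed
# link now points at the attacker's domain, which receives the token
# when the victim clicks it.
poisoned = {"Host": "app.example.com", "X-Forwarded-Host": "evil.example.net"}
print(build_reset_link(poisoned, "abc123"))
# https://evil.example.net/reset?token=abc123
```

The fix is equally simple to state: generate absolute URLs from a server-side configured base URL, never from request headers.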
Phase 4: Exploitation and Proof-of-Concept
Every finding in our reports is verified. Not "possibly vulnerable" — proven.
For each finding, we produce:
- Proof-of-concept reproduction steps — Exact commands, requests, or scripts that demonstrate the vulnerability. Anyone on the development team should be able to reproduce it.
- Impact assessment — What can an attacker actually do with this? Data access? Account takeover? Privilege escalation? We describe the realistic worst-case scenario, not a theoretical one.
- CVSS scoring — Standardized severity scoring so findings can be prioritized consistently across teams.
- Evidence — Request/response captures, screenshots, or output demonstrating the vulnerability in action.
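CVSS findings travel as vector strings alongside the numeric score, so teams can see why a rating is what it is. A small sketch that unpacks a CVSS v3.1 vector into its component metrics (the example vector is illustrative, not a computed rating for any specific finding):

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3.1 vector string into its component metrics."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError("expected a CVSS:3.1 vector")
    return dict(metric.split(":") for metric in metrics.split("/"))

vec = "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:N"
print(parse_cvss_vector(vec))
# {'AV': 'N', 'AC': 'L', 'PR': 'N', 'UI': 'R', 'S': 'U', 'C': 'H', 'I': 'H', 'A': 'N'}
```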
We don't report theoretical issues. If we can't prove it, it doesn't go in the report.
Phase 5: Reporting
The report is the deliverable. It's what the client's team uses to actually fix things. A bad report — even with good findings — is wasted work.
Our reports have two layers:
Executive Summary
One page. No jargon. Written for leadership and non-technical stakeholders.
- Overall security posture assessment
- Number of findings by severity
- Highest-risk findings with business impact in plain language
- Recommended priorities
Technical Findings
Each finding is a self-contained section:
- Title and severity (CVSS score + qualitative rating)
- Affected component — exact endpoint, parameter, or flow
- Description — what the vulnerability is and why it matters
- Proof of concept — step-by-step reproduction with exact commands
- Evidence — request/response captures, screenshots
- Impact — realistic attack scenario and consequences
- Remediation — specific, actionable steps to fix the issue. Not "implement proper input validation" — actual code-level guidance: which function to use, which header to set, which configuration to change
- Verification — how to confirm the fix works
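Code-level guidance means showing the fix, not just naming it. For the parameterized-query advice, a minimal sketch (SQLite and the schema here are stand-ins for whatever the client's stack uses):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (47, 'alice')")

def get_user_by_id(conn, user_id):
    # Parameterized query: the input is bound as a value,
    # never concatenated into the SQL string
    query = "SELECT id, name FROM users WHERE id = ?"
    return conn.execute(query, (user_id,)).fetchone()

print(get_user_by_id(conn, 47))            # (47, 'alice')
# A classic injection payload is now just a literal that matches nothing
print(get_user_by_id(conn, "47 OR 1=1"))   # None
```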
What Makes a Good Report
The difference between a useful report and a scan dump:
| Scan Dump | Security Assessment Report |
|---|---|
| Lists CVEs from dependency versions | Demonstrates exploitable vulnerabilities with PoC |
| "Implement input validation" | "Use parameterized queries in UserController.getById() at line 47" |
| Tests endpoints in isolation | Documents multi-step attack chains |
| Generic severity from tool defaults | CVSS scored with application-specific context |
| PDF generated automatically | Written by the researcher who found the issues |
After the Report: Remediation and Verification
A report sitting in a shared drive doesn't make anything more secure. We stay engaged through the fix cycle.
After the development team implements fixes, we retest. Not a full reassessment — targeted verification of each finding. Did the fix actually resolve the vulnerability? Did it introduce any regressions? Is the remediation complete, or does it only address one variant of the issue?
We provide a verification report confirming which findings are resolved and which need additional work.
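The verification pass is mechanical by design: rerun each finding's PoC and mark it resolved only if the exploit no longer succeeds. A minimal sketch, with simulated PoC reruns and hypothetical finding IDs:

```python
def verify_fixes(finding_ids, poc_still_works):
    """Retest each finding: a fix counts as resolved only when rerunning
    the original proof-of-concept no longer succeeds."""
    report = {}
    for finding_id in finding_ids:
        if poc_still_works(finding_id):
            report[finding_id] = "needs additional work"
        else:
            report[finding_id] = "resolved"
    return report

# Simulated reruns: F-02's fix only covered one variant of the issue
rerun_results = {"F-01": False, "F-02": True}
print(verify_fixes(["F-01", "F-02"], lambda f: rerun_results[f]))
# {'F-01': 'resolved', 'F-02': 'needs additional work'}
```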
Working With Us
We conduct security assessments for organizations that take their security posture seriously. Whether you need a one-time assessment of a specific application or an ongoing partnership with regular reviews and pre-release testing, we scope each engagement to fit.
Every assessment includes:
- Detailed reconnaissance and attack surface mapping
- Manual testing beyond automated scanning
- Verified findings with proof-of-concept reproduction
- Executive summary and technical report with remediation guidance
- Verification retesting after fixes are implemented
If you want your application assessed with this methodology, get in touch.