
Our Security Assessment Process: From Scope to Remediation


Security assessments are only as good as the methodology behind them. A disorganized test produces scattered findings with uncertain severity. A scan dump produces noise. A well-structured assessment produces a clear, prioritized map of what's vulnerable, how it can be exploited, and exactly what to fix.

This is the process we follow for every engagement. It's the same methodology we've applied across 400+ targets, producing over 1,400 verified findings — more than 320 of them critical severity. It works because it's systematic, manual, and focused on finding what matters.

Phase 1: Scoping

Every assessment begins with scope definition. This isn't a formality — it's the foundation that determines whether the assessment delivers value or wastes effort.

What We Define

Target boundaries. Which applications, APIs, domains, and infrastructure components are in scope? We identify specific URLs, IP ranges, environments, and any third-party integrations that should or should not be tested.

Testing parameters. What type of access do we start with? Black-box (no credentials, no documentation), gray-box (authenticated access, limited documentation), or white-box (full source code access, architecture documentation). Each approach has different advantages, and we recommend the model that best matches the engagement's goals.

Account provisioning. For authenticated testing, we need test accounts at each relevant privilege level — regular user, moderator, admin, API consumer, support agent. Testing access control requires multiple authenticated sessions with different permission sets.

Environment details. Production, staging, or dedicated test environment. We strongly prefer testing against staging environments that mirror production configurations, as this provides realistic results without risking impact to live users. When production testing is necessary, we coordinate timing and establish communication channels for immediate incident response.

Rules of engagement. Explicitly defined boundaries: Are denial-of-service tests permitted? Social engineering? Physical access? Data exfiltration simulation? What communication channels are used for urgent findings? Who is the escalation contact if something breaks?

Timeline and milestones. Start date, testing window, draft report delivery, final report delivery, remediation verification period. Clear timelines prevent scope creep and ensure both sides plan resources appropriately.

Why Scoping Matters

Poor scoping produces poor assessments. Too narrow, and critical attack surface goes untested. Too broad, and time gets spread thin across low-value targets. Vague boundaries create disputes about what was and wasn't tested. Missing account access means authorization testing — one of the highest-value activities — can't happen.

We invest time in scoping because it directly determines the quality of everything that follows.

Phase 2: Reconnaissance

With scope defined, we begin mapping the target. Reconnaissance is not testing — it's building a comprehensive understanding of what exists before we start probing for vulnerabilities.

Passive Reconnaissance

We gather information without sending requests to the target:

  • DNS enumeration — identifying subdomains, mail servers, nameservers, and associated infrastructure
  • Technology fingerprinting — determining the tech stack from public indicators: HTTP headers, JavaScript libraries, framework signatures, job postings that mention specific technologies
  • Public code analysis — searching code repositories for leaked credentials, internal documentation, configuration files, and API patterns
  • Certificate transparency — reviewing TLS certificate logs to discover associated domains and subdomains
  • Historical data — web archive snapshots that reveal deprecated endpoints, old application versions, and removed functionality that may still be accessible
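The certificate transparency step above can be sketched in code. This is a minimal parser for the JSON shape that crt.sh-style CT log search services return, where each entry's `name_value` field holds one or more newline-separated DNS names; the sample data stands in for a live response.

```python
import json

def subdomains_from_ct_log(ct_json: str, root: str) -> set[str]:
    """Extract unique subdomains of `root` from a crt.sh-style JSON export."""
    names: set[str] = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            # Wildcard certs ("*.staging.example.com") reveal the parent label.
            name = name.strip().lstrip("*.").lower()
            if name == root or name.endswith("." + root):
                names.add(name)
    return names

# Sample data standing in for a live CT log response.
sample = json.dumps([
    {"name_value": "www.example.com\napi.example.com"},
    {"name_value": "*.staging.example.com"},
    {"name_value": "unrelated.org"},
])
print(sorted(subdomains_from_ct_log(sample, "example.com")))
# ['api.example.com', 'staging.example.com', 'www.example.com']
```

Names outside the target root (here `unrelated.org`) are dropped so only in-scope hosts feed the next phase.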

Active Reconnaissance

With passive intelligence gathered, we begin probing the target directly:

  • Port and service scanning — identifying open ports, running services, and their versions across all in-scope infrastructure
  • Endpoint discovery — mapping web routes, API endpoints, WebSocket connections, and GraphQL schemas through crawling, wordlist-based discovery, and JavaScript analysis
  • Authentication provider mapping — identifying login mechanisms, SSO integrations, OAuth flows, and session management implementations
  • API documentation discovery — locating Swagger/OpenAPI specs, GraphQL introspection endpoints, and developer documentation that reveals API structure

The goal is a complete map of the attack surface. Every endpoint, every parameter, every authentication mechanism, every input that accepts data. Missing an endpoint during reconnaissance means missing a vulnerability during testing.
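The wordlist-based endpoint discovery mentioned above reduces to generating candidate URLs and keeping the ones whose responses indicate a real route. A minimal sketch, with a deliberately tiny hypothetical wordlist (real engagements use large curated lists) and a hypothetical target host:

```python
from urllib.parse import urljoin

# Hypothetical mini-wordlist for illustration only.
WORDLIST = ["admin", "api/v1", "swagger.json", ".git/config", "graphql"]

def candidate_urls(base: str, words=WORDLIST) -> list[str]:
    """Build the candidate URL list a discovery fetcher would iterate over."""
    return [urljoin(base, w) for w in words]

def interesting(status: int) -> bool:
    # 2xx/3xx confirm a route exists; 401/403 reveal protected routes
    # worth recording for the authorization-testing phase.
    return 200 <= status < 400 or status in (401, 403)

urls = candidate_urls("https://app.example.com/")
# A fetcher would request each URL and keep those where interesting(status).
```

Note that 401/403 responses are kept rather than discarded: a "forbidden" route is confirmed attack surface, and often the most valuable kind.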

Phase 3: Attack Surface Mapping

Reconnaissance tells us what exists. Attack surface mapping prioritizes what to test and how to test it.

We categorize every discovered component by risk:

High priority: Authentication and session management, authorization and access control, payment and financial operations, data export and reporting, administrative functionality, API endpoints that handle sensitive data.

Medium priority: User profile management, file upload and download, search and filtering, notification systems, integration endpoints.

Lower priority: Static content delivery, public marketing pages, documentation sites.

This prioritization ensures that testing time is concentrated on the components most likely to contain high-severity vulnerabilities. We don't skip lower-priority targets — we test them too — but the high-value attack surface gets the most thorough attention.

Phase 4: Manual Testing

This is the core of the assessment. Automated tools run in the background as supplements, but the primary testing is manual, methodical, and informed by the reconnaissance and mapping phases.

What Manual Testing Covers

Authentication testing. We test every authentication mechanism: login with valid and invalid credentials, account lockout behavior, password complexity enforcement, multi-factor authentication bypass, session token generation and management, cookie security flags, token expiration and rotation, remember-me functionality, and password reset flows from end to end.
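The cookie security flag portion of this checklist is mechanical enough to sketch. This helper inspects a single `Set-Cookie` header value for the `Secure`, `HttpOnly`, and `SameSite` attributes; the sample header is hypothetical.

```python
def cookie_flag_findings(set_cookie: str) -> list[str]:
    """Flag missing security attributes on a single Set-Cookie header value."""
    # Attributes follow the name=value pair, separated by semicolons.
    attrs = {p.strip().split("=")[0].lower() for p in set_cookie.split(";")[1:]}
    findings = []
    if "secure" not in attrs:
        findings.append("missing Secure flag")
    if "httponly" not in attrs:
        findings.append("missing HttpOnly flag")
    if "samesite" not in attrs:
        findings.append("missing SameSite attribute")
    return findings

print(cookie_flag_findings("session=abc123; Path=/; HttpOnly"))
# ['missing Secure flag', 'missing SameSite attribute']
```

A missing flag is rarely a standalone high-severity finding, but it feeds the session-management picture that the rest of the authentication testing builds.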

Authorization testing. Using the multiple authenticated accounts provisioned during scoping, we systematically test horizontal and vertical access control across every endpoint. Can User A access User B's data? Can a regular user reach admin functionality? Can a read-only API key perform write operations? We test every combination.
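The "test every combination" approach above is essentially a cross-product of sessions and resources. A minimal sketch of that matrix, with a mock `fetch` standing in for real HTTP requests; the token and resource names are hypothetical.

```python
def access_violations(fetch, tokens: dict, resources: dict) -> list[tuple]:
    """Cross-test every session token against every user's resources.

    `fetch(token, resource_id)` returns an HTTP-style status code; any 2xx
    response for a resource the token's owner doesn't own is a violation.
    """
    violations = []
    for token_owner, token in tokens.items():
        for resource_owner, rid in resources.items():
            status = fetch(token, rid)
            if token_owner != resource_owner and 200 <= status < 300:
                violations.append((token_owner, rid))
    return violations

# Mock of a broken backend that never checks ownership.
def broken_fetch(token, resource_id):
    return 200

tokens = {"alice": "tok-a", "bob": "tok-b"}
resources = {"alice": "/orders/101", "bob": "/orders/202"}
print(access_violations(broken_fetch, tokens, resources))
# [('alice', '/orders/202'), ('bob', '/orders/101')]
```

The same harness covers vertical access control by adding admin tokens and admin-only resources to the two dictionaries.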

Input validation. Every parameter that accepts user input is tested for injection vulnerabilities: SQL injection, cross-site scripting, server-side template injection, command injection, path traversal, XML external entities, and deserialization attacks. This goes beyond running an automated scanner — we test with context-aware payloads informed by the technology stack and application behavior.
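"Context-aware payloads" means the probe set varies with how the parameter is used. A simplified sketch for SQL injection: string contexts need quote breaking, numeric contexts usually don't. The probes shown are standard true/false differential pairs, trimmed for illustration.

```python
def sqli_probes(value: str, context: str) -> list[str]:
    """Generate a few context-aware SQL injection probes for one parameter.

    `context` is inferred during testing from how the application treats
    the original value.
    """
    if context == "numeric":
        # No quotes needed; compare responses for the 1=1 / 1=2 pair.
        return [f"{value} AND 1=1", f"{value} AND 1=2"]
    # String context: break out of the quoted literal first.
    return [f"{value}'", f"{value}' AND '1'='1", f"{value}' AND '1'='2"]

print(sqli_probes("42", "numeric")[0])    # 42 AND 1=1
print(sqli_probes("alice", "string")[1])  # alice' AND '1'='1
```

The differential pairs matter more than any single payload: a response that changes between the always-true and always-false variants is evidence of injection even when no error message appears.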

Business logic testing. We analyze application workflows and test the assumptions they rely on. Can steps be skipped? Can parameters be manipulated? Do race conditions exist in financial operations? Can discount or referral systems be abused? This is where human understanding of the application's purpose is essential.
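The race conditions mentioned above are typically check-then-act windows. A deliberately simplified, sequential model of the window (no real threading, hypothetical credit balance) shows why two "simultaneous" redemptions can both succeed:

```python
# A check-then-act redemption flow with a race window: two concurrent
# requests both pass the balance check before either deducts.
balance = {"credits": 100}

def check(amount):   # step 1 of the flow: validate funds
    return balance["credits"] >= amount

def deduct(amount):  # step 2 of the flow: spend funds
    balance["credits"] -= amount

# Interleave two requests the way a race-condition test forces them to land:
r1_ok = check(100)   # request 1 passes the check
r2_ok = check(100)   # request 2 also passes -- the race window
if r1_ok:
    deduct(100)
if r2_ok:
    deduct(100)
print(balance["credits"])  # -100: 200 credits spent from a 100-credit balance
```

In practice the window is forced by firing the requests concurrently; the fix is making check-and-deduct atomic (a database transaction with the right isolation, or a conditional update).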

API-specific testing. REST and GraphQL APIs get dedicated testing for mass assignment, excessive data exposure, broken object-level authorization, improper rate limiting, and injection through API-specific vectors like GraphQL query depth and batch operations.
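A mass assignment probe is straightforward to sketch: take a legitimate update body and graft on privileged fields the endpoint should reject. The field names below are hypothetical and vary per target.

```python
def mass_assignment_probes(legit_body: dict) -> dict:
    """Extend a legitimate update body with privileged fields the endpoint
    should reject. Field names are illustrative; real tests draw them from
    API responses, documentation, and source review."""
    extras = {"role": "admin", "is_verified": True}
    return {**legit_body, **extras}

body = mass_assignment_probes({"display_name": "alice"})
print(body["role"])  # admin
```

The probe itself proves nothing; the test is submitting the expanded body, then re-fetching the object to see whether the privileged fields persisted.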

Configuration and deployment review. Server configurations, TLS settings, security headers, error handling, debug mode indicators, default credentials, exposed management interfaces, and cloud resource permissions.
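The security headers portion of the configuration review reduces to a presence check against a baseline. A minimal sketch; the baseline below is a common core set, not an exhaustive policy.

```python
# A common baseline; real reviews tailor this to the application.
REQUIRED_HEADERS = {
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "x-frame-options",
}

def missing_security_headers(headers: dict) -> set[str]:
    """Return baseline headers absent from a response's header map."""
    present = {k.lower() for k in headers}
    return REQUIRED_HEADERS - present

resp_headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(sorted(missing_security_headers(resp_headers)))
# ['content-security-policy', 'strict-transport-security', 'x-content-type-options']
```

Presence is only half the check; the review also validates the values (e.g. a CSP that actually restricts script sources, an HSTS max-age long enough to matter).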

How We Test

We don't follow a checklist mechanically. The testing process is iterative — findings in one area inform testing in another. An information disclosure vulnerability that reveals internal API structure leads to testing of those internal endpoints. A weak session management implementation prompts deeper investigation of authentication flows. A missing rate limit on one endpoint triggers a review of rate limiting across all endpoints.

This adaptive approach is why manual testing finds vulnerability chains that automated tools miss entirely. A scanner tests each endpoint in isolation. A researcher follows the thread from one finding to the next, building attack chains that demonstrate real-world impact.

Phase 5: Exploitation and Proof of Concept

Every finding is verified through exploitation. We don't report theoretical vulnerabilities — if we can't demonstrate the impact, it doesn't go in the report.

For each vulnerability, we develop a proof of concept that demonstrates:

  • Reproduction steps — exact requests, parameters, and sequence needed to trigger the vulnerability
  • Impact demonstration — what an attacker achieves through exploitation (data accessed, privileges gained, actions performed)
  • Attack chain documentation — how the vulnerability connects to other findings to produce greater impact

Proof-of-concept development serves two purposes. First, it eliminates false positives — if we can't reproduce it, we don't report it. Second, it gives your development team everything they need to understand, reproduce, and fix the issue without guessing about the details.

We follow responsible exploitation practices. We never exfiltrate real user data, we use test accounts for demonstration, and we stop exploitation at the point where impact is clearly demonstrated rather than maximizing damage.

Phase 6: Reporting

The report is the deliverable. If it's not clear, specific, and actionable, the assessment's value is lost. We don't produce scan dumps — every report is written by the researchers who conducted the testing.

Report Structure

Executive summary. A non-technical overview of the assessment scope, approach, key findings, and overall risk posture. Written for leadership and stakeholders who need to understand the business impact without reading technical details.

Findings summary. A prioritized table of all findings with severity ratings, CVSS scores, affected components, and current status. This gives the development team an immediate overview of what needs attention and in what order.

Individual findings. Each finding includes:

  • Descriptive title and unique identifier
  • Severity rating with CVSS 3.1 score and vector
  • Affected endpoint(s) and component(s)
  • Detailed technical description of the vulnerability
  • Step-by-step proof-of-concept reproduction with exact requests and responses
  • Business impact assessment — what an attacker can achieve and what's at risk
  • Specific remediation guidance with code-level recommendations where applicable
  • References to relevant CWE identifiers, OWASP categories, and external resources

Attack chains. Where individual findings combine to produce greater impact, we document the full chain — showing how a low-severity information disclosure leads to a medium-severity authentication bypass that enables a critical data exfiltration.

Methodology notes. What was tested, how it was tested, what tools were used, and what areas received the most focus. This provides transparency and helps you understand the assessment's coverage.

What Makes Our Reports Different

We've delivered reports based on over 1,400 findings across 400+ targets. Every report shares these characteristics:

  • Zero false positives. Every finding is verified through proof-of-concept exploitation.
  • Developer-ready. Remediation guidance includes specific code patterns, configuration changes, and architectural recommendations — not generic advice like "implement input validation."
  • Business context. Severity ratings consider the application's specific context, not just the technical CVSS calculation. A medium-severity vulnerability in a healthcare platform's patient data API may warrant higher urgency than the same technical flaw in a marketing site.
  • Complete attack chains. Individual findings that combine into more severe attack paths are documented as chains, showing the full path from initial access to maximum impact.

Phase 7: Remediation Verification and Retesting

A report is not the end of the engagement. After your development team applies fixes, we retest every finding to verify:

  • The vulnerability is resolved. The original proof of concept no longer succeeds.
  • The fix is complete. Variations of the original attack vector are also blocked — not just the specific payload we demonstrated.
  • No regressions were introduced. The fix didn't break related functionality or introduce new vulnerabilities.

Retesting typically occurs within a defined window after the initial report delivery. We update the report with verification status for each finding, providing a clear record of what's been fixed and what remains open.

This closure loop is essential. Without verification, you're trusting that fixes work based on the developer's assessment alone. Our retesting confirms it with the same methodology that found the vulnerability in the first place.

What This Process Produces

The output of a Raijuna assessment is not a list of scanner findings. It's a verified, prioritized, actionable map of your application's security posture — built through manual testing by researchers who understand your application, your business context, and the attack techniques that real adversaries will use against you.

The methodology is consistent. The coverage is thorough. The findings are verified. The reports are actionable. That's what 400+ assessments and 1,400+ findings have refined into a process that works.

Need your application tested for security vulnerabilities? Get in touch.

Tags: methodology, security-assessment, penetration-testing, process, remediation, reporting

Summary

A complete walkthrough of how Raijuna conducts security assessments — from initial scoping through reconnaissance, manual testing, and exploitation to delivering actionable reports with verified findings and remediation verification.

Key Takeaways

  1. Every engagement begins with precise scope definition to ensure complete coverage without wasted effort
  2. Reconnaissance and attack surface mapping occur before any testing begins, building a comprehensive target model
  3. Manual testing is the core of the methodology — automated scanners supplement but never replace human analysis
  4. Every finding is verified with proof-of-concept exploitation before it reaches the report
  5. Reports include executive summaries, technical details, CVSS scores, and code-level remediation guidance
  6. Remediation verification and retesting confirm that fixes actually resolve the vulnerabilities

Frequently Asked Questions

How long does a security assessment take?

Timeline depends on scope. A focused web application assessment typically takes 1-2 weeks of active testing, plus time for reporting and remediation verification. Larger engagements covering multiple applications, APIs, and infrastructure may take 3-4 weeks. We scope each engagement individually based on complexity.

How is a security assessment different from a vulnerability scan?

A vulnerability scan runs automated tools that check for known signatures and produces a list of potential issues. A security assessment involves manual testing by experienced researchers who understand the application's business logic, test authorization boundaries, chain vulnerabilities together, and verify every finding with proof-of-concept exploitation.

What do we receive at the end of an engagement?

A detailed report containing an executive summary, individual findings with severity ratings and CVSS scores, proof-of-concept reproduction steps, attack chain documentation, and specific remediation guidance. Every finding is verified and actionable — no false positives, no generic advice.

Do you retest after we apply fixes?

Yes. Remediation verification is included in every engagement. After your team applies fixes, we retest each finding to confirm the vulnerability is resolved and that the fix didn't introduce new issues. This ensures findings are truly closed.