CrawSecure

Trust & Privacy

Security tools need to be trustworthy. Here is exactly how CrawSecure handles your data.

Data flow: what goes where

● CLI path (100% local)

your machine
  └─ npx crawsecure .
       └─ reads files → memory
       └─ applies rules
       └─ prints report
       └─ [optional] --output
            └─ crawsecure.json
               └─ stays on disk
                  (you control it)

✓ Zero network calls

● Web scanner path

browser
  └─ File API → memory
  └─ @crawsecure/browser
       └─ applies rules
       └─ result shown in UI
       └─ [opt-in] Save scan
            └─ POST /api/scans
               └─ score: 72
               └─ risk: "HIGH"
               └─ rules: [...]
               └─ filesScanned: 12
               (no code, no paths)

✓ File contents never leave the tab
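The rule pass itself is a pure in-memory transform. As a minimal sketch (with made-up rules and a hypothetical `scanInMemory` helper; the real rule set ships in @crawsecure/browser), file text read via the File API can be reduced to rule IDs without retaining names, paths, or contents:

```typescript
// Assumed rule shape for illustration only.
interface Rule {
  id: string;
  pattern: RegExp;
  severity: "critical" | "warning" | "info";
}

// Hypothetical example rules, loosely matching the rule IDs shown above.
const exampleRules: Rule[] = [
  { id: "eval", pattern: /\beval\s*\(/, severity: "critical" },
  { id: "curl", pattern: /\bcurl\s+https?:/, severity: "warning" },
];

// Only rule IDs escape this function; the file strings are never
// stored, logged, or sent anywhere.
function scanInMemory(contents: string[], rules: Rule[]): string[] {
  const triggered = new Set<string>();
  for (const text of contents) {
    for (const rule of rules) {
      if (rule.pattern.test(text)) triggered.add(rule.id);
    }
  }
  return Array.from(triggered);
}
```

In the browser, the `contents` array would come from `File.text()` on files the user picked; nothing in this flow requires a network call.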

Privacy guarantees

✅ Your code never leaves your machine

All analysis runs locally, in the CLI or in your browser. File contents are processed in memory and discarded immediately after the scan. No uploads, no exceptions.

✅ Only security signals are stored

When you choose to save a scan (authenticated users), we store only: overall score, rule names triggered, file count, and severity counts. Never file paths, never code snippets.

✅ Scans are read-only

The engine never writes to disk, never executes code from scanned files, and never modifies your system in any way.

✅ Open source engine

The scanner rules are public on GitHub. You can audit exactly what patterns we detect. No black boxes, no hidden logic.

✅ No dark patterns

We do not force uploads. We do not run hidden network calls during analysis. Saving a scan is always opt-in. You are always in control.

✅ Verifiable in your browser

Open DevTools → Network tab while running a scan. You will see zero outbound calls during analysis. The scan payload (if you choose to save) is shown to you before submission.

Exact data stored when you save a scan

{
  "scanId":       "550e8400-e29b-41d4-a716-446655440000",
  "userId":       "github|12345678",
  "summary":      { "critical": 2, "warning": 5, "info": 1 },
  "score":        68,
  "risk":         "HIGH",
  "rulesTriggered": ["eval", "curl"],
  "filesScanned": 42,
  "createdAt":    "2026-03-01T12:00:00Z"
}

Nothing else. Specifically, we never store:

  • File names or paths
  • File contents or code snippets
  • Directory structure
  • Your Git history or repository metadata

The server validates every save request with a strict Zod schema that only accepts the fields above. Read the source →
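As an illustration (not the actual server code), a strict allow-list check of this kind can be sketched in plain TypeScript. The real implementation uses a Zod schema; the field names below simply mirror the example payload, and `isValidScanPayload` is a hypothetical name:

```typescript
// Allow-list of field names, mirroring the example record above.
const ALLOWED_KEYS = new Set([
  "scanId", "userId", "summary", "score", "risk",
  "rulesTriggered", "filesScanned", "createdAt",
]);

function isValidScanPayload(payload: Record<string, unknown>): boolean {
  // Reject any key outside the allow-list (the "strict" behaviour),
  // so fields like filePath or fileContents can never be persisted.
  for (const key of Object.keys(payload)) {
    if (!ALLOWED_KEYS.has(key)) return false;
  }
  // Spot-check the types of a few required fields.
  return (
    typeof payload.scanId === "string" &&
    typeof payload.score === "number" &&
    Array.isArray(payload.rulesTriggered) &&
    (payload.rulesTriggered as unknown[]).every((r) => typeof r === "string")
  );
}
```

The key property is the unknown-key rejection: even a well-typed request that smuggles in an extra field is refused before anything reaches the database.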

Third-party services we use

CrawSecure uses three external services. Here is exactly what each one receives.

GitHub OAuth

Authentication only

Receives: GitHub username, public profile, email (optional)

We store: Username and email, kept in your Firestore user document

Privacy policy →

Firebase Firestore

Scan history & user data

Receives: Aggregated scan signals, usage counts, user profile

We store: No file paths, no code. Only numbers and rule IDs.

Privacy policy →

Stripe

Payment processing (PRO plan)

Receives: Name, email, payment method (handled entirely by Stripe)

We store: Only the Stripe customer ID and subscription status

Privacy policy →
Questions? Open a GitHub issue and we'll answer publicly.