Trust & Privacy
Security tools need to be trustworthy. Here is exactly how CrawSecure handles your data.
Data flow: what goes where

CLI path (100% local)
your machine
├─ npx crawsecure .
├─ reads files → memory
├─ applies rules
├─ prints report
└─ [optional] --output
   └─ crawsecure.json (stays on disk; you control it)

Zero network calls.
Web scanner path
browser
├─ File API → memory
├─ @crawsecure/browser
├─ applies rules
├─ result shown in UI
└─ [opt-in] Save scan
   └─ POST /api/scans
      ├─ score: 72
      ├─ risk: "HIGH"
      ├─ rules: [...]
      └─ filesScanned: 12
      (no code, no paths)

File contents never leave the tab.
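Both paths share the same model: file text is loaded into memory, pattern rules run over it, and only the findings survive. A minimal sketch of that rule-application step in TypeScript — the `Rule` shape and `scanText` function are illustrative, not the real @crawsecure/browser API; the rule IDs mirror names seen in the public rule set:

```typescript
// Illustrative only: a tiny in-memory rule engine, not the actual CrawSecure API.
type Severity = "critical" | "warning" | "info";

interface Rule {
  id: string;        // e.g. "eval", "curl" — rule names, never file contents
  pattern: RegExp;   // what to look for in the text
  severity: Severity;
}

const RULES: Rule[] = [
  { id: "eval", pattern: /\beval\s*\(/, severity: "critical" },
  { id: "curl", pattern: /\bcurl\s+https?:/, severity: "warning" },
];

// Scans a string that is already in memory; returns only rule IDs.
// The input text itself is never stored or transmitted.
function scanText(text: string, rules: Rule[]): { rulesTriggered: string[] } {
  const rulesTriggered = rules
    .filter((r) => r.pattern.test(text))
    .map((r) => r.id);
  return { rulesTriggered };
}
```

The CLI fills such strings from disk; the web scanner gets them from the browser File API. In both cases the string is discarded once the function returns.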
Privacy guarantees
All analysis runs locally, in the CLI or in your browser. File contents are processed in memory and discarded immediately after the scan. No uploads, no exceptions.
When you choose to save a scan (authenticated users), we store only: overall score, rule names triggered, file count, and severity counts. Never file paths, never code snippets.
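To make that reduction concrete, here is a sketch (with illustrative names, not the actual codebase) of how per-file findings could be collapsed into the stored summary. Note that the path and snippet fields exist only during the scan and are deliberately dropped; only counts and rule IDs remain:

```typescript
type Severity = "critical" | "warning" | "info";

// What the engine sees while scanning (illustrative shape).
interface Finding {
  filePath: string;   // present during the scan...
  snippet: string;    // ...but intentionally dropped below
  ruleId: string;
  severity: Severity;
}

// What actually gets persisted when you opt in to saving a scan.
interface SavedSummary {
  summary: Record<Severity, number>;
  rulesTriggered: string[];
  filesScanned: number;
}

function toSavedSummary(findings: Finding[], filesScanned: number): SavedSummary {
  const summary: Record<Severity, number> = { critical: 0, warning: 0, info: 0 };
  const rules = new Set<string>();
  for (const f of findings) {
    summary[f.severity] += 1;
    rules.add(f.ruleId); // rule name only — no path, no snippet
  }
  return { summary, rulesTriggered: [...rules].sort(), filesScanned };
}
```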
The engine never writes to disk, never executes code from scanned files, and never modifies your system in any way.
The scanner rules are public on GitHub. You can audit exactly what patterns we detect. No black boxes, no hidden logic.
We do not force uploads. We do not run hidden network calls during analysis. Saving a scan is always opt-in. You are always in control.
Open the DevTools Network tab while running a scan. You will see zero outbound calls during analysis. The scan payload (if you choose to save) is shown to you before submission.
Exact data stored when you save a scan
{
  "scanId": "550e8400-e29b-41d4-a716-446655440000",
  "userId": "github|12345678",
  "summary": { "critical": 2, "warning": 5, "info": 1 },
  "score": 68,
  "risk": "HIGH",
  "rulesTriggered": ["eval", "curl"],
  "filesScanned": 42,
  "createdAt": "2026-03-01T12:00:00Z"
}

Nothing else. Specifically, we never store:
- File names or paths
- File contents or code snippets
- Directory structure
- Your Git history or repository metadata
The server validates every save request with a strict Zod schema that only accepts the fields above. Read the source →
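The key property of a strict schema is that unknown keys reject the whole request. The sketch below is a hand-rolled stand-in with the same rejection behavior — the real server uses Zod, and `validateSavePayload` and `ALLOWED_KEYS` are hypothetical names for illustration:

```typescript
// Illustrative strict validator mirroring the saved-scan shape above.
// The production server uses a Zod schema with equivalent semantics.
const ALLOWED_KEYS = new Set([
  "scanId", "userId", "summary", "score", "risk",
  "rulesTriggered", "filesScanned", "createdAt",
]);

function validateSavePayload(body: unknown): boolean {
  if (typeof body !== "object" || body === null) return false;
  const obj = body as Record<string, unknown>;
  // Strict mode: any key outside the allow-list rejects the request,
  // so file paths or code snippets can never sneak into storage.
  for (const key of Object.keys(obj)) {
    if (!ALLOWED_KEYS.has(key)) return false;
  }
  return typeof obj.score === "number" && Array.isArray(obj.rulesTriggered);
}
```

A payload that smuggles in an extra field (say, file contents) fails validation outright rather than being silently trimmed.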
Third-party services we use
CrawSecure uses three external services. Here is exactly what each one receives.
GitHub OAuth (authentication only)
Receives: GitHub username, public profile, email (optional)
We store: username and email in the Firestore user document
Privacy policy →

Firebase Firestore (scan history & user data)
Receives: aggregated scan signals, usage counts, user profile
We store: no file paths, no code; only numbers and rule IDs
Privacy policy →

Stripe (payment processing, PRO plan)
Receives: name, email, payment method (handled entirely by Stripe)
We store: only the Stripe customer ID and subscription status
Privacy policy →