What is pixelscan?
What you can do with it
- Generate a probabilistic browser fingerprint for each incoming request.
- Detect automation (headless browsers, automation frameworks, emulators) and anomalous execution environments.
- Identify proxy/VPN usage, datacenter vs. residential IPs, and WebRTC leaks.
- Produce a per-request risk score and a breakdown of contributing signals for explainability.
- Stream detection results to your systems (webhooks, logs, analytics) for automated workflows.
- Use the live UI to debug client environments, reproduce edge cases, and export diagnostic reports.
Primary uses
- Real-time login and registration risk decisions (block, step-up, challenge, or allow).
- Fraud prevention and transaction screening for payments and account changes.
- Bot mitigation for scraping, click fraud, and API abuse.
- Protecting free trials, promotional flows, and rate-limited endpoints.
- Ad verification and invalid-traffic (IVT) reduction.
How to use / integrate
1. Sign up and obtain an API key.
2. Insert a lightweight client snippet or call the API server-side to collect fingerprint signals.
3. Receive a JSON response containing risk_score, signal_breakdown, ip_evidence, and recommended_action.
4. Map the risk_score to your business policy (e.g., score ≥ 80 → block; 50–79 → require MFA).
5. Log events, trigger webhooks for high-risk cases, and feed results into your SIEM or fraud pipeline.
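The threshold mapping in step 4 can be sketched as follows. This is a minimal illustration of the example policy (score ≥ 80 → block, 50–79 → require MFA); the function name and action strings are assumptions, not part of any official SDK:

```python
def map_risk_to_action(risk_score: int) -> str:
    """Map a 0-100 risk score to a policy action using the example thresholds."""
    if risk_score >= 80:
        return "block"
    if risk_score >= 50:
        return "require_mfa"
    return "allow"

# Example: the sample response's score of 72 falls in the step-up band.
action = map_risk_to_action(72)  # "require_mfa"
```

In practice you would tune these cut-offs per flow (see "Best practices" below) rather than hard-coding a single global policy.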
Example (pseudo) API request
curl -X POST https://api.pixelscan.dev/v1/detect \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{"client_signals": {...}, "ip": "203.0.113.45", "session_id": "abc123"}'
Response includes: { "risk_score": 72, "flags": ["webrtc_leak", "canvas_anomaly"], "explanation": {...} }
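A hypothetical server-side handler for a response like the one above might look like this. The snake_case field names (risk_score, flags) are assumed to match the example payload; verify them against the actual API reference:

```python
import json


def parse_detection(raw: str) -> tuple[int, list[str]]:
    """Extract the risk score and flag list from a detection response body.

    Assumes the JSON shape shown in the example response; flags defaults
    to an empty list if the field is absent.
    """
    body = json.loads(raw)
    return body["risk_score"], body.get("flags", [])


raw = '{"risk_score": 72, "flags": ["webrtc_leak", "canvas_anomaly"]}'
score, flags = parse_detection(raw)  # score == 72
```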
Outputs & actions
- Numeric risk score (0–100).
- Discrete flags (e.g., headless, webrtc_leak, residential_proxy).
- Signal weights and a short textual explanation to support triage.
- A recommended-action field you can map to business rules.
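One way to combine the numeric score with the discrete flags is to treat certain flags as hard blocks regardless of score. This is a sketch under assumed flag names (headless, webrtc_leak) and illustrative thresholds, not a vendor-prescribed policy:

```python
# Flags that should block outright, independent of the numeric score
# (which flags deserve this treatment is a business decision).
HARD_BLOCK_FLAGS = {"headless"}


def decide(risk_score: int, flags: list[str], block_at: int = 80) -> str:
    """Combine score and flags: hard-block flags override the score bands."""
    if HARD_BLOCK_FLAGS & set(flags):
        return "block"
    if risk_score >= block_at:
        return "block"
    if risk_score >= 50:
        return "review"
    return "allow"
```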
Best practices & privacy
- Combine pixelscan signals with other context (behavioral, device, historical) for the highest fidelity.
- Set different thresholds per flow (login vs. checkout vs. API).
- Use the explainability output to tune thresholds and reduce false positives.
- Follow privacy and compliance guidelines: minimize retention of raw identifiers, publish a privacy policy, and support data-subject requests.
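Per-flow thresholds can be kept in a small config table so each flow gets its own block/challenge bands. The values below are illustrative placeholders, not recommendations:

```python
# Illustrative per-flow thresholds; tune against your own false-positive data.
FLOW_THRESHOLDS = {
    "login":    {"block": 85, "challenge": 60},
    "checkout": {"block": 75, "challenge": 50},
    "api":      {"block": 70, "challenge": 40},
}


def action_for(flow: str, risk_score: int) -> str:
    """Resolve a score to an action using the flow's own thresholds."""
    t = FLOW_THRESHOLDS[flow]
    if risk_score >= t["block"]:
        return "block"
    if risk_score >= t["challenge"]:
        return "challenge"
    return "allow"
```

The same score can then yield different actions per flow, e.g. a score of 76 blocks a checkout but only challenges a login.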
Metrics to track
- Request volume and processing latency.
- Conversion impact by threshold (false-positive rate vs. prevented fraud).
- API call success rate and first-call integration success.
- Signal-level hit rates (how often specific flags appear).
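Signal-level hit rates are straightforward to compute from stored responses. A sketch, assuming each logged response carries a "flags" list as in the example payload:

```python
from collections import Counter


def flag_hit_rates(responses: list[dict]) -> dict[str, float]:
    """Fraction of responses in which each flag appears at least once."""
    counts = Counter(f for r in responses for f in set(r.get("flags", [])))
    n = len(responses)
    return {flag: c / n for flag, c in counts.items()}


sample = [
    {"flags": ["headless"]},
    {"flags": ["headless", "webrtc_leak"]},
    {"flags": []},
    {"flags": ["webrtc_leak"]},
]
rates = flag_hit_rates(sample)  # each flag appears in half the responses
```

Tracking these rates over time helps spot drift, e.g. a sudden spike in one flag after a client-side release.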