Scaling online interviews requires trustworthy anti-cheat measures without invasive surveillance. We treat the browser-based approach pioneered by Huawei researchers as a blueprint: inference stays on the candidate's device, and only anonymized events are logged.
The study reports 75.4% accuracy for detecting phones and 72.0% for laptops using COCO-SSD with a MobileNetV2 backbone.
Local inference achieved 97.1% accuracy distinguishing human voices from background audio in the published results.
In the reference design, face-api.js embeddings provide continuous presence validation, paired with privacy-preserving segmentation.
BodyPix segmentation is applied locally to blur non-candidate regions before any frames are eligible for upload.
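The blur-then-gate step can be sketched as follows. This is a minimal illustration, not the reference implementation: `mask` stands in for the per-pixel person mask that BodyPix's `segmentPerson()` returns in its `data` field, and zeroing masked-out pixels stands in for a real blur; the helper names are assumptions.

```javascript
// Sketch: only frames that have passed local blurring can ever be uploaded.
// `mask[i]` is 1 where pixel i belongs to the candidate, 0 elsewhere
// (mirroring the shape of BodyPix's segmentPerson() output).
function blurNonCandidateRegions(framePixels, mask) {
  // Zeroing stands in for a Gaussian blur over non-candidate regions.
  return framePixels.map((value, i) => (mask[i] ? value : 0));
}

// A frame becomes eligible for upload only after local blurring has run.
function prepareForUpload(framePixels, mask) {
  return {
    pixels: blurNonCandidateRegions(framePixels, mask),
    blurredLocally: true, // unblurred pixels never leave the device
  };
}
```

Because gating happens before any network call, the unblurred frame never exists outside the candidate's browser.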
Enterprise teams require clear data boundaries, explicit consent flows, and auditable logs.
Centralized inference increases infrastructure cost and complicates cross-border data transfer.
Manual review backlogs grow when reviewers triage ambiguous clips without context.
Opaque monitoring erodes candidate trust and creates compliance risk.
Favor on-device inference so costs scale with review volume rather than total traffic.
Keep blurring and evidence handling local until escalation criteria are met.
Provide reviewers with contextual timelines and candidate notifications to reinforce transparency.
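A contextual timeline of this kind might be assembled as below. The field names (`t`, `flagged`) and the 30-second context window are assumptions for illustration; the point is that each flagged event reaches the reviewer bundled with the events that preceded it, so no clip is triaged in isolation.

```javascript
// Sketch: pair each flagged event with its preceding context window
// (field names and window size are illustrative assumptions).
function contextualTimeline(events, windowMs = 30000) {
  const sorted = [...events].sort((a, b) => a.t - b.t);
  return sorted
    .filter(e => e.flagged)
    .map(e => ({
      event: e,
      // Events that occurred shortly before the flag, for reviewer context.
      context: sorted.filter(c => c.t < e.t && e.t - c.t <= windowMs),
      candidateNotified: true, // transparency: the candidate sees the same record
    }));
}
```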
TensorFlow.js runs directly in the candidate browser, with WASM fallbacks for edge devices.
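Backend selection with a fallback chain might look like the sketch below. TensorFlow.js's `tf.setBackend()` resolves to `false` when a backend cannot initialize; the `pickBackend` helper and the preference order are assumptions, and `tf` is passed in rather than imported so the logic stays framework-agnostic.

```javascript
// Sketch: try preferred TensorFlow.js backends in order, falling back to
// WASM (and finally CPU) on devices where WebGL is unavailable.
// Assumes `tf` exposes setBackend(name) -> Promise<boolean>, as tfjs does.
async function pickBackend(tf, preferred = ['webgl', 'wasm', 'cpu']) {
  for (const name of preferred) {
    if (await tf.setBackend(name)) return name; // first backend that initializes wins
  }
  throw new Error('No TensorFlow.js backend available');
}
```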
Weighted signals (device, voice, face) feed a risk score that determines whether to store obfuscated evidence or simply log metadata.
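A minimal sketch of that scoring step, with illustrative weights and threshold (the source does not publish the actual values):

```javascript
// Illustrative weights for the three signal families (not from the study).
const WEIGHTS = { device: 0.5, voice: 0.3, face: 0.2 };

// signals: { device, voice, face }, each a confidence in [0, 1].
function riskScore(signals) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [key, weight]) => sum + weight * (signals[key] ?? 0),
    0
  );
}

// Above the threshold, store obfuscated (blurred) evidence; below it,
// keep only anonymized metadata.
function decideAction(signals, threshold = 0.6) {
  return riskScore(signals) >= threshold
    ? 'store-obfuscated-evidence'
    : 'log-metadata-only';
}
```

A single weak signal stays metadata-only; corroborating signals across families are what push a session over the evidence threshold.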
Inline notifications explain detections in plain language and provide remediation guidance before penalties apply.
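The notification layer can be as simple as a lookup from detection type to plain-language explanation plus remediation guidance. The message text below is illustrative, not the product copy:

```javascript
// Sketch: plain-language explanation and remediation per detection type
// (all copy is illustrative).
const NOTIFICATIONS = {
  device: {
    message: 'A second device (such as a phone) appears to be in view.',
    remediation: 'Please move other devices out of the camera frame.',
  },
  voice: {
    message: 'Another voice was detected in the room.',
    remediation: 'Please make sure you are alone during the interview.',
  },
  face: {
    message: 'Your face is not clearly visible to the camera.',
    remediation: 'Please re-center yourself in the frame.',
  },
};

// Returns the inline notification text, or null for unknown detection types.
function notify(detectionType) {
  const n = NOTIFICATIONS[detectionType];
  return n ? `${n.message} ${n.remediation}` : null;
}
```

Surfacing the remediation step alongside the explanation gives candidates a chance to self-correct before any penalty applies.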