
Biometric Data

Personal data derived from technical processing of physical or behavioral characteristics enabling unique identification

Biometric Data refers to personal data resulting from specific technical processing relating to the physical, physiological, or behavioral characteristics of a natural person, which allows or confirms unique identification. Unlike passwords or tokens, biometric data is usually immutable—you cannot change your fingerprint or retina if it is compromised. This permanence makes it among the most sensitive categories of personal data.

Under GDPR, a photograph of a face is not automatically biometric data. It only becomes biometric data when it undergoes "specific technical processing" (such as facial mapping) to create a unique template used for identification. A stored photo is personal data; a facial embedding vector derived from that photo is special category biometric data subject to Article 9 restrictions.
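The photo-versus-template distinction can be sketched in code. This is a minimal illustration, not a real facial-recognition pipeline: `embed()` is a deterministic stand-in for a facial-mapping model, and the similarity threshold is an assumed value. The point is that the derived vector, unlike the raw photo, enables the identification comparison that triggers Article 9.

```python
import numpy as np

def embed(photo_bytes: bytes, dim: int = 128) -> np.ndarray:
    """Placeholder for a facial-mapping model that turns a photo into a
    unique template (embedding vector). Deterministic stub for illustration."""
    rng = np.random.default_rng(abs(hash(photo_bytes)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalized template

def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.7) -> bool:
    """Identification step: cosine similarity against a threshold. This
    comparison capability is what makes the template biometric data."""
    return float(a @ b) >= threshold

photo = b"raw JPEG bytes of a face"  # personal data (a stored photo)
template = embed(photo)              # special category biometric data
print(same_person(template, embed(photo)))  # same input -> same template
```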

Regulatory definitions vary significantly across jurisdictions. Illinois's BIPA narrowly enumerates "biometric identifiers" as retina or iris scans, fingerprints, voiceprints, and scans of hand or face geometry, explicitly excluding writing samples, photographs, and demographic data. HIPAA lists "biometric identifiers, including finger and voice prints" and "full face photographic images" among the 18 identifiers that must be removed for de-identification. California's CCPA takes a broader approach, covering physiological, biological, or behavioral characteristics that contain identifying information, along with DNA and sleep, health, or exercise data. GDPR covers facial images and fingerprint data but extends to any physical, physiological, or behavioral characteristic used for unique identification.
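BIPA's closed-list structure lends itself to a simple lookup. The sketch below encodes only the enumerated identifiers and exclusions summarized above; it is an illustrative simplification, not legal advice or an exhaustive mapping.

```python
# BIPA's enumerated "biometric identifiers" and explicit exclusions,
# as summarized in the entry above (simplified sketch).
BIPA_IDENTIFIERS = {
    "retina_scan", "iris_scan", "fingerprint",
    "voiceprint", "hand_geometry_scan", "face_geometry_scan",
}
BIPA_EXCLUSIONS = {"writing_sample", "photograph", "demographic_data"}

def covered_by_bipa(data_type: str) -> bool:
    """True only for the statute's enumerated identifiers."""
    return data_type in BIPA_IDENTIFIERS

print(covered_by_bipa("face_geometry_scan"))  # True: enumerated identifier
print(covered_by_bipa("photograph"))          # False: explicitly excluded
```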

Behavioral biometrics represent an expanding category. These measure how a person acts rather than what they are: keystroke dynamics (typing rhythm and pressure), gait analysis (walking patterns), and mouse/swipe patterns. Companies may collect behavioral biometrics without realizing it—fraud detection systems analyzing typing speed can create "shadow liability" under biometric protection laws.
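To see how ordinary event logs become a behavioral biometric, consider keystroke dynamics: dwell time (how long a key is held) and flight time (the gap between keys) form a per-user rhythm profile. The event timestamps below are hypothetical.

```python
from statistics import mean

# (key, press_time_ms, release_time_ms) -- hypothetical capture of "cat"
events = [("c", 0, 95), ("a", 180, 260), ("t", 340, 430)]

dwell = [release - press for _, press, release in events]  # hold durations
flight = [events[i + 1][1] - events[i][2]                  # gaps between keys
          for i in range(len(events) - 1)]

# A profile like this, built from routine fraud-detection telemetry, is
# exactly the kind of derived behavioral data that can create liability.
profile = {"mean_dwell_ms": mean(dwell), "mean_flight_ms": mean(flight)}
print(profile)
```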

When a biometric is captured, the raw image is typically converted into a mathematical template—a hash or vector representing "minutiae points" like ridge endings in fingerprints. While storing only templates is safer than raw images, templates can often be reversed or used for replay attacks, so they remain high-risk biometric data under most regulatory frameworks.
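A rough sketch of template matching, assuming a fingerprint template stored as minutiae points (x, y, ridge angle) rather than the raw image. The points and tolerances are illustrative; real matchers are far more sophisticated, but the structure shows why a template alone still supports identification.

```python
import math

def match_score(stored, probe, dist_tol=5.0, angle_tol=10.0):
    """Fraction of stored minutiae with a close counterpart in the probe."""
    hits = 0
    for (x1, y1, a1) in stored:
        for (x2, y2, a2) in probe:
            if (math.hypot(x1 - x2, y1 - y2) <= dist_tol
                    and abs(a1 - a2) <= angle_tol):
                hits += 1
                break
    return hits / len(stored)

# Illustrative templates: (x, y, ridge_angle_degrees) per minutia point
stored_template = [(10, 20, 45.0), (30, 44, 90.0), (55, 12, 130.0)]
probe_template = [(11, 21, 47.0), (29, 45, 92.0), (80, 80, 10.0)]
print(match_score(stored_template, probe_template))  # 2 of 3 points align
```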

BIPA creates particularly severe liability exposure with statutory damages of $1,000 to $5,000 per violation and a private right of action, leading to multi-million dollar class action settlements. The FTC's Everalbum enforcement action required deletion of both face embedding databases and AI models trained on improperly collected biometric data—establishing that derived biometric data, not just source images, must be destroyed when consent is lacking.

For liability quantification, biometric data commands the highest sensitivity multiplier due to immutability (breaches cannot be remediated by "changing" the compromised identifier), severe regulatory penalties, and the emerging body of enforcement precedent establishing both statutory damages and algorithmic disgorgement remedies.
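One way to operationalize that ranking is a per-category multiplier applied to breach cost models. The multiplier values below are entirely hypothetical; the entry only establishes that biometric data sits at the top of the scale because the compromised identifier cannot be changed.

```python
# Hypothetical sensitivity weights; biometric is highest due to immutability.
SENSITIVITY = {
    "contact_info": 1.0,
    "financial": 3.0,
    "health": 5.0,
    "biometric": 10.0,
}

def breach_exposure(records: int, base_cost_per_record: float,
                    category: str) -> float:
    """Estimated exposure = record count x base cost x sensitivity weight."""
    return records * base_cost_per_record * SENSITIVITY[category]

print(breach_exposure(50_000, 2.0, "biometric"))     # top of the scale
print(breach_exposure(50_000, 2.0, "contact_info"))  # baseline category
```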

Related Regulations

GDPR, BIPA, CCPA