Cybersecurity is going through a transition that deserves more attention than it's getting. The foundational question of online authentication is shifting: no longer "what do you know?" (password, PIN), no longer "what do you have?" (token, smartphone), no longer "what do you look like?" (fingerprint, facial recognition), but "how do you behave?"
The idea is elegant, and the science behind it is solid. A recent article by Brandon Janes on Towards Data Science, titled "Behavior is the New Credential," describes how behavioral biometrics systems are becoming standard practice in banking. The starting point is a 2012 study from U.C. Berkeley called "Touchalytics," which demonstrated that just eleven scroll strokes on a smartphone were enough to identify a specific user within a group of forty-one people, with zero errors. The researchers extracted thirty behavioral features from each stroke (length, velocity, curvature, trajectory, inter-stroke time, even the contact area of the finger), and each proved individually distinctive.
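To make that feature list concrete, here is a minimal sketch of how a few Touchalytics-style stroke features could be computed. The sampling format (timestamp, x, y, contact area) and the function name are illustrative assumptions for this sketch, not the study's actual code.

```python
import math

def stroke_features(events):
    """Compute a few Touchalytics-style features for one scroll stroke.

    `events` is a list of (t, x, y, area) touchscreen samples;
    the sampling format is an assumption of this sketch.
    """
    t0, x0, y0, _ = events[0]
    t1, x1, y1, _ = events[-1]
    duration = t1 - t0
    # End-to-end displacement between first and last touch point.
    direct = math.hypot(x1 - x0, y1 - y0)
    # Path length: sum of distances between consecutive samples.
    path = sum(
        math.hypot(b[1] - a[1], b[2] - a[2])
        for a, b in zip(events, events[1:])
    )
    return {
        "duration": duration,
        "direct_distance": direct,
        "path_length": path,
        # 1.0 means a perfectly straight stroke.
        "straightness": direct / path if path else 1.0,
        "mean_velocity": path / duration if duration else 0.0,
        "mean_touch_area": sum(e[3] for e in events) / len(events),
    }

# Example: a short, slightly curved upward scroll.
stroke = [(0.00, 200, 900, 31), (0.02, 202, 840, 33),
          (0.04, 205, 760, 34), (0.06, 206, 690, 30)]
print(stroke_features(stroke))
```

Individually, each number is unremarkable; it is the joint distribution across dozens of such features, accumulated over many strokes, that becomes a signature.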
The underlying theory is computational motor control, an interdisciplinary field bridging neuroscience, biomechanics, and computer science. The unconscious corrections our brain performs during every gesture, those micro-adjustments happening at the millisecond scale, are so individual that a person's behavioral profile becomes nearly impossible to replicate. Paradoxically, what we think of as "robotic" (these automatic neural corrections) is exactly what makes each of us irreproducibly human.
Why the old defenses are no longer enough #
The context driving this transition is concrete and well documented. Modern malware has reached capabilities that render traditional verification mechanisms obsolete. ProKYC, a deepfake toolkit sold in cybercrime circles, lets threat actors bypass two-factor authentication, facial recognition, and even live video verification checks. BingoMod, a remote access trojan distributed via SMS phishing, masquerades as an antivirus app on Android and allows a remote attacker to intercept credentials, messages, and OTP codes, all the way to executing money transfers directly from the infected device.
Once the device is compromised, everything looks normal from the bank's perspective: the device fingerprint is correct, the IP address checks out, MFA codes align. Traditional verification, which operates at a single point in time (the login), is no longer sufficient. The security perimeter is no longer the gate. It's the entire session.
This is where behavioral biometrics enters the picture, operating as continuous authentication. Anomaly detection models built on each user's specific profile monitor the session from start to finish. When risk signals spike, the system can request additional verification or halt the transaction entirely. When behavior matches the established profile, the session continues seamlessly. The result, ironically, is a better user experience: fewer OTPs, fewer interruptions, more fluidity. Passive security replacing active security.
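As a sketch of what this mechanism looks like in code: a toy per-user profile scores each interaction event, and the session reacts only when the score crosses calibrated thresholds. Everything here (the single z-scored feature, the threshold values, the class and function names) is a simplifying assumption, not a vendor implementation.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Toy per-user model: mean/std of one behavioral feature.

    Real systems model dozens of features; a single z-score on
    inter-keystroke time stands in for the whole profile here.
    """
    mean: float
    std: float

    def risk(self, value: float) -> float:
        z = abs(value - self.mean) / self.std
        return min(z / 4.0, 1.0)  # squash to a [0, 1] risk score

STEP_UP = 0.7   # request extra verification above this score
BLOCK = 0.9     # halt the session above this score

def check_event(profile: UserProfile, value: float) -> str:
    risk = profile.risk(value)
    if risk >= BLOCK:
        return "halt"
    if risk >= STEP_UP:
        return "step-up"      # e.g. one extra OTP
    return "continue"         # the user never notices the check

# Example: a user who normally types with ~120 ms between keystrokes.
profile = UserProfile(mean=120.0, std=15.0)
print(check_event(profile, 118.0))  # -> continue
print(check_event(profile, 165.0))  # -> step-up
print(check_event(profile, 230.0))  # -> halt
```

The design point is that friction is conditional: as long as behavior stays near the learned baseline, the check is invisible.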
The other side of the coin #
So far, this is the cybersecurity industry's narrative. It works, it's technically sound, and it's operationally effective. But it's also a narrative that systematically avoids a question: what does it mean, from a fundamental rights perspective, to turn human behavior into a credential?
Let's start with a regulatory fact that Janes's article doesn't mention. The GDPR, in Article 4(14), defines biometric data as "personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person." The key word is "behavioural." The European legislator explicitly included behavioral data in the definition of biometric data. Article 9 then classifies biometric data processed "for the purpose of uniquely identifying a natural person" as a "special category" of personal data, whose processing is generally prohibited except under specific conditions: explicit consent, substantial public interest, protection of vital interests.
This means that every behavioral biometrics system operating in the European Union is processing special category data. Not generic personal data. Data that requires explicit consent, a Data Protection Impact Assessment (DPIA), purpose limitation, data minimization, and the right to erasure.
The question no cybersecurity vendor likes to face is: how do you reconcile the right to erasure with a behavioral profile built through continuous analysis of thousands of micro-interactions? You can delete a profile, certainly. But can you delete the knowledge derived from that profile, once it has been used to train a model? Once behavioral data has fed a model, the GDPR's principles of data minimization and purpose limitation raise questions that remain largely unsettled.
The AI Act's double bind #
The picture gets more complex with the AI Act, whose regulatory framework for high-risk systems fully applies from August 2026. The intersection of GDPR and AI Act creates a layered regulatory framework for biometric technologies.
The AI Act distinguishes between several types of biometric systems. Real-time remote biometric identification in publicly accessible spaces is prohibited for law enforcement purposes, with narrow exceptions. All other remote biometric identification systems are classified as high-risk. Biometric categorization systems that infer sensitive attributes (race, political opinions, trade union membership, sexual orientation) are prohibited. Emotion recognition systems are banned in workplaces and schools, and classified as high-risk in other contexts.
Where does banking behavioral biometrics fall in this taxonomy? The AI Act explicitly excludes from the definition of remote biometric identification those verification tools that confirm a person is who they claim to be, provided they require the individual's active participation. But behavioral biometrics, by definition, is passive. The user does not "actively participate" in their own behavioral authentication. The system observes them while they do something else. This grey zone between active verification and passive surveillance is precisely the territory where fundamental rights start to strain.
There's an additional element. The AI Act prohibits AI systems that classify people based on their social behavior when such classification leads to unfavorable treatment in contexts unrelated to the original data collection context, or treatment disproportionate to the gravity of the behavior. The line between "behavioral authentication for fraud prevention" and "behavioral profiling for user classification" is not as sharp as the industry would like to believe.
Function creep: the structural risk #
The history of technology teaches us that systems built for a specific purpose tend to expand their scope over time. This phenomenon, known as function creep, is particularly insidious in the field of behavioral biometrics.
A system that today analyzes how you scroll a page to verify you are who you claim to be could tomorrow use the same data to infer your emotional state, your attention level, your cognitive condition, your appetite for financial risk. Behavioral data is extraordinarily rich in implicit information. Your typing rhythm can reveal anxiety or fatigue. Your scrolling speed can indicate interest or boredom. Your touch pressure can suggest irritation or calm.
Banks using this data for fraud prevention today are sitting on an informational asset whose potential value vastly exceeds transaction security. The temptation to monetize this data, or to repurpose it for commercial goals (service personalization, credit scoring, insurance product profiling), is an economic force that internal policies alone will struggle to contain over the long term.
In Australia, a biometric database originally designed to prevent cross-border criminal activity was later used to identify individuals who had lost their documents in bushfires. In that case, the expanded use was well-intentioned. But the precedent was set: once the data exists and the system is operational, purposes expand.
The informatized body #
There's a deeper dimension here, one that goes beyond law and touches the anthropology of technology. Behavioral biometrics transforms the way we interact with our devices into a permanent identification datum. The U.S. National Research Council has described this process as the creation of an "informatized body": a body no longer represented by anatomical features observable to the human eye, but by digital information about the body stored in databases.
When your way of scrolling a page becomes a credential, your unconscious gesture becomes a data point. The spontaneity of your movement is captured, analyzed, modeled, stored. You are not actively providing information, as you do when typing a password. You are simply existing, and the system extracts value from your ordinary existence.
Shoshana Zuboff described this dynamic as the fundamental characteristic of surveillance capitalism: the appropriation of personal experience and its transformation into behavioral data, subsequently used to predict and modify behavior itself. Behavioral biometrics for cybersecurity is, in a sense, the "good" version of this mechanism. But the mechanism is the same. And the distance between the good version and the less good ones is only a matter of declared purposes, which can change.
The asymmetry of consent #
A particularly problematic aspect concerns the nature of consent in these systems. The GDPR requires consent to be "freely given, specific, informed and unambiguous," with an even higher bar when special categories of data are involved. But how can consent to a behavioral biometrics system in your banking app be genuinely free when the alternative is not being able to access your bank account?
European data protection authorities have already addressed this question in analogous contexts. The Dutch DPA fined a company €725,000 because employees perceived fingerprint scanning as an obligation, rendering consent unfree. The Swedish DPA sanctioned a school for using facial recognition to track attendance, citing the power imbalance between the institution and the students.
In the case of banking behavioral biometrics, the imbalance is analogous. Banking services are not optional in contemporary society. If your bank implements behavioral authentication, you have no real alternative to consent. The dynamic resembles an imposed condition more than an informed choice.
The irrevocability paradox #
Unlike a compromised password, which can be changed in seconds, a compromised behavioral profile presents a structural problem: you cannot change how you scroll a page or type on a keyboard with the same ease with which you change a string of characters. Your behavioral patterns are intrinsically tied to your physiology and neurology. They are, in a very concrete sense, you.
This introduces a long-term risk that the industry tends to downplay. If a database of behavioral profiles is breached, the exfiltrated data doesn't become obsolete through credential rotation. It remains usable for as long as the individual's biometric characteristics don't change significantly, which in most cases happens only through aging or following traumatic events.
Vendors of these systems emphasize that profiles are typically processed locally and that only a risk score is transmitted, not raw data. That's a real mitigation, but it doesn't eliminate the fundamental problem: somewhere, in some form, a representation of your unconscious behavior exists as digital data.
Algorithmic discrimination and behavioral bias #
A further critical aspect concerns the discriminatory potential of behavioral biometrics systems. Device interaction patterns are not universal. They vary by age, physical condition, motor disabilities, cultural differences in technology use, and the type of device being used.
An elderly user with arthritic hands will have significantly different scroll and typing patterns from a twenty-year-old. A user with a budget smartphone will produce different sensor data from someone with the latest flagship. A user who alternates between left and right hands, or who uses assistive technologies, will generate atypical profiles.
If the anomaly detection model was trained predominantly on profiles of able-bodied, demographically average users, those who deviate from that average will be subjected more often to additional verification, session blocks, and step-up authentication requests. In other words, the "passive" security the industry advertises as a better user experience could translate into a systematically worse experience for the most vulnerable groups.
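That disparity is measurable before a system is deployed. Here is a minimal sketch of a fairness audit, assuming sessions of legitimate users can be labeled by group and each one records whether it triggered a challenge; the group names and numbers are entirely hypothetical.

```python
from collections import defaultdict

def step_up_rates(sessions):
    """Compare how often each user group gets challenged.

    `sessions` is an iterable of (group, was_challenged) pairs for
    legitimate users only; the data format is an assumption of this
    sketch. A large gap between groups signals disparate friction.
    """
    challenged = defaultdict(int)
    total = defaultdict(int)
    for group, was_challenged in sessions:
        total[group] += 1
        challenged[group] += int(was_challenged)
    return {g: challenged[g] / total[g] for g in total}

# Hypothetical audit data: the same system, two user populations.
audit = (
    [("typical", c) for c in [0, 0, 0, 1] * 25]           # 25% challenged
    + [("motor_impaired", c) for c in [0, 1, 1, 1] * 25]  # 75% challenged
)
print(step_up_rates(audit))  # {'typical': 0.25, 'motor_impaired': 0.75}
```

A gap like this, for users who are all who they claim to be, is exactly the kind of systematic friction the next paragraph's accessibility rules are concerned with.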
The European Accessibility Act (EAA), fully applicable since June 2025, requires digital products and services to be accessible to people with disabilities. A behavioral authentication system that systematically penalizes users with motor or cognitive disabilities raises compliance questions not only under the GDPR and the AI Act, but also under accessibility regulation.
The duty to look beyond the technical solution #
Nothing written above means behavioral biometrics is a bad idea. The cybersecurity problem is real, losses are enormous (the FBI Internet Crime Report documents billions of dollars in annual losses), and traditional defenses are genuinely inadequate against current threats. Continuous authentication is probably the future of digital security.
But the way the industry tells this story is incomplete in a manner that is not accidental. Janes's article, like most of the technical literature on the subject, presents behavioral biometrics exclusively from the standpoint of operational effectiveness. The subtext is: it works better, it's more secure, the user experience improves. All true. But not the whole truth.
For those of us working in European ICT, the responsibility is twofold. On one hand, implementing these technologies where they are needed and where they create real value. On the other, doing so with a regulatory and ethical awareness that the American market, where most of the innovation in this field originates, feels far less urgency to develop.
Europe, with its layered regulatory framework (GDPR, AI Act, EAA, NIS2), is not making life difficult for those who work with technology. It is asking the questions that the market, left to its own devices, does not ask. Questions like: who owns the way you move your finger across a screen? What rights do you have over an unconscious gesture turned into data? What happens when your informatized body becomes a commodity?
These are uncomfortable questions. But the moment behavior becomes a credential, we don't have the luxury of ignoring them.