Facial recognition is just one component within a larger ecosystem

Facial recognition tends to dominate the conversation around surveillance technology. That’s mainly because it’s visible, easy to understand, and controversial enough to attract headlines and regulatory attention. But the bigger story isn’t facial recognition; it’s what happens next.

AI has already moved beyond the face. Today, computers can identify and track individuals without ever seeing their face. Systems now use clothing type and color, body shape, movement patterns, and accessories such as hats and backpacks. These models stitch together information from multiple data sources (CCTV footage, smartphones, police bodycams, drones, and even social media uploads) into a complete timeline of a person’s locations and interactions. The result? Full-spectrum identification across time and space, with no dependency on facial recognition.
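To make the mechanics concrete, here is a minimal Python sketch of how attribute-based tracking could link sightings from different sources by similarity alone. Everything in it, the Detection fields, the embedding values, and the similarity threshold, is a hypothetical assumption for illustration, not a description of any vendor’s actual pipeline.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Detection:
    source: str               # e.g. "cctv", "bodycam", "drone" (illustrative)
    timestamp: float          # seconds since some epoch
    location: str
    attributes: list[float]   # hypothetical embedding of clothing/body/gait features

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two attribute embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_timeline(query: list[float], detections: list[Detection],
                   threshold: float = 0.9) -> list[Detection]:
    """Keep detections whose attributes resemble the query, sorted by time.
    No face is involved anywhere: the query is a clothing/body signature."""
    hits = [d for d in detections if cosine(query, d.attributes) >= threshold]
    return sorted(hits, key=lambda d: d.timestamp)

# Toy usage: the same "red jacket, backpack" signature seen by two sources.
sightings = [
    Detection("cctv",    1000.0, "station", [0.90, 0.10, 0.80]),
    Detection("drone",   1200.0, "plaza",   [0.88, 0.12, 0.79]),
    Detection("bodycam", 1100.0, "street",  [0.10, 0.90, 0.20]),  # different person
]
for d in build_timeline([0.90, 0.10, 0.80], sightings):
    print(d.timestamp, d.source, d.location)
```

The point of the sketch is the shape of the pipeline: once every camera feed is reduced to attribute vectors, cross-source tracking becomes a similarity threshold and a sort, not a facial-recognition problem.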

Companies like Veritone are building this system out at scale. Their product, Track, is already in use by more than 400 organizations, including the U.S. Department of Justice, the Department of Homeland Security, and police departments. It runs on top of Veritone’s aiWARE platform, which combines over 300 AI models trained for tasks like object detection, transcription, image recognition, and analytics. These models work together to pull intelligence from raw, siloed data and deliver unified insights in seconds. That’s not science fiction; it’s operational tech, already deployed and expanding fast.

For a C-suite leader, the reality is straightforward: facial recognition is just the front door to a much larger ecosystem of machine-driven identification. That means privacy regulations aimed solely at facial data may be missing the larger picture. If you’re not looking at how your organization and your customers are being identified through non-facial attributes, you’re behind.

Convenience masks privacy and security risks

Facial recognition works. It’s consistent. It unlocks phones, speeds up border control, secures access points, and verifies transactions. It delivers measurable advantages in convenience and throughput. For many organizations, it’s a productivity win. But most people don’t consider what they’re giving up in exchange.

Biometric data is different from a password. You can change a password. You can’t change your face. If that data is compromised, there’s no reset button. Your face, once stolen or indexed, becomes a permanent identifier that can be weaponized, either by malicious actors or by systems that weren’t built with safeguards in mind. There’s also a psychological side. When people believe they’re being watched or tracked, free expression takes a hit. Surveillance creates pressure. You get a culture where risk-taking, candor, or legitimate dissent feels unsafe.

This is more than theory. Governments are already using biometric systems to suppress dissent and tighten control. Meanwhile, facial datasets are routinely sold on the dark web after breaches. In most cases, the people in the photos never gave consent for any of it: facial data gets extracted from images and indexed automatically by cloud platforms. No registration needed.

From a leadership standpoint, the trade-off here is convenience versus long-term cost. Facial recognition may unlock immediate operational gains, but the exposure risk is asymmetric. Once biometric data is compromised, the reputational and compliance damage can scale very fast. It’s essential to implement controls: clear privacy policies, encrypted storage, and usage ceilings that prevent overreach.

The more advanced these technologies get, the more critical it is to handle them with precision. You’re not just designing systems; you’re setting norms. And in this space, norms evolve into regulation quickly. Get ahead of it.

AI-based attribute recognition systems

Facial recognition no longer operates alone. Leading platforms, including Google Photos and Meta’s Facebook, are using AI to connect much more than your face to your identity. These tools now detect individuals based on clothing, posture, visible accessories, and contextual clues such as when and where a photo was taken. In practice, this means a person can be identified without ever showing their face on camera.

The technology is already embedded in consumer platforms. Google Photos, for example, can identify a user across a series of images simply by recognizing consistent clothing or patterns from frame to frame. Meta has developed its own algorithm that uses hair styles, clothing types, and body language to suggest photo tags, even when faces are turned away or partially hidden. Once labeled, people stay labeled, whether or not they’ve opted into the platform. These capabilities operate without subject consent and, in many cases, identify individuals who don’t even use the service.
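To illustrate how one manual tag can spread to photos where no face is visible, here is a toy sketch. The attribute vectors, distance threshold, and photo IDs are invented for the example; this is not Google’s or Meta’s actual code or data model.

```python
def l2(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two attribute vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical per-photo attribute vectors: (clothing color, hair style, posture).
photos = {
    "img_01": [0.80, 0.30, 0.50],   # face visible; user tagged "Alice"
    "img_02": [0.79, 0.31, 0.52],   # same outfit and posture, face turned away
    "img_03": [0.20, 0.90, 0.10],   # someone else entirely
}
tags = {"img_01": "Alice"}

def propagate(photos: dict, tags: dict, max_dist: float = 0.1) -> dict:
    """Copy an existing tag to every untagged photo whose attribute
    vector sits within max_dist of an already-tagged one."""
    out = dict(tags)
    for pid, vec in photos.items():
        if pid in out:
            continue
        for tagged_id, name in tags.items():
            if l2(vec, photos[tagged_id]) <= max_dist:
                out[pid] = name
    return out

print(propagate(photos, tags))  # img_02 inherits "Alice" with no visible face
```

Note what never appears in that snippet: a face, or an opt-in. Once one photo is labeled, the label rides along on clothing and posture.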

From a corporate perspective, the technology is clearly powerful: it enhances product personalization, improves content classification, and drives user engagement. But it also expands the surface area of privacy exposure. Most users aren’t aware that their clothing or posture can be used to track them. Business leaders integrating this tech need to factor in legal obligations under regional privacy laws, potential user backlash if usage isn’t transparent, and the reputational cost of perceived overreach.

The core issue here is not the accuracy of the technology, but the framework around consent, governance, and data control. Systems that make identity assumptions without clear opt-in create trust deficits. At scale, they invite regulatory action, especially in markets where privacy expectations are evolving fast. You don’t need to avoid the technology. But you do need to understand how deploying it will affect your brand, your compliance risk, and long-term customer loyalty.

Efforts to limit facial recognition through bans

Banning facial recognition doesn’t end biometric surveillance; it redirects it. When regulators impose rules focused only on faces, companies and governments turn to alternatives that fall outside those definitions. AI-powered systems now look at how people walk, what they’re wearing, the items they carry, or their movement patterns over time. These attributes aren’t covered by most current legislation, which means they remain deployable, often without restrictions or disclosures.

Veritone’s Track product is a case in point. It allows users, including law enforcement, to select from a long menu of attributes (clothing type, color, personal accessories) and instantly find video clips that match. The system works across different camera types and sources, pulling fragmented footage together into a full event timeline. That kind of system is legally viable even where facial recognition is banned, because it doesn’t use faces to identify individuals.
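Here is a simplified sketch of what an attribute-menu query over indexed video might look like under the hood. The field names and values are assumptions for illustration, not Veritone’s actual schema or API.

```python
# Hypothetical index of per-clip attribute detections, one row per sighting.
clips = [
    {"camera": "lobby",  "time": "09:14", "clothing": "jacket", "color": "red",  "accessory": "backpack"},
    {"camera": "garage", "time": "09:21", "clothing": "jacket", "color": "red",  "accessory": "backpack"},
    {"camera": "lobby",  "time": "09:30", "clothing": "hoodie", "color": "blue", "accessory": None},
]

def query(clips: list[dict], **wanted) -> list[dict]:
    """Return clips matching every selected attribute, ordered by time,
    stitching fragmented footage into an event timeline without faces."""
    hits = [c for c in clips if all(c.get(k) == v for k, v in wanted.items())]
    return sorted(hits, key=lambda c: c["time"])

# "Red jacket, backpack" across all cameras:
for c in query(clips, clothing="jacket", color="red", accessory="backpack"):
    print(c["time"], c["camera"])
```

Because the query never touches a face template, a rule that defines biometric surveillance as facial recognition simply does not apply to it.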

This shift exposes a problem in how regulation is being implemented. Laws focus on the front-end use of facial data without accounting for the back-end integration of alternative biometric and behavioral signals. As a result, surveillance doesn’t decline; it camouflages itself. C-suite executives need to understand that bans are only effective if they consider the full scope of biometric identification tools in use today.

If your organization operates in markets considering or enforcing facial recognition bans, take a broader view. Assess all surveillance technologies you’re investing in, not just those currently under scrutiny. Prevention, transparency, and policy alignment aren’t compliance extras; they’re strategic differentiators. In an environment where public trust in technology is declining, playing narrowly to regulatory loopholes won’t hold up long-term. Visibility and accountability need to be built into your systems from the start.

The ubiquity and evolution of surveillance technologies

Surveillance technologies don’t just live in law enforcement databases or government-controlled infrastructure anymore. They’re integrated into everyday platforms, apps, services, and devices used by billions. While facial recognition drew early scrutiny, the broader suite of identification tools now being deployed remains largely underregulated and poorly understood by the public.

Today, individuals can be identified by how they move, what they wear, or what type of device they carry. AI models can ingest multiple inputs (video from security cameras, images from consumer smartphones, behavioral patterns) and combine them into a single, detailed profile. These systems do not rely on explicit consent or user involvement. In many cases, data is collected passively and analyzed automatically. The result is a surveillance environment that’s persistent, scalable, and largely invisible to those being tracked.
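As a hedged sketch of why piecemeal rules struggle here, consider how separate passive collectors might be fused into one profile. The identifiers, link table, and field names below are all hypothetical.

```python
from collections import defaultdict

# Each collector reports partial observations keyed by whatever stable
# signal it can see: a gait signature from video, a device ID from Wi-Fi.
observations = [
    {"key": "gait_7f3",   "source": "mall_cctv",  "field": "clothing", "value": "grey coat"},
    {"key": "device_a1b", "source": "wifi_probe", "field": "dwell",    "value": "22 min"},
    {"key": "gait_7f3",   "source": "mall_cctv",  "field": "route",    "value": "north exit"},
]
# A hypothetical link table asserting two signals belong to one person.
links = {"device_a1b": "gait_7f3"}

def fuse(observations: list[dict], links: dict) -> dict:
    """Merge observations into per-person profiles, following links that
    join device-level and body-level identifiers."""
    profiles = defaultdict(dict)
    for o in observations:
        canonical = links.get(o["key"], o["key"])
        profiles[canonical][o["field"]] = (o["value"], o["source"])
    return dict(profiles)

print(fuse(observations, links))
# One profile now spans camera and device data, with no face and no consent.
```

Regulate any single input and the fusion step still works with whatever remains, which is why the profile, not the face, is the thing to govern.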

Most privacy laws were designed before these technologies existed in their current form. That’s a problem. When rules only address isolated elements, like facial recognition or geo-tracking, they become irrelevant the moment systems adapt. This misalignment puts organizations at risk of non-compliance, even when the intent was to operate within the limits of existing legislation. Business leaders who fail to account for the broader identification tech stack will eventually run into legal, ethical, or reputational issues, especially as countries move to tighten regulation around AI and personal data.

Organizations need to update how they think about identity, privacy, and exposure. It’s no longer just about preventing data breaches. It’s about understanding the ecosystem of surveillance your company might be contributing to, or relying on, and ensuring that internal practices reflect emerging expectations in transparency, ethical use, and user autonomy.

Leaders in IT, legal, product development, and public affairs all need to rethink privacy strategies in terms of system-wide identification capability. This means auditing what data you collect, how it’s used, where it’s stored, and whether users are truly aware of what’s being done. It also means anticipating where the next regulatory move is headed, not just reacting after the fact. Organizations working in advanced surveillance must lead on governance, not trail behind.

There’s opportunity here. Companies that take the lead in responsible AI implementation build trust faster, retain customers, and reduce long-term legal exposure. But that only works if leadership is fully informed and willing to act early. Don’t wait for regulation to catch up; operate as if it already has.

Key takeaways for leaders

  • Surveillance now identifies beyond the face: AI can track individuals using body type, clothing, gait, and behavior across multiple data sources. Leaders must evaluate all biometric tracking tools in use, not just facial recognition, to ensure accurate risk and compliance oversight.
  • Biometric convenience comes with permanent risks: Facial recognition improves efficiency but introduces irreversible privacy and security exposure. Leaders should implement strong data governance for biometric identifiers and assess long-term consequences of usage beyond short-term gains.
  • AI blends multiple identifiers without asking: Platforms like Google and Meta use clothing, hair, and context to identify people, even without consent or facial visibility. Executives should reexamine customer data practices and establish clear guidelines on implicit identification to maintain user trust.
  • Regulatory bans miss the broader surveillance shift: Limiting facial recognition leads organizations to pivot to other tracking methods not yet regulated. Policy and legal teams must proactively map and assess all identification technologies in use to avoid relying on outdated compliance frameworks.
  • Identity is now system-wide, not single-source: Surveillance ecosystems draw from diverse inputs to build persistent tracking models, requiring a shift in how privacy is framed. Leaders should push for cross-functional audits and align their privacy strategies with how AI-driven identification actually works today.

Alexander Procter

June 12, 2025

9 Min