
When smart glasses harm, who is accountable?

2026/02/27 01:01
10 min read

In September 2021, United States technology giant Meta entered the wearable hardware race with its first pair of smart glasses, developed in partnership with EssilorLuxottica, a global eyewear manufacturer.

Branded Ray-Ban Stories, the glasses resembled ordinary eyewear but could discreetly capture photos and videos, collapsing the distance between accessory and recording device.

Five years and multiple iterations later, the technology is significantly more powerful. 

The second generation, Ray-Ban Meta Smart Glasses, added improved cameras and contextual AI features. 

By late 2025, the Ray-Ban Meta Display introduced a monocular display and gesture controls via a neural wristband. Meta sold more than seven million units last year and is racing to scale production, positioning smart glasses not as a novelty but as a potential mass-market computing platform.

But as smart glasses evolve from experimental gadget to everyday device, a more difficult question emerges: when they are used to harass, exploit, or defame, who is responsible?

Is it the platform that built the technology? The person wearing it? Or the victims, who often bear the emotional, reputational, and financial costs of being recorded without consent?

That debate moved from theory to lived reality in early 2026, when a Russian vlogger known as Yaytseslav—identified as Vyacheslav Trahov—was accused of travelling through Ghana and Kenya while secretly filming women during flirtatious “pick-up” encounters. 

According to allegations circulating online, he used covert recording devices, suspected to include smart glasses, to capture intimate encounters without consent, then uploaded the footage to foreign-language forums and Telegram channels to build online notoriety. Some clips reportedly included derogatory commentary about the women involved.

In many cases, the women were unaware they had been recorded, let alone broadcast to a global audience. By the time outrage spread across regional outlets and social media, the harm was done. Videos had circulated. Reputations had been bruised. In conservative communities, such exposure can carry social consequences that linger for years.

Fines that don’t fix the problem

On paper, tech giants operate under strict regulatory regimes. The European Union’s General Data Protection Regulation (GDPR) is often hailed as the global gold standard for privacy enforcement. Companies, including Meta, have faced repeated fines under GDPR and related frameworks.

In November 2025, a Madrid court ordered Meta to pay €479 million ($566 million) following a lawsuit from Asociación de Medios de Información, comprising 87 Spanish media companies. The court ruled that Meta’s “unlawful” use of user data for advertising between 2018 and 2023 gave it an unfair competitive advantage, siphoning revenue away from local publishers.

Under the Digital Markets Act, regulators also fined Meta €200 million ($236 million). Authorities argued that forcing users to either pay a subscription fee or consent to personalised tracking failed to provide a genuinely free and equivalent alternative, undermining the principle of freely given consent enshrined in EU law.

Yet the scale and frequency of these penalties raise a more pressing question: are fines alone enough to change corporate behaviour?

“No,” said Gabriel Rojas Hruska, a Canadian privacy researcher, in an interview with TechCabal. “They are too little, too late.”

He argued that enforcement often spans many years, during which companies continue to operate largely as before. The long-running Google Shopping antitrust case illustrates the problem. 

The European Commission first opened its investigation after rivals accused Google of favouring its own comparison-shopping service in search results while demoting competitors. 

Seven years later, the Commission imposed a record €2.42 billion ($2.86 billion) fine and ordered the company to amend its practices within 90 days.

But the legal battle did not end there. Google appealed to the EU’s General Court, which upheld the fine in 2021. The company then took the case to the European Court of Justice, prolonging the dispute even further. 

It was only after the appeal was dismissed that the 14-year saga concluded, underscoring how regulatory victories can take more than a decade to crystallise, long after the competitive damage may already have been done.

By the time a penalty lands, the technology has already iterated several times. A €100 million ($118 million) fine may sound significant, but for a company generating tens of billions in annual revenue, it risks becoming a line item, a cost of doing business.

Nana Nwachukwu, a Nigerian tech policy researcher, uses a simple analogy. Imagine parking illegally to close a ₦10 million ($7,700) deal. If the fine is ₦1,000 ($0.77), you pay it without hesitation.

“That’s what these companies are doing,” she said. “The fines never match the weight of the company.”

Repeat fines, she argues, prove the point. If a firm is repeatedly penalised, the sanctions are not correcting its behaviour. They are being absorbed.

The accountability maze

Even when harm occurs, identifying who to hold responsible is far from straightforward.

If someone uses smart glasses to secretly record a stranger at the gym and uploads the video online, the immediate harm stems from the wearer. But the device enabled the recording. The platform hosts the video. Algorithms may amplify it.

In North America, intermediary protections such as Section 230 in the United States shield online platforms from liability for user-generated content. Holding individuals accountable is possible in theory, through defamation, privacy, or non-consensual image-sharing laws, but in practice, it is costly, slow, and emotionally draining.

The complexity deepens when artificial intelligence is embedded in the device itself. If smart glasses deploy facial recognition to identify people in real time, consent becomes murky. Children cannot legally consent. Yet the system must scan their faces to determine they are minors. The regulatory logic becomes circular.

Meanwhile, victims wait.

So, again: who pays?

The vlogger? If authorities can find and prosecute him.

The platform hosting the content? Possibly, but only after notice-and-takedown processes that may move more slowly than viral distribution.

In Nigeria, the Nigeria Data Protection Act (NDPA) explicitly protects individuals against the unauthorised use of their personal data, including videos and images. Under Section 65 of the NDPA, “Personal Data” is defined as any information relating to an identified or identifiable natural person.

The Act mandates that a person or entity (Data Controller) must obtain your specific, informed, and freely given consent before processing your data, which includes recording and uploading videos of you.

Holding a device maker accountable, however, is not straightforward. The NDPA offers three main avenues for redress. 

First, you may sue the data controller in a high court, though this can take years, especially when the entity involved is a large corporation like Meta. Second, you can file a formal complaint with the Nigeria Data Protection Commission by following the steps on its website or emailing the commission. 

Third, if the misuse of your data is connected to a broader crime, such as blackmail or corporate espionage, the NDPC can escalate the case for criminal prosecution.

Corporate capture and the policy dilemma

Nwachukwu argues that policymaking around emerging technologies often suffers from “corporate capture.” Companies lobby aggressively, secure seats on advisory committees, and influence regulatory drafts.

“Tech companies run a business. It’s not a charity,” she said. “Every business wants to be in bed with the government. They want to control what is in the law.”

There is also geopolitical asymmetry. Most major smart-glasses manufacturers are American. Beyond Meta, competitors include Snap’s Spectacles, Apple’s Vision Pro, and Vuzix’s Shield. 

Their primary regulator is the United States government, which has signalled ambitions to lead globally in AI and emerging technologies. Smaller countries often lack the political leverage to demand compliance.

If a company does not maintain a physical office in a country, enforcement becomes even harder. Governments can threaten fines, but if the platform exits the market or refuses to comply, citizens may still access the product through global distribution channels. Regulatory sovereignty collides with technological dependence.

Users as co-accountable actors

Apart from platforms, Nwachukwu also wants users to face consequences.

“Platforms are accountable,” she said. “But you, as a user, you’re also accountable.”

In contexts like elections, she suggests more nuanced rules. If a public official is performing official duties, such as counting ballots, consent to filming may not be required because transparency serves the public interest. But once that official leaves their official capacity, the rules change.

Consent pathways, she insists, can be defined. Exceptions can be carved out. The problem is not impossibility but political will.

She proposed starting not with sweeping new laws but with basic infrastructure, calling for the creation of a national tech-incident database. She said reports of technology-facilitated gender-based violence, privacy breaches, and AI-related harms should be captured in a central system, and added that police should be trained not just to use computers but to recognise digital harm as real harm.

Hruska points to notice-and-takedown systems as a critical enforcement tool. In the European Union’s digital services framework, platforms must remove illegal content when notified, rather than merely informing the uploader. Yet even here, delays can blunt effectiveness. Content spreads in minutes; legal processes unfold in weeks or months. A video viewed a million times cannot be unseen.

Design as regulation

If law struggles to keep pace, product design may carry part of the burden.

Hruska argues for tamper-resistant hardware. Smart glasses could be engineered so that disabling recording indicators renders the device inoperable. Bright LED recording lights could be mandatory and impossible to remove without destroying the product.

More radically, he floats the idea of a verbal override mechanism: 

“You could require a specific verbal command that anyone can use to disable a camera on a wearable device when they see someone using it,” said Hruska. “That command could even be backed by law. For example, if someone says, ‘camera off now’ to a person wearing a commercial device, the device would be required to shut off.”
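As a thought experiment, Hruska's two proposals (a recording indicator the device cannot operate without, and a universal verbal kill-switch) could be modelled as a simple guard in device firmware. This is a minimal illustrative sketch only; the class name, override phrase, and behaviour are assumptions for the sake of the example, not any real device's implementation.

```python
# Illustrative sketch of a recording guard for a hypothetical wearable
# camera. Nothing here reflects a real product's firmware.

OVERRIDE_PHRASE = "camera off now"  # hypothetical legally mandated phrase


class RecordingGuard:
    def __init__(self):
        self.led_functional = True  # tamper-resistant indicator LED
        self.recording = False

    def start_recording(self) -> bool:
        # Proposal 1: if the indicator LED has been disabled or broken,
        # the device refuses to record at all.
        if not self.led_functional:
            self.recording = False
            return False
        self.recording = True
        return True

    def hear(self, utterance: str) -> None:
        # Proposal 2: a verbal override that anyone nearby can use to
        # force the camera off, regardless of the wearer's wishes.
        if OVERRIDE_PHRASE in utterance.lower():
            self.recording = False
```

In this sketch, the bystander's speech, not the wearer's settings, has final authority over the camera, which is the core of the proposal.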

Such proposals collide with other policy priorities, including the right to repair. The European Union has championed consumer repairability to reduce waste and empower users. Making devices tamperproof may conflict with that goal.

And there is another wrinkle: not all recording is harmful. Many civil society groups advocate for body cameras on police officers to curb abuse. For law enforcement, “always on” recording can protect citizens. For consumers, “always on” feels invasive.

The distinction between public-interest recording and private exploitation is thin and context-dependent.

Social norms in flux

Beyond law and hardware lies culture.

Tomiwa Aladekomo, CEO of Big Cabal Media, who uses smart glasses for hands-free filming in public spaces, says he seeks consent before recording.

“I use the video to record videos hands-free in the gym and in places/situations where it is best not [to] use my phone (like airports or concerts, etc.),” Aladekomo said. “⁠⁠Yes, I’ve taken pictures of people, always with their consent. Same with videos.”

But social norms are still forming. Unlike smartphones, whose raised posture signals recording, smart glasses blur visibility. A passerby cannot easily tell whether they are being filmed.

Over time, etiquette may evolve. Just as people learned not to take loud phone calls in cinemas, society may converge on clearer rules around wearable cameras. But norms lag technology—and in that lag, harm can take root.
