Ray-Ban Meta smart glasses are just one of many wearable tech devices on the market. The glasses, which first launched in 2021, are a collaboration between Meta and Italian-French eyewear company EssilorLuxottica, which owns Ray-Ban among many other brands.
The smart glasses feature two small cameras, open-ear speakers, a microphone and a touch panel built into the temple of the glasses. To access these features, users must pair them to their mobile phone using the Meta View app. Users can take photos or videos with the camera, listen to music from their phone and livestream to Meta’s social media platforms.
Users can operate the glasses by voice through the built-in Meta AI assistant, which responds to the wake phrase “hey Meta.” For example, users can say, “hey Meta, look and…” followed by questions about their surroundings.
To take a photo or video, users press and hold a button on the frame, which activates an LED on the front of the glasses. The LED signals to others that the camera is actively capturing a photo or video. If the LED is covered, the camera won’t work and the Meta AI assistant will prompt the user to uncover it.
Although the LED helps to signal that the camera is in operation, the relatively small size of the LED garnered criticism from privacy regulators in Europe.
Data privacy concerns
Because Meta makes nearly all of its money from advertising, concerns have been raised about how images captured with the glasses will be used by the company.
Meta has a long history of privacy controversies. When it comes to user data, people are rightly concerned about how their images, potentially captured without their consent, might be used by the company.
The Meta smart glasses add another layer to this debate by introducing AI into the equation. AI has already prompted numerous debates and criticism about how easy it is to deceive, how confidently it gives incorrect information and how racially biased it can be.
When users take photos or videos with the smart glasses, they are sent to Meta’s cloud to be processed via AI. According to Meta’s own website, “all photos processed with AI are stored and used to improve Meta products, and will be used to train Meta’s AI with help from trained reviewers.”
Meta states this processing includes the analysis of objects, text and other contents of photos, and that any information “will be collected, used and retained in accordance with Meta’s Privacy Policy.” In other words, images uploaded to the cloud will be used to train Meta’s AI.
Leaving it up to users
The ubiquity of portable digital cameras, including wearable ones, has had a significant impact on how we document our lives while also reigniting legal and ethical debates around privacy and surveillance.
In many Canadian jurisdictions, people can be photographed in a public place without their consent, unless there is a reasonable expectation of privacy. However, restrictions apply if the images are used for commercial purposes or in a way that could cause harm or distress. There are exceptions for journalistic purposes or matters of public interest, but these can be nuanced.
Meta has published a set of best practices to encourage users to be mindful of the rights of others when wearing the glasses. These guidelines suggest formally announcing when you plan to use the camera or livestream, and turning the device off when entering private spaces, such as a doctor’s office or public washrooms.
As someone who owns a pair, I can ask my Ray-Ban Meta glasses to comment on what I see: they will describe buildings, translate signs and accurately guess the species of my mixed-breed dog. But whenever a person appears in frame, the assistant tells me it is not allowed to say anything about people.
What remains unclear is the issue of bystander consent, and whether images of people who appear unintentionally in the background of someone else’s photos will be used by Meta for AI training purposes. As AI capabilities evolve and these technologies become more widespread, these concerns are likely to grow.
Meta’s reliance on user behaviour to uphold privacy norms may not be sufficient to address the complex questions surrounding consent, surveillance and data exploitation. Given the company’s track record with privacy concerns and its data-driven business model, it’s fair to question whether the current safeguards are enough to protect privacy in our increasingly digitized world.
Victoria (Vicky) McArthur, Associate Professor, School of Journalism and Communication, Carleton University
This article is republished from The Conversation under a Creative Commons license. Read the original article.