Harvard Students Sound Alarm on Smart Glasses and Privacy Threats in the AI Age
Students at Harvard University have raised a warning about the privacy risks posed by smart glasses amid the rapid advancement of artificial intelligence. By combining large language models with facial recognition technology, they transformed the Meta Ray-Ban 2 smart glasses into a sophisticated spying tool capable of identifying a stranger in mere seconds. Simply by looking at a person, the glasses can reveal personal information such as their name, address, phone number, and, in some cases, even their social security number.
How Can Smart Glasses Be Turned into Spying Tools?
The students, AnhPhu Nguyen and Caine Ardayfio, created a buzz online with their project, named I-XRAY. They modified the Meta Ray-Ban 2 glasses to identify individuals by integrating them with the facial recognition engine PimEyes: once the glasses capture a picture of someone, PimEyes finds matching photos published on the web, and the person's name can be extracted from those pages. That name is then passed to the people-search tool FastPeopleSearch to retrieve further personal details.
FastPeopleSearch needs only a person's name to return personal data, including residence address, date of birth, phone number, and even family connections, drawn from public records such as voter registrations, property records, and social media profiles.
The students then used a large language model to gather and analyze this information quickly and present it in an organized, easily readable form. As a result, the glasses can deliver detailed data about an individual, such as their name, age, address, phone number, education, job, and even (in some cases) social security number, all within seconds and without the person being observed ever knowing.
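Conceptually, the chain the students describe looks something like the sketch below. This is a purely illustrative Python outline under stated assumptions: the helper functions (reverse_face_search, extract_probable_name, people_search, call_llm) are hypothetical stubs, since neither PimEyes nor FastPeopleSearch offers an official public API and the students never published their actual code.

```python
from dataclasses import dataclass


# Hypothetical stand-ins only. They merely mark where each real service would sit
# in the chain described in the article; none of these are real APIs.

def reverse_face_search(image: bytes) -> list[str]:
    """Return URLs of public web pages whose photos match the captured face (stub)."""
    return ["https://example.com/some-public-page"]


def extract_probable_name(pages: list[str]) -> str | None:
    """Guess the person's name from page titles and captions (stub)."""
    return "Jane Doe"


def people_search(name: str) -> dict:
    """Look up public-record data (address, phone, relatives) from a name alone (stub)."""
    return {"address": "example address", "phone": "example phone", "relatives": []}


def call_llm(prompt: str) -> str:
    """Stand-in for any large-language-model API call (stub)."""
    return "Short, readable profile assembled from the records above."


@dataclass
class Profile:
    name: str
    summary: str


def identify_from_frame(frame: bytes) -> Profile | None:
    """Illustrative I-XRAY-style chain: face photo -> name -> public records -> digest."""
    pages = reverse_face_search(frame)
    if not pages:
        return None  # person not indexed, or opted out of the face search engine
    name = extract_probable_name(pages)
    if name is None:
        return None
    records = people_search(name)
    digest = call_llm(f"Summarize this public information about {name}:\n{records}")
    return Profile(name=name, summary=digest)


if __name__ == "__main__":
    print(identify_from_frame(b"<captured camera frame>"))
```

The point of the sketch is only to show how few moving parts such a pipeline needs: one face-matching step, one name-based records lookup, and one summarization call.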
Privacy Risks and New Technologies
These results raise numerous questions about the future of privacy and how new technologies can be used for harmful purposes, such as:
- Harassment and Threats: Criminals can use this technology to identify and track their victims.
- Fraud: Scammers can gather personal information about individuals to use in fraudulent activities.
- Illegal Spying and Surveillance: Anyone can potentially spy on the lives of others without their knowledge.
The students explained that the I-XRAY project was made possible by current advances in large language models. These models can gather massive amounts of data from diverse sources, analyze it quickly, and deduce relationships between online sources, for example linking a person's name mentioned in an article to a photo of them on a social media platform, and then inferring the person's identity and personal details from the surrounding text. It is this ability to connect data logically that has made these models such a powerful tool for extracting personal information.
By combining advanced large language models with facial recognition techniques and reverse image search, it has become possible to extract a vast amount of personal data in record time, from a person’s name and address to their financial records—something that was previously unachievable with traditional methods alone.
A search that once took hours or even days of manually combing through databases, starting from nothing but a photo of someone, can now be completed by these smart glasses in mere seconds, an unprecedented level of efficiency in information gathering.
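As a rough illustration of that cross-referencing step, the sketch below hands a few invented public snippets to a language model and asks it to decide whether they describe the same person. Here call_llm is a hypothetical stand-in for any LLM API, and nothing in this sketch reflects the students' unpublished implementation.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any large-language-model API; no real service is named."""
    return "(model response would appear here)"


# Fragments of the kind a reverse face search and a people-search site might surface.
snippets = [
    "Local news article: 'Jane Doe spoke at the city council meeting on Tuesday.'",
    "Caption on a matching social media photo: 'Great hike with Jane this weekend!'",
    "Public record entry: 'J. Doe, registered voter, Springfield precinct 4.'",
]

prompt = (
    "The following fragments were collected from different public web pages. "
    "Decide whether they refer to the same person and, if so, write a short profile "
    "with the name, likely city, and anything else that can reasonably be inferred, "
    "marking each inference as certain or uncertain.\n\n" + "\n".join(snippets)
)

print(call_llm(prompt))
```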
The students emphasized that the goal of their I-XRAY project is to highlight the increasing risks to personal privacy in our digital age and the urgent need for regulations and laws to protect privacy.
Demonstration in the Subway and Growing Concerns
To illustrate the potential danger of their project, the students tested their modified glasses in a subway station, successfully identifying dozens of ordinary people. Even more alarming, they used the information they gathered to pretend they knew these individuals, underscoring how easily this technology could be exploited for malicious purposes.
In light of this concerning development, the students chose not to publish the code they used in their experiment, fearing it could be misused. But why did they specifically use Meta smart glasses? They explained that the Meta Ray-Ban 2 glasses have a design similar to traditional glasses, making them the ideal tool for spying without raising suspicion.
However, this raises many questions about the new Orion glasses currently being developed by Meta, which feature a similar design but are more advanced than the Meta Ray-Ban 2.
Response from Meta
A spokesperson for Meta commented on this experiment in a statement to the Daily Mail, confirming that the glasses underwent significant modifications to perform these tasks. They clarified that the AI technologies used in the experiment can recognize faces in any image, whether taken with a phone camera or any other device.
Nonetheless, phones and other devices offer a degree of transparency that Meta's smart glasses lack. The Meta Ray-Ban 2 glasses look like ordinary eyeglasses, with a camera hidden in the frame, making it unclear to bystanders when recording is taking place.
When recording begins, the glasses emit a sound and a small LED light turns on, but this may not be sufficient to alert individuals that the glasses are capturing their image.
Expert Opinions and Recommendations
Jake Moore, a security expert at ESET, noted that smart glasses capable of identifying people this quickly represent a significant technological leap, but one that brings serious privacy and security risks, since the capability can easily be misused.
The students advised users to remove their images from facial recognition search engines like PimEyes and Facecheck ID, as well as delete their data from people-search engines like FastPeopleSearch, CheckThem, and Instant Checkmate.
For instance, the attempt to use I-XRAY to identify Joseph Cox, a reporter from 404 Media, was unsuccessful because he had removed his data from PimEyes. However, the glasses were still able to identify one of the participants in their test, even after his information had been deleted from these sites.
Thus, simply removing personal data from these websites is not sufficient, as technology evolves rapidly, and new methods of exploitation may emerge.
According to the New York Times, such concerns have already led both Google and Meta to hold back similar technologies they had developed for connecting smart glasses to facial recognition search engines.
Nevertheless, other companies, like Clearview AI, are working on similar technologies, aiming to create a massive database containing the faces of billions of people, which increases the risks associated with this technology.
Future of Privacy and Security
This project raises numerous questions about the future of privacy and security. If such technology becomes available to everyone, how can we protect ourselves from constant tracking and surveillance? What legal restrictions need to be established regarding such technologies?
The risks posed by this project extend beyond individuals to include companies and governments. It is easy to envision how this technology could be exploited for political or commercial purposes, or even for committing crimes.
Therefore, there is an urgent need for laws and regulations to protect individual privacy and limit the illegal use of these technologies.