Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company’s CEO wants to use artificial intelligence to make Clearview’s surveillance tool even more powerful.
It may also make the tool more dangerous and error-prone.
Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company’s face database to help identify suspects in photos by tying them to online profiles.
The company’s cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web—more than three times as many as previously reported.
Ton-That says the larger pool of photos means users, most often law enforcement, are more likely to find a match when searching for someone. He also claims the larger data set makes the company’s tool more accurate.
Clearview combined web-crawling techniques, advances in machine learning that have improved facial recognition, and a disregard for personal privacy to create a surprisingly powerful tool.
Ton-That demonstrated the technology through a smartphone app by taking a photo of the reporter. The app produced dozens of images from numerous US and international websites, each showing the correct person in images captured over more than a decade. The allure of such a tool is obvious, but so is the potential for it to be misused.
Clearview’s actions sparked public outrage and a broader debate over expectations of privacy in an era of smartphones, social media, and AI. Critics say the company is eroding personal privacy. The ACLU sued Clearview in Illinois under a law that restricts the collection of biometric information; the company also faces class-action lawsuits in New York and California. Facebook and Twitter have demanded that Clearview stop scraping their sites.
The pushback has not deterred Ton-That. He says he believes most people accept or support the idea of using facial recognition to solve crimes. “The people who are worried about it, they are very vocal, and that’s a good thing, because I think over time we can address more and more of their concerns,” he says.
Some of Clearview’s new technologies may spark further debate. Ton-That says the company is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it with machine learning, envisioning what a clearer picture would look like. The second tries to reconstruct the covered part of a person’s face, relying on machine learning models that fill in an image’s missing details with a best guess based on statistical patterns found in other images.
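To illustrate the general idea of filling in missing image regions, here is a minimal toy sketch in Python. It uses simple neighbor-averaging diffusion to fill a masked patch, which is far cruder than the learned generative models the article describes; the function name and the gradient example are purely illustrative and are not based on Clearview’s actual system.

```python
import numpy as np

def inpaint(image, mask, iterations=200):
    """Fill masked pixels by repeatedly averaging their four neighbors.

    A toy diffusion-based fill: each unknown pixel is replaced by the
    mean of its 4-neighborhood until values settle. Real "mask removal"
    systems instead use models trained on millions of face images, so
    their output is a statistical guess, not a measurement.
    """
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()  # crude initial guess for the hole
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = neighbors[mask]  # only unknown pixels change
    return img

# A horizontal-gradient image with a 4x4 hole punched in the middle:
truth = np.linspace(0, 1, 16).reshape(1, 16).repeat(16, axis=0)
mask = np.zeros_like(truth, dtype=bool)
mask[6:10, 6:10] = True
result = inpaint(truth, mask)
```

Because a linear gradient happens to match what diffusion reconstructs, this toy recovers the hole almost perfectly; on real faces, any such guess can be plausible yet wrong, which is exactly the concern experts raise below.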
These capabilities could make Clearview’s technology more attractive but also more problematic. It remains unclear how accurately the new techniques work, but experts say they could increase the risk that a person is wrongly identified and could exacerbate biases inherent to the system.
“I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process I would expect a plethora of unintended bias to creep in,” says Aleksander Madry, a professor at MIT who specializes in machine learning. Without due care, for example, the approach might make people with certain features more likely to be wrongly identified.
Even if the technology works as promised, Madry says, the ethics of unmasking people is problematic. “Think of people who masked themselves to take part in a peaceful protest or were blurred to protect their privacy,” he says.
Source: WIRED