Do Our Faces Deserve the Same Protection as Our Phones?

In June 2002, Steven Spielberg premiered a new movie he had directed, Minority Report, based on a famous 1956 short story by the science fiction writer Philip K. Dick. Set in 2054 in a crime-free Washington, DC, the film stars Tom Cruise, who plays the head of Precrime—an elite police unit that arrests killers before they commit their crimes. The team has the authority to make its arrests based on the visions of three clairvoyant individuals who can see into the future. But soon Cruise is evading his own unit—in a city where everyone and everything is tracked—when the psychics predict he will commit a murder of his own.

More than 15 years later, this approach to law enforcement happily seems far-fetched. But today, one aspect of Minority Report seems to be on track to arrive much earlier than 2054. As Cruise is on the run, he walks into the Gap. The retailer has technology that recognizes each entering customer and immediately starts displaying on a kiosk the images of clothes it believes the customer will like. Some people might find the offers attractive. Others might find them annoying or even creepy. In short, entering a store becomes a bit like how we sometimes feel after browsing the web and then turning to our social media feed, only to find new ads promoting what we just viewed.

Adapted from Tools and Weapons: The Promise and the Peril of the Digital Age by Microsoft president Brad Smith and Carol Ann Browne, senior director of communications and external relations (Penguin Press).

In Minority Report, Spielberg asked theatergoers to think about how technology could be both used and abused—to eliminate crimes before they could be committed, as well as to abuse people’s rights when things go wrong. The technology that recognizes Cruise in the Gap store is informed by a chip embedded inside him. But the real-world technology advances of the first two decades of the 21st century have outpaced even Spielberg’s imagination, as today no such chip is needed. Facial recognition technology, utilizing AI-powered computer vision with cameras and data in the cloud, can identify the faces of customers as they walk into a store based on their visit last week—or an hour ago. It is creating one of the first opportunities for the tech sector and governments to address ethical and human rights issues for Artificial Intelligence in a focused and concrete way, by deciding how facial recognition should be regulated.

What started for most people as a simple scenario, such as cataloging and searching photos, has rapidly become much more sophisticated. Already many people have become comfortable relying on facial recognition rather than a password to unlock an iPhone or a Windows laptop. And it’s not stopping there.

A computer can now accomplish what almost all of us as human beings have done almost since birth—recognize people’s faces. For most of us, this probably began with the ability to recognize our mother. One of the joys of parenting comes when a toddler erupts enthusiastically when you return home. This reaction, which lasts until the onset of the teenage years, relies on the innate facial recognition capabilities of human beings. While this is fundamental to our daily lives, we almost never pause to think about what makes it possible.

As it turns out, our faces are as unique as our fingerprints. Our facial characteristics include the distance of our pupils from each other, the size of our nose, the shape of our smile, and the cut of our jaw. When computers use photographs to chart these features and knit them together, they create the foundation for a mathematical equation that can be accessed by algorithms.
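To make the idea concrete, here is a drastically simplified sketch of how a handful of facial measurements might be knit together into a numeric signature and then compared. The measurements, threshold, and function names are illustrative assumptions, not how any production system (Microsoft’s included) actually works; real systems use deep neural networks that turn a face into a high-dimensional embedding.

```python
import math

# Illustrative sketch only: hypothetical measurements and threshold.
def face_signature(pupil_distance_mm, nose_width_mm, smile_width_mm, jaw_width_mm):
    """Knit a few facial measurements into a single feature vector."""
    return [pupil_distance_mm, nose_width_mm, smile_width_mm, jaw_width_mm]

def distance(sig_a, sig_b):
    """Euclidean distance between two signatures: smaller means more alike."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

# Two photos of the same person should yield nearly identical signatures...
enrolled = face_signature(63.0, 34.5, 50.2, 118.0)
new_photo = face_signature(62.8, 34.7, 50.0, 117.6)

# ...so a match is declared when the distance falls below a chosen tolerance.
THRESHOLD = 2.0  # hypothetical tolerance
print("match" if distance(enrolled, new_photo) < THRESHOLD else "no match")
```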

People are putting this technology to work around the world in ways that will make life better. In some cases, it may be a matter of consumer convenience. National Australia Bank, using Microsoft’s facial recognition technology, is developing the capability for you to walk up to an automated teller machine so you can withdraw money securely without a bank card. The ATM will recognize your face and you can then enter your PIN and complete your transaction.

In other scenarios, the benefits are more far-reaching. In Washington, DC, the National Human Genome Research Institute is using facial recognition to help physicians diagnose a disease known as DiGeorge syndrome, or 22q11.2 deletion syndrome. It’s a disease that more often afflicts people who are African, Asian, or Latin American. It can lead to a variety of severe health problems, including damage to the heart and kidneys. But it also often manifests itself in subtle facial characteristics that can be identified by computers using facial recognition systems, which can help a doctor diagnose a patient in need.

These scenarios illustrate important and concrete ways that facial recognition can be used to benefit society. It’s a new tool for the 21st century.

Like so many other tools, however, it can also be turned into a weapon. A government might use facial recognition to identify every individual attending a peaceful rally, following up in ways that could chill free expression and the ability to assemble. And even in a democratic society, the police might rely excessively on this tool to identify a suspect without appreciating that facial recognition, like every technology, doesn’t always work perfectly.

For all these reasons, facial recognition easily becomes intertwined with broader political and social issues and raises a vital question: What role do we want this form of Artificial Intelligence to play in our society?


A glimpse of what lies ahead emerged suddenly in the summer of 2018, in relation to one of the hottest political topics of the season. In June, a gentleman in Virginia, a self-described “free software tinkerer” who clearly also had a strong interest in broader political issues, posted a series of tweets about a contract Microsoft had with US Immigration and Customs Enforcement, or ICE, based on a story posted on the company’s marketing blog in January. It was a post that frankly everyone at the company had forgotten. But it said that Microsoft’s technology for ICE had passed a high security threshold and would be deployed by the agency. It said the company was proud to support the agency’s work, and it included a sentence about the resulting potential for ICE to use facial recognition.

In June 2018, the Trump administration’s decision to separate children from parents at the southern US border had become an explosive issue. A marketing statement made several months earlier now looked a good deal different. And the use of facial recognition technology looked different as well. People worried about how ICE and other immigration authorities might put something like facial recognition to work. Did this mean that cameras connected to the cloud could be used to identify immigrants as they walked down a city street? Did it mean, given the state of this technology, with its risk of bias, that it might misidentify individuals and lead to the detention of the wrong people? These were but two of many questions.

By dinnertime in Seattle, the tweets about the marketing blog were tearing through the internet, and our communications team was working on a response. Some employees on the engineering and marketing teams suggested that we should just pull the post down, saying, “It is quite old and not of any business impact at this point.” Three times, Frank Shaw, Microsoft’s communications head, advised them not to take it down. “It will only make things worse,” he said. Nonetheless, someone couldn’t resist the temptation and deleted part of the post. Sure enough, things then got worse and another round of negative coverage followed. By the next morning, people had learned the obvious lesson and the post was back up in its original form.

As so often happens, we had to sort out what the company’s contract with ICE really covered.

As we dug to the bottom of the matter, we learned that the contract wasn’t being used for facial recognition at all. Nor, thank goodness, was Microsoft working on any projects to separate children from their families at the border. The contract instead was helping ICE move its email, calendar, messaging, and document management work to the cloud. It was similar to projects we were working on with customers, including other government agencies, in the US and around the world.

Nonetheless, a new controversy was born.

Some suggested that we cancel our contract and cease all work with ICE, a demand that would become a persistent theme about government use of technology that summer. One group of employees circulated a petition to halt the ICE contract. The issue began to roil the tech sector more broadly. There was similar employee activism at the cloud-based software company Salesforce, focused on its contract with US Customs and Border Protection. This followed employee activism at Google, which had led the company to cancel a project to develop Artificial Intelligence for the US military. And the ACLU targeted Amazon, backing Amazon employees who voiced concern about Rekognition, its facial recognition service.

For the tech sector and the business community more broadly, this type of employee activism was new. Some saw a connection to the role that unions had played in certain industries for well over a century. But unions had focused principally on the economic and working conditions of their members. Employee activism in the summer of 2018 was different. This activism called on employers to adopt positions on specific societal issues. The employees had nothing directly or even indirectly to gain. They instead wanted their employers to stand up for societal values and positions that they thought were important.

The change thrust business leaders into new terrain. At a small dinner I attended in Seattle, the CEO of one tech company summed up the collective angst. “I feel well prepared for most of my job,” he said, describing how he’d risen up the ranks. “But now I’m being thrust into something completely different. I really don’t know how to respond to employees who want me to take on their concerns about immigration, climate issues, and so many other problems.”

It became increasingly important to spell out when we thought it made sense for the company to take a position on a public issue and when we should not. We didn’t view corporate leadership as a license to use the company’s name to address any issue under the sun. There needed to be some vital connection to us. We felt our responsibility was fundamentally to address public issues that impacted our customers and their use of our technology, our employees both at work and in their community, and our business and the needs of our shareholders and partners. This didn’t answer every question, but it provided a useful framework for discussions with our employees.

Employee questions also pushed us in a constructive way to think harder about our relationship with the government and the challenges posed by new technology such as facial recognition.

On the one hand, we were not comfortable with the suggestion that we react to the events of the day by boycotting government agencies, especially in democratic societies governed by the rule of law. In part this was a principled reaction. As I often tried to remind people, no one elected us. It seemed not just odd but undemocratic to want tech companies to police the government. As a general principle, it seemed more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.
