This article is part of a VB special issue. Read the full series here: Power in AI.


At the Common Good in the Digital Age tech conference recently held in Vatican City, Pope Francis urged Facebook executives, venture capitalists, and government regulators to be wary of the impact of AI and other technologies. “If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest,” he said.

In a related but contextually different conversation this summer, Joy Buolamwini testified before Congress, in an exchange with Rep. Alexandria Ocasio-Cortez (D-NY), that multiple audits have found facial recognition technology generally works best on white men and worst on women of color.
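Audits like Buolamwini’s Gender Shades work boil down to a simple measurement: report a model’s accuracy per demographic group rather than as a single aggregate number. The sketch below illustrates that idea with made-up prediction records, not data or code from any real audit.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic group, was the model correct?).
# Real audits such as Gender Shades use benchmark photo sets balanced by
# skin type and gender; these values are illustrative only.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in results:
    tallies[group][0] += int(correct)
    tallies[group][1] += 1

# A single aggregate number (50% here) hides the 75% vs. 25% per-group gap.
overall = sum(c for c, _ in tallies.values()) / sum(t for _, t in tallies.values())
print(f"overall accuracy: {overall:.0%}")
for group, (correct, total) in tallies.items():
    print(f"{group}: {correct / total:.0%}")
```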

What these two events have in common is their relationship to power dynamics in the AI ethics debate.

Arguments about AI ethics can be waged without mention of the word “power,” but it’s often there just under the surface. In fact, it’s rarely the direct focus, but it needs to be. Power in AI is like gravity: an invisible force that influences every consideration of ethics in artificial intelligence.

Power provides the means to influence which use cases are relevant; which problems are priorities; and who the tools, products, and services are made to serve.

It underlies debates about how corporations and countries create policy governing use of the technology.

It’s there in AI conversations about democratization, fairness, and responsible AI. It’s there when Google CEO Sundar Pichai moves AI researchers into his office and top machine learning practitioners are treated like modern-day philosopher kings.

It’s there when people like Elon Musk expound on the horrors that future AI technologies may wreak on humanity in decades to come, even though facial recognition technology is already being used today to track and detain China’s Uighur Muslim population on a massive scale.

And it’s there when a consumer feels data protection is hopeless or an engineer knows something is ethically questionable but sees no avenue for recourse.

Broadly speaking, startups may regard ethics as a nice addition but not a must-have. Engineers racing to be first to market and meet product release deadlines can scoff at the notion that precious time be set aside to consider ethics. CEOs and politicians may pay lip service to ethics but end up only sending sympathetic signals or engaging in ethics washing.

But AI ethics isn’t just a feel-good add-on — a want but not a need. AI has been called one of the great human rights challenges of the 21st century. And it’s not just about doing the right thing or making the best AI systems possible, it’s about who wields power and how AI affects the balance of power in everything it touches.

These power dynamics are set to define business, society, government, the lives of individuals around the world, the future of privacy, and even our right to a future. As virtually every AI product manager likes to say, things are just getting started, but failure to address uneven power dynamics in the age of AI can have perilous consequences.

The labor market and the new Gilded Age

A confluence of trends led to the present-day reemergence of AI at a precarious time in history. Deep learning, cloud computing, and processors like GPUs that supply the compute power required to train neural networks quickly — technology that’s become a cornerstone of major tech companies — fuel today’s revival.

The fourth industrial revolution is happening alongside historic income inequality and the new Gilded Age. Like the railroad barons who took advantage of farmers anxious to get their crop to market in the 1800s, tech companies with proprietary data sets use AI to further entrench their market position and monopolies.

When data is more valuable than oil, the companies with valuable data have the advantage and are most likely to consolidate wealth or entrench their position as industry leaders. This applies, of course, to big-name companies like Apple, Facebook, Google, IBM, and Microsoft, but it’s also true of legacy businesses.

At the same time, mergers and acquisitions continue to accelerate and further consolidate power, a trend that cements others, as research and development now belongs almost entirely to large businesses. A 2018 SSTI analysis found that companies with 250 or more employees account for 88.5% of R&D spending, and companies with 5,000 or more employees alone account for nearly two-thirds of it.

The growing proliferation of AI could lead to great imbalance in society, according to a recent report from the Stanford Institute for Human-Centered AI (HAI).

“The potential financial advantages of AI are so great, and the chasm between AI haves and have-nots so deep, that the global economic balance as we know it could be rocked by a series of catastrophic tectonic shifts,” reads a proposal from HAI that calls for the U.S. government to invest $120 billion in education, research, and entrepreneurship over the next decade.

The proposal’s coauthor is former Google Cloud chief AI scientist Dr. Fei-Fei Li. “If guided properly, the age of AI could usher in an era of productivity and prosperity for all,” she said. “PwC estimates AI will deliver $15.7 trillion to the global economy by 2030. However, if we don’t harness it responsibly and share the gains equitably, it will lead to greater concentrations of wealth and power for the elite few who usher in this new age — and poverty, powerlessness, and a lost sense of purpose for the global majority.”

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, studies the impact of AI on the future of work and spoke recently at a Stanford AI ethics symposium. Regarding the number of jobs suitable for machine learning that are likely to be replaced in the years ahead, Brynjolfsson said, “If you look at the economy overall, there’s a tidal wave coming. It’s barely hit yet.”

Machine intelligence can be used to redesign and augment workplace tasks, but it is most often used to replace jobs, Brynjolfsson said.

Automation’s impact on job loss is predicted to differ city to city and state to state, according to both a Brookings Institution analysis and research by Brynjolfsson and Tom Mitchell of Carnegie Mellon University. Fresno, California, is expected to get hit harder than Minneapolis, for example, but job instability or loss is expected to disproportionately impact low-income households and people of color. A recent McKinsey report says African-American men are expected to see the greatest job loss as a result of automation.

This follows a trend of median income in the United States stagnating since 2000. Brynjolfsson calls the end of the historical link between rising productivity and rising median income “the great decoupling.”

“For most of the 20th century, those roles in tandem — more production, more wealth, more productivity — went hand in hand with the typical person being better off, but recently those lines have diverged,” he said. “Well, the pie is getting bigger, we’re creating more wealth, but it’s going to a smaller and smaller subset of people.”

Brynjolfsson believes community challenges like the DARPA autonomous vehicle challenge and ImageNet for computer vision have driven great leaps forward in state-of-the-art AI, but he argues businesses and the AI community should now turn their attention toward shared prosperity.

“It’s possible for many people to be left behind and indeed, many people have. And that’s why I think the challenge that is most urgent now is not simply more better technology, though I’m all for that, but creating shared prosperity,” he said.

Tech giants and access to power

Another major trend underway as AI spreads is that, for the first time in U.S. history, the majority of new hires entering the workforce are people of color. Most cities in the U.S. will no longer have a racial majority by 2030, and in time neither will the nation as a whole, according to U.S. Census projections.

These demographic shifts make lack of diversity within AI companies all the more glaring. Critically, there’s a lack of race and gender diversity in the creation of decision-making systems — what AI Now Institute director Kate Crawford calls AI’s “white guy problem.”

Only 18% of research published at major AI conferences is authored by women, and at Facebook and Google just 15% and 10% of research staff, respectively, are women, according to a 2018 analysis by Wired and Element AI. Neither Google nor Facebook supplies AI research diversity numbers, spokespeople for both companies said.

A report released in April by the AI Now Institute details a “stark cultural divide between the engineering cohort responsible for technical research and the vastly diverse populations where AI systems are deployed.” The group refers to this as “the AI accountability gap.”

The report also recognizes the human labor hidden within AI systems, like the tens of thousands of moderators necessary for Facebook or YouTube content, or the telepresence drivers in Colombia who are remotely driving Kiwibot delivery robots near UC Berkeley in the San Francisco Bay Area.

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper published by UCLA in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has made only incremental progress, and funding for startups with Latinx or black founders still lags behind funding for those with white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

One of those fellows, Mutale Nkonde, is a coauthor of the Algorithmic Accountability Act, legislation introduced in both houses of Congress earlier this year that would charge the Federal Trade Commission (FTC) with assessing algorithmic bias and allow the agency to issue fines based on company size.

She’s also executive director of AI for the People and a fellow at Harvard University’s Berkman Klein Center for Internet & Society. She’s now working to assess how AI and misinformation may be used to target African-Americans during the 2020 election. A Senate Intelligence Committee investigation released in October found that 2016 election interference efforts singled out African-Americans on Facebook, Twitter, and Instagram.

Before that, she and a small team worked on advancing the idea of racial literacy.

Nkonde and her coauthors posit that measures like implicit bias training and diversity initiatives — championed by tech giants that release annual diversity reports — have failed to move the needle on creating a tech workforce that looks like its users. To make meaningful progress, they argue, businesses should put aside vague aspirations and begin taking practical steps toward educating people in racial literacy.

“The real goal of building capacity for racial literacy in tech is to imagine a different world, one where we can break free from old patterns,” reads a paper explaining the racial literacy framework. “Without a deliberate effort to address race in technology, it’s inevitable that new tech will recreate old divisions. But it doesn’t have to be that way.”

The coauthors want racial literacy to become part of the curriculum for computer science students and part of employee training at tech companies. Their approach draws on Howard Stevenson’s racial literacy training for schools and includes implicit association tests to identify the stereotypes people hold.

Racial literacy aims to equip people, including computer scientists, designers, and machine learning engineers, with the training and emotional intelligence to resolve racially stressful situations in the workplace.

The objective is to let people speak in an open and non-confrontational way about how a product or service can perpetuate structural racism or lead to adverse effects for a diverse group of users. In interviews with employees from mid-sized and large tech companies, the researchers found that in many tech firms, confronting issues associated with race was taboo.

“Many of the barriers that came up in the interviews, and even anecdotally in our lives, is that people don’t want to acknowledge race. They want to pretend that it doesn’t matter and that everybody is the same, and what that actually does is reinforce racist patterns and behavior,” Nkonde said. “It would mean companies have to be clear about their values, instead of trying to be all things to all people by avoiding an articulation of their values.”

Racial literacy will be increasingly important, Nkonde believes, as companies like Alphabet create products that are of critical importance to people’s lives, such as healthcare services or facial recognition software sold to governments.

The other intended result of racial literacy training is to create a culture within companies that sees value in a diverse workforce. A Boston Consulting Group study released last year found higher rates of revenue and innovation in organizations that had more diversity. But if hiring and retention numbers are any indication, that message doesn’t seem to have reached Silicon Valley tech giants.

LinkedIn senior software engineer Guillaume Saint-Jacques thinks AI ethics isn’t just the right thing to do; it makes sound business sense. One of the people behind the Fairness Project, which launched this summer, Saint-Jacques says bias can get in the way of profit.

“If you’re very biased, you might only cater to one population, and eventually that limits the growth of your user base, so from a business perspective you actually want to have everyone come on board … it’s actually a good business decision in the long run,” he said.
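His argument maps onto a standard fairness check: compare the rate at which a model delivers positive outcomes (a recommendation, a match) across user groups. Below is a minimal sketch of that comparison on hypothetical data; the function and figures are invented for illustration and are not drawn from LinkedIn’s Fairness Project.

```python
def positive_rate_gap(predictions):
    """predictions: list of (group, served_positive_outcome) pairs.

    Returns the spread between the highest and lowest per-group rates
    of positive outcomes; 0.0 means every group is served equally.
    """
    rates = {}
    for group in {g for g, _ in predictions}:
        hits = [served for g, served in predictions if g == group]
        rates[group] = sum(hits) / len(hits)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group A members get a recommendation 80% of the
# time, group B members only 40%. Group B users get less value and are
# likelier to leave, which is the business cost Saint-Jacques describes.
preds = ([("A", True)] * 8 + [("A", False)] * 2 +
         [("B", True)] * 4 + [("B", False)] * 6)
print(round(positive_rate_gap(preds), 2))  # 0.4
```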

Individual autonomy and automation

Powerful companies may wield their might in different ways, but their business plans have consequences for individuals.

Perhaps the best summary of this new power dynamic comes from The Age of Surveillance Capitalism by retired Harvard Business School professor Shoshana Zuboff. The book details the creation of a new form of capitalism that combines sensors like cameras, smart home devices, and smartphones to gather data that feeds into AI systems to make predictions about our lives — like how we will behave as consumers — in order to “know and shape our behavior at scale.”

“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data [points] are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’ and fabricated into prediction products that anticipate what you will do now, soon, and later,” wrote Zuboff.

She argues that this economic order was created in Silicon Valley by Google but has since been adopted by Amazon, Facebook, and Microsoft, as well as counterparts in China like Baidu and Tencent.

Zuboff describes surveillance capitalism as an unprecedented form of power that few fully understand yet and says no effective means of collective or political action currently exists to confront it.

She questions the havoc that surveillance capitalism might wreak on human nature, when the market “transforms into a project of total certainty.” Zuboff says that left unchecked, this relatively new market force can overthrow people’s sovereignty and become a threat to both Western liberal democracies and the very notion of being able to “imagine, intend, promise, and construct a future.”

These large companies “accumulate vast domains of new knowledge from us, but not for us,” she wrote. “They predict our futures for the sake of others’ gain, not ours. As long as surveillance capitalism and its behavioral futures markets are allowed to thrive, ownership of the new means of behavioral modification eclipses ownership of the means of production as the fountainhead of capitalist wealth and power in the twenty-first century.”