intelligence, the AI world was getting a serious wake-up call. There were some incredible advances in AI research in 2018—from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile examples of the harm these systems can cause when they are deployed too hastily.
A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on people with darker skin, but tech giants continued to peddle them anyway, to customers including law enforcement. At the start of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.
In some ways, my wish did come true. In 2019, there was more talk about AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to set up responsible AI teams and parade them in front of the media. It’s hard to attend an AI-related conference anymore without part of the programming being dedicated to ethics-related questions: How can we protect people’s privacy when AI requires so much data? How can we empower marginalized communities instead of exploiting them? How can we continue to trust media in the face of algorithmically created and distributed disinformation?
But talk is just that—it’s not enough. For all the lip service paid to these issues, many organizations’ AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a few members whose inclusion provoked controversy. A backlash immediately led to its dissolution.
Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advances made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers—content moderators, data labelers, transcribers—who toil away in often brutal conditions.
But not all is doom and gloom: 2019 was the year of the greatest grassroots pushback against harmful AI, from community groups, policymakers, and tech employees themselves. Several cities—including San Francisco and Oakland, California, and Somerville, Massachusetts—banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies’ use of AI for tracking migrants and for drone surveillance.
Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that drive the field’s runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both moved and surprised by how many of the keynotes, workshops, and posters focused on real-world problems—both those created by AI and those it could help solve.
So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn’t lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so that they could one day help us solve some of our toughest challenges.
AI, in other words, is meant to help humanity prosper. Let’s not forget.