Silicon Valley is Inserting its Biases into Nearly Every Technology We Use

Motherboard

In 2015, a Google Photos algorithm auto-tagged two black friends as “gorillas,” a result of the program having been under-trained to recognize dark-skinned faces. That same year, a British pediatrician was denied access to the women’s locker room at her gym because the software the gym used to manage its membership system automatically coded her title—“doctor”—as male. Around the same time, a young father weighing his two-and-a-half-year-old toddler on a smart scale was told by the accompanying app not to be discouraged by the weight gain—he could still shed those pounds!

These examples are just a glimpse of the embedded biases encoded in our technology, catalogued in Sara Wachter-Boettcher’s new book, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. Wachter-Boettcher also chronicles more alarming instances of biased tech, like crime prediction software that mistakenly codes black defendants as having a higher risk of committing another offense than white defendants, and design flaws in social media platforms that leave women and people of color wide open to online harassment.

Nearly all of these examples, she writes, are the result of an insular, mostly white tech industry that has built its own biases into the foundations of the technology we use and depend on. “For a really long time, tech companies thrived off of an audience that wasn’t asking the tough questions,” Wachter-Boettcher told me during a recent phone interview. “I want people to feel less flummoxed by technology and more prepared to talk about it.”

Wachter-Boettcher, a web consultant focused on helping companies improve their user experience and content strategy, said she wrote the book for non-industry readers (the kind who “find Facebook kind of creepy but [can’t] quite articulate what is going on”), who she believes have a right to push back against the industry’s flaws.

The following conversation between Wachter-Boettcher and me has been edited for length and clarity.

In the final chapter of the book, you wonder if talking about apps and algorithms during this “weird moment in America” even matters. You write, “Who cares about tech in the face of deportations, propaganda, and the threat of a second cold war?” Why should we care?

I had to write that six months ago, and it turns out it’s still the same in a lot of ways.

I would say technology is a massive industry that’s extremely powerful. It’s something that touches literally every aspect of our lives, it’s shaping our culture, and it is affecting how we feel. If we ignore technology, or if we treat technology as if it’s neutral, we are doing ourselves a disservice.

I look at a lot of these biases as microaggressions, the paper cuts of technology. At an individual level it’s maybe not a huge deal that this one online form doesn’t accept people who don’t identify as male or female. But when you look at those over time, they add up. It’s death by a thousand paper cuts, and I think that is important, because it is important to think about humans.

But the other reason the small stuff is important is that the little biases and little failures are red flags for greater abuses. They are visible in ways that a biased algorithm is not visible. If a tech company has paid so little attention to the values of people that it creates an interface that only works if you’re straight and cis, or a photo algorithm that only recognizes you if you’re white, do you want to trust that algorithm with other aspects of your life? If they cannot make an interface that includes you, do you trust them to make an algorithm that you cannot even see?

Did you get pushback from the tech industry while you were reporting the book?

Some of the pushback has been things like, ‘Oh, you know, first world problems,’ or ‘Oh gosh, you have too much time on your hands.’ It was general dismissiveness. I have not gotten much pushback of the variety that actually engages with the details, yet. The book isn’t out.

I think that there are a lot of people who have good intentions but haven’t necessarily wanted to do the difficult work of realizing how much of their own perspective is informed and driven by a white supremacist, sexist culture. So much of our worldview is tied up in a history that is messy at best, and that’s true for people who make technology as well. It’s a lot of people who are well-meaning but think, ‘Oh, well, I’m not racist,’ as opposed to thinking of the underlying systems and structures that are in place. You’re not just not doing the work of undoing bias, you’re embedding it into things that are going to outlive you.

One of the really scary things is [the neural network] Word2vec. It was created by Google researchers a couple of years ago to produce word embeddings that inform natural language processing. They fed it a huge amount of text from Google News articles to give it content to churn through and learn from. It learned things really well. It could answer analogies; it understood relationships between words. But it would also do things like, if you asked, ‘man is to woman as computer scientist is to…’, answer ‘homemaker.’ That’s just a little bit of technology that can be used to process language. But what happens is that people do research and build tools on top of a model like that, with the bias embedded in it.
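To make that analogy mechanism concrete, here is a minimal sketch (not from the book or the interview) of how such a query looks with the gensim library and the publicly released GoogleNews word2vec vectors. The file path and token names are assumptions; the widely cited research example used “computer_programmer” rather than “computer scientist.”

```python
# Minimal sketch: querying analogy structure in pretrained word2vec embeddings
# with gensim. Assumes the publicly released GoogleNews vectors file has been
# downloaded locally; the exact vocabulary tokens are an assumption.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man is to computer_programmer as woman is to ?" -- the vector arithmetic
# behind the analogy results described above.
results = vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=5,
)
for word, similarity in results:
    print(word, round(similarity, 3))
```

The analogy is answered with simple arithmetic over the learned vectors, which is why whatever biases sit in the training text surface directly in the results.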

So apps and algorithms create social feedback loops that can then influence user behavior?

That’s kind of the deal with an algorithm. An algorithm has to be tuned to something. It’s doing a series of steps, and someone decided what those steps were. If you look at the work that’s been done on algorithmic software for predictive policing, one of the things it does is say, ‘this is a high-crime area,’ and it sends more police there, and then more police see more crimes, and it’s labeled as an even higher-crime area. If you have a high population of black people, they’ll be over-policed. Although the rates at which crimes are committed are similar in black and white neighborhoods, arrests are higher in black neighborhoods, so more police are sent there, and the cycle perpetuates itself.
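As a rough illustration of that loop, the toy simulation below (a hypothetical sketch, not from the book or the interview) models two neighborhoods with identical underlying crime rates, where patrols are allocated according to recorded crime; a small initial disparity in the records keeps reproducing itself.

```python
# Toy simulation (illustrative assumptions only) of the predictive-policing
# feedback loop described above: two neighborhoods with the same underlying
# crime rate, but patrol allocation follows *recorded* crime, so the
# more-patrolled area keeps looking "higher crime."
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05      # identical underlying rate in both neighborhoods
DETECTION_PER_PATROL = 0.3  # chance a patrol records an incident (arbitrary)
POPULATION = 10_000
TOTAL_PATROLS = 100

recorded = {"A": 10, "B": 5}  # small initial disparity in past records

for year in range(10):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to recorded crime so far.
    patrols = {n: round(TOTAL_PATROLS * recorded[n] / total) for n in recorded}
    for n in recorded:
        # Same true incident rate everywhere...
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(POPULATION))
        # ...but more patrols mean more of those incidents get recorded.
        detection = min(1.0, DETECTION_PER_PATROL * patrols[n] / (TOTAL_PATROLS / 2))
        recorded[n] += sum(random.random() < detection for _ in range(incidents))
    print(year, patrols, recorded)
```

Even though the underlying crime rate is identical, the neighborhood that starts with more recorded incidents keeps receiving more patrols and therefore keeps recording more crime.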

One of the things that people who make software want to do is pretend they can predict the future, and it’s not really about predicting the future, it’s about re-inscribing the past. That’s what happens if you don’t specifically design against it.

What are the most common areas in tech where biases are really pervasive?

If you were going to start paying more attention to the technology you interact with every day, I would say start with the areas where an interface is asking you for information; anytime it’s making assumptions or guesses about who you are or what you want; or anytime the design or content of that tech product is interacting with your own content. Anything that involves altering photos, or anything that involves them putting their copy around something you created. You see this a lot when tech companies are trying to increase engagement—they’ll try to do clever and funny and cute things, like surfacing a post from your past on Facebook.

The other day, Facebook reminded me of my own birthday, as if I would forget. But in the book, you provide a much more jarring example of a father who was shown a Year in Review album, complete with balloons and streamers, featuring a photograph of his young daughter who had died that year.

When a tech company like Facebook assumes everyone who posted something to its platform had a good year, it’s essentially assuming that it knows you better than you do. Most tech companies haven’t hired many people who actually have training in the social sciences. Soft skills tend to be denigrated, and in that kind of culture—where skills like empathy and communication are not valued—it’s easy for people to assume they understand the impact of their work and wildly miss the huge assumptions they’ve made.

I’ve known a lot of people who have worked at Facebook who have had a lot of influence on one narrow feature, but those conversations aren’t happening upstream where the question is: Should we be doing this in the first place?

Did anything really surprise you in your research?

I didn’t find biases I hadn’t thought about before, but that might be because of my own biases. I was surprised about just how skewed some tech products could be. I was a little surprised that FaceApp [a photo-editing app] had, this year, framed its entire algorithm—its “hotness” setting—around white people. So it essentially had learned what beauty was from white people. If you were using that setting and you were a person of color, it would lighten your skin or take a lot of people’s noses and make them narrower. It’s like it just didn’t occur to them. They admitted that it was a problem in their training data set. I was surprised that they could get all the way to market and have it never occur to them that their algorithm would totally screw up for people who weren’t white. I’m not surprised that tech companies have loads of bias toward whiteness. I was surprised at how obvious it was—that it was so blatant.

What can tech users do about these biased algorithms?

I think at an individual level, it’s hard to feel like you can do something. The answer is always: delete the app. That’s a fine choice to make, but I realize there are many apps you don’t want to delete. There are many I don’t want to delete. The first thing is to look more critically at the technology you use. When technology makes you feel alienated or uncomfortable, for a lot of people an instinct is to feel like they don’t “get” it. Whenever you have those feelings, stop and say, ‘Wait a second, maybe this isn’t about me. Maybe it’s about this product, and this product is wrong.’ I do think we internalize a lot of this stuff instead of assessing the product we’re engaging with. It’s not you, it’s the technology.

When you find those small visible examples of bias, I would call tech companies out about them. Contact them, tell them on social media. I think they’ve gotten a free pass for a long time. I do think they need to hear this kind of pushback from people. Particularly when we’re getting into anything related to AI and algorithms, it’s going to have to come down to regulation. But that’s not going to happen if there’s not a loud and large number of people who are being critical.

Wachter-Boettcher’s book is available in stores starting today.
