“YOU really wouldn’t fit in here.”

“YOU wouldn’t like the client.”

So ended my first two job interviews on the old Madison Avenue.

Having grown up in a fairly siloed white liberal environment, and being a bit of a wannabe revolutionary, I immediately assumed it was because I had long hair, a big mustache, sideburns and a suit that was obviously new.

Then came the truth… “YOU would like the Jewish family name”… “YOU would fit in well with this department…”

And the proverbial shoe dropped. I understood. It was my first encounter with bias—genteel anti-Semitism—but not my last.

That emphasized “YOU” has followed me across the years…sadly familiar…but now clear.

But bias—workplace or elsewhere—is not always genteel, nor does it always take multiple encounters to become clear. Racism, gender bias, religious bias and lifestyle bias are often perniciously obvious, manifesting themselves not in thinly veiled innuendo but in overt body language, in disgusting comments, in leering looks and in direct interactions that can result in a lack of promotions, lower salaries, fewer raises and less-than-choice assignments, despite objective qualifications.

My experience is that many (most?) bias offenders are serial in nature. Race, gender and religion are one long continuum for these folks, making them easy to spot and, one would hope, easy to deal with… (a sad story for another time).

Yet today we are faced with a new kind of bias—an insidious bias that is not so obvious and, worse, that has potentially far-reaching consequences as it infects the very systems that have become core to our daily lives.

For example, a recent piece about LinkedIn shows an inherent bias in their search engine:

A search for “Stephanie Williams,” for example, brings up a prompt asking if the searcher meant to type “Stephen Williams” instead.

It’s not that there aren’t any people by that name—about 2,500 profiles included Stephanie Williams.

But similar searches of popular female first names, paired with placeholder last names, bring up LinkedIn’s suggestion to change “Andrea Jones” to “Andrew Jones,” Danielle to Daniel, Michaela to Michael and Alexa to Alex.

The pattern repeats for at least a dozen of the most common female names in the U.S.

Searches for the 100 most common male names in the U.S., on the other hand, bring up no prompts asking if users meant predominantly female names…

The article concludes:

“Histories of discrimination can live on in digital platforms,” Kate Crawford, a Microsoft researcher, wrote in the New York Times earlier this year. “And if they go unquestioned, they become part of the logic of everyday algorithmic systems.”

Ever try to fire or litigate against an algorithm?

According to recent research from Princeton as quoted in Quartz, “When computers learn human languages, they also learn human prejudices”:

Implicit biases are a well-documented and pernicious feature of human languages. These associations, which we’re often not even aware of, can be relatively harmless: We associate flowers with positive words and insects with negative ones, for example. But they can also encode human prejudices, such as when we associate positive words with European American names and negative words with African American ones.

New research from computer scientists at Princeton suggests that computers learning human languages will also inevitably learn those human biases. In a draft paper, researchers describe how they used a common language-learning algorithm to infer associations between English words. The results demonstrated biases similar to those found in traditional psychology research and across a variety of topics. In fact, the authors were able to replicate every implicit bias study they tested using the computer model.

For example, the model showed a clear gender bias: male names were more strongly associated with words such as “management” and “salary,” while female names were more strongly associated with words such as “home” and “family.”
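
For the technically curious, here is a minimal sketch of how such associations can be probed, assuming the gensim library and its downloadable Google News word vectors. This is a simplification for illustration, not the Princeton team’s exact test, and the word lists are hypothetical placeholders:

```python
# A simplified probe of word-embedding associations (not the Princeton paper's
# exact method). Assumes the gensim library and its downloadable
# "word2vec-google-news-300" vectors.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # large download on first run

# Illustrative word lists (placeholders, not the researchers' curated sets)
career_words = ["management", "salary", "executive", "business"]
family_words = ["home", "family", "children", "marriage"]
male_names = ["John", "Paul", "Mike", "Kevin"]
female_names = ["Amy", "Joan", "Lisa", "Sarah"]

def mean_similarity(names, attributes):
    """Average cosine similarity between every name/attribute pair."""
    sims = [model.similarity(n, a) for n in names for a in attributes]
    return sum(sims) / len(sims)

# If the embedding encodes the bias described above, the two name groups will
# show noticeably different averages across the two attribute lists.
print("male names   ~ career:", mean_similarity(male_names, career_words))
print("male names   ~ family:", mean_similarity(male_names, family_words))
print("female names ~ career:", mean_similarity(female_names, career_words))
print("female names ~ family:", mean_similarity(female_names, family_words))
```

The researchers’ actual study used carefully chosen sets of names and attribute words and a statistical test on the differences; the point of the sketch is simply that the associations live in the numbers, ready to be inherited by whatever system uses them.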

Makes you wonder who’s doing all that programming. But maybe there is hope in professional “nonbiased” sources?

“Is Your Data Sexist?” asks Prophet, quoting MIT Technology Review:

Researchers from Boston University and Microsoft have discovered that a popular data set called “Word2Vec”, which is based on a group of 300 million words taken from Google News, is “blatantly sexist.”

This is important because, according to the authors, “Numerous researchers have begun to use [Word2Vec] to better understand everything from machine translation to intelligent Web searching.” In other words, biases in the Google Word2Vec data set could potentially spread to and distort other research and applications that use this set of data.

The analysis was conducted using vector space mathematics, essentially a mathematical model of the relationships between terms. Simply put, those relationships work like analogies: Paris is to France as Tokyo is to X, where X = Japan. But, says the MIT article:

“Ask the database ‘father : doctor :: mother : x’ and it will say x = nurse. And the query ‘man : computer programmer :: woman : x’ gives x = homemaker.”

Not good.
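
For those who want to try the analogy arithmetic themselves, here is a minimal sketch, again assuming gensim and the pretrained Google News vectors. The exact words returned depend on the model and its vocabulary, but the pattern the researchers describe is easy to reproduce:

```python
# Analogy queries via vector arithmetic: for "a is to b as c is to X",
# look for the word closest to b - a + c.
# Assumes the same pretrained "word2vec-google-news-300" vectors as above.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# "Paris is to France as Tokyo is to X"  ->  France - Paris + Tokyo
print(model.most_similar(positive=["France", "Tokyo"], negative=["Paris"], topn=1))

# "man is to computer programmer as woman is to X"
# (the Google News vectors store multiword phrases with underscores)
print(model.most_similar(positive=["computer_programmer", "woman"],
                         negative=["man"], topn=1))
```

Each call returns the vocabulary word closest to the combined vector, along with a similarity score, which is exactly how the “x = homemaker” answer above comes about.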

This may not seem surprising; of course language embeds biases. But, say the researchers, “One might have hoped that the Google News embedding would exhibit little gender bias because many of its authors are professional journalists.”

So much for professional journalism…

The bottom line, as the article points out, is that the biggest, deepest danger is that the “data set could potentially spread to and distort other research and applications….”

Think about that in our ever-growing interconnected world, where data sets are shared and cyber-attacks are commonplace. The risk that people will be exposed to biased sources of information, and make important decisions based on them, is not merely possible; it’s already happening.

And, worse, think about the huge number of people who receive most of their information via sources they trust and believe in…again, we’re already seeing the results of programmed bias.

The Wall Street Journal sums it up well in an article titled, “How Social Bias Creeps into Web Technology”:

While automation is often thought to eliminate flaws in human judgment, bias—or the tendency to favor one outcome over another, in potentially unfair ways—can creep into complex computer code.

“Computers aren’t magically less biased than people, and people don’t know their blind spots,” said Vivienne Ming, a data scientist and entrepreneur who advises venture capitalists on artificial intelligence technology.

My view?

It’s all about PEOPLE FIRST. Time to begin dealing with our own inherent biases and skews. Understand that whatever flaws we have, they will only be amplified and more efficiently spread by digital means, not eradicated…not at all!

And as we ponder the hatred that Microsoft’s Tay spewed or as we examine Gawker’s MeinCoke, let’s be cognizant that we are at fault—not the software—and that perhaps it’s time to listen to Stephen Hawking:

We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

PEOPLE FIRST.

What do you think?

Author

David Sable
Global CEO
Young & Rubicam
