As our online worlds expand, so does the opportunity for targeted ads and promoted content to infiltrate our Facebook news feeds. Algorithms shape the everyday online communities we immerse ourselves in: the Facebook friends we are nudged to add, the search results we see on Google and the trending news we are told to follow on Twitter. The data behind these systems is usually treated as objective. However, as connected devices and the Internet of Things become more prevalent, research into human influence on these algorithms and big-data software suggests otherwise. It seems facts can have a bias after all.

A gender bias to be precise.

A recent study by Carnegie Mellon University and researchers at the International Computer Science Institute revealed gender discrimination and a lack of transparency in Google’s ad targeting system.

Researchers found that fake web users believed by Google to be male job seekers were much more likely than equivalent female job seekers to be shown a pair of ads for high-paying executive positions when they later visited a news website.
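The researchers' approach boils down to a controlled comparison: create two groups of simulated profiles identical except for gender, browse the same sites, and test whether the ad-impression rates differ more than chance would allow. As an illustration only, with entirely made-up counts (the study's actual figures and methodology are more involved), such a difference could be checked with a simple two-proportion z-test:

```python
import math

def two_proportion_z(shown_a, n_a, shown_b, n_b):
    """Two-sided z-test for a difference in ad-impression rates
    between two groups of simulated profiles."""
    p_a = shown_a / n_a
    p_b = shown_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (shown_a + shown_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: how many of 500 "male" vs. 500 "female"
# simulated profiles were shown the executive-job ad at least once.
z, p = two_proportion_z(180, 500, 30, 500)
print(f"z = {z:.2f}, p = {p:.4g}")
```

A tiny p-value would indicate the gap in impression rates is very unlikely under gender-blind ad serving, which is the kind of statistical evidence AdFisher automates at scale.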

Anupam Datta, an associate professor at Carnegie Mellon University who helped develop AdFisher, the tool used in the study, points to the need for tools that uncover how online ad companies differentiate between people.

“I think our findings suggest that there are parts of the ad ecosystem where kinds of discrimination are beginning to emerge and there’s a lack of transparency,” said Datta in the report. “This is concerning from a societal standpoint.”

The report goes on to caution that it is difficult to determine definitively how ads are being targeted because Google's system is so complex; pinpointing where the gender bias in ad serving comes from will require more in-depth research.

“We cannot determine who caused these findings due to our limited visibility into the ad ecosystem, which includes Google, advertisers, websites and users,” the report concludes. “Nevertheless, these results can form the starting point for deeper investigations by either the companies themselves or by regulatory bodies.”

Concern about the overuse of big data was also raised in a White House report last year, which warned that "data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education and the marketplace."

What are your thoughts on Carnegie Mellon’s findings? Let us know in the comments section below.