Facebook is reaffirming to its 1.86 billion monthly active users that it is a place where everyone has equal access, strengthening its stance against discriminatory ads. The social networking company revealed on Wednesday that it has added stronger language to its advertising policies to counter hateful or illegal uses. To reduce the chance of violations, Facebook has launched an education section and has begun testing machine learning to detect housing, employment, or credit ads that may violate its policy.
“Over the past several months, we’ve met with policymakers and civil rights leaders to gather feedback about ways to improve our enforcement while preserving the beneficial uses of our advertising tools,” Facebook wrote in a blog post. In October, a report from ProPublica alleged that the company’s system allowed marketers to exclude certain races from seeing their ads.
“Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers. That’s basically what Facebook is doing nowadays,” wrote ProPublica journalists Julia Angwin and Terry Parris, Jr. The exclusion option applied to any type of ad, but the stakes were highest for ads about housing, employment, or credit, areas covered by federal anti-discrimination law.
Facebook has said its policies prohibit this type of targeting and, in November, took action, suspending these ad types: “There are many nondiscriminatory uses of our ethnic affinity solution in these areas, but we have decided that we can best guard against discrimination by suspending these types of ads,” wrote Erin Egan, the company’s chief privacy officer.
While an education section has been created to offer advice on how to properly (and legally) use Facebook Ads, the company wants to be more proactive and has turned to machine learning for help. It has begun testing technology to identify ads that offer housing, employment, or credit opportunities. “This will allow us to more quickly provide notices and educational information to advertisers — and more quickly respond to violations of our policy,” Facebook explained.
When an advertiser creates a campaign, Facebook’s system will send a warning when the ad is for housing, employment, or credit opportunities and either “includes or excludes” a multicultural advertising segment. Such an ad will be disapproved, though advertisers can submit it for manual review. Additionally, should an ad be flagged by the system, advertisers will have to certify that they’re complying with Facebook’s policies and with “applicable anti-discrimination laws.”
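The review flow described above can be sketched as a simple rule check. To be clear, this is a hypothetical illustration, not Facebook’s actual system: the `Ad` structure, the function names, and the example segment labels are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

# Categories the article says trigger extra scrutiny (hypothetical labels).
SENSITIVE_CATEGORIES = {"housing", "employment", "credit"}

@dataclass
class Ad:
    category: str                           # e.g. "housing", "retail"
    included_segments: set = field(default_factory=set)
    excluded_segments: set = field(default_factory=set)

def review_ad(ad: Ad, multicultural_segments: set) -> str:
    """Mimic the flow the article describes: a sensitive-category ad that
    includes or excludes a multicultural segment is disapproved pending
    manual review and an anti-discrimination certification."""
    touches_multicultural = bool(
        (ad.included_segments | ad.excluded_segments) & multicultural_segments
    )
    if ad.category in SENSITIVE_CATEGORIES and touches_multicultural:
        return "disapproved: submit for manual review and certify compliance"
    return "approved"

segments = {"African American affinity", "Hispanic affinity"}
# Housing ad excluding a multicultural segment -> blocked pending review.
print(review_ad(Ad("housing", excluded_segments={"Hispanic affinity"}), segments))
# Retail ad targeting the same segment -> allowed (a "nondiscriminatory use").
print(review_ad(Ad("retail", included_segments={"Hispanic affinity"}), segments))
```

The key design point the article implies is that the check keys on the combination of ad category and segment use, rather than banning multicultural segments outright, which preserves the “nondiscriminatory uses” Facebook mentions.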
“Since committing to these changes last fall, we’ve heard from public and private sector organizations that want us to know there’s value in being able to reach specific groups with information about products, services, and causes that they might find relevant,” Facebook said. It plans on working with several organizations in the future to adjust its policies and advertising products accordingly.
The Leadership Conference on Civil and Human Rights applauded Facebook’s move to prohibit ethnic affinity marketing, saying in a statement:
We ‘like’ Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements. Our nation’s nondiscrimination laws apply in both the real and virtual worlds, and we applaud Facebook for working with the civil rights community to reach today’s announcement.
Facebook’s actions today are bold — they not only strengthened their advertising policies, but they are committed to working to educate new advertisers and have put in place robust enforcement rules.
We hope other companies will review their own policies and follow Facebook’s example. Online companies must have strong safeguards against discriminatory ad targeting based on gender, sexual orientation, religion, and other protected characteristics.
This article was written by Ken Yeung from VentureBeat and was legally licensed through the NewsCred publisher network.