It’s always sunny and warm at Facebook headquarters. There’s a never-ending buffet of food for the workers; even the grits are perfect, prepared by a Southern chef.

Working at Facebook is like living in The Truman Show, a curated community free of bullies and trolls – on the surface at least.

If only the Facebook that 2 billion people log on to every month felt this safe. Instead of politeness, the online public space has been afflicted by dishonest discourse and factionalism.

Over the past three years, Facebook has become something of a bogeyman. In 2016, it was overrun by bad actors and foreign agents who tried to exploit fissures in the U.S. and other Western democracies. False news proliferated, explicitly designed to set people against each other. At the same time, Facebook has found that even inoffensive content, in the form of viral videos and clickbait headlines, does not contribute to well-being.

It’s perhaps fitting that Facebook’s campus feels intelligently artificial, because the social network increasingly relies on one technology more than any other to restore civility and fix its other problems: artificial intelligence. It’s the only approach that can potentially grapple with more than 2 billion users, 7 million advertisers and trillions of decisions a day.

With the right data, brands can anticipate people’s purchases and predict the likelihood that they’ll add a product to an online shopping cart, says George Manas, president of Resolution Media (owned by Omnicom Media Group).

“There’s a lot of momentum right now around how we can bring more automated intelligence and machine learning in the ads management and campaign management space,” Manas says. “AI is woven into the fabric of the Facebook platform.”
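As a rough illustration of the kind of automated prediction Manas describes, consider a minimal sketch of an add-to-cart likelihood model. The feature names and data below are invented for illustration; they are not Facebook’s or Resolution Media’s actual models or fields.

```python
# Hypothetical sketch: estimating add-to-cart likelihood from behavioral signals.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [pages_viewed, seconds_on_product_page, prior_purchases, ad_clicks]
X = np.array([
    [3, 45, 0, 1],
    [8, 120, 2, 3],
    [1, 10, 0, 0],
    [5, 90, 1, 2],
    [2, 30, 0, 0],
    [7, 150, 3, 4],
])
# Label: 1 if the shopper added the product to the cart, 0 otherwise.
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Predicted probability that a new shopper adds the product to a cart.
new_shopper = np.array([[4, 60, 1, 1]])
print(model.predict_proba(new_shopper)[0, 1])
```

In practice, production systems of the sort Manas alludes to would train far more complex models on vastly larger behavioral datasets, but the basic shape of the task is the same: behavioral signals in, a purchase probability out.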

What that means beyond advertising is still coming into focus. Will Wiseman, chief strategy officer at PHD US, does not fear the coming AI revolution. He literally wrote the book on it: Sentience: The Coming AI Revolution and the Implications for Marketing, published in 2015.

“AI taking over? We’re not there yet,” Wiseman says. “We’re just getting to the point where it’s becoming useful.”

Facebook may share some similarities with the fictitious Delos corporation from Westworld, a company that clandestinely creates near-perfect facsimiles of customers based on data collected through secret surveillance. But Wiseman is more concerned with how AI and algorithms shape real people by filtering the information they see. Facebook’s machines could get so good at giving people what they want that they neglect to expose people to the variety they need. It’s the “filter bubble” effect, much discussed after so many people were shocked by the outcome of the 2016 presidential election, taken to new heights with every advance in AI.

“The biggest risk is that AI experiences and algorithms will be programming so many of our decisions and have the ability to narrow our reality,” Wiseman concludes.
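One hedged way to see Wiseman’s concern is a toy simulation. The numbers and ranking rule below are invented; this is not Facebook’s News Feed algorithm, only an illustration of how a feed that ranks by predicted preference, and keeps learning from what it has already shown, can narrow what a user sees.

```python
# Toy filter-bubble simulation with invented parameters. Illustrates
# preference-reinforcing ranking, not any real platform's feed logic.
import random

random.seed(0)
topics = ["politics", "sports", "science", "music", "travel"]
# Start with a mild preference for one topic.
scores = {t: 1.0 for t in topics}
scores["politics"] = 1.5

for step in range(200):
    # Show the top-scored topic most of the time, a random one occasionally.
    if random.random() < 0.9:
        shown = max(scores, key=scores.get)
    else:
        shown = random.choice(topics)
    # Engagement with what was shown feeds back into its score.
    scores[shown] += 0.1

share = scores["politics"] / sum(scores.values())
print(f"Share of attention captured by one topic: {share:.0%}")
```

Even with a small initial tilt, the feedback loop concentrates most of the simulated attention on a single topic, which is the narrowing effect Wiseman warns about.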

The full article, by Garett Sloane, was published on AdAge.com.