Facebook Maintains Elite VIP Whitelist of Influencers Allowed to Skirt TOS

A recent leak of internal Facebook documents to the Wall Street Journal reveals, among other things, the existence of a preferential-treatment program that lets high-profile members skirt the Terms of Service. But what is the real goal of the story coming out? Who leaked the documents, and why now, after Facebook has repeatedly called for government regulation? There appears to be more to the story than initially thought.

September 14, 2021 — Is this an influence operation? Facebook was, eerily, created the very same day that a controversial DARPA project called LifeLog was shut down, and Facebook mirrored the LifeLog project completely.

The purpose of the operation was to surreptitiously persuade people to give up their data quite willingly, as part of the “Total Information Awareness” program.

According to Q, Facebook even employed senior DARPA employees. This would later be extensively expounded upon by investigative journalist Whitney Webb. But first, let’s look at the latest revelations coming to light regarding the VIP whitelist. The Wall Street Journal obtained a massive cache of Facebook documents and, in the first part of its investigative series, exposed the list:

The Facebook Files, an investigative series from The Wall Street Journal, dives into an extensive array of internal Facebook documents, giving an unparalleled look inside the social media giant. In our first episode, WSJ’s Jeff Horwitz explains how high-profile users from celebrities to politicians are shielded from the site’s rules and protected from enforcement measures. The company does this in secret, even as CEO Mark Zuckerberg says publicly that all users are treated equally.

Mark Zuckerberg has publicly said Facebook Inc. allows its more than three billion users to speak on equal footing with the elites of politics, culture and journalism, and that its standards of behavior apply to everyone, no matter their status or fame.

In private, the company has built a system that has exempted high-profile users from some or all of its rules, according to company documents reviewed by The Wall Street Journal.

The program, known as ‘cross check’ or ‘XCheck,’ was initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists. Today, it shields millions of VIP users from the company’s normal enforcement process, the documents show. Some users are ‘whitelisted’—rendered immune from enforcement actions—while others are allowed to post rule-violating material pending Facebook employee reviews that often never come.

At times, the documents show, XCheck has protected public figures whose posts contain harassment or incitement to violence, violations that would typically lead to sanctions for regular users. In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook. Whitelisted accounts shared inflammatory claims that Facebook’s fact checkers deemed false, including that vaccines are deadly, that Hillary Clinton had covered up ‘pedophile rings,’ and that then-President Donald Trump had called all refugees seeking asylum ‘animals,’ according to the documents.

A 2019 internal review of Facebook’s whitelisting practices, marked attorney-client privileged, found favoritism to those users to be both widespread and ‘not publicly defensible.’

‘We are not actually doing what we say we do publicly,’ said the confidential review. It called the company’s actions ‘a breach of trust’ and added: ‘Unlike the rest of our community, these people can violate our standards without any consequences.’

Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.

In describing the system, Facebook has misled the public and its own Oversight Board, a body that Facebook created to ensure the accountability of the company’s enforcement systems.

In June, Facebook told the Oversight Board in writing that its system for high-profile users was used in ‘a small number of decisions.’

In a written statement, Facebook spokesman Andy Stone said criticism of XCheck was fair, but added that the system ‘was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding.’

He said Facebook has been accurate in its communications to the board and that the company is continuing to work to phase out the practice of whitelisting. ‘A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross check and has been working to address them,’ he said.

The documents that describe XCheck are part of an extensive array of internal Facebook communications reviewed by The Wall Street Journal. They show that Facebook knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands.

Moreover, the documents show, Facebook often lacks the will or the ability to address them.

This is the first in a series of articles based on those documents and on interviews with dozens of current and former employees.

At least some of the documents have been turned over to the Securities and Exchange Commission and to Congress by a person seeking federal whistleblower protection, according to people familiar with the matter.

Facebook’s stated ambition has long been to connect people. As it expanded over the past 17 years, from Harvard undergraduates to billions of global users, it struggled with the messy reality of bringing together disparate voices with different motivations—from people wishing each other happy birthday to Mexican drug cartels conducting business on the platform. Those problems increasingly consume the company.

Time and again, the documents show, in the U.S. and overseas, Facebook’s own researchers have identified the platform’s ill effects, in areas including teen mental health, political discourse and human trafficking. Time and again, despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them.

Sometimes the company held back for fear of hurting its business. In other cases, Facebook made changes that backfired. Even Mr. Zuckerberg’s pet initiatives have been thwarted by his own systems and algorithms.

The documents include research reports, online employee discussions and drafts of presentations to senior management, including Mr. Zuckerberg. They aren’t the result of idle grumbling, but rather the formal work of teams whose job was to examine the social network and figure out how it could improve.

They offer perhaps the clearest picture thus far of how broadly Facebook’s problems are known inside the company, up to the CEO himself. And when Facebook speaks publicly about many of these issues, to lawmakers, regulators and, in the case of XCheck, its own Oversight Board, it often provides misleading or partial answers, masking how much it knows.

One area in which the company hasn’t struggled is profitability. In the past five years, during which it has been under intense scrutiny and roiled by internal debate, Facebook has generated profit of more than $100 billion. The company is currently valued at more than $1 trillion.

For ordinary users, Facebook dispenses a kind of rough justice in assessing whether posts meet the company’s rules against bullying, sexual content, hate speech and incitement to violence. Sometimes the company’s automated systems summarily delete or bury content suspected of rule violations without a human review. At other times, material flagged by those systems or by users is assessed by content moderators employed by outside companies.

Mr. Zuckerberg estimated in 2018 that Facebook gets 10% of its content removal decisions wrong, and, depending on the enforcement action taken, users might never be told what rule they violated or be given a chance to appeal.

Users designated for XCheck review, however, are treated more deferentially. Facebook designed the system to minimize what its employees have described in the documents as ‘PR fires’—negative media attention that comes from botched enforcement actions taken against VIPs.

If Facebook’s systems conclude that one of those accounts might have broken its rules, they don’t remove the content—at least not right away, the documents indicate. They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.

Most Facebook employees were able to add users into the XCheck system, the documents say, and a 2019 audit found that at least 45 teams around the company were involved in whitelisting. Users aren’t generally told that they have been tagged for special treatment. An internal guide to XCheck eligibility cites qualifications including being ‘newsworthy,’ ‘influential or popular’ or ‘PR risky.’

Neymar, the Brazilian soccer star whose full name is Neymar da Silva Santos Jr., easily qualified. With more than 150 million followers, Neymar’s account on Instagram, which is owned by Facebook, is one of the most popular in the world.

After a woman accused Neymar of rape in 2019, he posted Facebook and Instagram videos defending himself—and showing viewers his WhatsApp correspondence with his accuser, which included her name and nude photos of her. He accused the woman of extorting him.

Facebook’s standard procedure for handling the posting of ‘nonconsensual intimate imagery’ is simple: Delete it. But Neymar was protected by XCheck.

For more than a day, the system blocked Facebook’s moderators from removing the video. An internal review of the incident found that 56 million Facebook and Instagram users saw what Facebook described in a separate document as ‘revenge porn,’ exposing the woman to what an employee referred to in the review as abuse from other users.

‘This included the video being reposted more than 6,000 times, bullying and harassment about her character,’ the review found.

Facebook’s operational guidelines stipulate that not only should unauthorized nude photos be deleted, but that people who post them should have their accounts deleted.

‘After escalating the case to leadership,’ the review said, ‘we decided to leave Neymar’s accounts active, a departure from our usual ‘one strike’ profile disable policy.’

Neymar denied the rape allegation, and no charges were filed against him. The woman was charged by Brazilian authorities with slander, extortion and fraud. The first two charges were dropped, and she was acquitted of the third. A spokesperson for Neymar said the athlete adheres to Facebook’s rules and declined to comment further.

The lists of those enrolled in XCheck were ‘scattered throughout the company, without clear governance or ownership,’ according to a ‘Get Well Plan’ from last year. ‘This results in not applying XCheck to those who pose real risks and on the flip-side, applying XCheck to those that do not deserve it (such as abusive accounts, persistent violators). These have created PR fires.’

In practice, Facebook appeared more concerned with avoiding gaffes than mitigating high-profile abuse. One Facebook review in 2019 of major XCheck errors showed that of 18 incidents investigated, 16 involved instances where the company erred in actions taken against prominent users.

Four of the 18 touched on inadvertent enforcement actions against content from Mr. Trump and his son, Donald Trump Jr. Other flubbed enforcement actions were taken against the accounts of Sen. Elizabeth Warren, fashion model Sunnaya Nash, and Mr. Zuckerberg himself, whose live-streamed employee Q&A had been suppressed after an algorithm classified it as containing misinformation. – WSJ

It is important to note that elite and influential members of both the left and the right were part of the program. The WSJ has published some images of the internal documents, but it has not released the full cache it obtained.

One document, entitled “Political Influence on Content Policy at Facebook,” discusses special treatment for high-powered users.

The documents highlight what appear to be ideological differences within the company.

In the internal documents, Facebook acknowledges this “whitelist” is a “problem” for the company.

XCheck, a “shielding” system, was applied to high-profile accounts to prevent them from receiving algorithmic content violations.
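
To make the mechanics concrete, here is a minimal sketch, in Python, of the two-tier routing the WSJ describes: flagged content from whitelisted accounts is parked in a separate review queue instead of being enforced immediately. Every name and field below is hypothetical, inferred from the reporting rather than taken from Facebook’s actual systems.

```python
# Hypothetical sketch of the two-tier enforcement routing the WSJ describes.
# All names and fields are illustrative; nothing here comes from the leaked documents.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Post:
    author_id: int
    text: str
    flagged: bool  # set by an upstream classifier or by user reports


@dataclass
class EnforcementRouter:
    # Accounts tagged "newsworthy," "influential or popular," or "PR risky"
    xcheck_whitelist: set = field(default_factory=set)
    # The separate, better-staffed review lane; per the WSJ, reviews "often never come"
    deferred_queue: deque = field(default_factory=deque)

    def enforce(self, post: Post) -> str:
        if not post.flagged:
            return "published"
        if post.author_id in self.xcheck_whitelist:
            # VIP path: leave the content up pending an additional layer of review
            self.deferred_queue.append(post)
            return "published (pending XCheck review)"
        # Ordinary-user path: summary enforcement, sometimes with no human review
        return "removed"


router = EnforcementRouter(xcheck_whitelist={10_000_001})
print(router.enforce(Post(10_000_001, "rule-violating post", flagged=True)))  # stays up
print(router.enforce(Post(42, "rule-violating post", flagged=True)))          # removed
```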

Interestingly, Facebook had been calling for internet censorship and government regulation in the months before this exposé was published.

Could this “leak” of documents to the Wall Street Journal have come from Facebook itself?

There appears to be more going on here than initially meets the eye. I highly recommend the article by Neon Revolt, “The Controlled Demolition of Facebook AKA DARPA Project Lifelog.”

Now, getting back to Whitney Webb. Earlier this year she published a major article entitled “The Military Origins of Facebook” that exposes the DARPA connections and calls into question the firm’s status as a “private company”:

In mid-February, Daniel Baker, a US veteran described by the media as “anti-Trump, anti-government, anti-white supremacists, and anti-police,” was charged by a Florida grand jury with two counts of “transmitting a communication in interstate commerce containing a threat to kidnap or injure.”

The communication in question had been posted by Baker on Facebook, where he had created an event page to organize an armed counter-rally to one planned by Donald Trump supporters at the Florida capital of Tallahassee on January 6. “If you are afraid to die fighting the enemy, then stay in bed and live. Call all of your friends and Rise Up!,” Baker had written on his Facebook event page.

Baker’s case is notable as it is one of the first “precrime” arrests based entirely on social media posts—the logical conclusion of the Trump administration’s, and now Biden administration’s, push to normalize arresting individuals for online posts to prevent violent acts before they can happen. From the increasing sophistication of US intelligence/military contractor Palantir’s predictive policing programs to the formal announcement of the Justice Department’s Disruption and Early Engagement Program in 2019 to Biden’s first budget, which contains $111 million for pursuing and managing “increasing domestic terrorism caseloads,” the steady advance toward a precrime-centered “war on domestic terror” has been notable under every post-9/11 presidential administration.

This new so-called war on domestic terror has actually resulted in many of these types of posts on Facebook. And, while Facebook has long sought to portray itself as a “town square” that allows people from across the world to connect, a deeper look into its apparently military origins and continual military connections reveals that the world’s largest social network was always intended to act as a surveillance tool to identify and target domestic dissent.

Part 1 of this two-part series on Facebook and the US national-security state explores the social media network’s origins and the timing and nature of its rise as it relates to a controversial military program that was shut down the same day that Facebook launched. The program, known as LifeLog, was one of several controversial post-9/11 surveillance programs pursued by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) that threatened to destroy privacy and civil liberties in the United States while also seeking to harvest data for producing “humanized” artificial intelligence (AI). 

As this report will show, Facebook is not the only Silicon Valley giant whose origins coincide closely with this same series of DARPA initiatives and whose current activities are providing both the engine and the fuel for a hi-tech war on domestic dissent.

In the aftermath of the September 11 attacks, DARPA, in close collaboration with the US intelligence community (specifically the CIA), began developing a “precrime” approach to combatting terrorism known as Total Information Awareness or TIA. The purpose of TIA was to develop an “all-seeing” military-surveillance apparatus. The official logic behind TIA was that invasive surveillance of the entire US population was necessary to prevent terrorist attacks, bioterrorism events, and even naturally occurring disease outbreaks. 

The architect of TIA, and the man who led it during its relatively brief existence, was John Poindexter, best known for being Ronald Reagan’s National Security Advisor during the Iran-Contra affair and for being convicted of five felonies in relation to that scandal. A less well-known activity of Iran-Contra figures like Poindexter and Oliver North was their development of the Main Core database to be used in “continuity of government” protocols. Main Core was used to compile a list of US dissidents and “potential troublemakers” to be dealt with if the COG protocols were ever invoked. These protocols could be invoked for a variety of reasons, including widespread public opposition to a US military intervention abroad, widespread internal dissent, or a vaguely defined moment of “national crisis” or “time of panic.” Americans were not informed if their name was placed on the list, and a person could be added to the list for merely having attended a protest in the past, for failing to pay taxes, or for other, “often trivial,” behaviors deemed “unfriendly” by its architects in the Reagan administration. 

In light of this, it was no exaggeration when New York Times columnist William Safire remarked that, with TIA, “Poindexter is now realizing his twenty-year dream: getting the ‘data-mining’ power to snoop on every public and private act of every American.”

The TIA program met with considerable citizen outrage after it was revealed to the public in early 2003. TIA’s critics included the American Civil Liberties Union, which claimed that the surveillance effort would “kill privacy in America” because “every aspect of our lives would be catalogued,” while several mainstream media outlets warned that TIA was “fighting terror by terrifying US citizens.” As a result of the pressure, DARPA changed the program’s name to Terrorist Information Awareness to make it sound less like a national-security panopticon and more like a program aiming specifically at terrorists in the post-9/11 era. 

The TIA projects were not actually closed down, however, with most moved to the classified portfolios of the Pentagon and US intelligence community. Some became intelligence funded and guided private-sector endeavors, such as Peter Thiel’s Palantir, while others resurfaced years later under the guise of combatting the COVID-19 crisis. 

Soon after TIA was initiated, a similar DARPA program was taking shape under the direction of a close friend of Poindexter’s, DARPA program manager Douglas Gage. Gage’s project, LifeLog, sought to “build a database tracking a person’s entire existence” that included an individual’s relationships and communications (phone calls, mail, etc.), their media-consumption habits, their purchases, and much more in order to build a digital record of “everything an individual says, sees, or does.” LifeLog would then take this unstructured data and organize it into “discreet episodes” or snapshots while also “mapping out relationships, memories, events and experiences.”

LifeLog, per Gage and supporters of the program, would create a permanent and searchable electronic diary of a person’s entire life, which DARPA argued could be used to create next-generation “digital assistants” and offer users a “near-perfect digital memory.” Gage insisted, even after the program was shut down, that individuals would have had “complete control of their own data-collection efforts” as they could “decide when to turn the sensors on or off and decide who will share the data.” In the years since then, analogous promises of user control have been made by the tech giants of Silicon Valley, only to be broken repeatedly for profit and to feed the government’s domestic-surveillance apparatus.

The information that LifeLog gleaned from an individual’s every interaction with technology would be combined with information obtained from a GPS transmitter that tracked and documented the person’s location, audio-visual sensors that recorded what the person saw and said, as well as biomedical monitors that gauged the person’s health. Like TIA, LifeLog was promoted by DARPA as potentially supporting “medical research and the early detection of an emerging epidemic.”

Critics in mainstream media outlets and elsewhere were quick to point out that the program would inevitably be used to build profiles on dissidents as well as suspected terrorists. Combined with TIA’s surveillance of individuals at multiple levels, LifeLog went farther by “adding physical information (like how we feel) and media data (like what we read) to this transactional data.” One critic, Lee Tien of the Electronic Frontier Foundation, warned at the time that the programs that DARPA was pursuing, including LifeLog, “have obvious, easy paths to Homeland Security deployments.” 

At the time, DARPA publicly insisted that LifeLog and TIA were not connected, despite their obvious parallels, and that LifeLog would not be used for “clandestine surveillance.” However, DARPA’s own documentation on LifeLog noted that the project “will be able . . . to infer the user’s routines, habits and relationships with other people, organizations, places and objects, and to exploit these patterns to ease its task,” which acknowledged its potential use as a tool of mass surveillance.

In addition to the ability to profile potential enemies of the state, LifeLog had another goal that was arguably more important to the national-security state and its academic partners—the “humanization” and advancement of artificial intelligence. In late 2002, just months prior to announcing the existence of LifeLog, DARPA released a strategy document detailing development of artificial intelligence by feeding it with massive floods of data from various sources. 

The post-9/11 military-surveillance projects—LifeLog and TIA being only two of them—offered quantities of data that had previously been unthinkable to obtain and that could potentially hold the key to achieving the hypothesized “technological singularity.” The 2002 DARPA document even discusses DARPA’s effort to create a brain-machine interface that would feed human thoughts directly into machines to advance AI by keeping it constantly awash in freshly mined data. 

One of the projects outlined by DARPA, the Cognitive Computing Initiative, sought to develop sophisticated artificial intelligence through the creation of an “enduring personalized cognitive assistant,” later termed the Perceptive Assistant that Learns, or PAL. PAL, from the very beginning was tied to LifeLog, which was originally intended to result in granting an AI “assistant” human-like decision-making and comprehension abilities by spinning masses of unstructured data into narrative format. 

The would-be main researchers for the LifeLog project also reflect the program’s end goal of creating humanized AI. For instance, Howard Shrobe at the MIT Artificial Intelligence Laboratory and his team at the time were set to be intimately involved in LifeLog. Shrobe had previously worked for DARPA on the “evolutionary design of complex software” before becoming associate director of the AI Lab at MIT and has devoted his lengthy career to building “cognitive-style AI.” In the years after LifeLog was cancelled, he again worked for DARPA as well as on intelligence community–related AI research projects. In addition, the AI Lab at MIT was intimately connected with the 1980s corporation and DARPA contractor called Thinking Machines, which was founded by and/or employed many of the lab’s luminaries—including Danny Hillis, Marvin Minsky, and Eric Lander—and sought to build AI supercomputers capable of human-like thought. All three of these individuals were later revealed to be close associates of and/or sponsored by the intelligence-linked pedophile Jeffrey Epstein, who also generously donated to MIT as an institution and was a leading funder of and advocate for transhumanist-related scientific research.

Soon after the LifeLog program was shuttered, critics worried that, like TIA, it would continue under a different name. For example, Lee Tien of the Electronic Frontier Foundation told VICE at the time of LifeLog’s cancellation, “It would not surprise me to learn that the government continued to fund research that pushed this area forward without calling it LifeLog.”

Along with its critics, one of the would-be researchers working on LifeLog, MIT’s David Karger, was also certain that the DARPA project would continue in a repackaged form. He told Wired that “I am sure such research will continue to be funded under some other title . . . I can’t imagine DARPA ‘dropping out’ of such a key research area.”

The answer to these speculations appears to lie with the company that launched the exact same day that LifeLog was shuttered by the Pentagon: Facebook. – Whitney Webb, Unlimited Hangout
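
Webb’s point lands harder when you notice how closely the LifeLog record she describes (communications, media habits, purchases, GPS location, and biometrics, all stitched into searchable episodes) maps onto a modern social-media profile. Here is a minimal sketch of such a record; every field name is inferred from the description above, not taken from any DARPA specification.

```python
# Hypothetical sketch of the LifeLog-style record described above.
# Field names are inferred from the reporting, not from any DARPA document.
from dataclasses import dataclass, field


@dataclass
class Episode:
    """One snapshot stitched together from unstructured data."""
    timestamp: float
    gps: tuple  # (latitude, longitude) from the tracking transmitter
    communications: list = field(default_factory=list)  # phone calls, mail, messages
    media_consumed: list = field(default_factory=list)  # what the subject read or watched
    purchases: list = field(default_factory=list)
    biometrics: dict = field(default_factory=dict)  # e.g. {"heart_rate": 72}


@dataclass
class LifeLogRecord:
    subject_id: str
    relationships: dict = field(default_factory=dict)  # person -> inferred relation
    episodes: list = field(default_factory=list)

    def infer_routine(self) -> dict:
        """Exploit patterns to infer "routines, habits and relationships,"
        in the words of DARPA's own documentation quoted above."""
        visits: dict = {}
        for ep in self.episodes:
            visits[ep.gps] = visits.get(ep.gps, 0) + 1
        return visits  # the most-visited locations approximate a daily routine
```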

So what is really going on here with Facebook, and what is the real purpose of the reporting and the document leak that was handed exclusively to the Wall Street Journal? Webb has referred to Facebook as a front company. She states that all of these Silicon Valley tech companies are intertwined with the U.S. national security apparatus. Is this an attempt to influence public opinion into calling for more government regulation and involvement with the internet? Will this be used to attack Section 230, which allows smaller tech companies like Gab to exist? What is the real purpose of this leak? I think we all suspected that Facebook had these policies.

What was NOT disclosed was documentation showing the differential treatment of conservatives and just how much Facebook censors. We should all be skeptical and on alert as the story continues to unfold.
