Facebook drives sceptics towards climate denial

Facebook pushes climate sceptics towards increasingly extreme disinformation and conspiracy groups, a human-rights body's research suggests.

A report released on Wednesday by Global Witness found Facebook's algorithm amplified doubts rather than nudging people towards reliable information.

Facebook says its systems are "designed to reduce misinformation".

Researchers created two users - climate sceptic "Jane", and "John", who followed established scientific bodies.

They then tracked what Facebook's algorithm suggested to both accounts.

Jane soon saw content denying man-made climate change, including pages calling it a "hoax" and attacking measures to mitigate its effects.

Examples included posts accusing the "green movement" of "enslaving humanity" and calling the United Nations "an authoritarian regime with less credibility than Bugs Bunny".

Other posts, including one from a page called CFact Campus, denied humans influenced the climate.

The group is part of the Committee for a Constructive Tomorrow, a Washington-based libertarian think tank that rejects the consensus on climate science.

The researchers had Jane's account "like" a "starter" Facebook page spreading climate disinformation. They repeated this process twice, each time choosing a starter page with at least 14,000 followers that expressed scepticism about the existence of climate change or its human origins.

In one simulation, Jane "liked" a Facebook page called I Love Carbon Dioxide.

One post on the page blended fact and fiction.

In 2009, former US Vice-President Al Gore cited climate scientists, saying: "There is a 75% chance that the entire North Polar ice cap during some of the summer months could be completely ice free within the next five to seven years."

Although this was a mischaracterisation of climate scientists' findings, it was not a prediction that "all ice would melt by 2013".

Nor did Mr Gore repeatedly make this claim over the next few years, as the post suggests.

Another post on the page highlighted a legitimate concern about the source of power for electric vehicles but also called climate change a "make-believe" crisis.

From these beginnings, over a period of about two months, Jane was recommended more and more conspiratorial and anti-science content, researchers say.

Of all the pages recommended to her account, only one was free of climate-change disinformation.

And two-thirds did not contain a warning label pointing towards Facebook's climate-science centre, an information hub created last year, after Meta chief executive Mark Zuckerberg told a US congressional hearing climate disinformation was "a big issue".

Meanwhile, John's account began by liking the page of the Intergovernmental Panel on Climate Change (IPCC), the United Nations scientific body.

And in contrast to Jane, John was consistently shown reliable science-based content.

As the simulation continued, Facebook began to recommend even more extreme and fringe content to Jane, including conspiracy theories such as "chemtrails" - the false claim that condensation trails left by planes contain chemical agents that can control the weather.

The Facebook algorithm has been shown to send users down rabbit holes - where content becomes increasingly fringe as users engage with posts on a particular topic - in other areas, such as gender-based abuse.

The IPCC says disinformation is one of a number of issues preventing governments and the public from addressing climate change.

Its latest report, backed by 195 governments, emphasises that misinformation "undermines climate science and disregards risk and urgency".

Problematic content
Meta, which owns Facebook, says it is flagging more posts about climate with information labels.

A company representative told BBC News: "Our systems are designed to reduce misinformation, including climate misinformation, not to amplify it.

"We use a combination of artificial intelligence, human review, and input from partners - including fact-checkers - to address problematic content.

"When they rate this content as false, we add a warning label and reduce its distribution so fewer people see it.

The company also announced a $1m (£650,000) grant programme to support organisations working to combat climate misinformation.

But another recent study, by the Center for Countering Digital Hate and the Institute for Strategic Dialogue, said less than 10% of misleading posts on the platform were marked as misinformation.

Global Witness researcher Mai Rosner said: "Facebook has repeatedly said it wants to combat climate disinformation on its platform - but our investigation shows how worryingly easy it is for its users to be led down a dangerous path that flies in the face of both science and reality.

"Facebook is not just a neutral online space where climate disinformation exists - it is quite literally putting such views in front of users' eyes.

"The climate crisis is increasingly becoming the new culture war, with many of the same individuals who for years have sought to stoke division and polarise opinion now viewing climate as the latest front in their efforts."