A state monitoring program run from a Yorkshire industrial estate seems improbable. Yet these desolate warehouses are home to an artificial intelligence (AI) company that the government uses to monitor people’s social media posts.
Logically has been paid more than £1.2 million in taxpayer funds to analyze what the government refers to as “disinformation” – false information sown intentionally online – and “misinformation” – misleading information spread accidentally.
It accomplishes this by “ingesting” material from hundreds of thousands of media sources and “all public posts on major social media platforms,” then using AI to identify potentially problematic posts.
Lyric Jain, a 27-year-old Cambridge engineering graduate who first put the technology to the test during elections in his native India, founded the company six years ago.
According to Logically, the technology works alongside “one of the world’s largest dedicated fact-checking teams”, dispersed across the UK, Europe, and India.
It is a model that has helped the company secure a number of contracts.
It has a £1.2 million contract with the Department for Digital, Culture, Media and Sport (DCMS) and another worth up to £1.4 million with the Department of Health and Social Care to monitor threats to high-profile individuals in the vaccine program.
Other high-profile clients include federal agencies in the United States, the Indian Electoral Commission, and TikTok.
‘Collaboration’ with Facebook
It also has a “partnership” with Facebook, which appears to give Logically’s fact-checkers considerable influence over what users see.
According to a joint press release issued in July 2021, Facebook will limit the reach of certain content if Logically rates it as false.
“When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know the content has been rated false, and notify people who try to share it,” according to the press release.
Logically says it does not share the evidence it collects for the UK government with Facebook, but its cooperation with the social media company has raised concerns among free speech advocates.
The DCMS hired the AI business for the first time in January 2021, many months into the pandemic, to provide “analytical support.”
Its responsibilities appear to have expanded over time, to the point where it was assisting “cross-Government efforts to build a comprehensive picture of potentially harmful misinformation and disinformation.”
According to documents obtained by The Telegraph, it generated regular “Covid-19 Mis/Disinformation Platform Terms of Service Reports” for the DCMS’s Counter-disinformation Unit.
The title implies that the goal was to target posts that violated the terms of service of sites such as Twitter and Facebook.
However, disclosures made under data laws revealed that the reports also included records of legitimate posts by recognized experts such as Dr Alex de Figueiredo, statistics lead at the Vaccine Confidence Project.
Former minister Nadhim Zahawi told The Telegraph that he believed the tweet had been included by mistake. Logically, however, said it occasionally includes legitimate-looking posts in its findings if they can be “weaponised.”
“Context matters,” a spokeswoman stated.
“It is possible for content that isn’t specifically mis- or disinformation to be included in a report if there is evidence or the potential for a narrative to be weaponised.”
The representative went on to say that the information gathered under data laws “often removes the reason for why content has been flagged and can thus be very misleading.”
A public report prepared by Logically, however, appears to shed at least some light on the company’s thinking.
The 21-page “Covid-19 Disinformation in the UK” report mentioned “anti-lockdown” and “anti-Covid-19 vaccine sentiment” several times.
It also highlighted the hashtags “#sackvallance” and “#sackwhitty” as evidence of “a strong disdain for expert advice” in general, rather than of disdain for the recommendations of Sir Patrick Vallance, the Government’s former chief scientific adviser, or Sir Chris Whitty, England’s Chief Medical Officer, in particular.
‘Preventing Real-World Harms’ is the Goal
A spokeswoman for Logically stated that the company “strongly believes” in free speech and that “any suggestion that we limit [it] is inaccurate and incorrect.”
“We do not specifically monitor individuals and their behavior, nor do we make any recommendations that limit their right to free expression,” the spokeswoman continued.
“When serving clients, we monitor content, including narratives and trends across public information environments online, to assist in combating the spread of online harms, mis- and disinformation, and preventing real-world harms.”