
Israeli AI firm claims fake profiles targeted Norwegian PM – investigation finds several were real

A report by Cyabra claimed that the Norwegian PM was subject to a coordinated attack by fake profiles on X ahead of the election. One of the accounts belongs to Chris. «I am definitely a real human being,» he tells Faktisk.no.

This profile commented on Jonas Gahr Støre's post and was labelled as fake. We contacted the man behind it, who confirmed he is a real human being.

The Israeli company Cyabra has garnered significant attention for its reports uncovering disinformation and targeted campaigns featuring fake profiles on social media.

In August, Cyabra claimed it had found a coordinated attempt to influence the Norwegian general election, where fake profiles on X (formerly Twitter) targeted Prime Minister Jonas Gahr Støre.

«269 fake profiles were identified as actively spreading large volumes of critical content targeting Jonas Gahr Støre within the Norwegian digital sphere.»

The report provides 12 examples of posts about Støre, written by what Cyabra claims are fake profiles. Faktisk.no looked into the accounts and found that several of them are authentic and belong to Norwegian citizens using their own name.

Faktisk.no spoke with them.

Real People Classified as Fake Profiles

On their website, Cyabra states that their clients include some of the world's largest companies. Their reports on election influence and bot networks have been widely covered by the likes of the BBC, The Guardian, CNN, and The New York Times.

In Norway, too, Faktisk.no, Aftenposten, Morgenbladet, and Forskning.no have referenced the company's report on the October 7th attack on Israel.

Published on August 10th, less than a month before the Norwegian general election, the report focused on the Norwegian political sphere and identified a disinformation network targeting Prime Minister Jonas Gahr Støre.

According to Cyabra, among the profiles on X discussing Støre and contributing to spreading a negative image of him, «over a third of the profiles were identified as fake».

The report displays twelve anonymised screenshots of example posts from what Cyabra believes are fake profiles. However, clicking on the images leads directly to the X posts.

Faktisk.no investigated the posts and the profiles, finding that at least four of the profiles are authentic. 

This means the individuals behind the accounts are who they claim to be on X. Faktisk.no spoke to the individuals behind two of the accounts. Both confirmed that they published the content themselves.

Faktisk.no has made repeated attempts to question Cyabra, but the company has not responded to our enquiries.

Chris Found in a Flash

The first account the Cyabra report claims to be fake includes both a full name and a picture. A simple Google search provided contact information for the man.

Faktisk.no reached out to Chris Vonholm, the man behind the account CVonholm86512. He confirmed the account is his and that he wrote the X post highlighted by Cyabra.

– I am definitely a real human being, Vonholm told Faktisk.no.

Cyabra thought this comment came from a fake account, but it was made by the person whose name and picture appear on the profile.

– Did you write to Støre that «you are not my representative, you pathetic little man»?

– Yes, that's correct.

When asked by Faktisk.no what he thinks about his profile being flagged as fake, Vonholm expresses surprise. He questions Cyabra's use of artificial intelligence to detect fake accounts.

– This tells us that the AI model might not be very well trained, and that it perhaps dismisses that type of politically coloured comment. Algorithms can thus be used to create false narratives, with AI eliminating comments so that they don't become part of the data foundation behind a «desired consensus», says Vonholm.

Martin is Also Authentic

Not all profiles were as easy to identify.

The account «Martin» does not display a name or picture, but the owner has posted pictures from his local area and frequently writes about his local football team. Nothing about the account suggested it belonged to anyone other than an ordinary resident, but would Cyabra have been able to identify Martin?

By comparing pictures taken from his neighbourhood and out of his window with satellite images and maps, Faktisk.no located Martin's address and name within an hour.

– I cannot understand how my profile can be categorised as fake, X user Martin28913120 tells Faktisk.no. 

He wished to be cited only by his username, but Faktisk.no knows his identity. He confirmed that the profile and the tweet belong to him.

Martin's account was flagged as fake by Cyabra, but the account is real and Faktisk.no knows his identity.

Faktisk.no has contacted two other individuals who are likely behind X accounts that Cyabra considers fake. They have not responded to our enquiries.

No Explanation from Cyabra

The report does not provide the names of the individuals behind the accounts.

Faktisk.no has repeatedly attempted to obtain a comment from Cyabra’s management, without success.

The company has been asked what verification processes it employs given that profiles used as examples of fake accounts are subsequently shown to be authentic. 

Faktisk.no has requested to see the complete list of accounts Cyabra believes are fake and part of a coordinated campaign, as well as an explanation of the methodology used.

Lack of Methodology

One person who has read the Cyabra report is Eskil Grendahl Sivertsen, a special adviser at the Norwegian Defence Research Establishment (FFI). His area of expertise is hybrid threats, with a particular focus on influence operations.

– My initial thought was that this is interesting. I am not aware of others having mapped Norwegian-language networks in connection with Norwegian elections, says Sivertsen.

Special adviser at the Norwegian Defence Research Establishment (FFI), Eskil Grendahl Sivertsen.

– The problem is that the report does not share its methods or define the criteria for what they consider a «bot». Without these insights, the findings cannot be verified, he says.

Cyabra has previously received similar criticism. 

In October, the Nepalese newspaper Kathmandu Post covered a Cyabra report that claimed to have found over 1000 «fake» accounts on X. The accounts were allegedly amplifying anti-government statements and actively encouraging protest.

The report linked the accounts to the so-called «Gen Z» protests in Nepal in September, where young people demonstrated against corruption and social inequality. 

The criticism is similar to that from Sivertsen, namely that Cyabra is not transparent about its methods or verification. Experts consulted by the Kathmandu Post made similar points, also highlighting the unclear definitions of fake profiles.

– False Positives Are Not Unusual

Sivertsen explains that one must primarily look at the behaviour, not just the content, to expose fake profiles. Researchers look for unnatural patterns that might indicate a machine, and not a human, is behind the keyboard.

This could include a profile publishing many messages within seconds, having an abnormally high output, being active around the clock over time, or following a specific behaviour pattern, such as always reposting messages from the same profiles at consistent time intervals.
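To make the behavioural signals Sivertsen describes concrete, here is a minimal Python sketch of such heuristics. The thresholds, field names, and function are illustrative assumptions for this article, not Cyabra's or FFI's actual detection criteria.

```python
from datetime import datetime

# Illustrative thresholds -- assumed for this sketch, not FFI's or Cyabra's criteria.
MAX_POSTS_PER_DAY = 200   # abnormally high output
MIN_GAP_SECONDS = 2       # many messages within seconds
ACTIVE_HOURS_LIMIT = 22   # active around the clock

def bot_signals(timestamps: list[datetime]) -> dict[str, bool]:
    """Flag crude behavioural signals from one profile's post timestamps."""
    ts = sorted(timestamps)
    days = max((ts[-1] - ts[0]).days, 1)  # observation window, at least one day
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return {
        "high_volume": len(ts) / days > MAX_POSTS_PER_DAY,
        "burst_posting": any(g < MIN_GAP_SECONDS for g in gaps),
        "round_the_clock": len({t.hour for t in ts}) > ACTIVE_HOURS_LIMIT,
    }
```

No single signal proves automation on its own; a very active human can trip one of these checks, which is exactly the false-positive problem discussed below.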

– In network analyses, one looks for profiles and groups of profiles that systematically share and amplify each other's content, even if they do not have a visible relationship to the naked eye, he says.
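As a rough illustration of what such a network analysis looks for, the sketch below, using the Python library networkx, connects profiles that repost many of the same sources and extracts densely connected groups. The input data and overlap threshold are invented for the example.

```python
from itertools import combinations
import networkx as nx

# Hypothetical input: which source profiles each account reposts.
reposts = {
    "acct_a": {"src_1", "src_2", "src_3"},
    "acct_b": {"src_1", "src_2", "src_3"},
    "acct_c": {"src_9"},
}

# Link accounts that amplify many of the same sources, even if they
# never interact with each other directly.
G = nx.Graph()
for p, q in combinations(reposts, 2):
    overlap = len(reposts[p] & reposts[q])
    if overlap >= 3:  # illustrative threshold
        G.add_edge(p, q, weight=overlap)

# Connected groups are candidates for coordinated clusters,
# to be examined manually afterwards.
clusters = [c for c in nx.connected_components(G) if len(c) > 1]
print(clusters)  # [{'acct_a', 'acct_b'}]
```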

It is not unusual for false positives to occur in large analyses, says the FFI researcher. 

He explains that when researchers look for bots, they define clear criteria that the profiles must meet to be considered. One such criterion might be the number of messages posted in a given time frame.

– With a low limit, such as «over 50 messages per day», the data collection might capture many ordinary users who are simply very active. If the limit is set high, for example 1000 messages, the probability that the profiles are bots increases, but then you simultaneously fail to capture the more advanced bots that try to imitate humans.
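The trade-off Sivertsen describes can be shown with a toy example. The post counts below are invented; the point is only how the choice of limit shifts the errors between false positives and false negatives.

```python
# Hypothetical daily post counts: (profile, posts per day, actually a bot?)
profiles = [
    ("busy_human", 80, False),   # ordinary but very active user
    ("slow_bot", 120, True),     # bot that imitates human pacing
    ("loud_bot", 1500, True),    # obvious high-volume bot
]

for limit in (50, 1000):
    flagged = [name for name, rate, _ in profiles if rate > limit]
    print(f"limit {limit}: flagged {flagged}")

# A limit of 50 flags all three, including the ordinary user (false positive).
# A limit of 1000 flags only loud_bot and misses slow_bot (false negative).
```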

Requires Manual, Human Labour

Sivertsen emphasises that quality control in research is extremely time-consuming. While commercial entities can lean on artificial intelligence, researchers often have to do the groundwork themselves.

– We can use AI as a support tool, but to assess whether a single profile is a bot often requires manual, human labour. We also have to comply with requirements for privacy and data processing. Research must follow documented, scientific methods and, to the extent possible with social media datasets, be verifiable.

– The better the bots are at imitating humans, the more time-consuming the manual control required to avoid errors becomes, says the FFI researcher.
