A selection of our most recent and relevant publications is listed below. For the full list of GATE publications, visit the Publications page on the main GATE website.

4 October 2021
Categorising fine-to-coarse grained misinformation: An empirical study of COVID-19 infodemic
Download (PDF, 881KB)
The spread of COVID-19 misinformation over social media has already drawn the attention of many researchers. According to Google Scholar, about 26,000 COVID-19-related misinformation studies have been published to date. Most of these studies focus on detecting and/or analysing the characteristics of COVID-19-related misinformation. However, the study of the social behaviours related to misinformation is often neglected.
In this paper, we introduce a fine-grained annotated dataset of misinformation tweets that includes social behaviour annotations (eg commenting on or questioning the misinformation). The dataset not only allows analysis of social behaviours but is also suitable for both evidence-based and non-evidence-based misinformation classification tasks. In addition, we introduce leave-claim-out validation in our experiments and demonstrate that misinformation classification performance can differ significantly when models are applied to real-world, unseen misinformation.

4 October 2021
Infodemic: disinformation and media literacy in the context of COVID-19
Download (PDF, 4.4MB)
The World Health Organization (WHO) has described the disinformation swirling amidst the COVID-19 pandemic as a "massive infodemic" – a major driver of the pandemic itself. Disinformation long predates COVID-19. The fabrications that contaminate public health information today rely on the same dissemination tools traditionally used to distribute disinformation. What's novel are the themes and their very direct impacts. COVID-19 disinformation creates confusion about medical science with an immediate impact on every person on the planet, and upon whole societies. It is more toxic and more deadly than disinformation about other subjects. That is why this article coins the term disinfodemic.

3 October 2021
The false COVID-19 narratives that keep being debunked: A spatiotemporal analysis
Download (PDF, 3.6MB)
The onset of the Coronavirus disease 2019 (COVID-19) pandemic instigated a global infodemic that has brought unprecedented challenges for society as a whole. During this time, a number of manual fact-checking initiatives have emerged to alleviate the spread of dis/mis-information. This study examines COVID-19 debunks published in multiple languages by different fact-checking organisations, sometimes several months apart, despite the claim having already been fact-checked before.
The spatiotemporal analysis reveals that similar or nearly duplicate false COVID-19 narratives have been spreading in multifarious modalities on various social media platforms in different countries. We also find that misinformation involving general medical advice has spread across multiple countries and hence has the highest proportion of false COVID-19 narratives that keep being debunked.
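Detecting that a newly emerging claim nearly duplicates an already-debunked one can be sketched as a similarity search. The minimal monolingual sketch below uses bag-of-words cosine similarity; a real multilingual system would use cross-lingual sentence embeddings, and the function names and threshold here are illustrative only:

```python
import math
from collections import Counter

def bow(text):
    # Lowercase bag-of-words count vector for a claim
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_near_duplicates(new_claim, debunked_claims, threshold=0.8):
    # Return previously debunked claims that nearly duplicate the new one
    q = bow(new_claim)
    return [c for c in debunked_claims if cosine(q, bow(c)) >= threshold]
```

A fact-checker could run each incoming claim through such a lookup before starting a fresh investigation.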
Furthermore, as manual fact-checking is an onerous task in itself, debunking similar claims recurrently leads to a waste of resources. To this end, we propound the idea of including multilingual debunk search in the fact-checking pipeline.

4 October 2021
MP Twitter engagement and abuse post-first COVID-19 lockdown in the UK: White paper
Download (PDF, 571KB)
The UK has had a volatile political environment for some years now, with Brexit and leadership crises marking the past five years. With this work, we wanted to understand more about how the global health emergency, COVID-19, influences the amount, type or topics of abuse that UK politicians receive when engaging with the public. This work covers the period of June to December 2020 and analyses Twitter abuse in replies to UK MPs. This work is a follow-up from our analysis of online abuse during the first four months of the COVID-19 pandemic in the UK.
The paper examines overall abuse levels during this new seven month period, analyses reactions to members of different political parties and the UK government, and the relationship between online abuse and topics such as Brexit, government's COVID-19 response and policies, and social issues. In addition, we have also examined the presence of conspiracy theories posted in abusive replies to MPs during the period.
We have found that abuse levels toward UK MPs were at an all-time high in December 2020 (5.4% of all reply tweets sent to MPs), almost 1% higher than in the two months preceding the general election. In a departure from the trend seen in the first four months of the pandemic, MPs from the Tory party received the highest percentage of abusive replies from July 2020 onward, a figure that has remained above 5% since September 2020, as the COVID-19 crisis deepened and the Brexit negotiations with the EU neared completion.

4 October 2021
Classification aware neural topic model for COVID-19 disinformation categorisation
Download (PDF, 1.5MB)
The explosion of disinformation accompanying the COVID-19 pandemic has overloaded fact-checkers and media worldwide, and brought a major new challenge to government responses. Not only is disinformation creating confusion about medical science amongst citizens, but it is also amplifying distrust in policy makers and governments. To help tackle this, we developed computational methods to categorise COVID-19 disinformation. The COVID-19 disinformation categories could be used for
focusing fact-checking efforts on the most damaging kinds of COVID-19 disinformation
guiding policy makers who are trying to deliver effective public health messages and to counter COVID-19 disinformation effectively.
This paper presents:
A corpus containing what is currently the largest available set of manually annotated COVID-19 disinformation categories.
A classification-aware neural topic model (CANTM) designed for COVID-19 disinformation category classification and topic discovery.
An extensive analysis of COVID-19 disinformation categories with respect to time, volume, false type, media type and origin source.
Which politicians receive abuse? Four factors illuminated in the UK general election 2019
Download (PDF, 3MB)
The 2019 UK general election took place against a background of rising online hostility levels toward politicians, and concerns about the impact of this on democracy, as a record number of politicians cited the abuse they had been receiving as a reason for not standing for re-election. We present a four-factor framework in understanding who receives online abuse and why. The four factors are prominence, events, online engagement and personal characteristics.
Toxic language detection in social media for Brazilian Portuguese: New dataset and multilingual analysis
Download (PDF, 355KB)
Hate speech and toxic comments are a common concern of social media platform users. Although these comments are, fortunately, the minority on these platforms, they are still capable of causing harm. Therefore, identifying these comments is an important task for studying and preventing the proliferation of toxicity in social media. Previous work on automatically detecting toxic comments focuses mainly on English, with very little work on languages like Brazilian Portuguese.
In this paper, we propose a new large-scale dataset for Brazilian Portuguese with tweets annotated as either toxic or non-toxic, or with different types of toxicity. We present our dataset collection and annotation process, where we aimed to select candidates covering multiple demographic groups. State-of-the-art BERT models were able to achieve 76% macro-F1 score using monolingual data in the binary case. We also show that large-scale monolingual data is still needed to create more accurate models, despite recent advances in multilingual approaches.
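Macro-F1, the figure reported above, averages per-class F1 scores without weighting by class frequency, so the rarer toxic class counts as much as the non-toxic majority. A minimal sketch of the metric:

```python
def f1(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(y_true, y_pred, labels):
    # Unweighted mean of per-class F1 scores: every class,
    # however rare, contributes equally to the final figure
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)
```

This is why macro-F1 is the metric of choice for imbalanced tasks like toxicity detection, where accuracy alone would reward ignoring the minority class.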
An error analysis and experiments with multi-label classification show the difficulty of classifying certain types of toxic comments that appear less frequently in our data and highlight the need to develop models that are aware of different categories of toxicity.

4 October 2021
Revisiting rumour stance classification: Dealing with imbalanced data
Download (PDF, 544KB)
Correctly classifying stances of replies can be significantly helpful for the automatic detection and classification of online rumours. One major challenge is that there are considerably more non-relevant replies (comments) than informative ones (supports and denies), making the task highly imbalanced. In this paper we revisit the task of rumour stance classification, aiming to improve the performance over the informative minority classes. We experiment with traditional methods for imbalanced data treatment with feature- and BERT-based classifiers. Our models outperform all systems in the RumourEval 2017 shared task and rank second in RumourEval 2019.

4 October 2021
Measuring the impact of readability features in fake news detection
Download (PDF, 421KB)
The proliferation of fake news is a current issue that influences a number of important areas of society, such as politics, economy and health. In the natural language processing area, recent initiatives tried to detect fake news in different ways, ranging from language-based approaches to content-based verification. In such approaches, the choice of the features for the classification of fake and true news is one of the most important parts of the process.
This paper presents a study on the impact of readability features on fake news detection for Brazilian Portuguese. The results show that such features are relevant to the task (achieving, alone, up to 92% classification accuracy) and may improve previous classification results.

4 October 2021
Vindication, virtue, and vitriol
Download (PDF, 1.7MB)
COVID-19 has given rise to a lot of malicious content online, including hate speech, online abuse, and misinformation. British MPs have also received abuse and hate on social media during this time. To understand and contextualise the level of abuse MPs receive, we consider how ministers use social media to communicate about the pandemic, and the citizen engagement that this generates.
The focus of the paper is on a large-scale, mixed-methods study of abusive and antagonistic responses to UK politicians on Twitter, during the pandemic from early February to late May 2020. We find that pressing subjects such as financial concerns attract high levels of engagement, but not necessarily abusive dialogue. Rather, criticising authorities appears to attract higher levels of abuse during this period of the pandemic. In addition, communicating about subjects like racism and inequality may result in accusations of virtue signalling or pandering by some users. This work contributes to the wider understanding of abusive language online, in particular that which is directed at public officials.

4 October 2021
MP Twitter abuse in the age of COVID-19: White paper
Download (PDF, 1.1MB)
As COVID-19 sweeps the globe, outcomes depend on effective relationships between the public and decision-makers. In the UK there were uncivil tweets to MPs about perceived UK tardiness to go into lockdown. The pandemic has led to increased attention on ministers with a role in the crisis. However, generally this surge has been civil. Prime minister Boris Johnson's severe illness with COVID-19 resulted in an unusual peak of supportive responses on Twitter. Those who receive more COVID-19 mentions in their replies tend to receive less abuse (significant negative correlation).
Following Mr Johnson's recovery, with rising economic concerns and anger about lockdown violations by influential figures, abuse levels began to rise in May. 1,902 replies to MPs within the study period were found to contain hashtags or terms that refute the existence of the virus (eg #coronahoax, #coronabollocks; 0.04% of a total 4.7 million replies, or 9% of the number of mentions of "stay home save lives" and variants). These have tended to be more abusive. Evidence of some members of the public believing in COVID-19 conspiracy theories was also found. Higher abuse levels were associated with hashtags blaming China for the pandemic.

31 July 2019
Partisanship, propaganda and post-truth politics: Quantifying impact in online debate
Download (PDF, 1.8MB)
The recent past has highlighted the influential role of social networks and online media in shaping public debate on current affairs and political issues. This paper is focused on studying the role of politically-motivated actors and their strategies for influencing and manipulating public opinion online: partisan media, state-backed propaganda, and post-truth politics. In particular, we present quantitative research on the presence and impact of these three "Ps" in online Twitter debates in two contexts:
The run up to the UK EU membership referendum ("Brexit").
The information operations of Russia-backed online troll accounts.
We first compare the impact of highly partisan versus mainstream media during the Brexit referendum, specifically comparing tweets by half a million "leave" and "remain" supporters. Next, online propaganda strategies are examined, specifically left- and right-wing troll accounts. Lastly, we study the impact of misleading claims made by the political leaders of the leave and remain campaigns. This is then compared to the impact of the Russia-backed partisan media and propaganda accounts during the referendum.
In particular, just two of the many misleading claims made by politicians during the referendum were found to be cited in 4.6 times more tweets than the 7,103 tweets related to Russia Today and Sputnik and in 10.2 times more tweets than the 3,200 Brexit-related tweets by the Russian troll accounts.

4 October 2021
WeVerify: Wider and enhanced verification for you – project overview and tools
Download (PDF, 2.3MB)
This paper presents an overview of the WeVerify H2020 EU project, which develops intelligent human-in-the-loop content verification and disinformation analysis methods, tools and services. Social media and web content are analysed and contextualised within the broader online ecosystem, in order to expose fabricated content, through cross-modal content verification, social network analysis, micro-targeted debunking, and a blockchain-based public database of known fakes.

4 October 2021
Journalist-in-the-loop: Continuous learning as a service for rumour analysis
Download (PDF, 421KB)
Automatically identifying rumours in social media and assessing their veracity is an important task with downstream applications in journalism. A significant challenge is how to keep rumour analysis tools up-to-date as new information becomes available for particular rumours that spread in a social network.
This paper presents a novel open-source web-based rumour analysis tool that can continuously learn from journalists. The system features a rumour annotation service that allows journalists to easily provide feedback for a given social media post through a web-based interface. The feedback allows the system to improve an underlying state-of-the-art neural network-based rumour classification model. The system can easily be integrated as a service into existing tools and platforms used by journalists via a REST API.

4 October 2021
Rumour verification through recurring information and an inner-attention mechanism
Download (PDF, 918KB)
Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone without any additional information, such as user replies to the rumour or additional feature engineering.
Our method outperforms the current state-of-the-art methods on benchmark datasets (RumourEval2017) by 3% accuracy and 6% F1, leading to 60.7% accuracy and 61.6% F1. We also compare our attention-based method to two similar models which do not make use of recurring terms. The attention-based method guided by frequent recurring terms outperforms this baseline on the same dataset, indicating that the recurring terms injected by the attention mechanism have a high positive impact on distinguishing between true and false rumours.
Furthermore, we perform out-of-domain evaluations and show that our model is indeed highly competitive compared to the baselines on a newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.

4 October 2021
Automated tackling of disinformation: Major challenges ahead
Download (PDF, 2.4MB)
This study maps and analyses current and future threats from online misinformation, alongside currently adopted socio-technical and legal approaches. The challenges of evaluating their effectiveness and practical adoption are also discussed. Drawing on and complementing existing literature, the study summarises and analyses the findings of relevant journalist and scientific studies and policy reports in relation to detecting, containing and countering online disinformation and propaganda campaigns. It traces recent developments and trends and identifies significant new or emerging challenges. It also addresses potential policy implications of current socio-technical solutions for the EU.

4 October 2021
Predicting news source credibility
Download (PDF, 222KB)
Assessing the credibility of a source of information is important in combating misinformation. In this work we tackle source credibility assessment as a regression task. For this purpose we release a dataset containing around 700 news sources along with detailed credibility and transparency scores, manually assigned to every news source. We merge these scores to obtain a final credibility score for every news source. The merged scores are then used to train prediction models.
Our results show highly satisfactory performance in predicting the merged credibility scores. Along with the dataset, we also plan to release our models for use by the wider community.

4 October 2021
Credibility and transparency of news sources: Data collection and feature analysis
Download (PDF, 224KB)
The ability to discern news sources based on their credibility and transparency is useful for users in making decisions about news consumption. In this paper, we release a dataset of 673 sources with credibility and transparency scores manually assigned. Upon acceptance we will make this dataset publicly available. Furthermore, we compared features which can be computed automatically and measured their correlation with credibility and transparency scores annotated by human experts. Our correlation analysis shows that there are indeed features which highly correlate with the manual judgments.

20 March 2019
Quantifying media influence and partisan attention on Twitter during the UK EU Referendum
Download (PDF, 380KB)
User generated media, and their influence on the information individuals are exposed to, have the potential to affect political outcomes. This is increasingly a focus for attention and concern. The British EU membership referendum provided an opportunity for researchers to explore the nature and impact of the new infosphere in a politically charged situation. This work contributes by reviewing websites that were linked in a Brexit tweet dataset of 13.2 million tweets, by 1.8 million distinct users, collected in the run-up to the referendum.
Research materials relating to the work (ODS, 1.1MB)

30 July 2019
Twits, twats and twaddle: Trends in online abuse towards UK politicians
Download (PDF, 186KB)
Concerns have reached the mainstream about how social media are affecting political outcomes. One trajectory for this is the exposure of politicians to online abuse. In this paper we use 1.4 million tweets from the months before the 2015 and 2017 UK general elections to explore the abuse directed at politicians. Results show that abuse increased substantially in 2017 compared with 2015.
Abusive tweets show a strong relationship with total tweets received, indicating for the most part impersonality, but a second pathway targets less prominent individuals, suggesting different kinds of abuse. Accounts that send abuse are more likely to be throwaway. Economy and immigration were major foci of abusive tweets in 2015, whereas terrorism came to the fore in 2017.
Gazetteer of abusive terms used in the work (TXT, 12KB)

11 October 2019
What matters most to people around the world? Retrieving Better Life Index priorities on Twitter
Download (PDF, 591KB)
The Better Life Index (BLI), the measure of wellbeing proposed by the OECD, contains many metrics, which enable it to provide a detailed overview of the social, economic, and environmental performances of different countries. However, this also increases the difficulty in evaluating the big picture. In order to overcome this, many composite BLI procedures have been proposed, but none of them takes into account societal priorities in the aggregation. One of the reasons for this is that at the moment there is no representative survey about the relative priorities of the BLI topics for each country. Using these priorities could help to design composite indices that better reflect the needs of the people.
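A priority-weighted composite index of this kind is, at its core, a weighted average of per-dimension scores. A minimal sketch, where the dimensions, performance scores and priority weights are purely illustrative:

```python
def composite_bli(performance, weights):
    # Weighted average of a country's scores in each BLI dimension,
    # with weights expressing how much each topic matters to people
    total = sum(weights.values())
    return sum(performance[d] * w for d, w in weights.items()) / total

# Hypothetical numbers for one country, for illustration only
perf = {"health": 8.0, "education": 7.0, "environment": 6.0}
priorities = {"health": 0.5, "education": 0.3, "environment": 0.2}
score = composite_bli(perf, priorities)  # weighted mean of the three scores
```

Swapping in country-specific weights changes the ranking a country receives, which is precisely the effect of accounting for societal priorities.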
The largest collection of information about society is found in social media such as Twitter. This paper proposes a composite BLI based on the weighted average of the national performances in each dimension of the BLI, using the relative importance that the topics have on Twitter as weights. The idea is that the aggregate of millions of tweets may provide a representation of the priorities (the relative appreciations) among the eleven topics of the BLI, both at a general level and at a country-specific level. By combining topic performances and related Twitter trends, we produce new evidence about the relations between people's priorities and policy makers' activity in the BLI framework.

4 October 2021
RumourEval 2019: Determining rumour veracity and support for rumours
Download (PDF, 128KB)
This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year's SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of "fake news" have become a mainstream concern. Yet automated support for rumour checking remains in its infancy. For this reason, it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase.
We therefore propose a continuation in which the veracity of further rumours is determined, and as previously, supportive of this goal, tweets discussing them are classified according to the stance they take regarding the rumour. Scope is extended compared with the first RumourEval, in that the dataset is substantially expanded to include Reddit as well as Twitter data, and additional languages are also included.

4 October 2021
Can rumour stance alone predict veracity?
Download (PDF, 230KB)
Prior manual studies of rumours suggested that crowd stance can give insights into the actual rumour veracity. Even though numerous studies of automatic veracity classification of social media rumours have been carried out, none explored the effectiveness of leveraging crowd stance to determine veracity. We use stance as an additional feature to those commonly used in earlier studies. We also model the veracity of a rumour using variants of Hidden Markov Models (HMM) and the collective stance information.
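Under this approach, one HMM is fitted per veracity class and a new rumour is assigned to whichever model makes its stance sequence more likely, scored with the forward algorithm. The pure-Python sketch below uses stance alone (the paper's models also use tweet times), and all parameters are invented for illustration; in practice they would be estimated from labelled rumours:

```python
import math

# Each rumour is represented as the time-ordered stance sequence of
# its replies: 0 = support, 1 = deny, 2 = query, 3 = comment
def forward_loglik(obs, start, trans, emit):
    # Log-likelihood of an observation sequence under a discrete HMM,
    # computed with the forward algorithm (plain probability space for
    # brevity; real code would work in log space for long sequences)
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][s2] for s in range(len(start))) * emit[s2][o]
                 for s2 in range(len(start))]
    return math.log(sum(alpha))

# Hypothetical 2-state HMMs for true and false rumours
true_hmm = dict(start=[0.8, 0.2],
                trans=[[0.7, 0.3], [0.3, 0.7]],
                emit=[[0.5, 0.1, 0.1, 0.3], [0.2, 0.2, 0.2, 0.4]])
false_hmm = dict(start=[0.3, 0.7],
                 trans=[[0.6, 0.4], [0.2, 0.8]],
                 emit=[[0.1, 0.5, 0.2, 0.2], [0.1, 0.3, 0.3, 0.3]])

def classify(stances):
    # Pick the class whose HMM assigns the stance sequence higher likelihood
    return ("true" if forward_loglik(stances, **true_hmm) >
                      forward_loglik(stances, **false_hmm) else "false")
```

With these illustrative parameters, support-heavy threads score higher under the true-rumour model and deny-heavy threads under the false-rumour model.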
This paper demonstrates that HMMs that use stance and tweets' times as the only features for modelling true and false rumours achieve F1 scores in the range of 80%, outperforming those approaches where stance is used jointly with content and user based features.

4 October 2021
Detection and resolution of rumours in social media: A survey
Download (PDF, 505KB)
Despite the increasing use of social media platforms for information and news gathering, their unmoderated nature often leads to the emergence and spread of rumours, ie items of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how to automatically assess their veracity, using natural language processing and data mining techniques.
In this article, we introduce and discuss two types of rumours that circulate on social media: long-standing rumours that circulate for long periods of time, and newly emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification, and rumour veracity classification.
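The four-component architecture can be pictured as a pipeline. The sketch below wires the stages together with deliberately naive placeholder heuristics, purely to show the data flow; in a real system each stage would be a trained classifier:

```python
def detect_rumours(posts):
    # Stage 1: flag posts that introduce unverified claims
    return [p for p in posts if p.get("unverified")]

def track_rumour(rumour, posts):
    # Stage 2: collect subsequent posts discussing the same claim
    return [p for p in posts if rumour["claim"] in p["text"]]

def classify_stances(thread):
    # Stage 3: label each post's stance (placeholder keyword heuristic)
    return ["deny" if "fake" in p["text"] else "comment" for p in thread]

def classify_veracity(stances):
    # Stage 4: aggregate stances into a veracity verdict
    return "false" if stances.count("deny") > len(stances) / 2 else "unverified"
```

Feeding a stream of posts through the four functions in order reproduces, in miniature, the detection → tracking → stance → veracity flow the survey describes.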
We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far toward the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for the detection and resolution of rumours.

4 October 2021
Discourse-aware rumour stance classification in social media using sequential classifiers
Download (PDF, 917KB)
Rumour stance classification, defined as classifying the stance of specific social media posts into one of supporting, denying, querying or commenting on an earlier post, is becoming of increasing interest to researchers. While most previous work has focused on using individual tweets as classifier inputs, here we report on the performance of sequential classifiers that exploit the discourse features inherent in social media interactions or "conversational threads".
Testing the effectiveness of four sequential classifiers – Hawkes Processes, Linear-Chain Conditional Random Fields (Linear CRF), Tree-Structured Conditional Random Fields (Tree CRF) and Long Short Term Memory networks (LSTM) – on eight datasets associated with breaking news stories, and looking at different types of local and contextual features, our work sheds new light on the development of accurate stance classifiers. We show that sequential classifiers that exploit the use of discourse properties in social media conversations while using only local features, outperform non-sequential classifiers. Furthermore, we show that LSTM using a reduced set of features can outperform the other sequential classifiers; this performance is consistent across datasets and across types of stances.
To conclude, our work also analyses the different features under study, identifying those that best help characterise and distinguish between stances, such as supporting tweets being more likely to be accompanied by evidence than denying tweets. We also set forth a number of directions for future research.

4 October 2021
Stance classification in out-of-domain rumours: A case study around mental health disorders
Download (PDF, 572KB)
As social media is a prolific source of rumours, stance classification of individual posts towards rumours has gained attention in the past few years. Classification of stance in individual posts can then be useful to determine the veracity of a rumour. Research in this direction has looked at rumours in different domains, such as politics, natural disasters or terrorist attacks. However, work has been limited to in-domain experiments, ie training and testing data belong to the same domain. This presents the caveat that when one wants to deal with rumours in domains that are more obscure, training data tends to be scarce.
This is the case of mental health disorders, which we explore here. Having annotated collections of tweets around rumours that emerged in the context of breaking news, we study performance stability when switching to the new domain of mental health disorders. Our study confirms that performance drops when we apply our trained model to a new domain, emphasising the differences in rumours across domains. We overcome this issue by using a small portion of the target-domain data for training, which leads to a substantial boost in performance. We also release the new dataset of mental health rumours annotated for stance.

4 October 2021
Simple open stance classification for rumour analysis
Download (PDF, 156KB)
Stance classification determines the attitude, or stance, in a (typically short) text. The task has powerful applications, such as the detection of fake news or the automatic extraction of attitudes toward entities or events in the media.
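Problem-specific surface features of the kind that make simple stance classifiers competitive might look like the sketch below. The lexicons and feature names are illustrative inventions, not the actual feature set of any particular paper:

```python
import re

# Illustrative lexicons only; a real feature set would be richer
QUERY_WORDS = {"really", "source", "proof", "true"}
DENY_WORDS = {"fake", "false", "hoax", "lie", "debunked"}
NEGATION_WORDS = {"not", "no", "never"}

def stance_features(tweet):
    # Cheap, automatically identifiable surface cues to a post's stance
    tokens = set(re.findall(r"[a-z']+", tweet.lower()))
    return {
        "has_question_mark": "?" in tweet,   # querying posts often ask
        "has_url": "http" in tweet.lower(),  # supporting posts cite evidence
        "n_deny_words": len(tokens & DENY_WORDS),
        "n_query_words": len(tokens & QUERY_WORDS),
        "has_negation": bool(tokens & NEGATION_WORDS),
    }
```

Feature dictionaries of this shape can be fed straight into any standard classifier via one-hot or count vectorisation.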
This paper describes a surprisingly simple and efficient classification approach to open stance classification in Twitter, for rumour and veracity classification. The approach profits from a novel set of automatically identifiable problem-specific features, which significantly boost classifier accuracy and achieve above-state-of-the-art results on recent benchmark datasets. This calls into question the value of using complex, sophisticated models for stance classification without first doing informed feature extraction.

4 October 2021
SemEval-2017 Task 8 – RumourEval: Determining rumour veracity and support for rumours
Download (PDF, 115KB)
Media is full of false claims. Even Oxford Dictionaries named "post-truth" as the word of 2016. This makes it more important than ever to build systems that can identify the veracity of a story, and the kind of discourse there is around it. RumourEval is a SemEval shared task that aims to identify and handle rumours and reactions to them, in text.
We present an annotation scheme and a large dataset covering multiple topics – each having its own families of claims and replies – and use these to pose two concrete challenges; we also report the results achieved by participants on these challenges.

20 March 2019
A framework for real-time semantic social media analysis
Download (PDF, 766KB)
This paper presents a framework for collecting and analysing large volume social media content. The real-time analytics framework comprises semantic annotation, Linked Open Data, semantic search, and dynamic result aggregation components. In addition, exploratory search and sense-making are supported through information visualisation interfaces, such as co-occurrence matrices, term clouds, tree maps, and choropleths. There is also an interactive semantic search interface (Prospector), where users can save, refine, and analyse the results of semantic search queries over time.