Fake News and Clickbait: Identifying Disinformation and Misinformation

A group of people texting while having lunch on the patio with emojis emerging from their phones.

#FakeNews started trending in 2016 and has since become shorthand for discussing disinformation and misinformation. It raised flags and sparked discussions about news sources, journalistic integrity, the role of social media, and the line between fact and falsehood. In this episode, we're going to define disinformation and misinformation with examples. A little disclaimer before we continue: this episode mentions and discusses various topics and examples that do not reflect the values of the writer and/or the CSPS Digital Academy. We encourage you to keep a critical lens when reading any piece of information.

The definitions

Disinformation is false information that is intended to mislead. It is often part of a propaganda campaign issued by state or non-state actors against an opposing power or the media. For example: "climate change is not real."

Misinformation is also false information, but it is spread with no intent to harm. It is what happens when you share false information that you thought was true.

Disinformation refers to false information that is intended to manipulate, cause damage, or guide people, organizations, and countries in the wrong direction; misinformation is information that stems from the truth but is often exaggerated in a way that misleads and causes potential harm.
- Canadian Centre for Cybersecurity (CCCS)

Diving into disinformation

Disinformation can come in different mediums, including but not limited to articles, emails, and social media accounts that share wrong or made-up information, or that share accurate information from years ago as if it were current news. It can also be an account impersonating another person or organization.

The audience of disinformation is people like you and me, using online platforms and tools. Disinformation can be described as a simple narrative that appeals to emotions and can easily be shared. Reality Team's founder, Deb Lavoy, describes this narrative as a set of beliefs that frames the way we see things and makes the complicated feel easy.

The disinformation tactic is simple: ensure audiences interpret a situation through a specific narrative. We know that disinformation can be harmful with statements such as "climate change isn't real," "the earth is flat," and "5G is being used to read your brain." These examples are simple to understand, appeal to emotions, are easily shared, and resonate with audiences that are ready to accept the narrative.

The examples

You may be thinking these claims aren’t that serious and therefore disinformation isn’t harmful. Let’s take a look at a case study which may change your mind. According to a New York Times investigation, “Myanmar’s military was behind a disinformation effort on Facebook [Meta] that helped foment ethnic cleansing against the country’s Rohingya Muslim minority.” This disinformation operation focused on spreading propaganda against the Rohingya people by taking over several Facebook pages with large followings. Seven hundred people were employed by the military to work on this disinformation social media campaign. Facebook responded by stating it had taken down pages and accounts with a following of at least 1.35 million people. This disinformation campaign is part of the propaganda that in part led to the genocide, massacres, and violence against the Rohingya people.

COVID-19 created a new level of misinformation and disinformation. The International Fact-Checking Network (IFCN) has been working on finding and denouncing the fake news related to this topic. "More than 60 fact-checkers from different countries have been working together in a collaborative project coordinated by the IFCN to debunk hoaxes related to the lethal virus. So far, the group has flagged more than 80 pieces of misleading content — mainly regarding the origins of the fatal virus, a false patent created years ago and some weird ways to prevent or cure it." In a time of crisis, it's important that the information being delivered is as accurate and truthful as possible to keep people safe. By checking the validity of facts, we can keep ourselves and others accountable.

Disinformation before & after: What’s changed?

An original image of a person compared to the same image created using deepfake technology.
Photo via Meta AI

If you're wondering how disinformation has changed in the last 12-15 years, just think of the number of apps on your phone where anyone can write and share content. Disinformation spreaders (individuals or groups) target recent news topics and information in the media to increase the visibility of their messages to audiences online. Moreover, disinformation produced with technologies such as deep learning and artificial intelligence (AI) is extremely difficult to detect. It's important to stay critical and vigilant while reading, watching, or listening to content, because conspiracy theories, rumours, speculation, and falsehoods can have real-world consequences.

The photo above shows a deepfake. A deepfake is an image or video in which a person's likeness is replaced with someone else's. Technology like this can erode our trust in video evidence, as it is used to create plausible deniability. If, while scrolling on social media, you find a video of a trusted public figure saying something that feels suspicious, trust your gut and verify the information. You can also pay attention to reflections, lighting, and facial lines.

The age of clickbait

Disinformation has always existed, but it's important to recognize how much the media delivering information has grown since the 1970s: the number of websites, channels, blogs, podcasts, and so on. You will often find articles with clickbait titles – which is a fancy way of saying "doesn't this headline make you want to click me?" The reason behind it is advertising dollars. The more people click through to a website, the more its advertising is worth. From a business perspective, the internet runs on programmatic advertising: more visitors means more clicks, more clicks means more advertisement visibility, and therefore a larger profit.

How to fact check

What should you keep in mind as a public servant? Before clicking any link that might be suspicious, consider that links can be dangerous not only to you, but to your colleagues and entire organization. Not sure why links can be dangerous? Take a look at the STRIDE model to help you out:

STRIDE explained

S for Spoofing - Impersonating someone or something else (an organization, an entity). You think you are talking to your bank or a federal department, but you are not.

T for Tampering - Modifying data. Someone enters a system to change information (students breaking into a system to change their grades, a cybercriminal hacking into a hospital system to change patients' medication dosage information).

R for Repudiation - An attacker deletes or alters login records, actions, or transactions in order to deny that they ever took place. For example, a security system is broken into to destroy surveillance camera footage.

I for Information Disclosure - Information is exposed to an unauthorized person.

D for Denial of Service - Service is denied to users because a website is deliberately crashed. This happens when the system's resources are overwhelmed and stop processing requests. For example, an attacker targets the IRCC website, and no one can log into their profile to check the status of their application.

E for Elevation of Privilege - An attacker gains capabilities without authorization, elevating their own security level from limited user to administrator.

Each threat targets a security property:

Threat on integrity: tampering with information in a hospital system to harm patients.

Threat on confidentiality: stealing sensitive information to sell it.

Threat on availability: a distributed denial of service (DDoS) attack.
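Each STRIDE category maps onto the security property the threat violates. As a minimal sketch of that mapping (the names and structure here are illustrative, not from any official STRIDE tooling):

```python
from enum import Enum

class Property(Enum):
    """Security properties, one per STRIDE threat category."""
    AUTHENTICITY = "authenticity"
    INTEGRITY = "integrity"
    NON_REPUDIATION = "non-repudiation"
    CONFIDENTIALITY = "confidentiality"
    AVAILABILITY = "availability"
    AUTHORIZATION = "authorization"

# Each STRIDE threat violates exactly one security property.
STRIDE = {
    "Spoofing": Property.AUTHENTICITY,
    "Tampering": Property.INTEGRITY,
    "Repudiation": Property.NON_REPUDIATION,
    "Information Disclosure": Property.CONFIDENTIALITY,
    "Denial of Service": Property.AVAILABILITY,
    "Elevation of Privilege": Property.AUTHORIZATION,
}

def classify(threat: str) -> Property:
    """Return the security property a given STRIDE threat violates."""
    return STRIDE[threat]
```

For example, `classify("Tampering")` returns the integrity property, matching the hospital-system example above.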

Links can be tempting. After all, they're meant to be clicked, right? Not always. An example of an integrity attack would be tampering with information in a hospital system in order to harm patients. In this case, a malicious actor can perform a phishing attack via email. The recipient thinks the message came from an organization or person they know and/or trust. These emails can include forged headers and sender addresses that closely resemble the targeted organization's. An example of disinformation that can be included in these types of emails is a fake event (asking you to register for an event).

By clicking these links and opening emails, a hacker may gain access to the system and tamper with its information for dangerous purposes. This type of integrity attack can also be done through a USB drive plugged into a computer. In the example of a hospital, a malicious actor can access the system and change patients' information or tamper with medication dosages.
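One concrete way to catch the lookalike sender addresses described above is to compare an email's domain against a list of domains you trust. As an illustration only (the trusted-domain list and similarity threshold below are made up for this sketch, not part of any official filter), a few lines of Python can flag domains that closely resemble, but do not exactly match, a trusted one:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; a real deployment would use the
# organization's verified sending domains.
TRUSTED_DOMAINS = {"canada.ca", "csps-efpc.gc.ca"}

def looks_like_spoof(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that nearly match a trusted domain --
    a common sign of a forged address. An exact match is fine;
    a near-miss (e.g. a one-letter typo) is suspicious."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )
```

A typo-squatted address like "secure@canda.ca" would be flagged, while the legitimate "info@canada.ca" and an unrelated "friend@example.com" would not. Real anti-phishing systems rely on much more than string similarity (SPF, DKIM, and DMARC checks, for instance), so treat this as a sketch of the idea, not a defence.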

Why you need to know about this

Disinformation and misinformation have probably crossed your path more than once. As technology continues to advance, it's important to learn to distinguish truths from misleading information in both your work and private life. As a human, you're inclined to trust the people and organizations around you; as a public servant, it is also your responsibility to protect government resources, keep malicious actors out, and keep people in Canada safe.

Keep the STRIDE model in mind, verify sources, and be vigilant and safe while consuming content on any platform.

Thoughts to take back to your team

  1. What is the difference between disinformation and misinformation and how are we combatting it?
  2. What are the steps we need to take if we are confronted with disinformation?
  3. Are we cyber safe from an attack?

Job Aid

Courses & events

Self-paced | Discovering Cyber Security (DDN225) (1 hour)

Self-paced | Network Security Threats and Their Impact (DDN109) (less than 1 hour)


Aicha-Hanna Agrane

Policy analyst with expertise in global affairs, cybersecurity, and countering disinformation.

