
Social media has become the new front line in the battle for political power, where truth is often the first casualty. As seen in the 2024 presidential election, these platforms allow misinformation to spread further and faster than ever before. As traditional news outlets continue to lose credibility, especially among younger and more diverse audiences, social media platforms have emerged as a major source of political news for many. But in the rush to gain attention and clicks, facts are often obscured by sensational headlines, misleading narratives, and algorithm-driven content designed to prioritize engagement over accuracy. This rapid spread of falsehoods influences public opinion, election outcomes, and even the behavior of government officials. In this digital frenzy, the very foundations of democracy are put at risk, as lies travel unchecked across the world with the click of a button.
The powerful benefits of spreading news easily and quickly to a broad audience undoubtedly come with significant risks. Misinformation—false or misleading information shared unintentionally—is not a new phenomenon. The digital age, however, has also fueled the rise of disinformation: content purposefully created to deceive, confuse, or manipulate public perception. The rise of social media platforms such as TikTok, Meta, and X (formerly Twitter) has exponentially exacerbated both problems. These platforms allow anyone with internet access to become a potential broadcaster, capable of projecting information, whether true or false, to millions.
X has become an influential platform in shaping political discourse. According to the Pew Research Center, 59% of X users report turning to the platform to stay informed on political news and updates. X thrives on an algorithm that pushes polarizing content, triggering highly emotional responses and subsequent engagement. Clicks, likes, and shares are always financially profitable, regardless of the content's factual accuracy. A study in the Harvard Misinformation Review highlights that misinformation often originates with “domestic political actors, untrustworthy websites, and hostile foreign governments.” Once an idea is broadcast, it can circulate indefinitely across social media platforms: tweets appear in Instagram carousels, and TikToks are republished as YouTube Shorts. This viral cycle ensures that false information remains in the public eye, distorting the political narrative.
At the same time, the increasing role of social media influencers, such as podcasters and online personalities, has complicated the political landscape. Unlike traditional journalists, who adhere to established ethical guidelines designed to ensure accuracy, objectivity, and transparency, social media influencers are often free from similar constraints. Instead, they are motivated by engagement metrics and virality, which directly translate into financial gain through advertising revenue and paid sponsorships. This creates an environment in which sensational, emotional content is prioritized because it attracts the most attention.
This trend was only worsened by a decision made by the Federal Election Commission (FEC) in December 2023, which ruled that content creators are not required to disclose payments received from a political campaign. The decision has made online transparency even more elusive, since creators' motivations can remain hidden. For example, the Democratic National Convention (DNC) gave out 15,000 passes to journalists, as well as over 200 to content creators who received additional benefits like free food, airfare, and hotel rooms. The expanding overlap between objective journalism and politically motivated advocacy makes it increasingly difficult to distinguish fact from opinion on the internet. The DNC’s actions reflect a broader trend of political campaigns and special interest groups leveraging influencers to sway public opinion. The risk here is not just misinformation, but the erosion of ethical journalism: without transparent disclosure of payments or motivations, the line between independent journalism and paid propaganda becomes blurred.
Notably, conspiracy theories about election tampering spread like wildfire, fueled by the development of generative artificial intelligence. The novel technology has made it easier than ever to produce convincing yet entirely fabricated content. For instance, a video of election officials allegedly tampering with ballots went viral, amplified by platform algorithms. The FBI eventually revealed that the video was a deepfake fabricated as part of a Russian disinformation campaign aimed at eroding the American public’s trust in democratic processes. Such fabrications have amplified the challenges facing fact-checkers, leading to general confusion and deeper partisan divides.
The consequences of misinformation extend beyond the digital realm into tangible threats. Local election workers have become targets of harassment and threats of violence fueled by the toxic online environment, leading to a spike in resignations and job abandonment. A recent survey revealed that 38% of local election officials have experienced some form of abuse. NBC News reported specific cases of anonymous death threats, fentanyl arriving in the mail, and even plots to intimidate voters. Such actions not only threaten the safety of election officials but also disrupt the election process, contributing to chaos and inefficiency at the polls. The resulting inefficiency further feeds public suspicion and mistrust of the electoral system, continuing a vicious cycle of skepticism and alienation.
In addition, artificial intelligence has been weaponized to suppress voting outright. According to NPR, a project titled EagleAI compiled data from state and federal sources into spreadsheets that conservative activists then used to file mass challenges to voter registrations. The system flags “suspicious registrants” based on what are typically clerical issues, such as address changes or misspellings. This disproportionately affects vulnerable populations such as the elderly, the disabled, military service members overseas, and college students. By creating arbitrary barriers to voting, EagleAI perpetuates fears of voter fraud and undermines democracy.
Technology also influenced voting patterns, as many voters showed up to the polls with views shaped by misinformation. In particular, the issue of immigrant crime became a central theme in the 2024 election cycle. Sensationalized stories circulated online about immigrants allegedly endangering public safety through rampant violent and drug-related crime. However, research funded by the National Institute of Justice found that “undocumented immigrants are arrested at less than half the rate of native-born U.S. citizens for violent and drug crimes and a quarter the rate of native-born citizens for property crimes.” Misleading narratives about immigration and crime thus contributed to an atmosphere of fear and distrust that affected how people viewed the presidential candidates, political parties, and even their fellow citizens. The gap between public perception and reality underscores the dangerous power of misinformation in shaping electoral results.
Moreover, repeated exposure to false information has profound psychological effects, particularly as social media algorithms prioritize sensational content. This constant reinforcement can gradually prime the public to accept conspiracy theories and misleading narratives as the truth. The psychological phenomenon known as the illusory truth effect explains that as a false claim becomes more familiar, it simultaneously becomes more believable. This effect can distort memory, making it easier to recall false information as fact, even if it directly contradicts prior knowledge or is implausible.
At the same time, confirmation bias plays a critical role in how misinformation spreads. People naturally seek out information aligning with their pre-existing beliefs or values while avoiding dissenting views. Social media platforms exacerbate this tendency by curating personalized feeds that serve up more of what users have previously engaged with, making it incredibly difficult to step outside these echo chambers. Users become trapped in feedback loops that further polarize the political landscape and intensify ideological divides.
Therefore, social media companies have an important role to play in moderating discussion. While platforms have implemented differing policies on misinformation and political advertising, those policies often lack consistency and adequate resources. Meta, in particular, labels AI-generated content and collaborates with fact-checkers, yet severed ties with The Associated Press this year to cut costs. Instead of relying on automated tools and algorithms, companies should invest in more human oversight through independent fact-checkers. Additionally, social media platforms must become more transparent about how their algorithms work and how they promote political content. Without these efforts, companies will struggle to maintain meaningful content moderation, allowing misinformation to thrive unchecked.
The companies defend their reluctance to regulate content with a critical point: free speech. Federal governments have largely steered clear of intervening in these debates, especially in light of the tumultuous journey of the Misinformation and Disinformation Bill through Australia’s legislative system. Meanwhile, Donald Trump has vehemently opposed any suppression of lawful free speech. Once inaugurated, he intends to “ban federal money from being used to label domestic speech as mis- or disinformation.” This hands-off approach has left social media companies to navigate the complex balance between moderating dangerous content and upholding free speech on their own.
Thus, there needs to be a concerted push to improve digital literacy among the public, educating citizens on how to distinguish credible sources from the flood of misinformation and increasingly sophisticated AI-generated content online. This would involve developing critical thinking skills in school using a variety of internet and news sources. Individuals could then report false posts on social media themselves and communicate with curiosity instead of hate. These efforts would help voters make more informed decisions and safeguard democratic processes.
As the 2024 election demonstrated, social media has the power to both reinforce and undermine democracy. Misinformation is no longer confined to fringe groups but embedded in the very fabric of how we consume politics. No matter the belief or party, all groups are guilty of spreading the narrative they prefer to be true. The consequences of this—distrust, polarization, even violence—are impossible to ignore. To safeguard democracy, we must reclaim the integrity of our digital spaces. We must rebuild the public’s ability to discern truth in a world where reality itself can be manufactured and monetized.
Quinn Prouty ‘28 studies in the College of Arts & Sciences. She can be reached at pquinn@wustl.edu.