Claim analyzed
Tech
“Social media algorithms are intentionally designed to amplify outrage and contribute to the spread of cancel culture.”
The conclusion
The claim has a real empirical core: engagement-optimizing algorithms do amplify emotionally charged and outrage-driven content, as demonstrated by randomized experiments. However, the claim overstates the evidence in two key ways. First, "intentionally designed to amplify outrage" conflates engagement optimization (a documented design goal) with deliberate outrage engineering (not established). Second, the link to cancel culture is plausible but not rigorously demonstrated—cancel culture is driven by multiple social, cultural, and media factors beyond algorithmic design.
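To make that first distinction concrete, here is a minimal toy sketch (hypothetical code, not any platform's actual ranking system; all names, weights, and numbers are invented for illustration). The objective scores posts purely by predicted engagement and never mentions outrage, yet because the angry post draws the most predicted replies, it ranks first: amplification as a byproduct of engagement optimization rather than a stated goal.

    # Hypothetical toy, not any platform's actual code; names and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_clicks: float   # assumed engagement signals
        predicted_replies: float
        outrage_score: float      # measured for analysis only; never used in ranking

    def engagement_score(post: Post) -> float:
        # The only objective: expected engagement. No outrage term anywhere.
        return 0.4 * post.predicted_clicks + 0.6 * post.predicted_replies

    posts = [
        Post("calm policy explainer", 2.0, 0.5, 0.1),
        Post("angry out-group callout", 5.0, 9.0, 0.9),
        Post("nuanced debate summary", 3.0, 1.0, 0.2),
    ]

    for p in sorted(posts, key=engagement_score, reverse=True):
        print(f"score={engagement_score(p):4.1f}  outrage={p.outrage_score:.1f}  {p.text}")
    # The angry post ranks first: outrage rises because it correlates with
    # replies, not because the code targets it.

An "intentionally designed" system, by contrast, would put something like outrage_score directly into the objective; the documented evidence does not show platforms doing that, which is exactly the gap the caveats below describe.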
Caveats
- The word "intentionally" is doing heavy lifting: evidence shows outrage amplification is a predictable byproduct of engagement optimization, not a stated design goal of platforms.
- The connection between algorithmic outrage amplification and cancel culture specifically is asserted but not rigorously demonstrated in the research—cancel culture has multiple drivers including social norms and offline media.
- Findings from one platform (e.g., Twitter/X) may not generalize to all social media platforms or their current algorithm configurations.
Sources
Sources used in the analysis
Social media newsfeed algorithms can directly affect how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback affects users’ outrage expressions over time, this suggests that newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning... Such norm learning processes, combined with social reinforcement learning, might encourage more moderate users to become less moderate over time.
Evidence for algorithms driving or reinforcing unhealthy dynamics is very thin, supporting, at best, a small effect on mood. Furthermore, algorithms that emphasize nuanced content could help decrease paralyzing climate anxieties or highlight constructive perspectives that motivate action. However, direct evidence supporting these conclusions remains scarce.
Cancel culture, operating through social media denouncements, not only shapes public discourse but also amplifies false narratives, contributing to the dissemination of misinformation. The canceling phenomenon, often driven by genuine concerns for justice and accountability, can inadvertently lead to the silencing of diverse perspectives and a potential erosion of democratic values. This is directly related to the dynamics of the cancel culture: the new narrative imposed as politically correct penetrates all areas and establishes the rules of the game of the new cultural wars conducted with the cancelation of inconvenient and non-aligned content.
Our study reveals that the engagement-based algorithm tends to amplify emotionally charged content, particularly that which expresses anger and out-group animosity... Of the political tweets chosen by Twitter’s algorithm, 62 percent expressed anger and 46 percent contained out-group animosity... Our randomized experiment provides causal evidence that, indeed, the algorithm amplifies emotionally-charged content.
Here, we show how social learning processes amplify online moral outrage expressions over time... positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning... Social reinforcement and norm learning interact with social media design to amplify moral outrage in online social networks.
This research suggests that — in addition to amplification in emotion production — there is a complementary process by which perceivers interpret moral outrage. Social media algorithms amplify emotional content, including moral outrage, through engagement optimization.
Less talked about is the way algorithms actually perpetuate cancel culture. Algorithms love outrage. My own research has shown how content that sparks an intense emotional response — positive or negative — is more likely to go viral. Outrage is the perfect negative emotion to attract attention and engagement — and algorithms are primed to pounce.
Social media's business model of personalized virality is incompatible with democracy, agreed experts at a recent Harvard Law School discussion. Social media, he said, erodes both education and social solidarity. “[It] can’t not polarize the population. No matter where you stand — if masks are your thing, or vaccines, or critical race theory — it doubles down on your perspective or reminds you why the other side is wrong."
If algorithms are trained on unrepresentative, incomplete, or skewed data, this can lead to automation bias against certain groups regarding their ethnicity, political affiliation, sexual preference, gender, or race. For example, algorithms might recommend or amplify divisive content that reinforces racial stereotypes, ultimately perpetuating historical inequities. Conversely, algorithms may disproportionately screen and suppress content that challenges cultural norms, which may reinforce prejudiced viewpoints.
New research hijacks social media platform rankings to study how great an impact the algorithm has on political polarization. The browser extension uses a large language model (LLM) to classify and rerank posts that appear on the subjects’ X feeds according to, as they call it, “antidemocratic attitudes and partisan animosity.”
A new Tulane University study explains why politically charged content gets more engagement from those who disagree... This shows how outrage drives engagement, which algorithms optimize for, amplifying such content.
If you ever feel as if social media apps are pushing enraging content to you, that's because they are. Social media platforms are designed to maximize engagement, and studies show that outrage is one of the most efficient strategies for keeping users engaged. “(Social media platforms) notice that more individuals are becoming engaged in either sensationalized content or content that makes people angry. And so they then amp up that content.”
New research published in Science and Nature suggests that Facebook and Instagram are not causing political polarization. It reportedly found that the algorithms increased user engagement but didn't affect people's existing political attitudes or polarization. However, the behavioral data show that the algorithms increased exposure to uncivil content and removed political content, particularly content from moderate friends and ideologically mixed audiences.
Because canceling shuts down the opportunity to learn and grow from mistakes, this makes the “canceled” more prone to becoming a target for hate ...
In practice, though, every piece of content a user encounters has already been filtered, ranked and shaped by algorithms designed primarily to maximize engagement on the platform. The most important first step, he argues, is recognizing that what appears in a feed is the result of deliberate design, not a neutral window onto the world — and that understanding how these systems work is ultimately what gives users the agency to make more informed choices about where and how they participate online.
The “Facebook Papers” exposé in October 2021 alleges the social media giant was aware its algorithm changes were making users angrier, and might even have been inciting violence. The algorithm change was designed to decrease disengagement by prioritising comments rather than just scrolling or a “like”. The problem is that it's divisive content that's most likely to attract comments. The algorithm “preys on the most primal parts of your brain [and] triggers your strongest emotions”.
In the era of social media, cancel culture has spread widely and affected people from all walks of life. Its psychological impacts are significant and include ... This creates a permanently open exposure where one’s behavior and words may be put under public comment regardless of the motives. A new behavioral pattern called “always online,” which is characterized by the expectation of being constantly connected to social networks, further aggravates the psychological pressure on individuals.
YouTube's 2019 internal changes to its recommendation algorithm explicitly aimed to reduce 'borderline content' that promotes outrage and sensationalism, indicating prior design prioritized engagement over harm reduction. This was in response to criticisms that algorithms amplified divisive content for profit.
Regardless of your perspective, the ripple effects of cancel culture are undeniable. Careers have been derailed, companies have lost significant market value overnight, and brands have scrambled to distance themselves from controversy—all stemming from viral moments online.
Expert review
How each expert evaluated the evidence and arguments
The evidence pool establishes two distinct logical chains that must be evaluated separately against the claim's two components: (1) that algorithms are intentionally designed to amplify outrage, and (2) that this contributes to cancel culture's spread. On the first component, Sources 4, 1, 5, 6, 7, and 11 provide strong causal and mechanistic evidence that engagement-optimizing algorithms systematically amplify emotionally charged, anger-driven content — Source 4 (Knight First Amendment Institute) even provides randomized experimental causal evidence. However, the claim's word "intentionally" introduces a critical inferential gap: the opponent correctly identifies that optimizing for engagement (a stated design goal) is not logically equivalent to intentionally designing to amplify outrage specifically. Source 16 (Monash Lens/Facebook Papers) partially bridges this gap by alleging platforms knew their algorithms were making users angrier and continued anyway, which approaches intentionality, but this is indirect evidence. Source 2 (PMC/NIH) and Source 13 (AlgorithmWatch) introduce genuine countervailing evidence — particularly that attitude/polarization effects are small — though the proponent correctly notes these sources address broader "unhealthy dynamics" rather than the specific outrage-amplification mechanism.

On the second component (contribution to cancel culture), the logical chain is weaker: Source 3 describes cancel culture's dynamics, but the inferential leap from "algorithms amplify outrage" to "algorithms contribute to cancel culture's spread" requires an intermediate premise (that cancel culture propagates via outrage cascades) that is asserted but not rigorously demonstrated across sources. The opponent's "false cause" objection has merit here, though it is not a complete refutation, since the proponent's argument is about contribution and facilitation rather than sole causation.

Overall, the claim is mostly true in its empirical core — algorithms do amplify outrage as a predictable and documented design outcome — but the word "intentionally" overstates the evidence (conflating engagement optimization with deliberate outrage engineering), and the cancel culture link, while plausible, involves an inferential gap that the evidence does not fully close.
The claim omits that most evidence supports engagement optimization that predictably elevates anger and outrage as a byproduct (causal amplification was shown in a Twitter/X ranking experiment), rather than proof that platforms explicitly set out to “amplify outrage” or “spread cancel culture.” It also blurs outcomes such as polarization and attitude change (often small or mixed) with exposure and incivility effects (Sources 4, 13, 2). With full context, it is fair to say that engagement-based ranking can amplify emotionally charged and outrage content and plausibly intensify pile-on dynamics, but the strong framing of intentional design to amplify outrage and to drive cancel culture overstates what the evidence establishes (Sources 4, 1, 6 vs. 2, 13).
The most reliable and directly probative evidence is Source 4 (Knight First Amendment Institute, 2025), a randomized experiment showing that engagement-based ranking causally amplifies anger and out-group animosity. Sources 1/5 (a peer-reviewed paper hosted on PMC/Yale) and Source 6 (an HBS working paper) support a mechanism in which engagement and social-feedback dynamics increase outrage expression. However, none of these high-authority sources demonstrates platform intent to amplify outrage (they describe engagement optimization and its downstream effects), and Source 2 (PMC/NIH review) cautions that direct evidence for algorithms driving broader unhealthy dynamics is thin. On the “cancel culture” prong, Source 3 (PMC, 2024) discusses cancel culture's dynamics but does not provide strong, independent evidence that algorithms were intentionally designed to spread it; the claim therefore overstates intent and specificity, even though algorithmic ranking can amplify outrage-like content.
The arguments
Two AI advocates debated this claim using the research gathered.
Multiple independent studies show that major platforms' engagement-optimizing ranking systems systematically boost anger and moral outrage—i.e., the very content most likely to trigger pile-ons—providing causal and mechanistic evidence that amplification is a product of deliberate algorithmic design (Source 4, Knight First Amendment Institute; Source 1, PMC; Source 6, Harvard Business School). Because cancel culture operates through mass denouncement dynamics that thrive on rapid, high-reach outrage cascades, the documented outrage-amplification design directly contributes to the spread and intensity of cancel culture online (Source 3, PMC; Source 7, MSUToday).
Your central logical leap is conflating "optimizing for engagement" with "intentionally designed to amplify outrage" — Source 4 (Knight First Amendment Institute) itself shows that engagement algorithms boost emotionally charged content as a byproduct of maximizing user interaction, not as a deliberate goal to spread outrage, and Source 2 (PMC - NIH) — which you conspicuously ignored — explicitly states that "evidence for algorithms driving or reinforcing unhealthy dynamics is very thin," directly undercutting your causal framing. Furthermore, your link between outrage amplification and cancel culture relies on a non-sequitur: Source 3 (PMC) describes cancel culture as driven by "genuine concerns for justice and accountability," not algorithmic design, meaning you've committed a false cause fallacy by assuming that because algorithms amplify emotional content and cancel culture exists online, one must intentionally cause the other.
The claim that algorithms are intentionally designed to amplify outrage is directly undermined by Source 2 (PMC - NIH), which explicitly states that "evidence for algorithms driving or reinforcing unhealthy dynamics is very thin, supporting, at best, a small effect," and Source 13 (AlgorithmWatch) corroborates this, citing peer-reviewed research in Science and Nature finding that Facebook and Instagram algorithms "didn't affect people's existing political attitudes or polarization." Furthermore, even the most supportive sources — such as Source 1 (PMC/PubMed Central) and Source 5 (Yale ISPS) — describe algorithms that incidentally exploit reinforcement learning tendencies, not systems intentionally engineered to spread outrage or cancel culture, a critical distinction that fatally undermines the claim's assertion of deliberate design intent.
Your reliance on Source 2 and Source 13 is a bait-and-switch: they discuss thin evidence for broad “unhealthy dynamics” or attitude/polarization change, but they don't rebut the specific, experimentally demonstrated ranking effect that engagement-based algorithms amplify anger and out-group animosity (Source 4, Knight First Amendment Institute) and increase exposure to uncivil content (Source 13, AlgorithmWatch). And your “incidental vs intentional” distinction collapses under the brief's evidence that platforms deliberately optimize for engagement while outrage is a high-engagement signal—i.e., the amplification is a predictable design outcome (Source 4; Source 6, Harvard Business School) that, given cancel culture's denouncement dynamics (Source 3, PMC; Source 7, MSUToday), directly contributes to its spread even if the stated goal isn't “make people outraged.”