Verify any claim · lenz.io
Claim analyzed
General
“Problems attributed to technologies are often caused by underlying social issues in society rather than by the technology itself.”
Submitted by Cosmic Zebra 18ef
The conclusion
The claim captures a real and well-supported insight — social context, governance, and usage patterns significantly shape technology outcomes — but frames it too one-sidedly. By stating problems are caused by social issues "rather than by the technology itself," it implies technology is a neutral vessel, which multiple high-authority medical and public health sources contradict. Platform design features like addictive engagement mechanics and algorithmic amplification are documented as independent contributors to harms such as youth mental health deterioration and political polarization. The reality is one of co-causation, not an either/or.
Based on 17 sources: 6 supporting, 5 refuting, 6 neutral.
Caveats
- The claim sets up a false dichotomy between social causes and technological causes; credible evidence supports co-causation, where technology design and social conditions jointly produce harms.
- Multiple peer-reviewed and public health sources document that platform-specific design choices (e.g., engagement optimization, recommendation algorithms) independently contribute to mental health and polarization harms, not merely reflect pre-existing social problems.
- The word 'often' implies a frequency of social-over-technological causation that the available evidence does not quantify or establish — it overgeneralizes from a valid but narrower 'context matters' thesis.
Sources
Sources used in the analysis
The neutral tool perspective contends that technology is neither inherently good nor bad but a tool whose impact hinges on usage. The chapter furnishes examples of positive and negative technology outcomes, underscoring the pivotal roles of context and intentions.
This compulsive engagement with digital platforms has been associated with increased symptoms of anxiety, depression, and attention disorders, raising concerns about the broader implications for youth mental health.
Mental health and behavioral addiction, along with political polarization and its impact on public health precautions such as wearing masks, were all mentioned as key public health concerns that are exacerbated by technology. The panelists all agreed that tech companies have a moral obligation to be proactive in thinking about the consequences of their actions, and how their platforms have the power to make impacts on a global scale. Similar to climate change, it was noted that individual choices regarding social media can only do so much, pointing to the need for systemic remedies.
The majority of research shows that using technology helps people stay connected to their social ties, which can decrease loneliness and social isolation. Technology is not a panacea to solve society’s ills, nor is it leading to the demise of society. Recent research, in fact, has found that excessive social media use among youth is less harmful for self-esteem than youth being socially disconnected from their social ties.
Whether technology helps or hinders social interactions between people is a subject of debate. A 2017 study of young adults in the United States ages 19 to 32 years found that those with higher social media use were more than three times as likely to feel socially isolated as those who did not use it as often. However, this study does not establish a causal relationship between social media and depression.
An infrastructure architect and internet pioneer wrote, 'The kind of social innovation required to resolve the problems caused by our current technologies relies on a movement back toward individual responsibility and a specific willingness to engage in community.' Bulbul Gupta responded, 'Until government policies, regulators, can keep up with the speed of technology and AI, there is an inherent imbalance of power.' A computing science professor emeritus commented, 'Social/civic innovation will occur but most likely lag well behind technological innovation.'
Critics of technological determinism argue variously that technology itself is socially determined, that technology and social structures co-evolve in a non-deterministic, emergent process, or that the effects of any given technology depend mainly on how it is implemented, which is in turn socially determined.
In the U.S., for example, we often blame technology for many things that go wrong in society. Challenges, such as the impact of automation on jobs, threat to democracy due to disinformation and compromised social harmony due to hate speech and radicalization, have been front and center for quite some time. Algorithmic bias is already a significant problem. Inequalities have been increasing, which has already started affecting outcomes from technology-based interventions.
Recognizing that technology use is associated with both harms and benefits, researchers have begun to distinguish between healthy and unhealthy patterns of use (U.S. Surgeon General, 2023; Lippke et al., 2021; European Parliament, 2020). For example, Bonsaksen et al. (2023) demonstrate that greater time on social media is associated with greater loneliness, but primarily for those who use social media for social purposes.
However, some technological advances cause people to be distracted, overly stressed, and increasingly isolated. Many people are involved in an abundant number of social networking sites, which can lead to problems such as Internet addictions, cheating on significant others, job loss, and narcissism manifested online.
Social issues have always been the preeminent force behind innovation, and technological innovation is no exception. Technology and social change have always gone hand-in-hand. In today’s tech-driven world, though, the connection between social issues and technological innovation is more important and more apparent than ever.
We consider how social relations have changed over time, specifically how technological advances engender new modes of contact for older adults.
The spread of misinformation or outright lies, the prevalence of bullying and cyber attacks, and other unpleasantries are all common side effects of our cyberspace—and they can lead to anxiety, depression, addiction, and other mental health disorders. Social media alone has had a dramatic impact on feelings of self-worth, success, and inclusion and belonging.
Since different people can experience different benefits and harms, technologies raise questions of ethics and social justice. As a consequence, engineers must grapple with the societal impacts of the technologies they create.
In science and technology studies, the social construction of technology (SCOT) theory argues that artifacts like technologies are products of social processes and negotiations, implying that problems with technology often stem from social contexts rather than inherent technological flaws. This contrasts with technological determinism, which posits technology as the primary driver of social change.
After a decade building social impact technology, we identified a pattern: well-funded, well-intentioned platforms depreciate rather than appreciate. The issue isn't weak missions or uncaring teams. It's flawed operational models — and the assumption that commercial product approaches work for transformational social impact work. 4 Common Causes of Depreciation — Governance gaps, behavioral design failures, product-led growth deficits, and hidden infrastructure costs.
Taking responsibility for technology failures. In her article 'When Technology Fails, Who's Responsible', Amber Case shares her experience of technology failures and discusses accountability.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
Sources emphasizing social shaping and implementation (e.g., the “neutral tool”/context-dependence framing in Source 1 and the co-evolution/social-determination view in Source 7, plus examples of inequality and governance shaping outcomes in Source 8 and systemic-regulatory incentives in Source 6) make it plausible that many tech-attributed harms are mediated by social conditions. They do not, however, establish that this is "often" the case across technologies, and other sources (2, 3, 10, 13) argue that technology and platform design can themselves contribute to harms, blocking any inference that underlying social issues are usually the primary cause. The claim therefore overgeneralizes from a context-dependence thesis to a frequency and causal-primacy assertion (“often caused by underlying social issues rather than by the technology itself”) that the evidence pool does not logically prove and that evidence of technology-driven causal mechanisms partly counters.
Expert 2 — The Context Analyst
The claim usefully highlights that many “technology problems” are mediated by social context (governance, incentives, inequality, usage patterns) (Sources 1, 7, 8, 9), but it omits that some harms are plausibly attributable to technology's design/affordances themselves (e.g., addictive engagement mechanics, amplification dynamics) and that evidence in the pool describes technology as exacerbating or contributing to mental-health and polarization outcomes, not merely reflecting preexisting social issues (Sources 2, 3, 10, 13). With full context restored, it's directionally right as a generalization (“often” and “underlying social issues” matter a lot) but framed too one-sidedly because it downplays technology-as-causal/amplifying mechanisms, so the overall impression is incomplete rather than fully true.
Expert 3 — The Source Auditor
The highest-authority sources in this pool — Source 1 (Cambridge University Press, high-authority, 2025) and Source 4 (American Sociological Association, high-authority, 2023) — both support a nuanced view consistent with the claim: technology's impacts are context-dependent, usage-driven, and shaped by social conditions rather than inherent to the technology itself. Source 4 explicitly finds that social connectedness mediates outcomes more than technology use per se. Sources 2 (PubMed Central, high-authority) and 3 (NYU School of Global Public Health, high-authority) are tagged as refuting the claim, but their snippets describe associations and exacerbation rather than independent technological causation. Source 5 (Medical News Today) explicitly cautions that no causal relationship is established, and Source 3 itself points to the need for systemic and social remedies, which aligns with the claim. Sources 7 (Technological Determinism academic overview) and 8 (Illinois News Bureau) further support the claim that social determination shapes technological outcomes. The opponent's strongest sources (2 and 3) do not cleanly refute the claim because they describe technology as amplifying pre-existing social vulnerabilities rather than acting as an independent root cause, while the most authoritative independent sources (Cambridge, ASA) support the claim's framing that social context is the primary driver of technology-attributed problems.
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple independent sources converge on the core point that harms blamed on “technology” are largely mediated by social context—how tools are implemented, governed, and embedded in existing power relations—rather than being inherent properties of the tools themselves (Source 1, Cambridge University Press; Source 7, Technological Determinism overview; Source 8, Illinois News Bureau). Empirical discussions of social media and well-being likewise show outcomes depend on social connectedness, usage patterns, and systemic incentives/regulation, implying that underlying social conditions (isolation, inequality, governance gaps) are often the real drivers of the “technology problems” people observe (Source 4, American Sociological Association; Source 9, Evidence Brief; Source 6, Pew Research Center).
Rebuttal
Your reliance on Source 1's "neutral tool" framing commits a classic false dichotomy fallacy — acknowledging that context shapes outcomes does not negate that the technology itself actively amplifies harms, as Source 2 (PubMed Central) and Source 3 (NYU School of Global Public Health) both demonstrate with direct causal links between platform design and measurable mental health deterioration in youth, independent of pre-existing social conditions. Furthermore, your appeal to Source 4 (American Sociological Association) actually undermines your own case, since it explicitly states technology "is not a panacea to solve society's ills," implying technology introduces its own distinct problems — and Source 5 (Medical News Today) confirms that higher social media use correlates with users being over three times more likely to feel socially isolated, a harm generated by the technology's own mechanics, not merely reflected from underlying social dysfunction.
Argument against
Multiple high-authority sources directly contradict the claim by demonstrating that technology itself — not merely underlying social issues — produces measurable harms: Source 2 (PubMed Central) links compulsive digital platform engagement to increased anxiety, depression, and attention disorders in youth, while Source 3 (NYU School of Global Public Health) explicitly states that mental health disorders, behavioral addiction, and political polarization are "exacerbated by technology," with panelists agreeing that tech companies bear a direct moral obligation for these platform-driven consequences. Furthermore, Source 13 (University of Arkansas Grantham) identifies misinformation, cyberbullying, and mental health disorders as "common side effects of our cyberspace," and Source 10 (Human Kinetics) attributes distraction, stress, isolation, and narcissism directly to technological advances themselves — collectively establishing that technology is not a neutral vessel for pre-existing social problems but an active, independent contributor to the harms attributed to it.
Rebuttal
You treat “associated with” and “exacerbated by” as proof that harms are caused by technology itself, but that's a causal leap: even your own cited evidence flags limits on causality (Source 5, Medical News Today), while the broader brief shows outcomes hinge on context, intentions, and implementation rather than inherent properties (Source 1, Cambridge University Press; Source 7, Technological Determinism overview). And by leaning on broad, non-mechanistic claims about “side effects of cyberspace” (Source 13, University of Arkansas Grantham) and generic lists of tech-caused ills (Source 10, Human Kinetics), you ignore the stronger, more specific evidence that social connectedness, usage patterns, inequality, and governance/regulatory incentives are the underlying drivers shaping whether the same technologies help or harm (Source 4, American Sociological Association; Source 8, Illinois News Bureau; Source 6, Pew Research Center; Source 9, Evidence Brief).