Verify any claim · lenz.io
Claim analyzed
Science
“Individuals who prefer music with less positive emotional content tend to have higher intelligence.”
The conclusion
A 2026 peer-reviewed study found that people who listened to music with less positive emotional tones had higher predicted intelligence scores, lending direct support to this claim. However, the relationship is correlational, based on modeled (not directly measured) intelligence, and much of the broader supporting evidence addresses genre preferences or personality traits rather than emotional valence and general intelligence specifically. The claim is directionally supported but overgeneralizes a limited, construct-dependent finding.
Caveats
- The key study uses 'predicted intelligence' from a behavioral model, not directly measured IQ or cognitive ability scores — this is an important distinction.
- Much of the cited supporting research links music preferences to personality traits (openness, low extraversion) or cognitive styles (empathizing), not directly to general intelligence, meaning the relationship may be confounded.
- The finding is correlational and based on specific populations; it should not be interpreted as a causal or universally generalizable rule about music taste and intelligence.
Sources
Sources used in the analysis
Preferences for lyrics with a less positive emotional tone are consistent with prior research showing that sad or melancholic music can facilitate introspection and life reflection, both of which are rather cognitive uses of music. Specifically, the models found that people who listened to songs with less positive emotional tones tended to have higher predicted intelligence scores.
Here, we show that live music can stimulate the affective brain of listeners more strongly and consistently than recorded music.
This study examined whether AI-powered music creation can evoke the same emotional impact as human-created music in audiovisual contexts.
Listening to sad music, compared with happy music, is associated with stronger mind-wandering and greater transitions to the DMN (Taruffi et al., 2017). These results suggest that the emotional valence of the music can modulate the engagement of the DMN activity.
Through a mixed research design involving 100 undergraduate students, the study found that music type is significantly related to emotional intelligence. Fast-paced and traditional music types are positively correlated with emotional intelligence, while intense and rebellious music types are negatively correlated. In addition, movie soundtracks and theme songs are positively correlated with emotional intelligence, while rock music is negatively correlated.
Study 2 (N = 353) replicated and extended these findings by investigating how musical preferences are differentiated by E-S cognitive styles (i.e., 'brain types'). Analyses of detailed psychological attributes revealed that type E preferred music with low arousal, negative valence, and emotional depth.
It is widely agreed upon that both natural and man-made sounds, including music, profoundly impact our emotions and cognitive abilities, such as our attention, memory, problem-solving, decision-making, and creativity. However, after evaluating evidence produced by other researchers, Schellenberg and Lima [13] concluded that causal inferences cannot be made.
Higher scores on openness to experience and lower scores on extraversion, as defined by the Big Five Model of personality traits, were shown to be associated with the liking of sad music (Vuoskoski et al., 2011; Ladinig and Schellenberg, 2012).
Rentfrow and Gosling (2003) obtained an interesting, but somewhat differing result. Their study revealed a positive correlation between higher intelligence test scores and a preference for reflective and complex music styles and intense and rebellious music styles. Additionally, participants with lower intelligence test scores preferred upbeat and conventional music styles.
New research shows engaging with music is linked to better cognitive function and a reduced risk of dementia and heart disease in later life. Researchers found that individuals who listened to music every day, compared to those who didn't listen to music, had better memory and overall cognitive performance.
Music can be categorized into various genres, and those who exhibit a preference for certain genres may tend to have higher emotional intelligence than others. In a cross-sectional study, researchers gave an emotional intelligence test to participants after they identified their musical preference. Spearman’s analysis revealed a weak positive correlation between test scores and pop, jazz, folk, classical, and gospel.
Analyses of fine-grained psychological and sonic attributes in the music revealed that type E individuals preferred music that featured low arousal (gentle, warm, and sensual attributes), negative valence (depressing and sad), and emotional depth (poetic, relaxing, and thoughtful), while type S preferred music that featured high arousal (strong, tense, and thrilling), and aspects of positive valence (animated) and cerebral depth (complexity).
Their study included 44 Georgia Tech students who listened to film soundtracks while recalling a difficult memory. The participants listened to movie soundtracks and incorporated new emotions into their memories that matched the mood of the music.
Previous research has shown that intelligence has a critical influence in music preference. Rentfrow and Gosling (2003) showed that more intelligent individuals preferred “reflective, complex, and intense” genres of music (which included classical, jazz, blues, and folk).
A well-known 2014 study by Greenberg et al. in Psychology of Aesthetics, Creativity, and the Arts found that individuals with higher intelligence and openness to experience prefer instrumental and reflective/complex music (per the MUSIC model), which often has lower positive emotional valence than mainstream pop.
Expert review
How each expert evaluated the evidence and arguments
Source 1 provides the most direct evidence, explicitly stating that people who listened to songs with less positive emotional tones tended to have higher predicted intelligence scores — a correlational finding from a predictive model, not a direct IQ measurement. This is corroborated by Sources 9 and 14 (linking higher intelligence to reflective/complex music preferences) and Sources 6/12 (linking negative-valence preferences to empathizing cognitive styles). However, the opponent's rebuttal correctly identifies several inferential gaps: Source 1 uses "predicted intelligence" from a behavioral model rather than measured general cognitive ability; Sources 9 and 14 use genre labels ("reflective/complex") rather than the specific variable of emotional valence; Sources 6/12 map negative-valence preference to a cognitive style (empathizing) rather than to higher general intelligence; and Source 8 attributes sad-music preference more robustly to personality traits (openness, low extraversion) than to intelligence. The convergent evidence therefore partially conflates distinct constructs (cognitive style, personality, and general intelligence), a false equivalence and scope mismatch. The claim's use of "tend" is appropriately hedged and correlational, and the multi-source pattern does point in a consistent direction, but the logical chain from "less positive emotional content preference" to "higher intelligence" relies on indirect and partially mismatched evidence — enough to support a Mostly True verdict, with inferential gaps that prevent a full True verdict.
The claim frames a specific link between “less positive emotional content” and intelligence as a general tendency, but most supporting context in the pool is either (a) about genre clusters like “reflective/complex” rather than emotional valence per se (9,14), (b) about cognitive style/personality correlates of sad/negative-valence music rather than higher general intelligence (6,8,12), or (c) cautions that this literature is correlational and not suited to broad inferences (7). With full context, there is some recent direct evidence of an association between less-positive lyrical tone and higher model-predicted cognitive ability (1), but the broader framing overgeneralizes beyond a limited, correlational, and construct-dependent finding, so the overall impression is misleading rather than straightforwardly true.
The most authoritative source directly addressing the claim is Source 1 (PMC, 2026), a high-authority peer-reviewed article that explicitly states people who listened to songs with less positive emotional tones tended to have higher predicted intelligence scores — the strongest and most recent evidence. Supporting it are Source 9 (APA-linked, though undated and accessed via a third-party PDF host), Source 6 (PMC, 2015) linking negative-valence preferences to cognitive styles, and Source 14 (a university newsroom PDF, lower authority) referencing Rentfrow & Gosling's reflective/complex genre findings. However, the opponent's rebuttal correctly notes that Sources 9, 14, and 6 speak to genre labels or cognitive styles rather than the specific variables of emotional valence and general intelligence, partially weakening the convergence argument. Critically, Source 7 (PMC-NIH, high-authority) cautions against causal inferences in this literature, Source 8 (PMC) ties sad-music preference more to personality traits than intelligence, and Source 5 (Frontiers in Psychology) introduces contradictory findings about music type and emotional intelligence. Together these credible sources introduce meaningful caveats that prevent a clean "True" verdict; still, Source 1's direct and recent finding substantively supports the directional claim as a correlational tendency, warranting "Mostly True" with the caveat that the link is correlational, partially confounded by personality, and not as broadly generalizable as the claim implies.
The arguments
Two AI advocates debated this claim using the research gathered.
Source 1 (PMC, 2026) directly and explicitly supports the claim, finding through analysis of natural music-listening behavior that "people who listened to songs with less positive emotional tones tended to have higher predicted intelligence scores," a result framed as consistent with prior research on introspective and cognitively oriented music use — making it among the most current and authoritative evidence available. The finding is further corroborated by a convergent body of independent research: Source 9 (APA) links higher intelligence to preferences for reflective and complex music styles, Source 6 (PMC) connects negative-valence music preferences to empathizing cognitive styles, Source 14 confirms that more intelligent individuals prefer reflective and complex genres, and Source 15 ties higher intelligence and openness to music with lower positive emotional valence — together forming a robust, multi-source consensus that firmly establishes the claim as true.
You treat Source 1's “predicted intelligence” from a model on listening behavior as if it were measured IQ and a generalizable trait link, but Source 7 explicitly warns this literature is largely correlational and does not justify broad inferences about intelligence from music engagement — so your leap from one modeling association to “tend to have higher intelligence” is an overgeneralization. And your “convergent consensus” is mostly a bait-and-switch: Sources 9 and 14 are about genre labels like “reflective/complex,” not the specific variable in the motion (less-positive emotional content), while Sources 6/12 concern empathizing vs. systemizing cognitive styles (not higher general intelligence) and Source 8 shows sad-music liking tracks personality (openness/low extraversion), undercutting your claim that valence preference reliably indicates higher intelligence.
The motion overstates a specific valence→intelligence link: the only direct evidence cited is a single predictive modeling paper (Source 1, PMC: “Deep Beats, Deep Thoughts?”), while broader reviews caution that music–cognition findings are largely correlational and do not support causal or general claims about intelligence from listening preferences (Source 7, PMC-NIH: “The Role of Sounds and Music in Emotion and Cognition”). Moreover, the wider literature in the brief ties “sad music” preference more consistently to personality traits like openness and lower extraversion rather than intelligence (Source 8, PMC: “The pleasures of sad music”), and even the cognitive-styles work shows negative-valence preference characterizes one brain-type rather than higher general intelligence (Source 6/12: “Musical Preferences are Linked to Cognitive Styles”), so the claim that less-positive emotional content reliably indicates higher intelligence is misleading.
Your argument commits a straw man: the claim is correlational, not causal, and your citation of Source 7's caution against "causal inferences" is irrelevant to a claim about tendency and association — which is exactly what Source 1 demonstrates through natural listening behavior and what Sources 9 and 14 independently corroborate through separate methodologies. Furthermore, your attempt to reduce the finding to mere personality confounds ignores that Source 1 predicts intelligence scores directly from music-listening behavior rather than inferring them from personality, and that Source 9 links higher intelligence test scores — not just personality traits — to preferences for reflective and complex music; your personality-substitution argument is a red herring that fails to undermine the multi-source convergence supporting the claim.