Does Brain Training Actually Work? An Honest Assessment

A comprehensive, scientifically rigorous analysis of brain training research. Discover what the evidence actually shows about cognitive training, the transfer problem, and what this means for you.

35 min read · By Brain Zone Team

Brain training apps promise to make you smarter, sharper, and more mentally resilient. The $8 billion industry markets itself as a shortcut to cognitive enhancement, with colorful games that claim to boost everything from memory to problem-solving ability. But after facing intense scientific scrutiny and regulatory action, the question remains: does any of this actually work?

The honest answer isn't a simple yes or no. After reviewing hundreds of studies, multiple meta-analyses, and regulatory filings, the clearest conclusion is this: you will definitely get better at brain training games, but whether this translates to real-world cognitive improvement depends heavily on what you mean by "work," what type of training you do, and who you are.

Here's what makes this question so complicated: the scientific community itself remains genuinely divided. In 2014, 70 prominent scientists issued a consensus statement warning that brain training claims are "frequently exaggerated and at times misleading." Two months later, 133 scientists countered that the evidence for certain benefits was being unfairly dismissed. Both groups included respected researchers from top universities. This disagreement isn't about bad science—it reflects deep uncertainty about a genuinely difficult question.

What follows is a thorough examination of what the research actually demonstrates, where uncertainty remains, and what this means for anyone considering brain training. No cherry-picking, no promotional language—just an honest assessment of a complicated scientific landscape.

What does "work" actually mean?

Before we can evaluate whether brain training works, we need to be clear about what "working" looks like. When scientists study brain training, they distinguish between three very different types of improvement. Conflating these is where most of the confusion originates.

Practice effects describe getting better at the specific task you practice. If you play a memory matching game repeatedly, you'll improve at that game. This is reliably demonstrated across virtually all brain training research and is essentially uncontroversial. Your brain adapts to repeated challenges—this is expected and, frankly, unremarkable. Nobody disputes that practice makes you better at the thing you practice.

Near transfer refers to improvement on tasks closely related to training. If you train on a specific working memory task and then improve on a slightly different working memory task you've never seen before, that's near transfer. The tasks share enough common elements that skills from one help with the other. Multiple meta-analyses confirm that near transfer occurs consistently, though the effects are typically modest—think improvements of 10-15% on related measures.

Far transfer is the holy grail of brain training, and it's what companies implicitly or explicitly promise when they suggest their products will make you "smarter" or protect against cognitive decline. Far transfer means improvement on abilities quite different from training. Training on working memory games and becoming better at complex reasoning, academic performance, or everyday cognitive functioning would be far transfer. It's the difference between getting better at brain training games and getting better at life.

The critical finding from decades of research: far transfer is extraordinarily difficult to achieve. Multiple comprehensive meta-analyses, examining hundreds of studies and thousands of participants, conclude that when proper controls are used, the overall effect of far transfer approaches zero. As researchers Giovanni Sala and Fernand Gobet concluded in their 2023 review, "the lack of training-induced far transfer is an invariant of human cognition." That's academic language for: it doesn't happen.

This distinction matters enormously because getting better at brain training games has limited practical value. Nobody cares about high scores in abstract cognitive exercises. The question is whether cognitive training changes underlying mental capacity in ways that improve performance across the domains that matter in life—at work, in school, in daily functioning. That's what the entire debate centers on.

The transfer problem sits at the heart of the debate

Why is far transfer so difficult to achieve? Understanding this helps explain why reasonable scientists disagree about brain training and why simple answers remain elusive.

Edward Thorndike proposed over a century ago that transfer depends on shared elements between the learning context and the application context. The more different two tasks are, the less learning in one will benefit the other. Playing chess doesn't make you better at mathematics despite both involving logical thinking—they simply don't share enough common elements. Your chess skills stay in the domain of chess.

Decades of expertise research confirms this insight. Chess grandmasters aren't better at general reasoning; they're specifically better at recognizing and remembering chess patterns. Expert musicians don't have generally superior memory—they have superior memory for musical patterns and structures. Skills turn out to be highly domain-specific, which is why expecting a brain training game to improve general intelligence is asking for something that contradicts what we know about how learning actually works.

The problem becomes even more complex when you consider what actually improves during practice. As people practice brain training tasks, they develop task-specific strategies that reduce their reliance on general cognitive mechanisms. You might learn that certain patterns appear frequently in a particular game, or you develop a mental shortcut for solving that specific type of puzzle. What looks like cognitive improvement may actually be strategy refinement—you're getting more efficient at the specific task without your fundamental cognitive "hardware" changing at all.

The methodological challenge compounds these issues. Unlike pharmaceutical trials where placebo pills are indistinguishable from real medication, participants in brain training studies know whether they're receiving training. They're aware they're playing brain training games, and this awareness creates expectation effects that can be surprisingly powerful.

A striking 2016 study by Foroughi and colleagues demonstrated this directly: participants recruited with flyers advertising "brain training & cognitive enhancement" showed improvements equivalent to 5-10 IQ points after just one hour of training. This happened regardless of what cognitive task they performed. The researchers concluded these gains were likely driven entirely by expectations rather than genuine cognitive change. People who expected improvement showed improvement, not because their brains changed, but because their beliefs and motivation affected their test performance.

When brain training studies use proper active control groups—where control participants do an alternative activity with similar expectations of benefit—positive effects shrink dramatically or disappear entirely. Studies using passive controls, where people just go about their normal lives, consistently find larger effects than studies using active controls. This pattern strongly suggests that expectation effects account for much of what initially appears to be training benefit.

What the major studies actually demonstrate

The ACTIVE trial remains the gold standard

The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) study stands as the most influential piece of brain training research ever conducted. It enrolled 2,802 adults aged 65-94 across six sites in the late 1990s, published its first results in 2002, and followed participants for over a decade. The study's scope, methodological rigor, and long-term follow-up make its findings particularly credible.

Participants were randomly assigned to one of four groups: memory training focused on strategy-based verbal memory techniques, reasoning training emphasizing problem-solving with patterns and sequences, speed-of-processing training using visual search tasks, or a no-contact control group that received no intervention. Training consisted of ten sessions lasting 60-75 minutes each, delivered over 5-6 weeks. Some participants received "booster" sessions at one and three years to maintain gains.

The findings revealed something crucial about how brain training works—and doesn't work. Each training type improved performance on its targeted ability. People who received memory training got better at memory tasks. People who received reasoning training improved their reasoning. Speed training enhanced processing speed. So far, so good.

But here's the critical part: the effects were strictly domain-specific. Reasoning training didn't improve memory. Memory training didn't enhance reasoning. Each person improved only in the domain they practiced, not in cognitive ability generally. This pattern directly challenges the idea that brain training makes you broadly smarter or enhances general cognitive capacity.

The training gains weren't fleeting either. Effects remained statistically significant at both the 5-year and 10-year follow-ups for participants who received booster sessions. Speed training showed the largest and most durable immediate effects, while memory training effects proved less persistent over time without continued practice.

The 10-year follow-up published in 2017 produced the study's most striking finding: speed-of-processing training was associated with a 29% reduced risk of dementia compared to controls. Each additional training session correlated with approximately 10% lower dementia risk. Remarkably, neither memory nor reasoning training showed any significant dementia risk reduction—only the speed training produced this effect.
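In concrete terms, a 29% risk reduction corresponds to a hazard ratio of about 0.71. The conversion below is standard arithmetic; the compounding at the end is purely illustrative, since the study reported a per-session association, not a multiplicative dose-response model:

```python
def risk_reduction(hazard_ratio):
    """Convert a hazard ratio into an approximate percent risk reduction."""
    return round((1 - hazard_ratio) * 100)

# A 29% reduction corresponds to a hazard ratio of roughly 0.71:
print(risk_reduction(0.71))  # → 29

# Illustration only: naively compounding a ~10%-per-session association
# over ten sessions would give a hazard ratio of 0.9**10 ≈ 0.35,
# well below the observed 0.71. The per-session figure describes an
# association across participants, not a dose you can multiply out.
print(round(0.90 ** 10, 2))  # → 0.35
```

The mismatch between the naive compounding and the observed hazard ratio is a useful reminder that "10% lower risk per session" is a correlational summary, not a recipe.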

This finding has received enormous attention because it represents the only behavioral intervention shown in a randomized controlled trial to reduce dementia risk. Physical exercise has observational evidence supporting brain health benefits, but ACTIVE's speed training finding comes from an experimental design with random assignment, making causal claims more defensible.

However, important caveats apply. Dementia was assessed through Medicare claims data rather than clinical diagnosis by specialists, which introduces potential misclassification. The mechanism by which speed training might reduce dementia risk remains unknown—we don't understand why this specific training type would have this specific effect. And critically, this finding requires independent replication before being considered definitive. A single study, even a large and well-designed one, shouldn't completely reshape clinical recommendations.

The 20-year follow-up, funded by the National Institute on Aging, is currently underway. This extended analysis may provide crucial additional data about whether the dementia-protective effect persists even longer and whether we can understand the underlying mechanism. Preliminary analyses examining all-cause mortality found no significant survival benefit, tempering some of the earlier optimism about broad health impacts.

Meta-analyses paint a more complex picture

While ACTIVE suggests specific benefits for speed training in older adults, the broader research literature paints a considerably less encouraging picture for far transfer claims. Meta-analyses—studies that systematically combine results across many individual studies—provide our best window into what the overall evidence actually shows.

Monica Melby-Lervåg and Charles Hulme conducted comprehensive meta-analyses examining working memory training across 87 publications in their 2013 and 2016 reviews. Working memory—your ability to hold and manipulate information in mind—is fundamental to many cognitive tasks, so if training improved working memory capacity generally, it should enhance performance across many domains. The training programs they examined typically used tasks like n-back exercises, where you must remember items from several steps back in a sequence.

Their findings were clear but disappointing for far transfer proponents. Near transfer to similar working memory tasks was significant and reliable, with effect sizes ranging from 0.5 to 0.8—moderate to large gains by conventional benchmarks, but confined to tasks closely resembling training. But far transfer to fluid intelligence—your ability to solve novel problems and think abstractly—told a different story. When studies used active controls, the average effect size approached zero. Not small, but zero.

The type of control group proved critical. Studies using active controls, where participants did some alternative activity with similar engagement and expectations, averaged an effect size of 0.00 for far transfer. Studies using passive controls, where people just continued their normal lives, averaged 0.38. This pattern strongly implicates expectation effects rather than genuine cognitive change. Their conclusion was unambiguous: "There is no evidence that working memory training yields improvements in so-called far-transfer abilities."
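The effect sizes quoted throughout this literature are standardized mean differences (Cohen's d): the gap between two group means divided by their pooled standard deviation. A minimal sketch, using made-up post-test scores rather than any study's real data:

```python
import math

def cohens_d(treated, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    n1, n2 = len(treated), len(control)
    m1 = sum(treated) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treated) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores (not real study data):
training = [102, 98, 105, 110, 101, 99, 104, 107]
active_control = [101, 97, 106, 109, 100, 98, 103, 108]
print(round(cohens_d(training, active_control), 2))  # → 0.12
```

By convention, a d of about 0.2 is small, 0.5 medium, and 0.8 large, which is why a drop from 0.38 (passive controls) to 0.00 (active controls) means the apparent benefit vanishes entirely.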

Giovanni Sala and Fernand Gobet took an even broader view in their 2019 second-order meta-analysis—a meta-analysis of meta-analyses. They analyzed 233 effect sizes from multiple meta-analyses covering working memory training, video game training, music training, and chess instruction. After controlling for placebo effects and publication bias (the tendency for positive results to be published more often than null findings), they found the overall far-transfer effect was zero.

Their stark conclusion: "The lack of generalization of skills acquired by training is thus an invariant of human cognition." In other words, the failure of training to transfer broadly isn't a methodological quirk or a limitation of current approaches—it's a fundamental feature of how human learning works.
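Mechanically, a meta-analysis pools study-level effects by weighting each one by the inverse of its sampling variance, so more precise studies count more. A minimal fixed-effect sketch with hypothetical numbers (real analyses like Sala and Gobet's layer random-effects models and bias corrections on top of this):

```python
def pool_fixed_effect(effects, variances):
    """Inverse-variance weighted mean effect (fixed-effect meta-analysis)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    return pooled, se

# Hypothetical far-transfer effects from five studies (d, sampling variance):
effects = [0.35, -0.05, 0.10, 0.02, -0.08]
variances = [0.04, 0.02, 0.03, 0.01, 0.02]
pooled, se = pool_fixed_effect(effects, variances)
print(round(pooled, 2), round(se, 2))  # → 0.03 0.06
```

Note how one noisy positive study (d = 0.35, the largest variance) barely moves a pooled estimate dominated by the more precise null results.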

Amit Lampit and colleagues examined computerized cognitive training specifically in older adults, analyzing 52 studies with 4,885 cognitively healthy participants over age 50. They found a small but significant overall effect, with processing speed showing the strongest benefits. But crucially, they discovered that home-based training was ineffective compared to supervised, group-based training conducted in research settings. And counterintuitively, training more than three sessions per week actually reduced benefits compared to less frequent training, suggesting that more isn't always better.

This last finding illustrates an important principle: the effectiveness of cognitive training appears to depend heavily on implementation details, not just the type of exercises used. The social support, structure, and professional guidance provided in supervised settings may be as important as the cognitive exercises themselves.

The Jaeggi studies sparked hope and controversy

In 2008, Susanne Jaeggi and colleagues published a study claiming that dual n-back working memory training improved fluid intelligence—the holy grail of cognitive enhancement. The study was small but generated enormous public attention and launched hundreds of follow-up studies. If you could genuinely improve fluid intelligence through a simple training task, it would revolutionize education and cognitive science.

However, multiple well-powered replication attempts failed to confirm the findings. Thomas Redick and colleagues conducted a rigorous replication with proper active controls in 2013 and found no evidence of intelligence improvement. Todd Thompson and colleagues in 2013 found that nearly all apparent transfer effects "did not surpass test-retest practice effects"—meaning people improved on retaking tests simply because they'd seen similar tests before, not because their intelligence increased.

A 2015 meta-analysis co-authored by Jaeggi herself found a small positive effect overall, with an effect size of 0.24. But this effect lost statistical significance when studies without active controls were excluded from the analysis. When only the most methodologically rigorous studies were considered, the evidence for transfer to intelligence disappeared.

The current scientific consensus is that while people dramatically improve their n-back task performance—sometimes achieving 50-100% gains—evidence for transfer to fluid intelligence is weak at best when methodologically rigorous controls are used. The initial excitement has largely faded, replaced by recognition that improving n-back performance and improving intelligence are fundamentally different outcomes.
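The n-back task at the center of this literature is simple to describe in code. This is a minimal sketch of the stimulus stream and scoring, not any product's actual implementation; the letter set, match rate, and scoring rule are assumptions for illustration:

```python
import random

def nback_round(n=2, length=20, letters="ABCD", match_rate=0.3, seed=0):
    """Generate an n-back letter stream and the correct match answers."""
    rng = random.Random(seed)
    stream = [rng.choice(letters) for _ in range(n)]
    for _ in range(length - n):
        if rng.random() < match_rate:
            stream.append(stream[-n])  # force a match n steps back
        else:
            stream.append(rng.choice(letters))
    # a trial is a "match" if the letter equals the one n positions earlier
    answers = [i >= n and stream[i] == stream[i - n] for i in range(length)]
    return stream, answers

def score(responses, answers):
    """Proportion of trials where the match/no-match call was correct."""
    return sum(r == a for r, a in zip(responses, answers)) / len(answers)

stream, answers = nback_round()
print(score(answers, answers))  # a perfect player scores 1.0
```

Raising n makes the task harder because more intervening items must be held in mind at once; "improving your n-back level" means sustaining accuracy at higher values of n, which, as the replications above show, is not the same thing as raising intelligence.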

The 2016 comprehensive review sets the standard

The most thorough evaluation of brain training evidence was published in Psychological Science in the Public Interest by Daniel Simons and colleagues in 2016. This journal is the official publication of the Association for Psychological Science and specifically addresses topics of public importance where scientific evidence intersects with public claims and concerns.

The research team reviewed the full brain training literature and examined every study cited by brain training companies in their marketing materials. They established clear criteria for what would constitute convincing evidence: improvements should encompass a broad array of tasks beyond those trained, should persist for a reasonable time after training ends, should appear in real-life cognitive functioning rather than just laboratory measures, and should remain when accounting for motivation and expectation effects.

Their conclusions were stark. They found extensive evidence that training improves performance on trained tasks—this was uncontroversial. They found less evidence that benefits extend to closely related tasks, though near transfer does occur in some circumstances. They found little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance outside the laboratory.

Perhaps most damning: "None of the cited studies conformed to all of the best practices we identify as essential" for drawing clear conclusions about brain training effectiveness. Common problems included inadequate control groups, small sample sizes, failure to preregister analyses (which allows researchers to selectively report favorable findings), lack of long-term follow-up, and unacknowledged conflicts of interest where researchers had financial stakes in the outcomes.

The review didn't dismiss brain training entirely, but it clearly established that the scientific foundation for broad cognitive enhancement claims was far weaker than marketing suggested. The gap between what companies promise and what science demonstrates remains substantial.

Why scientists disagree becomes clearer

At first glance, the scientific debate seems puzzling. Both sides cite research, both include credentialed experts, both claim evidence supports their position. How can smart people examining the same studies reach opposite conclusions?

The answer isn't about competence or bias—it reflects genuine methodological disagreements and legitimate differences in how to interpret ambiguous evidence.

Researchers finding positive effects for brain training tend to use passive or weaker active controls, focus on measures of near transfer or outcomes similar to training tasks, may include studies with industry connections or funding, and often emphasize effects in specific populations like older adults with mild cognitive impairment rather than healthy individuals. None of these choices are necessarily wrong, but they produce different pictures of the evidence.

Researchers finding null effects for far transfer tend to use more stringent active control conditions designed to match expectations, focus specifically on far transfer measures to distantly related abilities, attempt to control rigorously for expectancy effects, and use larger sample sizes and meta-analytic methods that combine across many studies. These choices are also defensible, but they produce more skeptical conclusions.

Both approaches are scientifically legitimate, which is why the debate persists. Neither side is obviously wrong—they're answering slightly different questions with different methodological priorities.

The Stanford/Max Planck consensus statement signed by roughly 70 scientists argued that "exaggerated and misleading claims exploit the anxiety of older adults about cognitive decline" and that evidence for real-world cognitive benefits was lacking. The statement emphasized methodological weaknesses and the gap between marketing and evidence.

Two months later, 133 scientists responded that this ignored substantial evidence, particularly from large trials like ACTIVE, and that "discouraging the use of validated exercises by people who could benefit from them" caused real harm. They argued the consensus statement was overly negative and dismissed legitimate evidence.

The response was organized partly by Michael Merzenich, a pioneering neuroscientist who made fundamental discoveries about brain plasticity and co-founded Posit Science, maker of BrainHQ. This illustrates a persistent challenge in evaluating brain training evidence: many researchers with the deepest expertise have financial interests in the industry, while skeptics may lack the specialized knowledge that comes from years of working on cognitive training research.

This creates a genuine dilemma. Who should you trust—the skeptical outsiders without conflicts of interest but possibly limited expertise, or the knowledgeable insiders whose expertise comes partly from industry involvement? The most honest answer is that you need to examine the evidence itself rather than relying primarily on authority or credentials.

Methodological challenges make definitive answers elusive

Several factors make brain training research uniquely difficult compared to other areas of psychology or medicine.

True blinding is impossible. In pharmaceutical trials, participants can take a placebo pill that looks and feels identical to the real medication, allowing genuine double-blind testing where neither participants nor researchers know who received the active treatment. But in brain training research, participants know whether they're playing brain training games. Any alternative "control" activity differs noticeably from the intervention, making complete blinding unachievable.

Expectation effects are substantial and pervasive. The Foroughi study demonstrating 5-10 IQ point improvements from expectations alone should give anyone pause. When participants believe something will improve their cognition, they often perform better on tests regardless of any "real" effect. They might try harder, feel more confident, or interpret ambiguous problems more optimistically. Disentangling genuine cognitive improvement from expectation-driven performance gains is enormously challenging.

No neutral cognitive activity exists to serve as a perfect control. Any alternative activity you might use as a comparison involves some cognitive engagement, making it difficult to isolate what's actually causing observed effects. Watching educational videos engages different cognitive processes than playing brain training games. Solving crossword puzzles exercises verbal abilities differently than working memory tasks. Every potential control activity has confounding factors.

Publication bias distorts the evidence base. Studies finding positive effects are more likely to be published than studies finding no effects, potentially inflating meta-analytic estimates of brain training benefits. Researchers have careers to build, journals prefer publishing novel positive findings over null results, and negative studies often languish in file drawers. Statistical analyses called p-curve analyses suggest many published positive findings in brain training research may lack genuine evidential value once publication bias is accounted for.
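The inflation that publication bias produces can be demonstrated with a toy simulation: generate many small two-group studies in which the true effect is exactly zero, then "publish" only those whose observed effect is positive and sizable. The cutoff and sample sizes below are arbitrary illustrative choices:

```python
import random
import statistics

def simulate(n_studies=2000, n_per_group=30, true_effect=0.0, seed=1):
    """Simulate two-group studies with no real effect, then 'publish'
    only those whose observed difference is positive and sizable."""
    rng = random.Random(seed)
    published, all_effects = [], []
    for _ in range(n_studies):
        t = [rng.gauss(true_effect, 1) for _ in range(n_per_group)]
        c = [rng.gauss(0, 1) for _ in range(n_per_group)]
        d = statistics.mean(t) - statistics.mean(c)  # SD is ~1 by design
        all_effects.append(d)
        if d > 0.25:  # crude stand-in for "significant and positive"
            published.append(d)
    return statistics.mean(all_effects), statistics.mean(published)

true_mean, published_mean = simulate()
print(round(true_mean, 2), round(published_mean, 2))
```

Averaging only the "published" effects yields a comfortably positive pooled estimate even though nothing real happened; this is exactly the pattern that bias-detection methods such as p-curve analysis are designed to flag.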

Conflicts of interest are pervasive and not always transparently disclosed. A systematic review of brain training studies found that industry connections were common and often inadequately reported. Several researchers who conducted foundational academic studies later founded or became consultants for brain training companies. Financial stakes in outcomes don't necessarily invalidate findings, but they warrant careful scrutiny of methodology and interpretation.

These challenges don't make brain training research impossible, but they explain why definitive answers remain elusive even after decades of investigation. The cleanest, most rigorous studies require active controls, large samples, preregistered analyses, long-term follow-up, and measures of real-world functioning—a combination that's expensive, time-consuming, and rarely achieved.

Regulatory actions reveal the gap between claims and evidence

The Federal Trade Commission has directly addressed the disconnect between brain training marketing and scientific evidence through enforcement actions. These cases provide a stark illustration of how companies' promises exceed what research actually supports.

In 2016, Lumosity's parent company Lumos Labs settled FTC charges for $2 million, with a $50 million judgment suspended based on the company's financial condition. The FTC found that Lumosity made bold claims—that their games could improve school performance, work productivity, and athletic ability; protect against dementia and Alzheimer's disease; and help with conditions including ADHD, PTSD, and traumatic brain injury—without adequate scientific evidence to support these promises.

The agency stated that Lumosity "preyed on consumers' fears about age-related cognitive decline, suggesting their games could stave off memory loss, dementia, and even Alzheimer's disease," but "simply did not have the science to back up its ads." The settlement required Lumosity to stop making misleading claims and to notify subscribers about the FTC action.

Similarly, LearningRx paid $200,000 in 2016 over claims that their programs were "clinically proven to permanently improve" conditions including ADHD, autism, dementia, and Alzheimer's. The FTC found these claims were not supported by adequate scientific testing.

These settlements established important precedents. Brain training companies must have randomized controlled trial evidence for health claims, particularly for serious conditions. They cannot claim benefits for medical conditions without FDA clearance. They must avoid promising "real-world improvements" in work, school, or daily functioning without appropriate proof. Entertainment claims like "challenge yourself" or "fun brain games" face lower scrutiny than medical claims, but any suggestion of treating or preventing health conditions requires solid evidence.

No brain training products have received FDA approval for preventing or treating dementia, Alzheimer's disease, or other cognitive conditions. A few products have received FDA clearance for specific, narrow uses—like EndeavorRx for pediatric ADHD as an adjunct to clinical treatment—but these remain exceptions rather than the rule.

The regulatory actions don't prove brain training is worthless, but they clearly demonstrate that companies were making claims far beyond what the evidence justified. When profit motives meet public anxiety about cognitive decline, marketing can easily outpace science.

Different training types yield different evidence

Not all brain training is equivalent, and distinguishing between approaches helps make sense of conflicting research findings.

Processing speed training has the strongest evidence base, primarily from the ACTIVE trial. The finding that speed training, but not memory or reasoning training, reduced dementia risk at 10-year follow-up is particularly notable. Processing speed training typically involves quickly identifying objects in your peripheral vision or rapidly responding to targets while ignoring distractors. The tasks feel very different from traditional puzzle-solving exercises.

Why speed training might be uniquely effective remains uncertain. Some researchers speculate that processing speed is fundamental to many cognitive operations, so enhancing it could have cascading benefits. Others suggest that speed training may maintain neural efficiency in ways that other training types don't. The mechanism requires further investigation, but the ACTIVE findings give processing speed training a level of evidence that other approaches lack.

Working memory training using n-back and similar exercises reliably improves working memory performance on the trained tasks. Studies consistently show that people can increase their n-back level substantially with practice. However, evidence for transfer to intelligence or other cognitive abilities is minimal when active controls are used. The initial excitement from Jaeggi's studies has not been supported by subsequent rigorous research. Working memory training makes you better at working memory tasks, but probably not much else.

Multi-domain training targeting multiple cognitive abilities simultaneously has theoretical appeal—maybe exercising various cognitive skills together produces broader benefits than isolated training. However, research hasn't demonstrated clear superiority for multi-domain approaches. The ACTIVE trial's finding that each training type improved only its targeted domain suggests that combining approaches doesn't create synergistic effects. You don't become generally smarter by training many specific skills; you just get better at those specific skills.

Executive function training shows some promise for specific populations. Executive functions include abilities like planning, task-switching, and inhibitory control that organize and regulate other cognitive processes. A 2025 meta-analysis examining executive function training in children found small but significant effects, with younger children showing larger gains. Whether these benefits persist into adulthood or translate to academic performance remains less clear.

Individual differences determine who might benefit

Brain training doesn't affect everyone equally, and recognizing who might benefit helps target interventions more effectively.

Age matters considerably. Older adults, particularly those with mild cognitive impairment, show more consistent benefits in meta-analyses than healthy younger adults. This makes intuitive sense—people experiencing cognitive decline have more "room for improvement," while healthy young adults already performing near their cognitive ceiling have little capacity for enhancement. If you're a 25-year-old with no cognitive concerns, expecting brain training to make you substantially sharper sets you up for disappointment. If you're 70 and noticing memory changes, the potential benefit is considerably greater.

Baseline cognitive ability plays a role that varies by domain. Research suggests two competing patterns: the "compensation" effect, where individuals with lower baseline performance sometimes show larger gains because they benefit most from training, and the "magnification" effect, where those with higher abilities may be better at leveraging training for transfer because they have stronger foundational skills. Which pattern appears depends on the specific cognitive domain and training type.

Clinical populations differ substantially from healthy individuals. People with specific cognitive deficits—from mild cognitive impairment to post-stroke cognitive impairment to traumatic brain injury—may show benefits that don't appear in healthy populations. For mild cognitive impairment specifically, multiple meta-analyses show significant improvements in verbal memory and working memory from computerized training. The brain's response to training appears different when recovering from impairment versus trying to enhance already-normal functioning.

However, for ADHD specifically, a comprehensive 2023 meta-analysis examining 36 randomized controlled trials found no significant effect on ADHD symptoms when blinded assessments were used. While children improved on trained tasks, this didn't translate to reduced inattention or hyperactivity in daily life. Earlier meta-analyses showing benefits had relied more heavily on parent and teacher ratings, which are susceptible to expectation effects.

Motivation and expectations influence outcomes in ways that are difficult to disentangle from "real" cognitive change. People who believe abilities can improve—those with a "growth mindset"—may show greater training benefits. Those who enjoy challenging mental work and find the training engaging perform better than those who find it tedious. But how much of this represents genuine cognitive improvement versus expectation-driven performance enhancement remains uncertain.

Recent research adds nuance without resolving the debate

Research from 2020-2025 has refined understanding rather than revolutionizing it. Several themes have emerged.

Combined interventions consistently outperform single approaches. Physical exercise combined with cognitive training produces larger benefits than either intervention alone. A 2024 network meta-analysis examining interventions for dementia populations found that combined motor-cognitive training showed the greatest effects for global cognition, with effect sizes up to 1.0 in some studies. The synergy between physical and cognitive training appears genuine and substantial.

This makes biological sense. Exercise increases blood flow to the brain, promotes neurogenesis in the hippocampus, and stimulates release of brain-derived neurotrophic factor (BDNF), which supports neuronal growth and connectivity. Combining these physiological benefits with cognitive challenge may create conditions more conducive to lasting change than cognitive training in isolation.

Virtual reality training has emerged as a promising frontier. Meta-analyses of VR-based cognitive training show moderate effects with Hedges' g around 0.42, particularly for mild cognitive impairment populations. The FDA approved EndeavorRx, a game-based digital therapeutic delivered through a tablet, for pediatric ADHD in 2020—establishing regulatory pathways for game-based treatments. VR's immersive nature and ability to simulate real-world environments may offer advantages over traditional computerized training, though research is still accumulating.
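For readers unfamiliar with the statistic, Hedges' g is a standardized mean difference: the gap between the training and control groups expressed in pooled standard deviation units, with a small correction for sample size. A g around 0.42 means the trained group scored a bit under half a standard deviation above controls. The sketch below uses purely hypothetical numbers for illustration; it does not reproduce any study's actual data.

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) with Hedges'
    small-sample correction factor J."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # correction factor J
    return d * j

# Hypothetical example: trained group scores 4.5 points above controls
# on a test with SD of 10, in two groups of 30 participants each.
print(round(hedges_g(104.5, 100.0, 10.0, 10.0, 30, 30), 2))  # → 0.44
```

By conventional (and much-debated) rules of thumb, values near 0.2 are "small," near 0.5 "moderate," and near 0.8 "large," which is why a g of 0.42 is described as a moderate effect.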

Personalized adaptive training that adjusts difficulty in real-time based on individual performance has theoretical appeal and shows promise in some contexts. However, a 2025 meta-analysis examining executive function training in children found that non-adaptive training was actually associated with larger effects than adaptive training—contradicting expectations and highlighting how much uncertainty remains about optimal training parameters.
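To make "adaptive" concrete, here is a minimal, purely illustrative sketch (not any commercial product's algorithm) of the kind of staircase rule adaptive trainers commonly use: difficulty rises after two consecutive correct responses and falls after any error, keeping the task pitched near the player's current ability.

```python
def staircase(level, correct_streak, correct):
    """One step of a simple 1-up/2-down adaptive difficulty rule:
    raise difficulty after two consecutive correct answers,
    lower it (never below level 1) after any error."""
    if correct:
        correct_streak += 1
        if correct_streak == 2:
            return level + 1, 0      # two in a row: harder, reset streak
        return level, correct_streak
    return max(1, level - 1), 0      # any error: easier, reset streak

# Simulate a short run starting at difficulty level 3.
level, streak = 3, 0
for resp in [True, True, True, False, True, True]:
    level, streak = staircase(level, streak, resp)
print(level)  # → 4
```

Non-adaptive training, by contrast, simply holds the difficulty schedule fixed for everyone, which is what makes the 2025 finding that it outperformed adaptive training so counterintuitive.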

Long-term follow-up data from major studies continues to accumulate, providing crucial evidence about persistence of benefits. ACTIVE's 10-year data showed that some training effects persisted, particularly for speed and reasoning training, though memory training effects were less durable. Booster sessions improved maintenance of gains. The 20-year follow-up currently underway will provide even longer-term data about whether benefits continue or eventually fade. Preliminary analyses found no significant mortality benefit from cognitive training, tempering some earlier optimism about broad health impacts.

Neuroplasticity is real but often misrepresented

Brain training marketing frequently invokes neuroplasticity—the brain's ability to reorganize neural connections throughout life. This scientific concept is valid: the brain does change with experience, and these changes aren't limited to childhood or early development. Adult brains remain plastic, adapting to challenges and experiences across the lifespan.

London taxi drivers who must memorize complex city layouts develop larger hippocampi than control participants. Musicians show structural differences in motor and auditory brain regions that correlate with years of practice. Blind individuals who read Braille extensively show reorganization of visual cortex for tactile processing. Neuroplasticity is demonstrably real.

However, neuroplasticity doesn't automatically validate brain training claims. Several critical nuances get lost when "neuroplasticity" is used as a marketing buzzword.

Brain changes don't necessarily equal cognitive improvement. Your brain changes constantly in response to any experience—watching television changes your brain, scrolling social media changes your brain, reading this article changes your brain. Not all neuroplastic changes are beneficial, and changes observed after brain training might reflect different strategy use rather than increased underlying capacity.

As Sala and Gobet note in their 2023 review: "Theories based on general mechanisms such as brain plasticity predict far transfer. The empirical evidence shows no far transfer. Therefore, these theories are incorrect or incomplete." The existence of neuroplasticity doesn't predict that training will generalize broadly. Your brain might change in response to training while those changes remain specific to the trained tasks.

A rigorous 2022 study examined this directly using multiple brain imaging modalities—functional MRI, diffusion tensor imaging, and FDG-PET scans—during 8 weeks of n-back training. Despite participants substantially improving their n-back performance, researchers found no training-induced changes in brain structure, connectivity, or metabolism that differed from control participants. People got better at the task without detectable neural changes on these measures.

This doesn't mean neuroplasticity isn't occurring—it may happen at scales or in regions not captured by these imaging methods. But it challenges the simple story that brain training → neuroplastic changes → cognitive enhancement. The relationship is far more complex than marketing suggests.

What this honestly means for consumers

Given the evidence, here's a balanced assessment for anyone considering brain training.

The evidence clearly supports that you will improve substantially at whatever games or exercises you practice. This improvement is reliable, consistent, and unsurprising. Some modest transfer to similar cognitive tasks likely occurs, particularly if the tasks share common elements with your training. Older adults and those with cognitive impairment show more consistent benefits than healthy younger adults. Processing speed training has the strongest evidence for broader outcomes, including the ACTIVE trial's finding on dementia risk. Combined approaches—cognitive training plus physical exercise—are superior to either intervention alone. Training effects can persist for years, particularly if you do periodic booster sessions to maintain gains.

The evidence does not support several common claims. Brain training will not make you "smarter" in a general sense—the idea that training specific tasks enhances overall intelligence lacks support when rigorous methods are used. Promises of preventing dementia or Alzheimer's disease exceed what current evidence demonstrates, despite the intriguing ACTIVE findings that require replication. Claims about treating ADHD, PTSD, or other clinical conditions aren't supported for consumer products used outside clinical supervision and a comprehensive treatment plan. Home-based app training may not produce the same benefits as supervised laboratory training, which has been more carefully studied.

If you're considering brain training, ask yourself these questions. Does this specific product have randomized controlled trials with active controls published in peer-reviewed journals? Were studies conducted by independent researchers without financial ties to the company, or are all studies industry-funded? Do claimed effects extend beyond the trained tasks to real-world functioning, or just to similar cognitive tests? What does this training cost compared to free evidence-based alternatives like physical exercise, social engagement, and mentally stimulating activities?

The most honest recommendation is that brain training may have value as part of a cognitively engaging lifestyle, but treating it as a stand-alone solution or replacement for other health behaviors isn't justified by current evidence. If you enjoy brain training games and find them engaging, they're a reasonable leisure activity—certainly better than passive entertainment. But for evidence-based cognitive health, prioritize physical exercise (which has robust evidence for cognitive benefits), social engagement (strongly associated with reduced cognitive decline), education and cognitive engagement built over decades rather than weeks, and cardiovascular health management (because what's good for your heart is good for your brain).

The most honest conclusion acknowledges uncertainty

The question "Does brain training work?" doesn't have a simple answer because it depends fundamentally on what you mean by "work," what training approach you use, and who's doing the training.

For far transfer and general cognitive enhancement—the implicit promise of most brain training marketing—the preponderance of evidence indicates effects are minimal to non-existent when proper controls account for expectation effects. Multiple comprehensive meta-analyses from independent research groups reach this conclusion. The claim that brain training makes you broadly smarter, enhances general intelligence, or improves cognitive abilities across domains is not supported by rigorous evidence.

For near transfer and task-specific improvement, solid evidence supports modest but real benefits, particularly for older adults and those with cognitive impairment. If your goal is improving specific cognitive skills that closely resemble your training exercises, brain training can help.

For specific outcomes like dementia prevention, the ACTIVE trial's finding regarding speed-of-processing training is intriguing and deserves serious attention. A 29% risk reduction is clinically meaningful if it holds up in replication studies. But this finding comes from one trial examining one specific type of training in one population, and the mechanism remains unknown. This is promising preliminary evidence, not established fact. The 20-year follow-up may help clarify whether this protective effect is genuine and durable.
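It helps to remember that 29% is a relative risk reduction, so the absolute benefit depends on the baseline dementia rate. The numbers below are illustrative only (not the actual ACTIVE figures): with a hypothetical 14% ten-year incidence, a 29% relative reduction works out to about a 4-point absolute drop.

```python
def risk_reduction(control_risk, treated_risk):
    """Relative risk reduction (RRR), absolute risk reduction (ARR),
    and number needed to treat (NNT) from two event rates."""
    arr = control_risk - treated_risk    # absolute risk reduction
    rrr = arr / control_risk             # relative risk reduction
    nnt = 1 / arr                        # people treated per case averted
    return rrr, arr, nnt

# Illustrative: 14% baseline incidence reduced by 29% relative
# gives roughly 9.9% in the trained group.
rrr, arr, nnt = risk_reduction(0.14, 0.14 * (1 - 0.29))
print(f"RRR={rrr:.0%}, ARR={arr:.1%}, NNT={nnt:.0f}")
```

On these assumed numbers, roughly 25 people would need to train to avert one case, which is why the finding would be clinically meaningful if it replicates.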

For enjoyment and engagement, brain training may provide value even if cognitive transfer is limited. Challenging yourself mentally, working toward goals, and experiencing a sense of accomplishment have inherent worth. But this is different from the cognitive enhancement claims typically made, and you should be clear about what you're paying for.

The scientific debate continues, methodology improves, and new evidence accumulates. Perhaps future research will identify specific training approaches, delivery methods, or combinations that reliably produce far transfer. Perhaps we'll discover that certain subpopulations benefit substantially while most don't. Perhaps longer-term studies will reveal delayed benefits that short-term trials miss.

But the current evidence, honestly assessed, indicates that the bold promises of the brain training industry substantially outpace what has been scientifically demonstrated. If something sounds too good to be true—an app that prevents dementia, a game that makes you smarter, a simple exercise that enhances general intelligence—it probably is.

The most evidence-based approach to cognitive health remains decidedly unsexy: exercise regularly, stay socially engaged, keep learning throughout life, manage cardiovascular risk factors, sleep well, and challenge yourself mentally through diverse activities. Brain training might complement these fundamentals, but it can't replace them.


Frequently asked questions

Will brain training help me avoid dementia?

The ACTIVE study found that speed-of-processing training was associated with reduced dementia risk at 10-year follow-up—a 29% reduction compared to controls. This is the strongest evidence that any behavioral intervention can reduce dementia risk. However, this single finding requires replication before becoming the basis for broad recommendations. Dementia was assessed through Medicare claims rather than clinical diagnosis, and the mechanism remains unknown. Other types of cognitive training in the ACTIVE study (memory and reasoning) showed no dementia risk reduction.

No brain training product can legally claim to prevent dementia without FDA approval, which none currently have. The broader lifestyle factors with stronger evidence bases for brain health include regular physical exercise, social engagement, lifelong learning, and cardiovascular health management. Brain training, particularly speed training, may be a useful addition but shouldn't replace these established factors.

I feel sharper after using brain training apps. Isn't that proof it works?

Subjective improvement doesn't necessarily confirm genuine cognitive enhancement. Research shows that simply believing training will help can produce measurable performance improvements—the Foroughi study demonstrated 5-10 IQ point improvements from expectations alone. Additionally, people naturally feel more alert and engaged after any mentally stimulating activity, including reading, solving crosswords, or learning something new.

Feeling sharper might reflect increased confidence, reduced anxiety about cognitive decline, practice effects on similar mental activities, or the energizing effect of challenge and achievement—all of which have value, but aren't the same as lasting increases in cognitive capacity. For determining whether genuine enhancement occurred, we need controlled studies with objective measures and comparison groups.

My app says it's "scientifically proven." What should I make of this claim?

Examine the specific claims carefully and skeptically. Many apps cite research on cognitive training generally rather than testing their specific product. Even when studies exist for a particular app, they may use passive controls (making any effect look larger than it really is), small sample sizes (making results less reliable), or measure only near transfer to very similar tasks (which doesn't tell us much about real-world benefit).

Look for whether studies were published in peer-reviewed journals by independent researchers without financial ties to the company. Check whether effects extended beyond trained tasks to real-world functioning. Ask whether claims about memory improvement, dementia prevention, or treating conditions like ADHD are supported by the studies cited, or whether the research actually shows only that people improved at the games themselves.

The FTC has taken enforcement action against companies for overstating evidence, requiring a $2 million settlement from Lumosity and payments from other companies for claims that exceeded scientific support. "Scientifically proven" is a marketing phrase that may mean far less than you'd think.

Is any brain training actually worth it?

This depends on your goals and expectations. If you enjoy brain training games, find them engaging, and appreciate the challenge, they're a reasonable leisure activity—certainly more cognitively stimulating than passive entertainment. For this purpose, brain training is "worth it" in the same way puzzles, crosswords, or learning a new language is worth it.

If your goal is cognitive enhancement or dementia prevention, brain training might be a useful addition to a broader lifestyle approach, but it shouldn't be your primary strategy. The evidence is stronger for combining cognitive training with physical exercise than for cognitive training alone. Processing speed training has the best evidence for potential long-term benefits in older adults.

For most people, free or low-cost alternatives—physical exercise, social activities, learning new skills, reading challenging material—have better evidence for cognitive health than commercial brain training apps. If you're spending significant money on brain training while neglecting exercise or social connection, you're probably prioritizing wrong based on the evidence.

Why do some scientists say it works while others say it doesn't?

Scientists examining similar evidence reach different conclusions because they're often answering slightly different questions and making different methodological choices, all of which are defensible.

Researchers finding positive effects may focus on near transfer (which does occur), use passive or weaker controls (which can make effects look larger), emphasize results in specific populations like older adults with mild cognitive impairment (who show more consistent benefits), or include studies with industry connections. Researchers finding null effects for far transfer may use more stringent active controls, focus specifically on transfer to distantly related abilities, attempt to control rigorously for expectation effects, or use meta-analytic methods combining many studies.

Both perspectives reflect legitimate interpretations of a genuinely ambiguous evidence base. However, it's notable that comprehensive meta-analyses controlling carefully for methodology typically find limited far transfer, and regulatory agencies like the FTC have required companies to stop making claims that exceed what evidence supports.

Are there populations who clearly benefit more than others?

Yes, several groups show more consistent benefits. Older adults with mild cognitive impairment show significant improvements in memory and cognitive function in multiple meta-analyses. People recovering from stroke or traumatic brain injury may benefit as part of cognitive rehabilitation, though effects are still primarily task-specific.

Older adults generally show larger benefits than healthy young adults, likely because there's more room for improvement when starting from a lower baseline. Supervised, group-based training in research settings appears more effective than home-based training using apps. Combined physical and cognitive interventions outperform either approach alone across age groups.

Healthy young adults show the smallest and least consistent benefits, which makes sense—they're already performing near their cognitive ceiling, so expecting substantial enhancement is unrealistic. For children, effects depend heavily on the type of training, with some executive function interventions showing promise while working memory training has been disappointing.