02/04/2026

The stability of social science findings depends on methodological decisions

When experts reanalysed the data behind 100 social and behavioural science publications, in about a quarter of cases they reached conclusions that differed from those of the original authors. This highlights the methodological sensitivity of research findings, according to a large international mass-collaboration study co-led by Corvinus University and published in Nature.
Corvinus University of Budapest

How much do research results depend on the exact way data are analysed? This question was examined by a large international collaboration led by Balázs Aczél (Eötvös Loránd University) and Barnabás Szászi (Eötvös Loránd University and Corvinus University of Budapest). In the project, 457 independent analysts conducted a total of 504 reanalyses of data from 100 previously published studies. The paper, published in Nature on 1 April, shows that the conclusions drawn from data can be strongly influenced by methodological decisions made during the analysis.

Each analyst received the same dataset and the same key research question, but they were free to choose how to conduct the analysis. Although everyone started from the same information, and most reanalyses broadly supported the main claims of the original studies, effect sizes, statistical estimates and levels of uncertainty often differed meaningfully. These differences were not due to a lack of expertise: experienced researchers with strong statistical backgrounds were just as likely to arrive at different results as others. 

Reanalysis results varied considerably 

In only about one third of the cases did every analyst reach exactly the same conclusion as the original authors. With a more permissive margin of error that was four times wider, this proportion rose to 57%. In other words, what we consider to be “the same result” meaningfully shapes how robust we perceive findings to be. 

Seventy-four percent of the studies reached conclusions in the same direction as the original research, while 24% found no effect or produced inconclusive results, and 2% identified an effect in the opposite direction. 

Observational studies proved less robust than experimental ones, suggesting that more complex data structures allow greater analytical flexibility and therefore more uncertainty. Larger sample sizes did not offer greater protection against variability in analytical choices. 

What next?

“These findings do not call into question the credibility of previous research, nor do they mean that the social sciences are unreliable. On the contrary, they highlight that presenting a single analysis often fails to reflect the true level of empirical uncertainty. Ignoring analytical variability can lead to unjustified confidence in scientific conclusions. In many cases there may be several alternative analytical paths that were not explored in advance but are still defensible, so these alternatives need to be made visible,” says Barnabás Szászi, one of the authors of the study. 

The paper therefore recommends wider use of multi-analyst approaches and so-called “multiverse” analyses, which evaluate several different analytical strategies, especially when addressing questions of high scientific or social importance. These approaches do not search for a single “correct” answer but instead reveal how stable or fragile scientific conclusions may be. 
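To make the idea concrete, here is a minimal sketch of a "multiverse"-style analysis on invented, simulated data (this is not the study's own code, and all names and numbers are illustrative): the same question, how much higher the treatment group scores, is answered under every combination of two analytical choices, and the spread of estimates is reported instead of a single number.

```python
# Hypothetical illustration of a "multiverse" analysis on simulated data.
# Two analytical decisions are varied: the outlier-exclusion rule and the
# choice of estimator. Each combination is one "universe".
import random
import statistics

random.seed(42)

# Simulated scores: the treatment group is shifted up by 2 points on average.
control = [random.gauss(50, 10) for _ in range(200)]
treatment = [random.gauss(52, 10) for _ in range(200)]

def trim(xs, k):
    """Optionally exclude observations more than k standard deviations
    from the sample mean; k=None keeps everything."""
    if k is None:
        return xs
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) <= k * s]

# Decision 1: outlier rule (no trimming, 2 SD, or 3 SD).
# Decision 2: estimator (difference of means vs difference of medians).
effects = []
for k in (None, 2, 3):
    for estimator in (statistics.mean, statistics.median):
        effects.append(estimator(trim(treatment, k)) - estimator(trim(control, k)))

print(f"{len(effects)} universes, effect estimates from "
      f"{min(effects):.2f} to {max(effects):.2f}")
```

Rather than privileging any one specification, the output shows the range of defensible answers, which is exactly the stability (or fragility) the multi-analyst approach is meant to reveal.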

How research studies are structured 

In standard scientific practice, a dataset is typically analysed by a single researcher or research team, and the publication presents the results of that one specific analytical pathway. Peer review assesses whether the methods are acceptable, but it rarely reveals what results might have emerged under other equally defensible statistical decisions. 

Yet empirical research involves many decision points: how the data are cleaned, how variables are defined, which statistical models or software are used, and how results are interpreted. Together, these choices create what is known as analytical variability, the flexibility that can fundamentally influence final conclusions. 
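A tiny hypothetical example (invented data, not from the study) shows how a single data-cleaning decision can move an estimate: two analysts, given the same dataset with one extreme value, make different but equally defensible choices and obtain different means.

```python
# Hypothetical example of analytical variability: the same (invented) data,
# two defensible cleaning choices, two different estimates of the mean.
scores = [48, 51, 53, 49, 50, 95, 52, 47]  # one extreme value: typo or real?

# Analyst A treats 95 as a data-entry error and drops it.
clean_a = [s for s in scores if s < 90]
mean_a = sum(clean_a) / len(clean_a)

# Analyst B keeps every observation as recorded.
mean_b = sum(scores) / len(scores)

print(round(mean_a, 1), round(mean_b, 1))  # prints: 50.0 55.6
```

Neither analyst is wrong; the gap between the two estimates is the flexibility the paragraph above describes, multiplied across every decision point in a real analysis.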

Over the past decade, the social and behavioural sciences have undergone major reforms aimed at making research more transparent, reproducible and reliable. Practices such as preregistration, registered reports, replication studies and checks of analytical reproducibility all aim to reduce the prevalence of chance findings and biased results. 

Nature also published a news article on the study and its broader context.

Photo: screenshot of the Nature homepage on 1 April 2026
