eResearch NZ 2020

Why overfitting is bad for science: Lessons from psychology

Presentation posted on 2020-03-10, authored by Adam Bartonicek
Many published research findings in psychology cannot be replicated. Even formerly “well-established” effects such as power-posing and implicit priming have failed to replicate. The crisis is not limited to psychology: replication issues abound across numerous fields, including neuroscience and the biomedical sciences (e.g. Button et al., 2013; Ioannidis, 2005). The main causes of the replication crisis are thought to be inadequate statistical literacy and questionable research practices, such as p-hacking and “HARKing” (Hypothesizing After Results are Known). However, there may also be a less well-appreciated contributor to the replication crisis: overfitting (Yarkoni & Westfall, 2017).

Overfitting occurs when an overly complex model provides a good fit to the data it was trained on but fails to accurately predict new samples. The goal of the classical statistical frameworks used in psychology, such as OLS and maximum likelihood methods, is to provide inference by finding the best fit to the data at hand. As such, these methods are liable to overfitting, especially when used alongside automatic variable selection methods such as forward, backward, and stepwise regression. Conversely, the goal of more recent statistical and machine learning methods is to maximize prediction accuracy in new samples and to guard against overfitting directly.

Psychologists and other scientific researchers may therefore benefit from incorporating newer statistical and machine learning methods into their research in order to improve its replicability. To this end, more user-friendly open-source machine learning software packages are now being developed, such as the recent R package PredPsych and the machine learning module for JASP. The proliferation of convenient digital tools for machine learning may lead to more replicable and reliable research, in psychology and in experimental science in general.
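
As a concrete illustration of the overfitting problem described above (a minimal sketch, not taken from the presentation), the base-R code below fits a backward stepwise regression to a training sample in which all predictors are pure noise. The simulation setup, the sample sizes, and the helper simulate_sample() are assumptions made purely for illustration; only base-R functions (lm, step, predict) are used.

# Minimal illustrative sketch (assumed setup, not from the presentation):
# stepwise variable selection over pure-noise predictors overfits the
# training sample and fails to predict a new sample.
set.seed(1)

n <- 50   # observations per sample (arbitrary choice)
p <- 20   # candidate predictors, none related to the outcome

simulate_sample <- function(n, p) {
  X <- as.data.frame(matrix(rnorm(n * p), nrow = n))
  names(X) <- paste0("x", seq_len(p))
  X$y <- rnorm(n)  # the outcome is pure noise
  X
}

train <- simulate_sample(n, p)
test  <- simulate_sample(n, p)

# Backward stepwise selection by AIC, starting from the full model
full_model <- lm(y ~ ., data = train)
step_model <- step(full_model, direction = "backward", trace = 0)

# In-sample R-squared of the selected model (inflated by chance associations)
summary(step_model)$r.squared

# Out-of-sample R-squared on the fresh sample: typically near or below zero,
# i.e. the selected model predicts no better than the training-sample mean
pred <- predict(step_model, newdata = test)
1 - sum((test$y - pred)^2) / sum((test$y - mean(train$y))^2)

This kind of train/test comparison (or, more generally, k-fold cross-validation) is the sort of direct guard against overfitting that the prediction-focused methods mentioned above build in by default.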

ABOUT THE AUTHOR(S)
Adam Bartonicek is a PhD student at the Department of Psychology, University of Otago. His main interests are well-being and using new statistical learning methods for high-dimensional inference.

Dr. Narun Pornpattananangkul is a lecturer at the Department of Psychology, University of Otago. His main research interests include using big data in fMRI to study changes in reward-processing in mood disorders.

Associate Professor Tamlin Conner is a lecturer at the Department of Psychology, University of Otago. Her main research interests include the impact of health behaviours on well-being and using mobile technology for daily experience sampling.

REFERENCES
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), 0696–0701. https://doi.org/10.1371/journal.pmed.0020124

Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1122. https://doi.org/10.1177/1745691617693393
