Planning a symposium or panel on methods and practices in psychology? Here's a collection of top-notch speakers to consider inviting.* Inspired by a recent post on PsychMAP as well as #womenalsoknowstuff—not to mention the frequency with which people ask me to recommend female speakers because they can't think of any—these are all women. So now there is no excuse for the 100% male panel on the subject. In fact, you could easily have a 100% female panel of stellar experts (and it's been done! exactly once, as far as I know). Keep in mind that many of these scholars could also be excellent contributors to special issues and handbooks on methods and practices topics.
Here are names and institutions for potential speakers across a range of career stages. These scholars can all speak to issues that relate to our field's unfolding conversations and debates about replicability and improving research methods and practices. When possible, I've linked the name to a relevant publication as well so that you can get a sense of some of their work.
(And of course, this list is incomplete. If you or someone you know should be on it, please leave a comment with the scholar's name, position, institution, relevant speaking topics, and a link to a relevant paper if applicable!)
Samantha Anderson, PhD student, University of Notre Dame
Statistical power, replication methodology, more nuanced ways to determine the "success" of a replication study
Jojanneke Bastiaansen, Postdoc, Groningen
Citation distortion, bias in reporting
Christina Bergmann, Max Planck Institute Nijmegen, The Netherlands
Crowd-sourced meta-analyses, open science, improving research practices in infancy research
Dorothy Bishop, Professor, Oxford
Reproducibility, open science
Erin Buchanan, Associate Professor, Missouri State University
Effect sizes and confidence intervals, alternatives to NHST, Bayesian statistics, statistical reporting
Katherine Button, Lecturer, University of Bath
Power estimation, replicability
Krista Byers-Heinlein, Associate Professor, Concordia University
Organizing large multi-lab collaborative studies and RRRs (she leads the ManyBabies Bilingual project, an RRR at AMPPS currently in data collection), working with hard-to-recruit/hard-to-test/hard-to-define populations (bilingual infants), and making sure the media gets your science right.
Katie Corker, Assistant Professor, Grand Valley State University
Meta-analysis, replication, perspectives on open science from teaching institutions
Angelique Cramer, Associate Professor, Tilburg University
Slow science, open science, exploratory vs. confirmatory hypothesis testing, hidden multiple-testing issues in ANOVA, replication issues in the context of psychopathology research
Alejandrina Cristia, Researcher, École Normale Supérieure
Crowd-sourced meta-analyses, research practices in infancy research
Pamela Davis-Kean, Professor, University of Michigan
Large developmental data sets, replication
Elizabeth Dunn, Professor, University of British Columbia
Pre-registration, how researchers think about Bayes Factors, the NHST debate
Arianne Eason, PhD student, University of Washington
Research practices in infancy research
Ellen Evers, Assistant Professor, University of California, Berkeley
Statistical power, reliability of published work
Fernanda Ferreira, Professor, UC Davis
Open science, open access, replication, how to design appropriate replication studies when original studies involve stimuli that may be specific to certain time periods or contexts (e.g., words used in an experiment in psycholinguistics)
Susann Fiedler, Research Group Leader, Max Planck Institute for Research on Collective Goods, Bonn, Germany
Economics and ethics of science, reproducibility, publication bias, incentive structures, digital scholarship and open science
Jessica Flake, Postdoc, York University
Construct validation, measurement, instrument design
Shira Gabriel, Associate Professor, SUNY Buffalo
Editor perspective on changes in the field and implementing new ideas in journals
Kiley Hamlin, Associate Professor, University of British Columbia
How to improve methods when you study hard-to-recruit populations; personal experiences with the dangers of failing to document everything and how to prevent this problem in your own lab.
Erin Hennes, Assistant Professor, Purdue University
Simulation methods for power analysis in complex designs
Ase Innes-Ker, Senior Lecturer, Lund University
Open science, replication, peer review
Deborah Kashy, Professor, Michigan State University
Reporting practices, transparency
Melissa Kline, Postdoc, MIT
Improving practices in infancy research
Alison Ledgerwood, Associate Professor, UC Davis
Practical best practices; how to design a study to maximize what you learn from it (strategies for maximizing power, distinguishing exploratory and confirmatory research); how to learn more from exploratory analyses; promoting careful thinking across the research cycle.
Carole Lee, Associate Professor, University of Washington
Philosophy of science, peer review practices, publication guidelines
Dora Matzke, Assistant Professor, University of Amsterdam
Bayesian inference
Michelle Meyer, Assistant Professor and Associate Director, Center for Translational Bioethics and Health Care Policy at Geisinger Health System
Topics related to responsible conduct of research, research ethics, or IRBs, including ethical/policy/regulatory aspects of replication, data preservation/destruction, data sharing and secondary research uses of existing data, deidentification and reidentification, and related IRB and consent issues.
Kate Mills, Postdoc, University of Oregon
Human neuroscience open data, multi-site collaboration
Lis Nielsen, Chief, Individual Behavioral Processes Branch, Division of Behavioral and Social Research, NIH
Improving reproducibility, validity, and impact
Michèle Nuijten, PhD student, Tilburg University
Replication, publication bias, statistical errors, questionable research practices
Elizabeth Page-Gould, Associate Professor, University of Toronto
Reproducibility in meta-analysis
Jolynn Pek, Assistant Professor, York University
Quantifying uncertainty in the results of popular statistical models; bridging the gap between methodological developments and their application.
Cynthia Pickett, Associate Professor, UC Davis
Changing incentive structures, alternative approaches to assessing merit.
Julia Rohrer, Fellow, Deutsches Institut für Wirtschaftsforschung, Berlin
Metascience, early career perspective on replicability issues
Caren Rotello, Professor, UMass Amherst
Measurement issues, response bias, why replicable effects may nevertheless be erroneous.
Victoria Savalei, Associate Professor, University of British Columbia
The NHST debate, how people reason about and use statistics and how this relates to the replicability crisis, how researchers use Bayes Factors.
Anne Scheel, PhD student, Ludwig-Maximilians-Universität, Munich
Open science, pre-registration, replication issues from a cognitive and developmental psychology perspective, early career perspective
Linda Skitka, Professor, University of Illinois at Chicago
Empirically assessing the status of the field with respect to research practices and evidentiary value; understanding perceived barriers to implementing best practices.
Courtney Soderberg, Statistical and Methodological Consultant, Center for Open Science
Pre-registration and pre-analysis plans, sequential analysis, meta-analysis, methodological and statistical tools for improving research practices.
Jessica Sommerville, Professor, University of Washington
Research practices in infancy research.
Jehan Sparks, PhD student, UC Davis
Practical strategies for improving research practices in one's own lab (e.g., carefully distinguishing between confirmatory and exploratory analyses in a pre-analysis plan).
Barbara Spellman, Professor, University of Virginia
Big-picture perspective on where the field has been and where it’s going; what editors can do to improve the field; how to think creatively about new ideas and make them happen (e.g., RRRs at Perspectives on Psychological Science)
Sara Steegen, PhD student, University of Leuven, Belgium
Research transparency, multiverse analysis
Victoria Stodden, Associate Professor, University of Illinois at Urbana-Champaign
Enabling reproducibility in computational science, developing standards of openness for data and code sharing, big data, privacy issues, resolving legal and policy barriers to disseminating reproducible research.
Jennifer Tackett, Associate Professor, Northwestern
Replicability issues in clinical psychology and allied fields
Sho Tsuji, Postdoc, UPenn and LSCP, Paris
Crowd-sourced meta-analysis
Anna van 't Veer, Postdoc, Leiden University
Pre-registration, replication
Simine Vazire, Associate Professor, UC Davis; Co-founder, Society for the Improvement of Psychological Science (SIPS)
Replication, open science, transparency
Anna de Vries, PhD student, Groningen
Citation distortion, bias in reporting, meta-analysis
Tessa West, Associate Professor, NYU
Customized power analysis, improving inclusion in scientific discourse
Edit (6/27/17): Note that this list doesn't even try to cover the many excellent female scholars who could speak on quantitative methods more broadly—I will leave that to someone else to compile (and if you take this on, let me know and I'll link to it here!). In this list, I'm focusing on scholars who have written and/or spoken about issues like statistical power, replication, publication bias, open science, data sharing, and other topics related to core elements of the field's current conversations and debates about replicability and improving research practices (i.e., the kinds of topics covered on this syllabus).
Julia Moeller, Postdoc, Yale University
Intensive longitudinal data / experience sampling, person-oriented methods (e.g., latent profile analysis, co-occurrence network analysis); relevant papers: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01389/full and https://osf.io/fze7d/