Psych 342: Everything is Fucking Nuanced
Prof. Alison Ledgerwood
Class meetings: Ongoing, forever
A common theme in discussions about replicability and improving research practices across scientific disciplines has been the debate over whether science (or a specific scientific discipline) is “in crisis.” The implicit logic seems to be that we must first establish that there is a crisis before research practices can begin to improve, or conversely, that research practices need not change if there is no crisis. This debate can be interesting, but it also risks missing the point. Science is hard, reality is messy, and doing research well requires constantly pushing ourselves and our field to recognize where there is room for improvement in our methods and practices.
We can debate how big the sense of crisis should be till the cows come home. But the fact is, whether you personally prefer to describe the current state of affairs as “science in shambles” or “science working as it should,” we have a unique opportunity right now to improve our methods and practices simply because (a) there is always room for improvement and (b) we are paying far more attention to several key problems than we were in the past (when many of the same issues were raised and then all too often ignored; e.g., Cohen, 1992; Greenwald, 1975; Maxwell, 2004; Rosenthal, 1979).
In this class, we will move beyond splashy headlines like “Why most published research findings are false,” “Everything is fucked,”* and “Psychology is in crisis over whether it’s in crisis” to consider the less attention-grabbing but far more important question: Where do we go from here? Along the way, we will learn that the problems we face are both challenging and nuanced, and that they require careful and nuanced solutions.
Week 1: Introduction to the F*cking Nuance: How we got here, and the single most important lesson we can learn going forward
Spellman, B. A. (2015). A short (personal) future history of Revolution 2.0. Perspectives on Psychological Science, 10, 886-899.
Ledgerwood, A. (2016). Introduction to the special section on improving research practices: Thinking deeply across the research cycle. Perspectives on Psychological Science, 11, 661-663.
Week 2: Estimating Replicability is F*cking Nuanced
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
Etz, A., & Vandekerckhove, J. (2016). A Bayesian perspective on the Reproducibility Project: Psychology. PLoS ONE, 11(2), e0149794.
Stanley, D. J., & Spence, J. R. (2014). Expectations for replications: Are yours realistic? Perspectives on Psychological Science, 9, 305-318.
Anderson, S. F., & Maxwell, S. E. (2016). There’s more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, 21, 1.
Week 3: Power is F*cking Nuanced
Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9, 147-163.
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365-376.
Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9, 319-332.
McShane, B. B., & Böckenholt, U. (2014). You cannot step into the same river twice: When power analyses are optimistic. Perspectives on Psychological Science, 9, 612-625.
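As a discussion starter (not from the assigned readings; a minimal sketch in plain Python, with illustrative numbers): the readings above argue that power analyses are often optimistic because they plug in effect-size estimates that are too large. A quick Monte Carlo simulation makes the consequence concrete: a study powered at 80% for d = 0.5 has only about 30% power if the true effect is actually d = 0.25.

```python
import math
import random

def simulate_power(d, n_per_group, alpha=0.05, reps=5000, seed=1):
    """Monte Carlo estimate of two-sample z-test power (SD = 1 in both groups).

    Uses the normal approximation (critical value 1.96 for two-tailed alpha = .05),
    which is reasonable at these sample sizes.
    """
    rng = random.Random(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(reps):
        m1 = sum(rng.gauss(0, 1) for _ in range(n_per_group)) / n_per_group
        m2 = sum(rng.gauss(d, 1) for _ in range(n_per_group)) / n_per_group
        se = math.sqrt(2 / n_per_group)  # SE of the mean difference, known SD = 1
        if abs((m2 - m1) / se) > z_crit:
            hits += 1
    return hits / reps

# A study planned for 80% power assuming d = 0.5 (n = 64 per group)...
print(simulate_power(d=0.5, n_per_group=64))   # ≈ .80
# ...is badly underpowered if the true effect is half that size.
print(simulate_power(d=0.25, n_per_group=64))  # ≈ .29
```

This is the intuition behind Perugini et al.'s “safeguard power”: planning around a deliberately conservative effect-size estimate protects you against exactly this drop.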
Week 4: Selecting an Optimal Research Strategy is F*cking Nuanced
Finkel, E. J., Eastwick, P. W., & Reis, H. T. (in press). Replicability and other features of a high-quality science: Toward a balanced and empirical approach. Journal of Personality and Social Psychology.
Miller, J., & Ulrich, R. (2016). Optimizing research payoff. Perspectives on Psychological Science, 11, 664-691.
Week 5: Interpreting Results from Individual Studies is F*cking Nuanced
De Groot, A. D. (2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188-194.
Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609-612.
Ledgerwood, A., Soderberg, C. K., & Sparks, J. (in press). Designing a study to maximize informational value. In J. Plucker & M. Makel (Eds.), Toward a more perfect psychology: Improving trust, accuracy, and transparency in research. Washington, DC: American Psychological Association. (See section on “Distinguishing Exploratory and Confirmatory Analyses.”)
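To make Schönbrodt and Perugini's question concrete before discussion, here is a small simulation sketch (plain Python; the true correlation of .30 and the two sample sizes are illustrative assumptions, not values taken from the paper): sample correlations bounce around far more at n = 30 than at n = 250.

```python
import math
import random
import statistics

def sample_r(true_r, n, rng):
    """Draw n pairs with population correlation true_r; return the sample r."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        # y is built so that corr(x, y) = true_r in the population
        y = true_r * x + math.sqrt(1 - true_r**2) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / math.sqrt(sum((a - mx)**2 for a in xs)
                           * sum((b - my)**2 for b in ys))

rng = random.Random(2017)
spread = {}
for n in (30, 250):
    estimates = [sample_r(0.30, n, rng) for _ in range(1000)]
    spread[n] = statistics.stdev(estimates)
    print(f"n = {n:3d}: r ranges {min(estimates):+.2f} to {max(estimates):+.2f} "
          f"(SD of estimates = {spread[n]:.3f})")
```

At n = 30 a true correlation of .30 can easily come out near zero or above .60 in any single sample; at n = 250 the estimates cluster tightly around the truth.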
Week 6: Maximizing What We Learn from Exploratory (Data-Dependent) Analyses is F*cking Nuanced
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11, 702-712.
Sagarin, B. J., Ambler, J. K., & Lee, E. M. (2014). An ethical approach to peeking at data. Perspectives on Psychological Science, 9, 293-304.
Wang, Y., Sparks, J., Gonzales, J., Hess, Y. D., & Ledgerwood, A. (2017). Using independent covariates in experimental designs: Quantifying the trade-off between power boost and Type I error inflation. Journal of Experimental Social Psychology, 72, 118-124.
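A minimal sketch of the data-peeking problem that Sagarin and colleagues address (plain Python; the look schedule and batch sizes are illustrative assumptions): when the null hypothesis is true, testing repeatedly as data accumulate and stopping at the first p < .05 inflates the false-positive rate well above the nominal 5%.

```python
import math
import random

def false_positive_rate(looks, n_step, reps=10000, seed=3):
    """The null is true (d = 0): test after each batch of n_step per group,
    stop and declare 'significant' at the first |z| > 1.96."""
    rng = random.Random(seed)
    z_crit = 1.96
    false_pos = 0
    for _ in range(reps):
        g1, g2 = [], []
        for _ in range(looks):
            g1 += [rng.gauss(0, 1) for _ in range(n_step)]
            g2 += [rng.gauss(0, 1) for _ in range(n_step)]
            n = len(g1)
            se = math.sqrt(2 / n)  # SE of the mean difference, known SD = 1
            if abs((sum(g2) / n - sum(g1) / n) / se) > z_crit:
                false_pos += 1
                break  # stop at the first significant peek
    return false_pos / reps

# One planned test at n = 40 per group: Type I error stays at the nominal level.
print(false_positive_rate(looks=1, n_step=40))  # ≈ .05
# Three peeks (after 20, 40, and 60 per group): the error rate roughly doubles.
print(false_positive_rate(looks=3, n_step=20))  # ≈ .10
```

The nuance the readings develop is that peeking is not inherently evil; the inflation is quantifiable, and it can be corrected (Sagarin et al.) or traded off deliberately (Wang et al.) rather than ignored.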
Week 7: The Role of Direct, Systematic, and Conceptual Replications is F*cking Nuanced
Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7, 531-536.
Roediger, H. L. (2012). Psychology’s woes and a partial cure: The value of replication. APS Observer, 25, 9.
Fabrigar, L. R., & Wegener, D. T. (2016). Conceptualizing and evaluating the replication of research results. Journal of Experimental Social Psychology, 66, 68-80.
Crandall, C. S., & Sherman, J. W. (2016). On the scientific superiority of conceptual replications for scientific progress. Journal of Experimental Social Psychology, 66, 93-99.
Week 8: Thinking Cumulatively about Evidence is F*cking Nuanced
Braver, S. L., Thoemmes, F. J., & Rosenthal, R. (2014). Continuously cumulating meta-analysis and replicability. Perspectives on Psychological Science, 9, 333-342.
Tsuji, S., Bergmann, C., & Cristia, A. (2014). Community-augmented meta-analyses: Toward cumulative data assessment. Perspectives on Psychological Science, 9, 661-665.
McShane, B. B., & Böckenholt, U. (2017). Single paper meta-analysis: Benefits for study summary, theory-testing, and replicability. Journal of Consumer Research, 43, 1048-1063.
Week 9: Dealing with Publication Bias in Meta-Analysis is F*cking Nuanced
Inzlicht, M., Gervais, W., & Berkman, E. (2015, September 11). Bias-correction techniques alone cannot determine whether ego depletion is different from zero: Commentary on Carter, Kofler, Forster, & McCullough, 2015.
McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11, 730-749.
Week 10: Incentive Structures Need Some F*cking Nuance
Maner, J. K. (2014). Let’s put our money where our mouth is: If authors are to change their ways, reviewers (and editors) must change with them. Perspectives on Psychological Science, 9, 343-351.
Tullett, A. M. (2015). In search of true things worth knowing: Considerations for a new article prototype. Social and Personality Psychology Compass, 9, 188-201.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631.
Pickett, C. (2017, April 12). Let's look at the big picture: A system-level approach to assessing scholarly merit. Retrieved from osf.io/tv6nb
Week 11: Keep Reading...
*Note that in contrast to the other two headlines mentioned here, Sanjay’s “Everything is Fucked” title is obviously intentionally hyperbolic for comedic effect. He goes on to write: “What does it mean, in science, for something to be fucked? …In this class we will go a step further and say that something is fucked if it presents hard conceptual challenges to which implementable, real-world solutions for working scientists are either not available or routinely ignored in practice.” His post, like the other two articles noted here, raises important issues in thoughtful ways. But if you focus only on the titles, as many people have, you might find yourself sliding into a polarizing argument about how bad things are or aren’t. And that polarizing argument can distract us from the more pressing question of how we get better, right now, starting today.