Monday, April 17, 2017

Everything is F*cking Nuanced: The Syllabus

Psych 342: Everything is Fucking Nuanced
Prof. Alison Ledgerwood
Class meetings: Ongoing, forever

A common theme in discussions about replicability and improving research practices across scientific disciplines has been the debate over whether science (or a specific scientific discipline) is “in crisis.” The implicit logic seems to be that we must first establish that there is a crisis before research practices can begin to improve, or conversely, that research practices need not change if there is no crisis. This debate can be interesting, but it also risks missing the point. Science is hard, reality is messy, and doing research well requires constantly pushing ourselves and our field to recognize where there is room for improvement in our methods and practices.

We can debate how big the sense of crisis should be till the cows come home. But the fact is, whether you personally prefer to describe the current state of affairs as “science in shambles” or “science working as it should,” we have a unique opportunity right now to improve our methods and practices simply because (a) there is always room for improvement and (b) we are paying far more attention to several key problems than we were in the past (when many of the same issues were raised and then all too often ignored; e.g., Cohen, 1992; Greenwald, 1975; Maxwell, 2004; Rosenthal, 1979).

In this class, we will move beyond splashy headlines like “Why most published research findings are false,” “Everything is fucked,”* and “Psychology is in crisis over whether it’s in crisis” to consider the less attention-grabbing but far more important question of Where do we go from here? Along the way, we will learn that the problems we face are both challenging and nuanced, and that they require careful and nuanced solutions.


Week 1: Introduction to the F*cking Nuance: How we got here, and the single most important lesson we can learn going forward

Spellman, B. A. (2015). A short (personal) future history of Revolution 2.0. Perspectives on Psychological Science, 10, 886-899.

Ledgerwood, A. (2016). Introduction to the special section on improving research practices: Thinking deeply across the research cycle. Perspectives on Psychological Science, 11, 661-663.

Week 2: Estimating Replicability is F*cking Nuanced

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.


Etz, A., & Vandekerckhove, J. (2016). A Bayesian perspective on the Reproducibility Project: Psychology. PLoS ONE, 11(2), e0149794.


Stanley, D. J., & Spence, J. R. (2014). Expectations for replications: Are yours realistic? Perspectives on Psychological Science, 9, 305-318.


Anderson, S. F., & Maxwell, S. E. (2016). There’s more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, 21, 1-12.


Week 3: Power is F*cking Nuanced

Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9, 147-163.

Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365-376.


Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science, 9, 319-332.


McShane, B. B., & Böckenholt, U. (2014). You cannot step into the same river twice: When power analyses are optimistic. Perspectives on Psychological Science, 9, 612-625.

Week 4: Selecting an Optimal Research Strategy is F*cking Nuanced

Finkel, E. J., Eastwick, P. W., & Reis, H. T. (in press). Replicability and other features of a high-quality science: Toward a balanced and empirical approach. Journal of Personality and Social Psychology.

Miller, J., & Ulrich, R. (2016). Optimizing research payoff. Perspectives on Psychological Science, 11, 664-691.


Week 5: Interpreting Results from Individual Studies is F*cking Nuanced

De Groot, A. D. (2014). The meaning of “significance” for different types of research [translated and annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han L. J. van der Maas]. Acta Psychologica, 148, 188-194.

Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609-612. 


Ledgerwood, A., Soderberg, C. K., & Sparks, J. (in press). Designing a study to maximize informational value. In J. Plucker & M. Makel (Eds.), Toward a more perfect psychology: Improving trust, accuracy, and transparency in research. Washington, DC: American Psychological Association. (See section on “Distinguishing Exploratory and Confirmatory Analyses.”)


Week 6: Maximizing What We Learn from Exploratory (Data-Dependent) Analyses is F*cking Nuanced


Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11, 702-712.

Sagarin, B. J., Ambler, J. K., & Lee, E. M. (2014). An ethical approach to peeking at data. Perspectives on Psychological Science, 9, 293-304.

Wang, Y., Sparks, J., Gonzales, J., Hess, Y. D., & Ledgerwood, A. (in press). Using independent covariates in experimental designs: Quantifying the trade-off between power boost and Type I error inflation. Journal of Experimental Social Psychology.

Week 7: The Role of Direct, Systematic, and Conceptual Replications is F*cking Nuanced

Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7, 531-536.

Roediger, H. L. (2012). Psychology’s woes and a partial cure: The value of replication. APS Observer, 25, 9.

Fabrigar, L. R., & Wegener, D. T. (2016). Conceptualizing and evaluating the replication of research results. Journal of Experimental Social Psychology, 66, 68-80.

Crandall, C. S., & Sherman, J. W. (2016). On the scientific superiority of conceptual replications for scientific progress. Journal of Experimental Social Psychology, 66, 93-99.


Week 8: Thinking Cumulatively about Evidence is F*cking Nuanced 

Braver, S. L., Thoemmes, F. J., & Rosenthal, R. (2014). Continuously cumulating meta-analysis and replicability. Perspectives on Psychological Science, 9, 333-342.

Tsuji, S., Bergmann, C., & Cristia, A. (2014). Community-augmented meta-analyses: Toward cumulative data assessment. Perspectives on Psychological Science, 9, 661-665.


McShane, B. B., & Böckenholt, U. (2017). Single-paper meta-analysis: Benefits for study summary, theory testing, and replicability. Journal of Consumer Research, 43, 1048-1063.

Week 9: Dealing with Publication Bias in Meta-Analysis is F*cking Nuanced

Inzlicht, M., Gervais, W., & Berkman, E. (2015, September 11). Bias-correction techniques alone cannot determine whether ego depletion is different from zero: Commentary on Carter, Kofler, Forster, and McCullough, 2015.


McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11, 730-749.

Week 10: Incentive Structures Need Some F*cking Nuance

Maner, J. K. (2014). Let’s put our money where our mouth is: If authors are to change their ways, reviewers (and editors) must change with them. Perspectives on Psychological Science, 9, 343-351. 


Tullett, A. M. (2015). In search of true things worth knowing: Considerations for a new article prototype. Social and Personality Psychology Compass, 9, 188-201.

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631.


Pickett, C. (2017, April 12). Let's look at the big picture: A system-level approach to assessing scholarly merit. Retrieved from osf.io/tv6nb

Week 11: Keep Reading...



*Note that in contrast to the other two headlines mentioned here, Sanjay’s “Everything is Fucked” title is obviously intentionally hyperbolic for comedic effect. He goes on to write: “What does it mean, in science, for something to be fucked? …In this class we will go a step further and say that something is fucked if it presents hard conceptual challenges to which implementable, real-world solutions for working scientists are either not available or routinely ignored in practice.” His post, as well as the other two articles noted here, raises important issues in thoughtful ways. But if you focus only on the titles, as many people have, you might find yourself sliding into a polarizing argument about how bad things are or aren’t. And that polarizing argument can distract us from the more pressing question of how we get better, right now, starting today.

Tuesday, January 24, 2017

Why the F*ck I Waste My Time Worrying about Equality


Last week, I spent an enjoyable hour at a conference hanging out with five extremely smart people in remarkably tall chairs on a stage, talking about some of the opportunities and potential pitfalls of social media as a vehicle for scientific discourse. (Video here.)

The conversation was thought-provoking, as conversations with smart people tend to be. I sometimes disagreed with the other speakers, but I always found their positions reasonable, and often, as we delved further into a topic, we realized that we agreed more than we disagreed.

One of the issues that came up early on was the question of gender bias in online discussions of research: data from a survey of psychologists using social media suggested some pervasive discrepancies between men and women, both in how much they participate in scientific discourse on social media and in how helpful they think that participation is for their careers. A comment from a female participant in the open-ended section of the survey summed up a common sentiment: “I just don’t have time for this sh*t!”

Late in the panel discussion that followed, an audience member asked whether gender bias was apparent in the panel itself—were the male panelists talking more often or longer than the female panelists? Some intrepid coders went back to the video and figured out that the answer was almost certainly no. But the fact that this question was even asked seems to have offended some people online, as illustrated by this comment:

[Screenshot of the comment omitted.]

At the apparent risk of causing someone to become literally sick, I’m going to take just a moment here to wonder why the fuck I worry about gender equality.

Let’s set aside for a moment the current political context in the United States and why that might make a person especially prone to worrying about gender equality. Let’s talk about just what’s going on in our science these days.

Early on in our panel discussion last Friday, Brian Nosek made the excellent point that “science proceeds through conversation.” He went on to elaborate that scientific conversation needs criticism and skepticism in order to flourish—and I completely agree. But I also think it’s worth juxtaposing this idea that science proceeds through conversation against the data presented at the beginning of the session, which suggested some big inequalities in WHO is participating in scientific discourse online. Across various social media platforms (PsychMAP, PMDG, and Twitter), the data from the SPSP survey suggest that men participate more than women. Moreover, if you look at who is posting in the Facebook forums, it turns out most of the content is being driven by about nine people. Think about that for a moment. NINE people—out of thousands of scholars involved in these forums—are driving what we talk about in these conversations. [UPDATE 1/26/17: The "about nine people" estimate mentioned in the presentation of the SPSP survey was a ballpark estimate of the number of people IN THE SURVEY saying that they post frequently on Facebook methods groups. You could translate this estimate as "about 2% of respondents post frequently," but it should definitely NOT be taken as meaning that only nine people post on social media! The point I was trying to make here was that a very tiny fraction of the field is currently driving the majority of the conversation on these platforms, and that I think we could do better.]

The idea that conversation is central to the entire scientific enterprise highlights why we should care deeply about WHO is participating in these conversations. If there are inequalities in who is talking, that means there are inequalities in who is participating in science itself. To the extent that the forums we build for scientific discourse enable and promote equality in conversation, they are enabling and promoting equality in who can be part of science. And the reverse is true as well: If we create forums that exclude rather than include, then we are creating a science that excludes as well.*

What makes a science exclusionary? Proponents of open science often point (rightly) to things like old boys’ networks and the tendency of established gatekeepers to sometimes prioritize well-known names over merit in publication, funding, or speaker-invitation decisions. But other factors influence the exclusiveness or inclusiveness of a science as well. For instance, we know from Amanda Diekman’s work on why women opt out of STEM careers that when a career is perceived as less likely to fulfill communal goals, women are more likely than men to lose interest in the field (see also Sapna Cheryan's research on gendered stereotypes in computer science). Changes that make a field seem more combative and less communal are therefore likely to disproportionately push away women (and indeed anyone who prioritizes communal goals).

Meanwhile, participating in a conversation about science obviously means not only that you are talking, but that someone is listening to you. Because audience attention is finite (we only have so many hours a day to devote to listening, after all), the more one person speaks, the less attention is left over for other speakers. That means the people who talk the most end up setting the threshold for getting heard—if you don’t comment as loudly or as frequently as the loudest and most frequent contributors, you risk being drowned out in the din. In such an environment, who is talking—that is, who gets to participate in science itself—becomes less of an open, level playing field and more of a competition in which people with more time and more willingness to engage in this particular style of discourse get to disproportionately drive the content of scientific conversation.**

Here again, we might think about various demographic inequalities. Take just the question of time: Women in academia tend to spend substantially more time on service commitments than men do. Scholars at teaching institutions spend more time in scheduled teaching activities than their peers with more flexible schedules at research institutions. Primary caregivers have greater demands on their time than people with stay-at-home partners or people with the means to pay for full-time childcare. If we create venues for scientific discourse where your ability to participate effectively depends on how much time you have to make your voice heard over the din, then we are effectively saying: We prioritize the voices of men over women, of scholars at research institutions over those at teaching institutions, and of people with more childcare support over those with less.

So to those who keep asking why anyone should worry about inequality in scientific discourse, just so you know, this is what it sounds like you are saying: Why worry about inequality? The existing inequalities don’t bother me. I’m fine with them. I’m okay with our science excluding some groups more than others. I’d like to focus on other things instead, and let psychology become more like other STEM fields in terms of what they look like demographically.

And you know what? You are totally entitled to that opinion.

And I am entitled to mine. Which is, in a nutshell: Fuck that.


--
*Note that I'm talking about any forum for scientific discourse, not just social media. For the record, I think social media offers some amazing opportunities for increasing inclusiveness in science. And I think that, with some careful attention and creativity, we could maximize those benefits while mitigating some of the issues I raise here. (Here's an example of one recent attempt to do that.)

**Again, this issue is not remotely unique to social media...it's true of lab meetings, conference panels, publishing in traditional journals with limited page space, you name it.