I don’t know about you, but when the shouting gets shouty, I like to wrap myself in a warm blanket of thoughtful nuance. Fortunately, I have here in front of me a set of six manuscripts that do exactly that, and they are headed your way in the latest special section on improving research practices in the forthcoming September issue of Perspectives on Psychological Science.
I have talked before about the tendency for humans to love a good cognitive shortcut, and I suspect that cognitive shortcuts act as both antecedents to and consequences of the shouting matches that sometimes erupt in the ongoing conversation on research practices. One of my favorite drinking games these days* is to take a shot every time somebody claims “everyone knows X” or “nobody is arguing Y” or “I don’t think anyone would do Z.” It turns out that this is a prime example of the false consensus effect—a heuristic that leads people to overestimate the extent to which other people share their own beliefs, preferences, and behaviors. We tend to use our own beliefs and behaviors as a guesstimate and generalize from there.
Meanwhile, if I simplify the landscape of perspectives into two sides, I’m more likely to perceive the “other” side as unified, homogeneous, and extreme in their positions, and I contribute in turn to other people’s perceptions that there are only two sides. These and other heuristics tend to sink us further into polarizing arguments and unhelpful finger-pointing, and impede our ability to have constructive discussions, learn from each other, change our own minds, and build consensus.
Moreover, cognitive shortcuts also played a major role in creating the problems with our methods and practices that we are now confronting (p < .05, anyone?). As I note in my introduction to our new special section (available here, in UC’s open access repository, if you’d like a sneak peek): “The single most important lesson we can draw from our past in this respect is that we need to think more carefully and more deeply about our methods and our data. Heuristics got us into this mess. Careful thinking will help get us out. The only heuristic you’ll ever need in science is this: Don’t rely on heuristics.”
And this is why the papers in this special section feel like a warm blanket of thoughtful nuance to me: Together, they highlight the importance of thinking carefully at each phase of the research process, from selecting among multiple possible research strategies, to analyzing one’s data, to aggregating across multiple studies to build a more comprehensive picture of a given topic area.
They hammer home the importance of thinking carefully about tradeoffs when choosing one research strategy over another (e.g., running fewer studies with larger samples or more studies with smaller samples), echoing and building on recent calls to fully consider both the pros and cons of a given research strategy when seeking to design smart changes for one’s own lab or for the field as a whole (see, e.g., Finkel, Eastwick, & Reis, in press; Gelman, 2013; Ledgerwood, Soderberg, & Sparks, in press). They push us to more carefully examine and transparently communicate the assumptions we make when we analyze our data. And they unpack some of the idealized assumptions underlying various meta-analytic techniques—including p-curve and p-uniform, as well as traditional methods—and show us what happens when those assumptions are violated, as they often are in the real world. (Don’t worry, there’s a better way to do meta-analysis, and the last article in the special section explains how.)
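To make the fewer-large-studies-versus-more-small-studies tradeoff concrete, here is a minimal, hypothetical sketch using a standard normal approximation to two-sample t-test power. The specific numbers (a true effect of d = 0.4 and a budget of 300 participants per group) are my own illustrative assumptions, not figures from the special section.

```python
# A hypothetical illustration: the same total participant budget spent on
# one large study versus split across three smaller studies.
import math

def phi(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample test at alpha = .05
    (normal approximation to the t-test)."""
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return phi(ncp - z_crit)

d = 0.4        # assumed true effect size (Cohen's d) -- purely illustrative
budget = 300   # total participants per group across all studies

one_large = approx_power(d, budget)        # one study, n = 300 per group
each_small = approx_power(d, budget // 3)  # three studies, n = 100 per group

print(f"One large study:  power = {one_large:.2f}")
print(f"Each small study: power = {each_small:.2f}")
```

Under these assumptions, the single large study is very well powered while each small study is noticeably less so; which strategy is smarter depends on exactly the kinds of additional tradeoffs (e.g., generalizability across samples) that the articles discuss.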
Most importantly, the articles all provide concrete advice both on how we can be more careful and more transparent about the assumptions we make throughout the research process, and on how we can continue to improve our research practices in a thoughtful, smart, and nuanced way.
So if you’re feeling tired of the shouting, and you’re ready for some nuance, stay tuned: The following articles are coming your way, open access, very shortly.
*Or am I?