Tuesday, November 14, 2017

Walking and Talking

I’m going to say something, and you’re not going to like it: It’s a hell of a lot easier, these days, to talk the talk than to walk the walk.

I mean this in at least three different ways.

1. Low-Cost Signals vs. High-Cost Actions
It is far easier to publicly extol the importance of changing research practices than to actually incorporate better practices into your own research. I can put together a nice rant in about ten minutes about how everyone should preregister and run highly powered studies and replicate before publishing…and while you’re at it, publish slower, prioritize quality over quantity, don’t cherry-pick, and post all your data!

But actually thinking through how to increase the informational value of my own research, learning the set of skills necessary to do so, and practicing what I preach? Well, that’s far more time-consuming, effortful, and costly.

For example. Let’s say you have a paper. It’s totally publishable, probably at a high-impact journal. It reports some findings that, for whatever reason, you’re not super confident about. Do you publish it? All together now: “No way!” (See? Talk is easy.)

But do you ACTUALLY decide against publishing it? Because if you do (and I have, repeatedly), your publication count and citation indices take a hit. And your coauthors’ counts and indices take a hit. And now your bean count is lower than it might otherwise be in a system that still prioritizes beans at many levels.

“Down with the beans!” you say. “Let’s change the incentive structure of science!” Awesome. Totally agree. And saying this is easy. Do you actually do it? Do you go into the faculty meeting and present a case for hiring or promoting someone WITHOUT counting any beans? And do you do this REGARDLESS of whether or not the bean count looks impressive? Because it’s tempting to only bother with the longer, harder quality conversation if the quantity isn’t there. And, if you do focus the conversation exclusively on quality, someone is likely to ask you to count the beans anyway. In fact, even if you are armed with an extensive knowledge of the quality of the candidate’s papers and a compelling case for why quality matters, you are going to have an uphill battle to convince the audience to prioritize quality over quantity—especially if those audience members come from areas of psychology that have not yet had to grapple seriously with issues of replicability and publication bias.

Or maybe you say “yes, publish that paper with the tentative findings; just be transparent about your lack of confidence in the results! At the end of the day, publish it all…just be sure to distinguish clearly between exploratory (data-dependent) and confirmatory (data-independent) findings!” Totally agree. And again: Talk is easy. When you submit your paper, do you clearly state the exploratory nature of your findings front and center, so that even casual readers are sure to see it? If it’s not in the abstract, most people are likely to assume the results you describe are more conclusive than they actually are. But if you put it front and center, you may dramatically lower the chances that your paper gets accepted. (I haven’t tried this one yet, for exactly this reason…instead, I’ve been running pre-registered replications before trying to publish exploratory results. But again, that’s far easier to advocate than to actually do, especially when a study requires substantial resources.)

2. Superficial Shortcuts vs. Deep Thinking
It’s far easier to say you’ve met some heuristic rules about minimum sample sizes, sharing your data, or preregistering than it is to learn about and carefully think through each of these practices, what their intended consequences are, and how to actually implement them in a way that achieves those consequences.

For example. I can upload my data file to the world wide internets in about 60 seconds. Whee, open data! But how open is it, really? Can other researchers easily figure out what each variable is, how the data were processed, and how the analyses were run? Clearly labeling your data and syntax, providing codebooks, making sure someone searching for data like yours will be able to find it easily: all of these things take (sometimes considerable) time and care…but without them, your “open data” is only open in theory, not practice.
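
(As a concrete illustration, here’s a minimal sketch, in Python with pandas and entirely hypothetical file and variable names, of what generating an actual codebook alongside a shared data file might look like. Notice which part no script can write for you: the descriptions.)

    import pandas as pd

    # Hypothetical human-written descriptions: the slow part of making data truly open.
    DESCRIPTIONS = {
        "pid": "Anonymous participant identifier",
        "cond": "Condition (0 = control, 1 = treatment)",
        "rt_ms": "Response time in milliseconds, before any trimming",
    }

    def write_codebook(df, path):
        # One row per variable: name, type, missingness, example values, description.
        rows = []
        for col in df.columns:
            rows.append({
                "variable": col,
                "dtype": str(df[col].dtype),
                "n_missing": int(df[col].isna().sum()),
                "examples": ", ".join(map(str, df[col].dropna().unique()[:3])),
                "description": DESCRIPTIONS.get(col, "TODO: describe this variable"),
            })
        pd.DataFrame(rows).to_csv(path, index=False)

    write_codebook(pd.read_csv("study1_data.csv"), "study1_codebook.csv")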

Likewise, I can quickly patch together some text that loosely describes the predictions for a study I’m running, post it on OSF, and call it a preregistration. But I’m unlikely to get the intended benefits of preregistration unless I understand that there are multiple kinds of preregistration, that each kind serves a different goal, and what it takes to achieve the goal in question. Similarly, I can read a tweet or Facebook post about a preregistered paper and decide to trust it (“those keywords look good, thumbs up!”), or I can go read everything critically and carefully. Equating preregistered studies with good science is easy, and we’ve done this kind of thing before (p < .05! Must be true!). Going beyond the heuristic to think critically about what was in the preregistration and what that means for how much confidence we should have in the results…that’s much harder to do.

3. Countering a Norm in Words vs. Deeds
Now, you might be thinking: It is NOT easy to talk the talk when you’re in the [career stage, institution, or environment] that I am in! And that may be very true. But of course, even here, the talk is still easier than the walk. Talking to your adviser about the merits of clearly distinguishing between data-dependent and data-independent analyses in a paper may be a challenge…but actually convincing them to agree to DO it is probably harder. Publicly stating that you’ve preregistered something may have costs if the people around you think preregistration is silly. But asking yourself why you’re preregistering—what you hope to gain (theory falsification? Type I error control?) and how to actually implement the preregistration in a way that gets you those benefits—that’s an extra layer of effort.

So what’s the point of all this talking that I’m doing right now? It’s to acknowledge that change is hard, and costly. Anyone who tells you this is easy—that there are no tradeoffs, that we just need some new simple decision rules to replace the old ones—is oversimplifying. They are talking but not walking, in the second sense above.

But this post full of talking is also meant to be a challenge to all the talkers out there, and to myself as well. Are you really (am I really) doing what you’re advocating to the fullest extent that you can? The answer is probably no.

So: What’s the next step?
