Or maybe it seems like a social psychology problem, and you’re not in that area, so it doesn’t even apply to you. In any case, business as usual. Onward and upward, publish or perish, keep on moving, nothing to see here.
Here’s the problem, though.
You’re running out of time.
This pesky “crisis” thing? It isn’t going away.* It isn’t limited to one area of psychology, or even just psychology. It’s not something you can ignore and let other people deal with. And it isn’t even something you can put off grappling with in your own work for just another month, semester, year, two years. The alarm bells have been sounded—alarm bells about replicability, power, and publication bias—and although these concerns have been raised before and repeatedly, a plurality of scholars across scientific disciplines are finally listening and responding in a serious way.
Now, it takes time to change your research practices. You have to go out and learn about the problems and proposed solutions, you have to identify which solutions make sense for your own particular research context, and you have to learn new skills and create new lab policies and procedures. You have to think carefully about things like power (no, running a post-hoc power analysis to calculate observed power is not a good idea) and preregistration (e.g., why do you want to preregister, and which type of preregistration will help you accomplish your goals?), and you probably have to engage in some trial and error before you figure out the most effective approaches for your lab.
So a few years ago, when someone griped to me about seeing a researcher present a conference talk with no error bars in the graphs, I nodded sympathetically but also expressed my sense that castigating the researcher in question was premature. Things take a while to percolate through the system. Not everybody hears about this stuff right away. It might take people a while to go back through every talk and find every dataset and add error bars. Let’s have some patience. Let’s wait for things to percolate. Let’s give people a chance to learn, and try new things, and improve their research practices going forward, and let’s give that research time to make its slow way through the publication process and emerge into our journals.
Now, though? It’s 2018. And you’re submitting a manuscript where you interpret p = .12 on one page as “a similar trend emerged” that is consistent with your hypothesis, and on another page you use another p = .12 to conclude that “there were no differences across subsamples, so we do not investigate this variable further”…or you’re writing up a study where you draw strong conclusions from the lack of a significant difference on a behavioral outcome between 5-year-olds and 7-year-olds, with a grand total of 31 children per group and no discussion of the limited reliability of your measure?
Or you’re giving a talk…a new talk, about new data…and you haven’t put error bars on your graphs? And for your between-subjects interaction…for a pretty subtle effect…you collected TWENTY people per cell? And you don’t talk about power at all when you’re describing the study? Or the next study? Or the next?
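(For a sense of scale, here is a rough prospective power calculation for designs like those, using a normal approximation; the specific effect sizes are illustrative assumptions, not figures from any actual study, and exact t-based power runs slightly lower.)

```python
import math

def _phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power_two_group(d, n_per_group):
    """Approximate power of a two-sided, two-sample comparison at alpha = .05
    (normal approximation to the t-test)."""
    z_crit = 1.959964                          # two-sided critical value at .05
    delta = d * math.sqrt(n_per_group / 2.0)   # noncentrality of the mean difference
    return _phi(delta - z_crit) + _phi(-delta - z_crit)

# A "medium" standardized difference (d = 0.5) with 31 children per group:
print(round(approx_power_two_group(0.5, 31), 2))  # ~0.50 -- a coin flip

# A subtle effect (say d = 0.3) compared across cells of 20 people:
print(round(approx_power_two_group(0.3, 20), 2))  # ~0.16
```

Even under generous assumptions, that first design misses a true medium-sized effect about half the time, which is exactly why “no significant difference” there supports no strong conclusion at all; and a subtle effect at 20 per cell is close to hopeless.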
Well now you’ve lost me. I’m looking out the window. I’m wondering why I’m here. Or actually, I’m wondering why YOU’RE here. Why are you here?
Are you here to science?
Well then. It’s time to pay attention.
Here is one good place to start.**
*Note, I'm not here to debate how bad the replicability crisis is. Lots of other people seem to find value in doing that, but I'm personally more interested in starting with a premise we can all agree on -- i.e., that there's always room for improvement -- and making progress on those improvements.
**And let me just emphasize that word start. I'm not saying you're out of time to finish making all improvements to your research methods and practices -- in fact, I see improving methods and practices as a process that we can incorporate into our ongoing research life, not something that really gets finished. Again, nothing is ever perfect...we can always be looking for the next step. But I do think it's time for EVERYONE to be looking for, and implementing, whatever that next step is in their own particular research context. If you find that you're still on the sidelines -- get in the game. This is not something to watch and it's not something to ignore. It's something you need to be actively engaged in.