Tuesday, January 24, 2017

Why the F*ck I Waste My Time Worrying about Equality


Last week, I spent an enjoyable hour of a conference hanging out with five extremely smart people in remarkably tall chairs on a stage, talking about some of the opportunities and potential pitfalls of social media as a vehicle for scientific discourse. (Video here.)

The conversation was thought-provoking, as conversations with smart people tend to be. I sometimes disagreed with the other speakers, but I always found their positions reasonable, and we often realized we agreed more than we disagreed as we delved further into a topic.

One of the issues that came up early on was the question of gender bias in online discussions of research, because data from a survey of psychologists using social media suggested some pervasive discrepancies between men and women, both in how much they participate in scientific discourse on social media and in how helpful they believe that participation is for their careers. A comment from a female participant in the open-ended section of the survey summed up a common sentiment: “I just don’t have time for this sh*t!”

Late in the panel discussion that followed, an audience member asked about whether gender bias was apparent in the panel itself—were the male panelists talking more often or longer than the female panelists? Some intrepid coders went back to the video and figured out the answer was almost certainly no. But the fact that this question was even asked seems to have offended some people online, as illustrated by this comment:


At the apparent risk of causing someone to become literally sick, I’m going to take just a moment here to wonder why the fuck I worry about gender equality.

Let’s set aside for a moment the current political context in the United States and why that might make a person especially prone to worrying about gender equality. Let’s talk about just what’s going on in our science these days.

Early on in our panel discussion last Friday, Brian Nosek made the excellent point that “science proceeds through conversation.” He went on to elaborate that scientific conversation needs criticism and skepticism in order to flourish—and I completely agree. But I also think it’s worth juxtaposing this idea that science proceeds through conversation against the data presented at the beginning of the session, which suggested some big inequalities in WHO is participating in scientific discourse online. Across various social media platforms (PsychMAP, PMDG, and Twitter), the data from the SPSP survey suggest that men participate more than women. Moreover, if you look at who is posting in the Facebook forums, it turns out most of the content is being driven by about nine people. Think about that for a moment. NINE people—out of thousands of scholars involved in these forums—are driving what we talk about in these conversations. [UPDATE 1/26/17: The "about nine people" estimate mentioned in the presentation of the SPSP survey was a ballpark estimate of the number of people IN THE SURVEY saying that they post frequently on Facebook methods groups. You could translate this estimate as "about 2% of respondents post frequently," but it should definitely NOT be taken as meaning that only nine people post on social media! The point I was trying to make here was that a very tiny fraction of the field is currently driving the majority of the conversation on these platforms, and that I think we could do better.]

The idea that conversation is central to the entire scientific enterprise highlights why we should care deeply about WHO is participating in these conversations. If there are inequalities in who is talking, that means there are inequalities in who is participating in science itself. To the extent that the forums we build for scientific discourse enable and promote equality in conversation, they are enabling and promoting equality in who can be part of science. And the reverse is true as well: If we create forums that exclude rather than include, then we are creating a science that excludes as well.*

What makes a science exclusionary? Proponents of open science often point (rightly) to things like old boys' networks and the tendency for established gatekeepers to sometimes prioritize well-known names over merit in publication, funding, or speaker-invitation decisions. But other factors influence the exclusiveness or inclusiveness of a science as well. For instance, we know from Amanda Diekman’s work on why women opt out of STEM careers that when a career is perceived as less likely to fulfill communal goals, women are more likely than men to lose interest in the field (see also Sapna Cheryan's research on gendered stereotypes in computer science). Changes that make a field seem more combative and less communal are therefore likely to disproportionately push away women (and indeed anyone who prioritizes communal goals).

Meanwhile, participating in a conversation about science obviously means not only that you are talking, but that someone is listening to you. To the extent that audience attention is finite (we only have so many hours a day to devote to listening, after all), the more one person speaks, the less attention is left over to spend on other speakers. That means the people who talk the most end up setting the threshold for getting heard—if you don’t comment as loudly or as frequently as the loudest and most frequent contributors, you risk being drowned out in the din. In such an environment, who is talking—that is, who gets to participate in science itself—becomes less of an open, level playing field and more of a competition in which people with more time and more willingness to engage in this particular style of discourse get to disproportionately drive the content of scientific conversation.**

Here again, we might think about various demographic inequalities. Take just the question of time: Women in academia tend to spend substantially more time on service commitments than do men. Scholars at teaching institutions spend more time in scheduled teaching activities than do their peers with more flexible schedules at research institutions. Primary caregivers have greater demands on their time than people with stay-at-home partners or people with the means to pay for full time childcare. If we create venues for scientific discourse where your ability to participate effectively depends on how much time you have to make your voice heard over the din, then we are effectively saying: We prioritize the voices of men more than women, of scholars at research rather than teaching institutions, and of people with more versus less childcare support.

So to those who keep asking why worry about inequality in scientific discourse: just so you know, this is what it sounds like you are saying. Why worry about inequality, because the existing inequalities don’t bother me. I’m fine with them. I’m okay with our science excluding some groups more than others. I’d like to focus on other things instead, and let psychology become more like other STEM fields in terms of what they look like demographically.

And you know what? You are totally entitled to that opinion.

And I am entitled to mine. Which is, in a nutshell: Fuck that.


--
*Note that I'm talking about any forum for scientific discourse, not just social media. For the record, I think social media offers some amazing opportunities for increasing inclusiveness in science. And I think that with some careful attention and creativity, we could maximize those benefits while mitigating some of the issues I raise here. (Here's an example of one recent attempt to do that.)

**Again, this issue is not remotely unique to social media...it's true of lab meetings, conference panels, publishing in traditional journals with limited page space, you name it.

Monday, September 5, 2016

The Only Heuristic You'll Ever Need


I don’t know about you, but when the shouting gets shouty, I like to wrap myself in a warm blanket of thoughtful nuance. Fortunately, I have here in front of me a set of six manuscripts that do exactly that, and they are headed your way in the latest special section on improving research practices in the forthcoming September issue of Perspectives on Psychological Science. 

I have talked before about the tendency for humans to love a good cognitive shortcut, and I suspect that cognitive shortcuts act as both antecedents to and consequences of the shouting matches that sometimes erupt in the ongoing conversation on research practices. One of my favorite drinking games these days* is to take a shot every time somebody claims “everyone knows X” or “nobody is arguing Y” or “I don’t think anyone would do Z.” It turns out that this is a prime example of the false consensus effect—a heuristic that leads people to overestimate the extent to which other people share their own beliefs, preferences, and behaviors. We tend to use our own beliefs and behaviors as a guesstimate and generalize from there.

Meanwhile, if I simplify the landscape of perspectives into two sides, I’m more likely to perceive the “other” side as unified, homogeneous, and extreme in their positions, and I contribute in turn to other people’s perceptions that there are only two sides. These and other heuristics tend to sink us further into polarizing arguments and unhelpful finger-pointing, and impede our ability to have constructive discussions, learn from each other, change our own minds, and build consensus.

Moreover, cognitive shortcuts also played a major role in creating the problems with our methods and practices that we are now confronting (p < .05, anyone?). As I note in my introduction to our new special section (available here, in UC’s open access repository, if you’d like a sneak peek): The single most important lesson we can draw from our past in this respect is that we need to think more carefully and more deeply about our methods and our data. Heuristics got us into this mess. Careful thinking will help get us out. The only heuristic you'll ever need in science is this: Don't rely on heuristics. 

And this is why the papers in this special section feel like a warm blanket of thoughtful nuance to me: Together, they highlight the importance of thinking carefully at each phase of the research process, from selecting among multiple possible research strategies, to analyzing one’s data, to aggregating across multiple studies to build a more comprehensive picture of a given topic area.

They hammer home the importance of thinking carefully about tradeoffs when choosing one research strategy over another (e.g., running fewer studies with larger samples or more studies with smaller samples), echoing and building on recent calls to fully consider both the pros and cons of a given research strategy when seeking to design smart changes for one’s own lab or for the field as a whole (see e.g., Finkel, Eastwick, & Reis, in press; Gelman, 2013; Ledgerwood, Soderberg, & Sparks, in press). They push us to more carefully examine and transparently communicate the assumptions we make when we analyze our data. And they unpack some of the idealized assumptions underlying various meta-analytic techniques—including p-curve and p-uniform, as well as traditional methods—and show us what happens when those assumptions are violated, as they often are in the real world. (Don’t worry, there’s a better way to do meta-analysis, and the last article in the special section explains how.)

Most importantly, the articles all provide concrete advice both on how we can be more careful and more transparent about the assumptions we make throughout the research process, and on how we can continue to improve our research practices in a thoughtful, smart, and nuanced way. 

So if you’re feeling tired of the shouting, and you’re ready for some nuance, stay tuned: The following articles are coming your way, open access, very shortly.

*Just kidding!**

**Or am I?