Question 1. Implications of Findings. This article explores a trade-off inherent in conducting interdisciplinary research (IDR), which on the one hand may benefit scientists' careers through a more favorable reception (higher citation counts), but on the other may lower productivity owing to the cognitive and collaboration costs of spanning categories. This is a fascinating puzzle, applicable not only to our own careers but to a variety of creative domains. Society can arguably benefit from more research that "thinks outside of its own box." However, productivity pressures in academia are very real. What does your work imply for scholars considering engaging in IDR? Should we consider the stage of our career before embarking on an IDR project? And what does it say about the incentive system in academia?
We agree this is a puzzle with far-reaching implications. Our study is a first step in thinking about interdisciplinarity with respect to academic careers, and this trade-off between visibility and productivity is important to consider. But there are many questions we are unable to answer in this study that we hope future studies will address. One is the question you raise about the implications of doing IDR work at different points in one's career. In our data, the highest IDR scores are for papers by scientists between 11 and 20 years post-PhD. After 20 years, scientists are less likely to publish IDR work. But we did not follow a cohort of scientists, and IDR is increasing over time. Sixty percent of our scientists received their PhD prior to 1990, and 5% received theirs after 2000. So, although we had the full history of each academic's publications, we do not have enough younger scholars, and in particular pre-tenure scholars, to speak to the implications of IDR for important outcomes like getting tenure. We see this as the biggest limitation of our sample and look forward to future studies addressing this critical question. That said, our study does suggest we should understand the trade-offs for different kinds of research. The trade-offs will likely differ across institutions and fields, as will the propensities to do IDR (Table 3 shows that academics at middle-status institutions are less likely to do it).
As for the incentive system in academia, it is easy to say that academia should value quality over quantity. But we measure the number of papers published and the number of citations that accrue to them; we are not claiming to measure quality. And we don't know what is actually being valued at the institutional level. As a result, our study only tangentially sheds light on the incentive system in academia. It is the case that if institutions value IDR work, they should understand that it will come with a lower quantity of research. Erin is doing research on institutional support for IDR work, and it is possible for institutions to build these understandings into their tenure and promotion processes. It is too early to say whether these efforts will be successful, but they are promising. Finally, in these institutional efforts and more broadly, it is important to be clear on the definition of IDR. We mean something very specific: bringing together distant fields. Without a clear definition, everyone will claim their work is IDR, especially if we think the most cutting-edge research is interdisciplinary (which is the current rhetoric).
Question 2. Market-Makers vs. Market-Takers. You mention that once published (that is, after having overcome the hurdles of dissent from reviewers from heterogeneous fields), IDR is on average associated with greater visibility relative to mono-disciplinary work. You also expect this to be more true where evaluators are "market-makers" (e.g., fellow academics also engaged in research production) rather than simply "market-takers" (e.g., groups that evaluate but do not similarly engage in production). This reminds us of Berg's (2016) ASQ article on how creatives and managers differ in the ways they evaluate, and forecast the success of, novel ideas. Do you see any parallels between these two works, and can you say more about why the market-makers vs. market-takers distinction might be important?
First, let us clarify that the hurdles we hypothesize arise from the difficulties of doing the IDR work itself: the cognitive and collaborative challenges of working across disciplines. Although it is certainly possible that the review process punishes IDR work (and indeed other, qualitative evidence suggests this), we do not see evidence of it in our supplementary analyses. Review times for high- and low-IDR work are similar. And when we compare working papers with published papers, the published papers actually appear to be more interdisciplinary.
We like your question about the comparison with Berg (2016). As more macro-oriented scholars, this is not a comparison we would have made naturally, and it is good to connect our work more broadly. Berg's paper examines "creative forecasting," the ability of managers and creators to accurately predict which novel ideas will be positively received in the market. He finds that people in the creator role are more accurate than those in a manager role at forecasting the success of others' creative ideas (and he suggests this is because creators are more likely to engage in divergent thinking). He has creators and managers (circus professionals) assess videos of circus acts, and he also conducts a survey. The paper is interesting because it suggests that role is important to consider. Using the language in our paper, perhaps creators are market-makers.
The market-maker and market-taker idea that we use builds from another ASQ paper, written by Elizabeth Pontikes in 2012. She uses this distinction to think about how evaluators assess ambiguity in organizational labels. She finds that market-takers (consumers of goods in the software industry) find ambiguous labels less appealing than market-makers do. Market-makers (venture capitalists in the software industry) find these same labels appealing because they are thinking about how to change the market structure. As we thought about academic scientists, and deliberated over whether they are market-makers or market-takers, we thought explicitly about the role they play in the field. (To be clear, we are thinking here about the audience that is receiving the IDR work, not the scientists producing it.) We think academic scientists are more like market-makers because they are interested in new and novel ideas (much like the venture capitalists that Elizabeth Pontikes studied, and perhaps much like the circus act creators that Justin Berg studied). However, Pontikes finds that not all venture capitalists are similar (corporate venture capitalists act as market-takers); similarly, it may be that some types of academic scientists are more like market-takers than market-makers. It would be interesting to explore whether differences in institutions, organizational incentives, career stage, or individual attributes affect the likelihood of an academic scientist being a market-maker vs. a market-taker.
Berg's paper is slightly different in that his creators are predicting how others will receive the circus acts, whereas in Pontikes's setting the evaluators (the venture capitalists) are the audience appreciating the work directly. This is an important difference, and we see academic scientists as Pontikes sees venture capitalists: as the audience for the ideas. In thinking about how creators fit in, perhaps editors and reviewers serve the role of the market-makers in Berg's paper. If editors and reviewers are market-makers, they should be good at assessing how others will receive the work. And if editors, reviewers, and readers of the paper are all market-makers, that may explain why we don't see differences in the editorial process: at all stages we generally see appreciation for IDR work. Who are the market-takers in the academic realm? Certain types of academic scientists? Our students? School administrators? Local or federal government agencies? Journalists? We don't have an answer, but we think it's an interesting question. And going back to your earlier question about the incentive system in academia, it does seem important that market-makers remain the primary evaluators of academic work if we want to continue to value new and novel ideas.
Question 3. Variability in Visibility. You mention that IDR is associated with greater variability in visibility: a high-risk, high-reward endeavor with a greater probability of both scientific breakthroughs or "hits" and failures or "flops." You also say that IDR visibility is more likely in fields that are already highly interdisciplinary than in those that are not. This sounds like a "tipping point" problem: IDR has a lower chance of receiving visibility unless there is already a history and practice of IDR in the field. Assuming there is some value in IDR, how might academia and institutions help solve this tipping-point issue? Also, what factors (if any) other than field interdisciplinarity do you think might influence variability in visibility?
The dynamics of interdisciplinarity are a fascinating question, and the field-level tipping point in IDR is one way to think about those dynamics. What are the field-level differences in producing and appreciating IDR work? Should a field be encouraging IDR work? To some extent, we think these incentives are already in place, with the NSF and other funding agencies providing funding for IDR work. This in fact motivated our study, because we wondered whether IDR work was providing enough benefits to warrant the increased funding. Our answer? Probably. Part of the problem in studying these dynamics is that you don't know when IDR is likely to take off. We can see that the earliest IDR work in a field is likely to be well cited over time (and at one point we had hypothesized about this). But as the reviewers pointed out, this is hardly surprising and doesn't help us predict which IDR work is likely to be a blip in a field and which is likely to be the start of something big. A study of a single field over time may be better able to answer this question.
What other factors might influence variability in visibility? Outside of the usual suspects (status of the institution and journal, age and gender of the scientist), it might be interesting to look at random influences on visibility. What about when a paper comes out? Or the other papers in an issue (in the old days, when people read actual physical journals)? One could test whether timing or proximity influences whether a paper is well cited. In previous research, Erin found that, more than timing, providing some closure on an unsettled topic or debate, or offering a new result or new theory, is predictive of visibility (see Leahey & Cain, Social Studies of Science, 2013).
Question 4. Multi- vs. Inter-Disciplinarity. Strikingly, you find that although scientists who engage in interdisciplinary work (IDR) tend to be less productive than mono-disciplinary researchers, those who engage in multi-disciplinary work (i.e., writing papers across different disciplines, where each paper is mono-disciplinary) tend to be more productive than mono-disciplinary researchers. What do you think might account for this relative increase in productivity for multi-disciplinary researchers, even compared to those who work neatly within a single discipline?
This is an interesting question. One hypothesis is that multi-disciplinary scientists have a broader audience for their work: they can publish their psychology papers in a psychology journal, their sociology papers in a sociology journal, and so on, so there are more outlets for multi-disciplinary work. Another hypothesis is that the same idea might be framed for different disciplinary outlets, so these scientists may be getting multiple publications out of one idea. This may be particularly true if multi-disciplinary scientists are working in closely related but distinct fields; our multi-disciplinary measure did not take into account the distance between fields.
Question 5. Initial Idea and Data Sources. You constructed an impressively intricate dataset that fits your research question nicely. Was this your plan from the very beginning? Did you also search for alternative sources of data, or have different ideas for the project altogether when you began? How did the idea and data collection process evolve over time? And more generally, any advice for finding and putting together novel datasets?
This was a dataset collected over a long period of time, and it took far more time and effort than any of us anticipated. Taryn and Christine began collecting the data in 2003 with an interest in the diversity of collaborations among academic scientists. They were thinking of diversity primarily in terms of geography and discipline. During the initial data collection, Taryn began to work on a dissertation and then went on the job market, and Christine focused on projects closer to fruition as she put together her tenure packet. Fast forward to 2008, when Erin and Christine were introduced by a mutual friend (Joe Broschak) at the American Sociological Association meeting. As we talked about our joint interests, this data collection effort came up and seemed well suited for a project on interdisciplinarity that Erin had proposed. The initial question was about the precursors and consequences of doing IDR work. Truth be told, we started with a shared interest in whether women were more likely to engage in IDR work. Based on Erin's work on gender differences in academic focus (men are more likely to be specialists, women generalists), we expected women to be more likely to engage in IDR work. Erin and Christine were both on sabbatical in 2008-9 and collected a lot more data (with the help of many new students) in order to calculate IDR, control for field- and institutional-level characteristics, and measure the citations of papers in the original sample. Gender ended up not being a strong predictor of IDR (perhaps because women make up only 13% of our sample, a lower share than in the population), and the paper got big and complicated. So we dropped the precursors from our theorizing (although the empirical results can still be seen in Table 3) and focused on consequences. That led us to the paper you read.
So the advice? Perhaps persistence. Perhaps the benefit of friends who make connections. And our collaboration definitely highlights the benefits of having a clear idea of what you want to study so you can recognize the right dataset (or, in this case, a dataset that seemed right!). This particular data collection effort was slow: well over a decade passed before we had a published paper. But importantly, our frustrations were never with co-authors but with the very slow progress. If we hadn't enjoyed working together, we would undoubtedly have given up along the way. But we're not sure it's a model we would recommend to junior scholars anxious to maintain high rates of productivity!