Gerald F. Davis (firstname.lastname@example.org)
Article link: http://asq.sagepub.com/content/59/2/193
Do you agree with Jerry Davis, editor of ASQ?
In the June 2014 issue of ASQ he argues that “the core technology of journals is not their distribution but their review process”, and that the system of journals is broken. (See abstract below, and full ungated article at link above.)
In a break from our normal fare of author interviews (we have many of these scheduled for posting over the summer!), ASQ Blog is hosting a discussion of Jerry’s essay. Jerry himself will be participating in the discussion, as will others on the ASQ editorial board.
Please add your voice below in the comments!
Abstract. The Web has greatly reduced the barriers to entry for new journals and other platforms for communicating scientific output, and the number of journals continues to multiply. This leaves readers and authors with the daunting cognitive challenge of navigating the literature and discerning contributions that are both relevant and significant. Meanwhile, measures of journal impact that might guide the use of the literature have become more visible and consequential, leading to “impact gamesmanship” that renders the measures increasingly suspect. The incentive system created by our journals is broken. In this essay, I argue that the core technology of journals is not their distribution but their review process. The organization of the review process reflects assumptions about what a contribution is and how it should be evaluated. Through their review processes, journals can certify contributions, convene scholarly communities, and curate works that are worth reading. Different review processes thereby create incentives for different kinds of work. It’s time for a broader dialogue about how we connect the aims of the social science enterprise to our system of journals.
Comments are open! Comment moderation is off to ease the flow of discussion. I trust our readers can distinguish between real comments and spam. In any case, moderators will delete spam as we see it. We also reserve the right to delete ad hominem attacks and comments with sexually explicit or otherwise grossly offensive language.
Note that the original posting contained an error in the Article Link. The correct link is: http://asq.sagepub.com/content/59/2/193.
Jerry: This is a wonderful editorial. Thank you! My colleagues, students, and I conducted related work on the conceptualization and measurement of “scholarly impact.” See the following Academy of Management Perspectives article available at http://mypage.iu.edu/~haguinis
Aguinis, H., Suarez-González, I., Lannelongue, G., & Joo, H. 2012. Scholarly impact revisited. Academy of Management Perspectives, 26(2): 105-132.
As noted by former AOM President Anne Tsui (2013: 378, Management and Organization Review), “faculty members are responding to the requirements of the measurement system. When only the number of papers in certain outlets count, rational and good people will do whatever it takes to meet the expectations.” We, management scholars, know about motivation, performance, rewards, and measurement. So, we should be the ones to offer a better alternative—or we will have to live with (and by) ranking and rating systems imposed by others (e.g., Businessweek).
–Herman Aguinis, Kelley School of Business, Indiana University.
Dear Jerry and Colleagues,
As a follow-up, the following article expands on this topic (to appear in the December 2014 issue of Academy of Management Learning and Education):
Aguinis, H., Shapiro, D. L., Antonacopoulou, E., & Cummings, T. G. in press. Scholarly impact: A pluralist conceptualization. Academy of Management Learning and Education.
The article is available at http://mypage.iu.edu/~haguinis/pubs.html, and the abstract is below.
I look forward to learning about reactions and comments!
All the best,
–Herman Aguinis, Kelley School of Business, Indiana University
We critically assess a common approach to scholarly impact that relies almost exclusively on a single stakeholder (i.e., other academics). We argue that this approach is narrow and insufficient, and thereby threatens the credibility and long-term sustainability of the management research community. We offer a solution in the form of a broader and novel conceptual and measurement framework of scholarly impact: a pluralist perspective. It proposes actions that depart from the current win-lose and zero-sum view that leads to false tradeoffs such as research versus practice, rigor versus relevance, and research versus service. Our proposed pluralist conceptualization can be instrumental in enabling business schools and other academic units to clarify their strategic direction in terms of which stakeholders they are trying to affect and why, the way future scholars are trained, and the design and implementation of faculty performance management systems. We argue that the adoption of a pluralist conceptualization of scholarly impact can increase motivation for engaged scholarship and design-science research that is more conducive to actionable knowledge as opposed to exclusively career-focused advances, enhance the relevance and value of our scholarship, and thereby help to narrow the much-lamented chasm between research and practice.