Zbaracki (1998). The Rhetoric and Reality of Total Quality Management

Mark Zbaracki – Western University, Ivey Business School

Mayur P. Joshi – University of Manchester, Alliance Manchester Business School

Article link: https://www.jstor.org/stable/2393677

1. If we were to go down memory lane, could you tell us something about how the paper came about? What were the key triggers that led you to decide to examine TQM and how did you come up with the idea of using the phrase rhetoric and reality in the title, which succinctly describes the paper and yet pushes the reader to read through?

Mark: Let me start with a story. When I was a doctoral student, Bob Sutton told me that my work experience would not help me as an academic, because the expertise you develop while working in industry would not be valuable in academia. I think he was right in that my work experience did not always match well with the academic knowledge. That said, I arrived at Stanford at the height of a particular version of institutional theory, and there was a conversation going around Stanford about both Meyer and Rowan (1977) and DiMaggio and Powell (1983). A great deal was made of the distinction between technical settings and institutional settings––suggesting that in technical settings one would expect less ceremonial conformity than in institutional settings. I kept looking at it and saying, “No! This is not right as a formulation.” Because from my work experience at IBM—a seriously competitive context—I saw what looked like a version of ceremonial conformity. So I said to myself, “Somehow I need to understand what this is all about!” That became the impetus for my dissertation, which focused on TQM––at that time a hot fad. I was trained as an industrial engineer, so I did believe in the merits of TQM (and Six Sigma). It had a grounding in something that was not just faddish; it actually had some substantive content, but that seemed to get lost. The structure of the dissertation was to show how that plays out. What was cool about it was that because it was a fad, it had hit every quadrant of the 2 X 2 built around the extent of the technical and institutional dimensions of the settings. So from a theoretical sampling standpoint, I thought I had to go out and examine TQM in all these different contexts. This was the structure. It seemed to work okay.

There were two critical moments in the structure of the article. One you pick up on in your question around the phrase rhetoric and reality. The title actually came from a discussion with Bob Sutton. I was just describing what was going on in the field and he suggested that what I was describing was about “the rhetoric and reality of TQM.” That became the title. The second was that Bob listened to the story and said, “This is a story of equifinality.” In the paper, I tried to demonstrate that the same phenomenon happens whether you look at the institutional or the technical settings, so a similar outcome is happening across all four different quadrants. Whereas you usually try to look for differences across quadrants, my research was comparing the quadrants and showing how similar they were. In other words, the process was driving roughly the same outcome in places where we would predict the outcome to be different. These two things together made it a powerful story, I think.

TQM was an example of the times. The phenomenon demonstrates the dramaturgy of businesses and the way we think of action. The moments are not about rationality; they are not about optimization; they are not about problem-solving. They are rituals by which we define ourselves. That is not to say that the rituals are unimportant or to undermine what the businesses do, but to recognize that the rituals are an important part of the action, and we should not reduce it to a simple consequential outcome. 

2. If you were to reflect on the organization theory scholarship over the last two decades, how would you picture the ways scholars have followed up on your work?

Rather than answering your question directly by going into the citations, I would share some of the scholarly discourses that resonate with the heart of the TQM paper. One person who gets at what I was trying to achieve is Thomas Greckhamer (e.g., Greckhamer, 2005) with his Qualitative Comparative Analysis (QCA) approach. Thomas saw that my TQM piece was trying to use a fuzzy set notion to define what TQM is to people. QCA is a cool technique to work with the kind of equifinality that I showed across the four quadrants of the matrix with technical and institutional dimensions.

Another of my favorite ways it has been picked up is by Beth Bechky, following the literature on inhabited institutions, as she demonstrates the importance of work and how people actually inhabit institutions. Part of Beth’s claim (e.g., Bechky, 2011) is that if we actually pay attention to the work people do, we will understand the challenges that they face.

Dan Levinthal picked it up in a very imaginative way with his work on adaptation in rugged landscapes. Dan once told me—his summary of my work—that it is an ongoing statement about technology implementation: these things are harder to implement than they seem. My work shows people those challenges.

Callen Anthony (e.g., Anthony, 2018) also picks it up indirectly when she talks about epistemic technologies. (I am incredibly jealous of Callen because she was wise enough to adapt the term to help people make sense of her ideas.) You can think of TQM as an epistemic technology. There is a common thread across the work of Callen in what she is doing with epistemic technologies, what Beth is doing in her work of inhabited institutions, what Dan is doing in the form of rugged landscapes, and what I did with my TQM paper.

If you are talking about a more direct follow-up, no particular article comes to my mind that followed up as I would have loved to see. I was very intentional in saying, “Let’s actually think about what constitutes TQM. How can we think about the constituent elements? And can we show what that means for an organization to adapt?” My co-author, Mark Bergen, saw that, and he also saw consulting implications in what I was doing. At the time I was mortified, but I think he was right. You can apply this to any managerial practice, or what Callen would call epistemic technologies.

3. Earlier, in response to the first question, you mentioned the 2 X 2 built around technical and institutional settings, and in response to the second question, you mentioned QCA. If we think of your TQM paper, it does not seem to fit the so-called “standard” templates of qualitative research. Where do your methods come from, and how do you unpack the black box of technologies like TQM?

My short answer would be, well, exactly! During the early phase of the TQM study, I contacted Steve Barley and asked the same question: Where do you get your methods from? And Steve said, “I don’t get them; I make them up to address the question I want to address.” That was both helpful and intimidating. Helpful in the sense of it was liberating to feel that I did not have to copy someone else’s methods, and I could create my own. But of course that was intimidating, as I was like “Shit, I will not be able to copy someone else’s methods!”

So the methods in the TQM piece were quite intentional, and they evolved as I conducted the study and as I wrote the manuscript. If you go to the paper, there is a table where I give you all the different techniques that make up TQM and I compare who uses them and how frequently they use them. That came about through an evolving process. I realized it would be a whole lot more effective if I could get a sense of what people meant by TQM, which meant I then had to go back to all the people I had talked to before and systematically ask them the same set of questions. What I wanted to do with that data was use it to define the reality that people were constructing and to compare those realities. This is the fuzzy set dimension of my research that Thomas picked up on as he moved into QCA methods.

There is another dimension of the methods in that paper, which I intended to convey: if we take social construction seriously, then we need to understand what people are actually constructing. That’s what those methods were trying to get at. In other words, what world are they constructing when they encounter TQM? If I were to go back and reframe what I was doing now, I would think not just about what kinds of tools get incorporated, but also about what aspects of the world people are attending to. That would be with reference to what Beth talks about regarding inhabited institutions. Not just what’s the tool, not just what’s the work, but also who are the people they hang out with, and what are the institutional structures that inform that. I wish I had more time! I was trying to do a similar thing with pricing (Zbaracki and Bergen, 2010), in uncovering what world people are constructing and how they juxtapose the interactions between those different worlds.

4. Your paper presents a view of TQM as an instance of the enduring problem of management fads, where you refer to other management practices like just-in-time, reengineering, and management by objectives. We have recently seen other management approaches like Six Sigma, agile, and lean management (including the lean startup approach). What do you think has changed about management approaches since you wrote the paper in 1998?

My short answer to the first part of your question would be: I did not anticipate something like Six Sigma following TQM. In the TQM paper, what I tried to do was anchor TQM in what would be the equivalent of Six Sigma––statistics from the 1920s. It’s a simple statistical test, where you pay attention to the observations falling outside the 3 standard deviation limits on either side of the process mean. This has not changed. You can think of TQM—and Six Sigma—as a collection of practices anchored in a simple statistical comparison that helps people make choices more effectively. That’s what the Japanese figured out and that’s what a lot of production systems are predicated on. My conception was that TQM needs to be seen as constructed in the mind of the actor. It’s not material in itself, but a conceptual apparatus that helps you observe the materiality being socially constructed. Such tools have been around for a long time.
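The three-sigma logic Mark describes can be sketched in a few lines of Python (a minimal illustration only; the measurements, function names, and thresholds are hypothetical and not drawn from the paper):

```python
# Minimal sketch of the statistical core shared by TQM and Six Sigma:
# flag observations falling outside mean +/- 3 standard deviations.
from statistics import mean, stdev

def control_limits(baseline, sigmas=3.0):
    """Estimate lower/upper control limits from in-control baseline data."""
    m, s = mean(baseline), stdev(baseline)
    return m - sigmas * s, m + sigmas * s

def out_of_control(samples, lower, upper):
    """Return the observations falling outside the control limits."""
    return [x for x in samples if x < lower or x > upper]

# Hypothetical baseline measurements of a part dimension (in mm),
# taken while the process is believed to be in control.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
lo, hi = control_limits(baseline)

# A later production run: 10.6 lies outside the 3-sigma limits.
print(out_of_control([10.0, 10.1, 10.6, 9.9], lo, hi))  # [10.6]
```

In practice this is the logic of a Shewhart control chart: the limits are estimated from a period when the process is known to be in control, and later observations are judged against them.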

I treated TQM as a fad, but it was based on a technology that I valued; I did not anticipate that the fad would morph into something like Six Sigma. But another dimension of what has happened, I think, is that we have moved into a post-production world in North America. Jerry Davis has documented that very effectively. If you look at the largest organizations when I was doing my TQM study, manufacturing was still very dominant; it is much less so now. That is a subtext of my career. When I arrived at the University of Chicago, which has a dominant base in economics and business-related fields, there was much less focus on engineering and production. I realized that I would quickly become marginal if I only focused on TQM. That’s why and how I switched to pricing (e.g., Zbaracki, 2007; Zbaracki and Bergen, 2010)—which had its own challenges.

The bigger issue—what these things have in common with what you [i.e., Mayur] are talking about in your research, or with what Callen Anthony calls epistemic technology—is the core of what I am doing: what it means to deal with what I think of as socially constructed technology. The image that I always had is that it sits between the material (physical) world and the completely socially constructed world. You can think along the lines of Berger and Luckmann (1966), though Goffman’s Frame Analysis is probably more effective. It has implications for how people gear into the material world, but it’s a way of abstracting away from the material world and trying to find order there. TQM, for example, is a way of measuring variance for a whole range of products in the material world. Measuring the dimensions of products in a post-production world becomes a little more complicated. For example, when Callen is studying financial measures, there isn’t a material object in the same sense there is with respect to something like TQM. So her methods and approach are going to be different.

5. Following up on your point about epistemic technologies, how do you compare TQM with technologies like AI and how do we examine them?

I guess I have two avenues of responding to this. One of them is that the reason I chose TQM was that I am trained as an industrial engineer, and so I was trained to believe in the value of TQM. The title “The rhetoric and reality of TQM” was not intended as a cynical statement; rather, it was puzzling over a phenomenon. I simply wanted to say, “Look, when people talk about TQM and try to deal with the core ideas, their rhetoric and the reality of what they do relate in very complicated ways.” So I began with the premise that there was something there, something of value that people are wrestling with—hopefully! I think that is also absolutely true of AI. There is no doubt there is merit in what we can do with AI. But there is a question about the dynamics behind how people talk about AI. In general, I try not to drop my assumption of good faith in the epistemic technologies.

But there are two dynamics that we need to unpack behind the good faith. One of them is the process of learning. This is the theme in my recent research with Claus Rerup (see Rerup and Zbaracki, 2021; see also Maslach et al., 2018) where we introduce validity and reliability. You can think of validity as learning processes that produce knowledge that can be used for understanding, prediction, and control. In the case of TQM, for instance, you are using it to predict what is going on in the production processes. With AI, I think that’s the seductive beauty of it: the hope that we can predict and control what’s going on in the world. In general, that’s our taken-for-granted understanding of learning; to the extent that knowledge is valid it can be used to understand, predict, and control. And we think that it is sufficient. However, Claus and I argue that it is a necessary but not a sufficient condition for thinking about knowledge. The second dimension you have to take into account is reliability. That is, the processes that produce knowledge need to be public, stable, and shared. That is where the problem becomes more interesting.

So think of TQM. Validity was the key theme in the piece by Hackman and Wageman (1995), which was looking at what TQM became and questioning whether it was valid. It’s what I try to pick up in the table in the paper that shows how the understanding of TQM varies across the settings. The methods there were designed to understand whether TQM as it was implemented could be considered valid. Quite often, it wasn’t, a point that I think of as consistent with Hackman and Wageman’s explanation. And I think the AI applications that are not valid are quite similar. As you describe in your own research, the storytelling by data scientists—when they describe what they have accomplished with AI—is a quest to sell the technology in subsequent contexts. When they are proposing their models as useful to an organization, I think that is exactly the kind of dynamic that matters. You can also think of that kind of storytelling more broadly. What we saw with TQM was a kind of outsized projection about what organizations could achieve by adopting it. Perhaps those claims are not entirely valid. The same is true with the kind of stories we tell with AI: the seductive beauty of AI is it promises things that may not stand up against what it can actually do.

So there is already a problem with validity, and that’s where we usually stop. But I think the problem becomes more interesting when you add reliability to the dynamics, because we assume that things that are not valid are not going to diffuse and become widely accepted. The classic example, or counterpoint, is vaccine adoption. There is, within a surprisingly large portion of the population, a public, stable, and shared belief that vaccines are not efficacious, which upsets the medical scientists to no end. But for social scientists that should be just a repeated finding. If that is indeed what people believe, we need to take it into account when theorizing about epistemic technologies. You can think of a 2 X 2 built across the reliability and validity of knowledge claims emanating from epistemic technologies. The ideal condition is that knowledge that is valid also becomes reliable. But in the paper Claus and I show that people have come up with new valid knowledge, at the cutting edge of science, that is not public, shared, and stable (i.e., not reliable). The situation becomes more problematic, and more interesting, where knowledge is public, stable, and shared (i.e., reliable) but not valid. Or, perhaps worse, circumstances in which people are propagating knowledge with no expectation of it being either reliable or valid. There is a form of skepticism that underlies those latter two categories. To some degree those cells in the 2 X 2—invalid but reliable knowledge, or invalid and unreliable knowledge—are the most interesting.

That leads me to the second dynamic of why epistemic technologies become challenging: if you take validity seriously, it does not necessarily mean the knowledge is good for the world. This is the second issue that I am wrestling with right now. There is a piece that I wrote for Research in the Sociology of Organizations in honor of Jim March (Zbaracki et al. 2021)—the volume is titled Carnegie goes to California (Beckman 2021). (Here I need to make a huge shout out to Christine M. Beckman, who in addition to her role as the EIC of ASQ, also edited that volume. I love that volume because it shows the side of Jim that has been enormously influential for me; Christine saw it as well and put together a beautiful collection of papers.) Our contribution—written with colleagues at Western University, two doctoral students, Cam McAlpine and Julian Barg, and my colleague Lee Watkiss—picks up on one of my favourite pieces, which Jim titled “Model bias” (March, 1972). He argued that models may produce truth, but they also have implications for beauty and justice that we may find unattractive. Truth in our sense would be validity—understanding, prediction, and control. A model that produces truth—a valid model—may have troubling effects when we consider beauty and justice. I think of AI as a great example of that. Because of what you can do with AI in the context of pricing—where I am again doing some research—people have developed a supercharged ability to engage in price discrimination (Bergen et al. 2021). With valid predictions about how much people are willing to pay under given circumstances, price discrimination can quickly become problematic when you think in terms of justice. What if price discrimination becomes a form of social discrimination?
The utilitarian technical understanding is quite predictively accurate, but the implications for resource allocation and consumer interests become much more complicated, because you can use that predictive accuracy to exploit people in ways that they don’t realize. I think those are the dimensions of epistemic technologies that we need to take into account. There are social dimensions to what we do that you can’t necessarily predict from AI. Those are the kinds of things that we need to attend to a whole lot more.

6. You have already described some of the work you are doing currently. Would you tell us a little more about other projects you are working on right now, and what is coming up in the near future?

There are a couple of things that I am working on. One of them is a paper with Callen Anthony about competition in the ferry industry. It is related to the paper with Claus (Rerup and Zbaracki, 2021) but at a different point on the spectrum of the social construction of epistemic technologies. In the paper with Claus we focus on various material implications of the design of the ship. Somewhat independently of that, Callen went back and gathered the history and evolution of the ferry. So this paper is much more focused on the competitive dynamics in the forms of meaning, materiality, and use as the ship evolves over time. In that paper, we are wrestling with how to think about competition, because it’s frequently invoked and has profound implications for the way our social worlds are organized, but receives surprisingly limited attention. (Indeed, I just recently finished a book review titled “The Beauty of Competition?” for ASQ on a couple of excellent volumes on competition, one edited by David Stark and a second edited by Stefan Arora-Jonsson. Book reviews don’t get a lot of credit in the way we value academic work, but I agreed to do the review because I wanted to get a better handle on competition. The volumes are great; they changed the way I think about the problem that Callen and I are working on.)

The second thing I am working on is with Cam McAlpine and Lee Watkiss, where we revisit some of the questions about strategic decision making in Eisenhardt and Zbaracki (1992). Our intuition is that, given the way the world has shifted, we need an approach to decision making that is more appropriate to the times, so we are trying to revisit the ways that we think about rationality. Not to say that Kahneman and Tversky (1979) and the decision theorists are wrong, but more in the spirit of focusing on the dimensions of decision making that they left out, but that are increasingly getting attention. Right now, truth, beauty, and justice are central to our thinking about decision making, as are reliability and validity. Some of that gets picked up by Cyert and March (1963), some by Jeff Pfeffer and colleagues thinking about power and politics (Pfeffer and Salancik, 1974), some by Karl Weick (1995), and some of it gets picked up by the garbage can model (Cohen et al., 1972). But the predominant mode of thinking about decision making is utilitarian. And that might be where some of our contemporary problems lie.

Also, I am doing research following up on the themes of pricing, where AI is applied to pricing. My pricing research was always intended to follow up on TQM; I always saw pricing as another form of epistemic technology. There is an argument that I haven’t figured out how to make, but it goes roughly as follows. The basic point in Coase’s argument back in 1937 was something like, “If pricing mechanisms work so effectively, why do we need organizations?” We tend to think of the pricing mechanism as the default; organizations are second best. I wanted to argue that the price system is produced by organizations. My argument with my co-authors was that firms invest in pricing capabilities, and how well they do it determines the profitability of the organization. But it also determines how well the economy functions. With the application of AI, that problem becomes increasingly urgent. The problem is not just the predictive validity of AI. It is also the justice implications, because what you now have is organizations with immense pricing capabilities that are able to take advantage of individuals. Amazon, for instance, has tons of pricing power. You don’t necessarily see prices posted in many settings, but Amazon can track all of that. And they can figure out with great precision what you are willing to pay. Hotels and airlines also use pricing technologies. The forms of price discrimination that are possible with all that technology are so much more sophisticated, especially compared to the bounded rationality of humans. I think that’s very important, because it can create all sorts of social problems (see Bergen et al., 2021).

The last direction I am going in is getting back to rhetoric and reality. And credit to doctoral students for this. Multiple doctoral students have now come to me and pointed out that the rhetoric and reality argument probably applies to sustainability questions. I have started telling my sustainability colleagues that they won the battle on the rhetoric of sustainability, because it’s now part of the language of business. Everyone is making claims about sustainability in their businesses, and in some instances the rhetoric has gone farther than the policies that policy makers themselves are making. On the other hand, the evidence of these practices having an impact is more limited. Wren Montgomery (see Lyon and Montgomery, 2013) talks about greenwashing, with the hope that it might come to an end. But I think there is evidence that more significant decoupling is going on. The claims don’t match the efforts. I think there are some important rhetoric and reality problems there. Steve Barley has a lovely piece about building an institutional field to corral a government (Barley, 2010). I think there is an argument to be made that we need to think about the institutional fields that are built around sustainability—though also around a variety of different things now. This comes back to the argument about validity and reliability: you can build a very reliable understanding of sustainability that does not necessarily reflect where the evidence stands. The doctoral students are drawing me to that one. Credit to them. What I am profoundly hopeful about is that I have seen a shift in the way doctoral students think about these issues. The fact that they are bringing forward their skepticism on things like sustainability means that they are starting to investigate forms of knowledge that are reliable but not valid. Perhaps it sounds odd, but I think there is hope there!

Interviewer bio:

Mayur P. Joshi completed his PhD at Ivey Business School last year and currently works as an assistant professor in FinTech and Information Systems at Alliance Manchester Business School. His research examines how digital technologies shape (and are shaped by) the strategies, processes, and practices of organizing. His current focus is on examining how organizational actors employ artificial intelligence in enacting rational decisions.


Anthony, C. (2018). To question or accept? How status differences influence responses to new epistemic technologies in knowledge work. Academy of Management Review, 43(4), 661–679.

Barley, S. R. (2010). Building an institutional field to corral a government: A case to set an agenda for organization studies. Organization Studies, 31(6), 777-805.

Bechky, B. A. (2011). Making organizational theory work: Institutions, occupations, and negotiated orders. Organization Science, 22(5), 1157–1167.

Beckman, C. M. (Ed.). (2021). Carnegie goes to California: Advancing and Celebrating the Work of James G. March. Emerald Publishing Limited.

Berger, P. L., & Luckmann, T. (1966). The Social Construction of Reality. New York: Doubleday.

Bergen, M.E., Dutta, S., Guszcza, J., & Zbaracki, M.J. (2021, March 3). How AI can help companies set prices more ethically. Harvard Business Review. https://hbr.org/2021/03/how-ai-can-help-companies-set-prices-more-ethically

Coase, R. (1937). The nature of the firm. Economica, 4(16), 386–405.

Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A Garbage Can Model of Organizational Choice. Administrative Science Quarterly, 17(1), 1–25.

Cyert, R. M., & March, J. G. (1963). A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall.

DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147–160.

Eisenhardt, K. M., & Zbaracki, M. J. (1992). Strategic decision making. Strategic Management Journal, 13(S2), 17–37.

Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience. Harvard University Press.

Greckhamer, T. (2005). The globalization of managerial discourse: A cross-national comparative study of discourse about industry clusters, 1990–2003. Ph.D. dissertation, University of Florida.

Hackman, J. R., & Wageman, R. (1995). Total quality management: Empirical, conceptual, and practical issues. Administrative Science Quarterly, 40(2), 309-342.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

Lyon, T. P., & Montgomery, A. W. (2013). Tweetjacked: The impact of social media on corporate greenwash. Journal of Business Ethics, 118(4), 747-757.

March, J. G. (1972). Model bias in social action. Review of Educational Research, 42(4), 413–429.

Maslach, D., Branzei, O., Rerup, C., & Zbaracki, M. J. (2018). Noise as signal in learning from rare events. Organization Science, 29(2), 225–246.

Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83(2), 340–363.

Pfeffer, J., & Salancik, G. R. (1974). Organizational decision making as a political process: The case of a university budget. Administrative Science Quarterly, 19(2), 135–151.

Rerup, C., & Zbaracki, M. J. (2021). The politics of learning from rare events. Organization Science, 32(6), 1391–1414.

Weick, K. E. (1995). Sensemaking in Organizations (Vol. 3). Sage.

Zbaracki, M. J. (2007). A sociological view of costs of price adjustment: Contributions from grounded theory methods. Managerial and Decision Economics, 28(6), 553–567.

Zbaracki, M. J., & Bergen, M. (2010). When truces collapse: A longitudinal study of price-adjustment routines. Organization Science, 21(5), 955–972.

Zbaracki, M. J., Watkiss, L., McAlpine, C., & Barg, J. (2021). Truth, beauty, and justice in models of social action. In C. M. Beckman (Ed.), Research in the Sociology of Organizations (pp. 159–177). Emerald Publishing Limited.
