ASQ Interview

The Dynamics of Inferential Interpretation in Experiential Learning: Deciphering Hidden Goals from Ambiguous Experience

Authors: Bryan Spencer (Alberta School of Business); Claus Rerup (Frankfurt School of Finance and Management)
Interviewers: Runjia Zhang (Guanghua School of Management); Umme Kalsoom (University of Arkansas at Little Rock)

Article DOI: 10.1177/00018392241273301


Note: Questions were sent to and answered jointly by Dr. Spencer and Dr. Rerup.

One of the most fascinating dynamics in your findings is the “arms race” of learning between the MCC community and the CTG secret coalition. CTG initially held an advantage, as discussions within the MCC community were more transparent to CTG members. Why do you think CTG ultimately failed to sustain its backstage activities, and why did the initially “disadvantaged” MCC prevail in the end? Was there a moment in your data where you thought the scammers might “win” by staying ahead of the community’s learning curve?

This question held a lot of theoretical importance for us as we analyzed the data and traced the back-and-forth learning processes between CTG and MCC (it became the basis for Tables 3A-3D), in part because it was very easy for us to trace CTG’s experiential learning through iterations of market manipulation. One key to why CTG failed to sustain its backstage activities may be that its members ignored the cues indicating that MCC was accumulating knowledge of CTG’s backstage activities (see Table 3D in the paper). CTG members ignored these cues because they were profiting from the manipulation. It brings us back to a reminder from The Ambiguities of Experience, where March (2010, p. 114) notes that “Experience may possibly be the best teacher, but it is not a particularly good teacher.” We then essentially asked, “How was MCC learning?”, because MCC did not follow a “standard” model of experiential learning. From there, we inductively developed our understanding of “inferential interpretation” and a new model of effective learning.

Although there is limited research specifically examining learning related to “hidden goals,” there exist several studies on experiential learning. Could you further elaborate on the uniqueness of your research compared to existing experiential learning literature?

In essence, there are a few interrelated assumptions about experiential learning that we wanted to challenge, so that we could move the literature collectively forward. Hidden goals can be seen as a form of multiple goals, which make experience ambiguous and obscure causality (Levinthal and Rerup, 2021). In the existing literature, there had been an implicit assumption that experience was an observable “chunk” of reality that is causally identifiable. In our paper, we illustrated how experience emanating from the enactment of hidden goals may have no clearly defined beginnings or endings. Instead, actors may have to piece together experience from cues and interpret what these cues mean. Our big takeaway from this paper is that hidden goals and ambiguous experience bring interpretation, which has largely been backgrounded, to the fore in the experiential learning process. As we wrote our paper, we came to realize that hidden goals, despite being overlooked in the literature, are a part of everyday life and organizing.

Fraud is one such case, as we illustrate—though the politics of organizing and work mean that virtually all organizations will face hidden goals of some sort through various coalitions. We offer inferential interpretation as a starting point for unpacking that process.

Your data collection on the process of market manipulation is remarkably comprehensive. You successfully tracked users’ posts across different platforms and ensured real-time information collection. This gives the impression that you clearly identified your research question in the very early stage. How did you achieve this?

Our research question actually emerged through the review process, as our paper initially started by looking at how members of CTG could become part of a coalition that was enacting fraud. The feedback was (to roughly summarize) that the enactment of fraud was likely the most interesting part of our data and that we should focus on it. So in the revision process, we analyzed how fraud was enacted across different platforms. It was at this time that we picked up on the different forms of learning taking place, and our research question emerged from there.

As this was Bryan’s first research project during his PhD, he was very careful in his data collection process and attempted to capture “everything” that was going on. This proved beneficial in the revision process as we were able to reorient from looking at the internal dynamics of CTG to tracing market manipulation and seeing the interplay between CTG and MCC. The extensive data was also in part a byproduct of the digital medium of the study: it is much easier to traverse the Internet and document various posts and webpages in a way that would be much more difficult in an “offline” study.

Gaining access to a secret scammer chatroom is rare. What were the biggest practical and ethical challenges in studying this hidden world? Also, as your research did not include interviews, how did you ensure the relevance of your interpretations of participants’ behaviors and narratives?

When Bryan was first introduced to CTG, he was not aware that there was a secret coalition within the group engaging in fraud. In that sense, we were quite lucky to be able to observe what was going on, as these sorts of things are rarely made visible to researchers. However, it also presented a dilemma: having not set out to study fraud, how does one handle it ethically? This led to a lengthy discussion among the authors and the ethics committee about what to do. Do you report the fraud? Do you protect the informants’ confidentiality that was promised when gaining access to the field site? It really brings into question what the role of the researcher is in these instances (see Surmiak, 2020), and it is a complex topic that is very difficult to do justice to in just a few sentences. In the end, following established practice (e.g., Contreras, 2019; Holt, 2015; Reyes, 2018), we determined that because these were not vulnerable populations and the crimes were not “grave,” we ultimately owed confidentiality to the informants.

As to how we ensured the relevance of our interpretations, this goes back to the above in that we had quite a bit of data from which we could triangulate our findings, both within individual enactments of market manipulation and across all 12 enactments, as the usernames were consistent across different platforms.

Your paper introduces the powerful concept of “inferential interpretation” to explain how actors learn when goals are hidden. Can you provide some suggestions for researchers who aim to conduct further studies based on the insightful concept of “inferential interpretation”?

If you look at the literature on experiential learning through a historical lens, interpretation was largely backgrounded because researchers sought to produce generalizable statistical models focused on firm performance. As we discuss in Question 2 above, we see our contribution of inferential interpretation as a way to make the learning literature, and the interpretation of experience in particular, more complex, dynamic, and representative of a world in which meaning making is a core tenet of organizational intelligence. We hope that our ideas around inferential interpretation can be a starting point for exploration of the interplay of ambiguity, sensemaking, interpretation, and learning. While we developed this construct through an inductive study, we think that quantitative scholars could treat inferential interpretation as a process that becomes salient under conditions where goals are “vague, problematic, inconsistent, or unstable” (March, 1978, p. 590) or, as in our case, hidden.

Machine learning techniques such as anomaly detection or topic modeling applied to digital trace data could allow researchers to identify patterns of responses to “cloaked” cues that deviate from expected responses, helping to capture how hidden goals are deciphered and to better understand the complex processes by which understandings are constructed when causal structures are unknown.
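As a loose illustration of the anomaly-detection idea mentioned above, the minimal Python sketch below flags days whose posting volume deviates sharply from a user’s baseline — the kind of “burst” that coordinated activity might leave in digital trace data. It is a deliberately simple z-score rule, not the richer methods (isolation forests, topic models) a real study would use, and all data here are invented for illustration.

```python
# Illustrative sketch only: a z-score rule over hypothetical daily post
# counts. Real digital-trace studies would use richer features and methods.
from statistics import mean, stdev

def flag_anomalies(daily_posts, threshold=2.5):
    """Return indices of days whose post counts deviate by more than
    `threshold` standard deviations from the mean count."""
    mu = mean(daily_posts)
    sigma = stdev(daily_posts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, n in enumerate(daily_posts)
            if abs(n - mu) / sigma > threshold]

# A coordinated manipulation might appear as a sudden burst of posts.
counts = [4, 5, 3, 6, 4, 5, 4, 80, 5, 4]  # day 7 is a burst
print(flag_anomalies(counts))  # → [7]
```

The point is not the specific rule but the workflow: operationalize “expected responses” as a baseline, then treat large deviations as candidate cues worth qualitative inspection.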

The dynamics started with a few individuals in the MCC who first recognized the cues. What was special about those first people who became suspicious, and how did they convince the wider community? If “inferential interpretation” is a key modern skill, what are the practical implications?

This is a great question. What was special about the first people who became suspicious likely connects to other areas of the Carnegie school, where scholarship has examined how cues gain collective attention (e.g., Rerup, 2009). For practical implications, the notion of inferential interpretation captures a distinctly modern skill: the ability to construct meaning from partial, ambiguous, or deliberately obscured information. In today’s world, where digital environments enable anonymity and concealment, crucial evidence is often hidden in plain sight rather than openly displayed. The BBC account by Sam Piranty (2026) of Homeland Security investigator Greg Squire illustrates this vividly. When explicit identifiers were intentionally removed from child abuse images shared on the dark web, Squire’s team relied on inferential interpretation—analyzing subtle environmental cues such as electrical outlets, a regionally sold sofa, and even the distinctive “Flaming Alamo” brick in a bedroom wall. By inferring geographic constraints from the weight and distribution radius of bricks, they narrowed tens of thousands of possibilities to a single address, rescuing a child and arresting a convicted sex offender. The case demonstrates how inferential interpretation operates as a contemporary capability: reasoning systematically from ambiguous cues to uncover what others attempt to hide. This speaks to how, at a fundamental level, organizational intelligence still depends upon human actors paying attention and engaging in collective interpretation to develop a plausible causal model of the hidden realm.

Papers mentioned:

Contreras, R. (2019). Transparency and unmasking issues in ethnographic crime research: Methodological considerations. Sociological Forum, 34(2), 293-312.

Holt, T. J. (2015). Qualitative criminology in online spaces. In H. Copes & J. M. Miller (Eds.), The Routledge Handbook of Qualitative Criminology (pp. 173–188). Routledge.

Levinthal, D. A., & Rerup, C. (2021). The plural of goals: Learning in a world of ambiguity. Organization Science, 32(3), 527-543.

March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics, 9(2), 587-608.

March, J. G. (2010). The Ambiguities of Experience. Cornell University Press.

Piranty, S. (2026). How dark web agent spotted bedroom wall clue to rescue girl from years of abuse. BBC, February 16, 2026. https://www.bbc.com/news/articles/cx2gn239exlo?accountMarketingPreferences=off

Rerup, C. (2009). Attentional triangulation: Learning from unexpected rare crises. Organization Science, 20(5), 876-893.

Reyes, V. (2018). Three models of transparency in ethnographic research: Naming places, naming people, and sharing data. Ethnography, 19(2), 204-226.

Surmiak, A. (2020). Should we maintain or break confidentiality? The choices made by social researchers in the context of law violation and harm. Journal of Academic Ethics, 18(3), 229-247.


Interviewer Bios:
Runjia Zhang is a PhD candidate at the Guanghua School of Management, Peking University. Her research examines technologies in the workplace, with a particular focus on how automated algorithms generate significant shifts in organizing, coordination, and accountability. She is currently exploring routines in automated workplaces.

Umme Kalsoom is a PhD student in Computer Science at the University of Arkansas at Little Rock (UALR). Her research focuses on developing synthetic occupancy generators (SOG) using artificial intelligence—work that has important applications for energy efficiency, building design, and smart environments. By leveraging AI to model human occupancy patterns, her research contributes to more sustainable and intelligent infrastructure systems.
