Modularization of knowledge work

[I know my posts are long, so I’m going to start adding a high-level summary if you want to get the main point without having to read the full piece]

High level summary: Even within the complex domain of knowledge work, there are certain processes that can be increasingly standardized when we take an archetypical, big picture view. This is already being done in ways both big and small, ranging from tagging/labeling, to bigger-picture process automation in recruiting, investing, and other decision-making. In the future, there are multitudes of opportunities for increasing knowledge work automation.


In some of my other posts, I’ve talked about finding patterns and archetypes in tasks, decisions, and processes. I believe that when we have enough experience or knowledge about a certain area or topic (either through direct experience or through thoroughly studying examples and history), we can begin to find and form patterns, and then apply them to new instances of the same situation.


In particular, I’m interested in exploring and understanding how certain tasks (including increasingly complex ones) can be defined as a coherent, standard process, and ultimately automated (or at least 80% automated, with a human reviewing, adjusting, and adding to this to get it to 100% completion). In a previous post, I gave the example of due diligence research for private equity firms, and the potential to modularize some aspects of it (e.g., interview planning, survey planning, modeling). In this set of research, I’d like to look into the future of work modularization for knowledge work, explore examples of how this has already been done, and then explore how this could be taken further.


I. Future of knowledge work

First of all, what is knowledge work?


Some scholars have defined it as work focused on “non-routine problem solving that requires a combination of convergent and divergent thinking.” (1) According to Wikipedia, a knowledge worker is anyone “whose line of work requires one to think for a living.” (2) Essentially, we can think of knowledge workers as people whose jobs center on significant amounts of unique, case-by-case thinking, problem-solving, decision-making, and non-routine tasks with non-linear processes. In terms of careers, some examples include bankers, consultants, lawyers, accountants, scientists, engineers, professors and academics, and more.

However, I’d like to test the common assumption here and ask if this is really the case – is knowledge work truly non-linear and non-routine?

Although it seems that way on the surface, I would actually take the stance that maybe that’s not fully the case. Rather, maybe it’s just that we have to zoom out a bit and take a much bigger-picture view to be able to see the patterns in what is being done across the work of knowledge workers. There is definitely a lot of independent thinking that needs to be done on a case-by-case basis, but for many roles, there is still a level of process that is followed at the big-picture level. For example, in private equity, there is a somewhat standard big-picture deal process that takes place; in consulting, there is a big-picture process for running projects; and in academia, there is a big-picture process for doing research and publishing papers. At the detailed level, this work changes because there are nuances to every single thing that takes place (i.e., types of consulting projects vary vastly, private equity deals have a large amount of variance, and academic research and publishing is never fully the same). However, if we accept that there is similarity at a big-picture level, then we can begin to explore how to take this further, by finding similarities across cases at second-order, third-order, and finer levels of detail.

To do this, we can consider archetypes. For example, what are the different types of consulting projects there may be? We could break them down by industry – those would have more similarities among each other. Then, we could apply a functional filter on them (e.g., strategy, diligence, transformation) – those would have even more similarities with each other. Then we could think about what type of company it is for (e.g., a corporate BU in a large multi-national, or a small/medium enterprise). The further we segment and apply filters, the more similar the resulting subset becomes, and therefore, the more opportunity there may be to apply some level of standardization across it (e.g., maybe there are certain types of analyses that are more or less standard if we do a go-to-market study for a medium-sized B2B software company focused in the retail space). When we segment and break down, we find similarities in how we can approach each of the cases.
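To make the idea of layered filters a bit more concrete, here is a minimal sketch of archetype filtering in Python. The fields (industry, function, company type) and the example projects are hypothetical illustrations, not a real taxonomy.

```python
# Minimal sketch of archetype filtering for consulting projects.
# Field names and example values are hypothetical, not a real taxonomy.
from dataclasses import dataclass

@dataclass
class Project:
    industry: str       # e.g., "retail", "healthcare"
    function: str       # e.g., "strategy", "diligence", "transformation"
    company_type: str   # e.g., "corporate_bu", "sme"

def archetype_subset(projects, **filters):
    """Return the projects matching every supplied filter (an 'archetype')."""
    return [
        p for p in projects
        if all(getattr(p, field) == value for field, value in filters.items())
    ]

past_projects = [
    Project("retail", "strategy", "sme"),
    Project("retail", "diligence", "corporate_bu"),
    Project("healthcare", "strategy", "sme"),
]

# Each added filter narrows the pool into a more self-similar archetype,
# e.g., strategy work for small/medium retail players.
subset = archetype_subset(past_projects,
                          industry="retail", function="strategy", company_type="sme")
print(len(subset))  # 1
```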

The point is, when we actually try to apply logic-based categorization to each situation, we may find that there are more similarities than we see on the surface, and this is true across topics and industries. What it takes to find them is to think at a high enough level and then begin to break down the mass into specific areas that have similarities among each other – archetypes.

II. Examples of where this can be possible

Today, a lot of progress has been made in terms of modularization across a variety of areas. I’d like to go through a few examples of where I see it or could see it being done. This is not meant to be exhaustive, but simply to be a starting point of examples of where this can be and is being used.

Atomization of work units: At the very base level, there is significant movement toward “atomization” of work. What I mean by that is the breaking down of work into very small components. I see this happening in terms of labeling and tagging. As AI and machine learning become increasingly important decision-making tools, there is also an increasing need for data to train these machine learning algorithms. As such, human minds are being used to label, tag, or identify objects and meaning in images, text, videos, and more. A number of companies have emerged to meet this need, the largest among them being Amazon Mechanical Turk, with others including Scale, Hive, Labelbox, Cloudfactory, and Samasource. In practice, what they do is outsource thousands of small tasks (e.g., identifying all of the stoplights in an image, saying whether the meaning of a sentence is positive or negative, etc) to human workers all over the world, the output of which is used to train machine learning models to do the same.
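As a rough illustration of what this “atomization” looks like in code, here is a minimal sketch in Python: a sentiment-labeling job is split into micro-tasks, each micro-task is answered by several workers, and the answers are combined by majority vote. The task format, worker answers, and aggregation rule are hypothetical simplifications; real platforms such as Mechanical Turk have their own APIs and quality-control mechanisms.

```python
# Minimal sketch of atomized labeling work: split a job into micro-tasks,
# collect several answers per task, and aggregate by majority vote.
from collections import Counter

def split_into_microtasks(sentences):
    """One sentence -> one micro-task asking for a sentiment label."""
    return [{"id": i, "text": s, "question": "positive or negative?"}
            for i, s in enumerate(sentences)]

def aggregate(responses):
    """Majority vote across worker answers for a single micro-task."""
    label, count = Counter(responses).most_common(1)[0]
    return label, count / len(responses)  # label plus a simple agreement score

tasks = split_into_microtasks(["Great product", "Shipping was slow"])
worker_answers = {0: ["positive", "positive", "positive"],
                  1: ["negative", "negative", "positive"]}

for task in tasks:
    label, agreement = aggregate(worker_answers[task["id"]])
    print(task["text"], "->", label, f"(agreement {agreement:.0%})")
```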


What does this mean and why does it matter? It matters because it shows that at a very core level, tasks which have traditionally been human-led (i.e., thinking, identifying, recognizing, predicting) are now being systematized and given to machines to do. Thus, labeling is simply a smaller example of something much bigger – soon, whole thought and decision-making processes will be (and are being) systematized.


Recruiting: Recruiting is one area which has seen and will continue to see a great deal of automation. It can take place in many ways. For example, it could mean determining what specific skills will be needed for a specific role, but also going beyond that into thinking about what personality types may actually perform well in this role (Bridgewater and many other companies have done this), and what other qualities will be important for success in the role (e.g., level of curiosity/desire to learn, propensity to work hard before giving up, need for autonomy, etc).
On the candidate side, it would then be important to determine where candidates actually stand across all of these dimensions – i.e., skills, personality, and additional qualities. This data can be gathered via a variety of different methods, from self-reporting, to formal personality testing, to even web-scraping-based search methods.


Ultimately, the automation and systematization in this area does and could come from determining what the right fit is across a variety of data points (as well as where the levels of tolerance are – i.e., critical vs. nice to have), determining where candidates stand across those data points, and then identifying top matches on this basis, with the system ultimately creating a more systematized process for recruiting.
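A minimal sketch of that matching step in Python: role criteria carry minimum levels and a critical/nice-to-have flag, candidates are screened on the critical ones, and the survivors are ranked by a weighted score. The criteria, weights, and candidate profiles here are entirely hypothetical.

```python
# Minimal sketch of candidate-role matching with critical vs. nice-to-have criteria.
ROLE_CRITERIA = {
    "python":    {"min": 3, "critical": True,  "weight": 0.4},
    "curiosity": {"min": 2, "critical": False, "weight": 0.3},
    "autonomy":  {"min": 2, "critical": False, "weight": 0.3},
}

def score_candidate(profile):
    """Return None if a critical requirement is missed, else a weighted score."""
    score = 0.0
    for name, rule in ROLE_CRITERIA.items():
        level = profile.get(name, 0)
        if level < rule["min"]:
            if rule["critical"]:
                return None          # hard disqualifier
            continue                 # nice-to-have missed: no contribution
        score += rule["weight"] * level
    return score

candidates = {
    "A": {"python": 4, "curiosity": 3, "autonomy": 1},
    "B": {"python": 2, "curiosity": 5, "autonomy": 5},   # fails the critical skill
}

ranked = sorted(((s, name) for name, p in candidates.items()
                 if (s := score_candidate(p)) is not None), reverse=True)
print(ranked)  # top matches surfaced for human review
```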

Training: To continue on the human capital angle, I could also envision training becoming more modular, in that there could be services offering “base case” trainings for a variety of roles (likely starting out with roles that have some level of task similarity across companies, such as customer service representative, delivery person, driver, etc), with additional elements added in as “customized modules” based on the more specific needs of a given company or role.
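As a small sketch of what such modular assembly could look like in Python (the role names and module contents below are hypothetical placeholders):

```python
# Minimal sketch of modular training assembly: a shared "base case" curriculum
# per role, plus company-specific add-on modules.
BASE_TRAINING = {
    "customer_service_rep": ["ticketing basics", "tone and empathy", "escalation rules"],
    "delivery_driver":      ["route planning", "safety", "proof of delivery"],
}

def build_training_plan(role, custom_modules=()):
    """Base curriculum for the role, followed by company-specific modules."""
    return BASE_TRAINING[role] + list(custom_modules)

plan = build_training_plan("customer_service_rep",
                           custom_modules=["our returns policy", "CRM walkthrough"])
print(plan)
```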


Financial modeling: Already, many financial modeling processes are automated, with tools such as Capital IQ. However, many more complex models, such as LBOs, market growth models, and others, are still largely built by financial analysts (of course, with exceptions). I could envision that by integrating with data sources (such as Capital IQ, Bloomberg, etc – which is already possible), and building in more complex rules around how to model certain events, forecasts, or scenarios, much of this could also be made more automated and modular, such that we could put in certain parameters (i.e., type of model, data sources, some assumptions), and an 80% version of a model could be created, to then be reviewed and edited by a human.
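Here is a minimal sketch of that parameter-driven idea in Python: a handful of assumptions produce a draft projection that an analyst would then review and adjust. The drivers and numbers are placeholders, not pulled from Capital IQ, Bloomberg, or any other real data source.

```python
# Minimal sketch of a parameter-driven model skeleton: a few assumptions in,
# a rough draft projection out, for a human analyst to review and refine.
def project_financials(base_revenue, growth_rate, ebitda_margin, years=5):
    """Return a year-by-year draft projection (the '80% version')."""
    rows = []
    revenue = base_revenue
    for year in range(1, years + 1):
        revenue *= (1 + growth_rate)
        rows.append({"year": year,
                     "revenue": round(revenue, 1),
                     "ebitda": round(revenue * ebitda_margin, 1)})
    return rows

draft = project_financials(base_revenue=100.0, growth_rate=0.08, ebitda_margin=0.25)
for row in draft:
    print(row)
```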

Writing legal memos/PR/marketing/business documents: Writing certain legal, PR, marketing, or other business documents could also be modularized to some extent. Of course, the specific documents to be written would depend on the field and area, but I could imagine that certain more standard memos, press releases, or overview documents could be standardized in format across archetypes (i.e., a memo used for x, y, or z topic/situation), each with a different format and typical information included, and then users could simply provide the case-specific information needed to “personalize” the document for that archetype.
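A minimal sketch of archetype-based drafting in Python, using simple string templates: each archetype has a standard skeleton, and the user supplies only the case-specific fields. The archetype names and template text are hypothetical.

```python
# Minimal sketch of archetype-based document drafting with string templates.
from string import Template

ARCHETYPES = {
    "press_release": Template(
        "$company today announced $announcement. "
        '"$quote," said $spokesperson.'
    ),
    "internal_memo": Template(
        "To: $audience\nRe: $subject\n\nSummary: $summary"
    ),
}

def draft_document(archetype, **fields):
    """Fill the archetype's skeleton; the result is a first draft for human editing."""
    return ARCHETYPES[archetype].substitute(**fields)

print(draft_document("press_release",
                     company="Acme Corp",
                     announcement="a new logistics platform",
                     quote="This changes how we ship",
                     spokesperson="the CEO"))
```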


Decision-making and prediction processes: Ultimately, I also see automation and systematization being useful in a variety of decision-making processes, including investing (PE, VC, hedge funds), acquisition planning, and more. The methodology would involve each decision-making party systematizing their decision-making process by determining the standard criteria to be considered in every decision and the minimum requirements across all criteria, and then assessing options against these. I could envision various archetypes of decisions being set up (i.e., different types of investments, for example, depending on industry/geography/size/goal), each with a different methodology and criteria for reaching the decision. The user could then fill in the information required to identify the relevant archetype, be prompted to provide additional information needed for this specific decision (or have it pulled in from other documents), and then receive a recommendation.
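As a final sketch, here is what such an archetype-driven screen could look like in Python: each decision archetype carries its own criteria, weights, and minimum thresholds, and options are scored only if they clear every minimum. The archetype, criteria, and ratings are all hypothetical.

```python
# Minimal sketch of archetype-driven decision support: screen options against
# minimum requirements, then rank the survivors by weighted score.
DECISION_ARCHETYPES = {
    "growth_equity_deal": {
        "revenue_growth": {"min": 3, "weight": 0.5},
        "market_size":    {"min": 2, "weight": 0.3},
        "team_strength":  {"min": 3, "weight": 0.2},
    },
}

def recommend(archetype, options):
    """Drop options that miss any minimum; rank the rest by weighted score."""
    criteria = DECISION_ARCHETYPES[archetype]
    scored = []
    for name, ratings in options.items():
        if any(ratings.get(c, 0) < rule["min"] for c, rule in criteria.items()):
            continue  # fails a minimum requirement
        total = sum(rule["weight"] * ratings[c] for c, rule in criteria.items())
        scored.append((total, name))
    return sorted(scored, reverse=True)

options = {"Deal A": {"revenue_growth": 4, "market_size": 3, "team_strength": 3},
           "Deal B": {"revenue_growth": 5, "market_size": 1, "team_strength": 4}}
print(recommend("growth_equity_deal", options))  # Deal B screened out on market_size
```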


III. What this means going forward

Ultimately, this means that as we move increasingly in this direction, we will be more able to move beyond pure task automation, and into more complex work automation that typically requires thinking and complex decision-making. The key is to realize that everything is ultimately built off of logic, even human decision-making, but that this logic is simply very complex. However, if we can try to systematize this complex and multi-layered logic, we can begin to build modular processes across a variety of areas that have thus far been primarily in the domain of human thinking.

Once “thinking processes” can be systematized, the systems that we trust to do the initial thinking for us would ideally be able to create ~80% versions of whatever task we are doing, with humans then stepping in to review and adjust as needed. This could be like a “TurboTax” tool for businesses, in that work is increasingly made modular with clear steps and inputs, with a first-version output that can then be adjusted.

What does this mean? Having such tools would enable us to focus our attention on the even more complex work that requires our full attention, and away from lower-level tasks that are more able to be systematized (with increasingly more seemingly complex tasks being added to this category over time).

IV. Sources
1. Pyöriä, P. (2005). “The Concept of Knowledge Work Revisited”. Journal of Knowledge Management, 9(3): 116–127. doi:10.1108/13673270510602818. https://www.emerald.com/insight/content/doi/10.1108/13673270510602818/full/html


2. Davenport, Thomas H. (2005). Thinking For A Living: How to Get Better Performance and Results From Knowledge Workers. Boston: Harvard Business School Press. ISBN 1-59139-423-6.

3. Deloitte University Press, The Future of Knowledge Work, https://www2.deloitte.com/content/dam/insights/us/articles/the-future-of-knowledge-work/DUP416_The-future-of-knowledge-work.pdf

4. https://www.nintex.com/blog/will-automation-knowledge-work-really-mean/

5. https://www.researchgate.net/publication/270771273_The_process_of_atomization_of_business_tasks_for_crowdsourcing
