KENNESAW, Ga. | Oct 15, 2024
Let’s dive into an example of AI making our daily lives easier and enhancing our job performance - all while creating small value each time that adds up to massive total value. The process of performing research and gathering information has drastically evolved over time, driven by advances in both information management and analytics. Of course, large language models and generative AI are the latest innovations to further streamline how we gather information. Below, I’ll first review a common research process, then show how its execution has changed over time and why the full impact of that progress is being underestimated.
The Basic Research Process
Let’s say I need information on anything from a product, to an event, to an analytical method, to anything else. Generally, there are six steps that I must execute to get to a satisfactory result. I must:
1) Define what information I am looking for
2) Identify sources that may have all or part of the information I need
3) Review and read through the identified sources to understand what they contain
4) Extract the relevant pieces of information found during that review
5) Consolidate and summarize all the relevant information extracted from all of the sources
6) Determine specific options for how to act (or not) based on that consolidated summary
Over time, we’ve gone from a world where a human had to do all those steps to today where the steps can be either entirely or mostly handled for us through search analytics, large language models, and generative AI. Let’s look at the evolution.
The Old Days
We’ll define the old days as anything prior to the early 2000s, when the web and search engines became ubiquitous. For example, early in my career we were pretty much on our own to figure out how to solve a coding problem. I had shelves full of detailed product and language manuals that were my main source of information. I could also ask a few local coworkers or call a product help desk. Outside of that, it was on me to find what I needed from those limited resources, figure it out on my own, or fail. Similarly, if I needed information to support a school paper, I had to go to the library and personally seek out books to examine while being limited to what that specific library happened to stock.
In the old days, then, I had to do all six steps myself. For a coding problem, that meant manually scanning tables of contents in the manuals, reading the relevant sections, and possibly also combing through the folders of code on my computer for something from the past that pertained to my current need. It was manual, time-consuming, and tedious. Worse, the process drew on a very small base of information.
The Search Engine & Sharing Era
From the early 2000s until recently, research and information gathering changed dramatically. First, a massive trove of documents, articles, and code was uploaded to the web. Then, search engines did their magic to index and tag those documents to make them easy to find. In addition, widespread sharing of knowledge through sites like GitHub and social media platforms enabled us to identify and interact with countless other people who could provide relevant guidance.
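To make that indexing "magic" a bit more concrete, here is a minimal sketch of an inverted index (a mapping from each word to the documents containing it), which lets a lookup avoid scanning every document. The tiny corpus, the document ids, and the `search` helper are all invented for illustration; real search engines add ranking, stemming, and far more.

```python
# A minimal sketch of what search-engine indexing does: build an
# inverted index mapping each word to the documents containing it,
# so lookups avoid scanning every document. Corpus is invented.
from collections import defaultdict

docs = {
    "doc1": "python error handling in file reads",
    "doc2": "handling network timeouts in python",
    "doc3": "sorting large files quickly",
}

# Map each term to the set of document ids that contain it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query term."""
    results = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*results) if results else set()

print(sorted(search("python handling")))  # → ['doc1', 'doc2']
```

The design point is simple: the expensive work (scanning every document) happens once at index time, so each individual query becomes a cheap lookup.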
This made step 2) very easy. It also expedited steps 3) & 4) whenever someone who had previously faced a question like ours documented a summary of what they found. In other words, we could find very relevant information from very relevant people very quickly. However, we still had to consolidate and summarize what we found across those conversations, documents, and code samples and decide how to act on the information.
The AI Age
Since 2023, artificial intelligence has taken things even further. Large language models can now take those same documents and code examples found on the web and almost fully execute steps 3), 4), and 5) while providing significant help in executing step 6).
The models still begin by interpreting my prompt and matching it against the document repository to identify, say, the top 10 documents that appear most relevant to my question. But they don’t stop there. Today’s language models go further and consolidate and summarize the information in those documents into a concise narrative for me. The new Google AI Overview is an example of this. What’s more, we can also ask the LLM for suggested actions to take based on that summary. While the suggestions might not be perfect, they are a great starting point. Count step 6) as expedited but not fully automated.
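As a toy illustration of that first retrieval step (not any vendor’s actual algorithm), the sketch below scores documents by how many query terms they share and keeps the top k. The corpus, the query, and the `top_k` helper are invented for this example; production systems use embeddings and learned rankers rather than raw term overlap.

```python
# A simplified sketch of relevance ranking: score each document by how
# many query terms it shares, then keep the top k. Real retrieval uses
# embeddings and learned rankers; this corpus is invented.
def top_k(query, docs, k=2):
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), doc_id)
        for doc_id, text in docs.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [doc_id for score, doc_id in scored[:k] if score > 0]

docs = {
    "doc1": "tuning python code for speed",
    "doc2": "python profiling and speed tips",
    "doc3": "garden soil preparation guide",
}
print(top_k("python speed tips", docs))  # → ['doc2', 'doc1']
```

In the full LLM workflow, the documents surviving this cut would then be fed to the model for the consolidation and summarization described above.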
Assessing The Value
Today, after defining what information we need, we can completely automate steps 2) – 5) and partially automate step 6). Better yet, we’re able to perform those steps in just seconds while considering a far larger set of base documents and knowledge than ever before. By getting detailed answers so quickly, we can iterate and ask more questions to get a better result, faster than before.
People tend to focus on the flashy examples of AI being used in some novel or creative way. However, I think that the amount of value that will come from the automation of research is far larger than most people realize. Billions of people will save time on all of their research endeavors. While the value of each instance is small, the total across those efforts represents massive value!
This simple research example is one of AI making our daily lives easier and enhancing our job performance - all while repeatedly creating small amounts of value that add up to something very significant. I know that I don’t miss having to execute the entire research process myself. I’m happy to let AI handle this type of task for me. Aren’t you?