Using LLMs as Interviewers: Extracting Knowledge Through Dialogue

Introduction: The Challenge of Context

When we ask a large language model (LLM) to handle a complex task, such as designing a new software feature, we must supply it with substantial background information. This context typically includes user-interface descriptions, implementation guidelines, details about external systems, and more. The usual approach is for a human to write this context by hand, which is slow and laborious. An alternative is to let the LLM build that context itself by interviewing a human. This technique, often called an interrogatory LLM, flips the script: instead of the human feeding the model, the model asks the human targeted questions to gather the details it needs.

Source: martinfowler.com

How the Interrogatory LLM Works

In practice, you prompt the LLM to act as an interviewer. The model asks you all the questions it needs to assemble the appropriate context for a given task. You can provide direct answers, point it to other sources, or instruct it to consult external databases if needed. Once the interview is complete, the LLM produces a comprehensive context report that can be used by another instance of the model (or a different model) to perform the next step.

A notable detail, first described by Harper Reed on his blog, is the instruction that the LLM should ask only one question at a time. This keeps the model from overwhelming the user and ensures a focused, step-by-step dialogue. (Many users, myself included, find that the LLM needs frequent reminders to stick to this single-question rule.)
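The interview loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed implementation: the prompt wording, the `DONE` convention for ending the interview, and the `ask_model` and `get_answer` callbacks are all assumptions standing in for whatever chat client and input mechanism you actually use.

```python
# Hypothetical sketch of an "interrogatory LLM" interview loop.
# `ask_model` is a placeholder for a chat-completion call; `get_answer`
# is a placeholder for collecting the human's reply (e.g. via input()).

INTERVIEWER_PROMPT = (
    "You are gathering context for this task: {task}\n"
    "Interview me to collect everything you need. Ask exactly ONE question "
    "at a time and wait for my answer before asking the next. When you have "
    "enough information, reply with DONE followed by a full context report."
)

def build_interview_prompt(task: str) -> str:
    """System prompt that turns the model into an interviewer."""
    return INTERVIEWER_PROMPT.format(task=task)

def run_interview(task, ask_model, get_answer, max_turns=20):
    """Drive a one-question-at-a-time dialogue until the model signals DONE."""
    history = [{"role": "system", "content": build_interview_prompt(task)}]
    for _ in range(max_turns):
        reply = ask_model(history)                 # model's next question (or report)
        history.append({"role": "assistant", "content": reply})
        if reply.strip().startswith("DONE"):
            return reply                           # the finished context report
        history.append({"role": "user", "content": get_answer(reply)})
    return None                                    # did not converge within max_turns
```

The returned report can then be pasted into a fresh session (of the same model or a different one) as the context for the actual task.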

Using Interrogatory LLMs for Document Review

Another powerful application is document validation. Suppose you have a software specification or any document that captures domain knowledge. Instead of asking a human expert to read and critique the document—a task many people find tedious and difficult—you can feed the document to an interrogatory LLM. The model then interviews the expert, using their responses to identify inaccuracies or gaps in the document. This conversational approach can be far more productive than traditional review, especially if the document is poorly written or the expert is short on time.
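The review variant only changes the framing of the prompt: the document goes in, and the model is told to hunt for problems by interviewing the expert. Again a hedged sketch; the wording and the `build_review_prompt` helper name are illustrative assumptions, not a fixed API.

```python
# Hypothetical prompt builder for document review by an interrogatory LLM.

REVIEW_PROMPT = (
    "Below is a document that captures domain knowledge.\n\n"
    "---\n{document}\n---\n\n"
    "Interview me, a domain expert, to uncover inaccuracies and gaps in it. "
    "Ask exactly one question at a time and wait for my answer. When you are "
    "done, summarise every problem you found and suggest corrections."
)

def build_review_prompt(document: str) -> str:
    """System prompt that turns the model into a document reviewer."""
    return REVIEW_PROMPT.format(document=document)
```

The same interview loop from before can drive this prompt; only the system message differs.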

Combining Both Approaches

These two methods can be used in tandem. For example, one interrogatory LLM can build an initial document by interviewing a subject‑matter expert. Then, a second interrogatory LLM can review that document with another expert, cross‑checking facts and improving completeness. This creates a collaborative loop where LLMs facilitate knowledge capture and verification without forcing humans to write or read lengthy texts.

Broader Applications: Helping Non‑Writers

The interrogatory LLM technique extends beyond the context of AI tasks. Many people find writing difficult—it can be a slow, painful process. Yet writing is often essential for crystallizing thoughts and sharing knowledge. For individuals who struggle with writing, being interviewed by an LLM can be a much more natural way to externalize their expertise. The resulting text may carry the characteristic “AI‑writing flavor” that some find off‑putting, but it is far better than having the knowledge lost due to rushed or absent documentation.

Conclusion: A Dialogue‑Driven Future

The interrogatory LLM represents a shift from human‑to‑model data feeding to a more interactive, question‑based exchange. By letting the model ask questions one at a time, we can efficiently gather high‑quality context, review documents with less friction, and help non‑writers articulate what they know. As LLMs become more conversational, this approach will likely become a standard tool for knowledge extraction and collaboration.
