One Truth from Many Sources

1. One Truth from Many Sources

Welcome back! In this video, we'll see how AI supports research tasks by assisting with synthesizing information across multiple sources. Let's dive in!

2. Synthesizing Across Sources

Let's say we're developing a market-entry strategy for an energy company planning to launch a fast-charging network for electric vehicles in Germany. After some searching, we've found three relevant documents:

3. Synthesizing Across Sources

a Trade.gov market-intelligence article on the web,

4. Synthesizing Across Sources

an academic paper on EV purchasing subsidies, and

5. Synthesizing Across Sources

an infographic. We want to use an AI system to aggregate the information from these sources before moving on to the analysis.

6. Synthesizing Across Sources

Let's build our prompt following the GSCE framework,

7. Synthesizing Across Sources

starting with the context. This includes our three sources: a website, a scientific article, and an infographic.

8. Synthesizing Across Sources

Then the goal, which includes the creation of a comparison table.

9. Synthesizing Across Sources

And finally some information about the style we expect. Note that in this case, we don't have any examples to provide.
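If we were scripting this instead of typing it into a chat interface, the GSCE components could be assembled as in the minimal Python sketch below; the wording is illustrative, and the three source documents are assumed to be attached through the interface.

```python
# A minimal sketch of a GSCE-style prompt for this task. The framework
# comes from the course; the wording below is illustrative, and the three
# source documents are assumed to be attached separately.
context = (
    "We are developing a market-entry strategy for an energy company "
    "launching a fast-charging network for electric vehicles in Germany. "
    "Attached are three sources: a Trade.gov market-intelligence article, "
    "an academic paper on EV purchasing subsidies, and an infographic."
)
goal = (
    "Create a comparison table that aggregates the key facts from the "
    "three sources, one column per source."
)
style = "Keep the table concise and business-oriented."
# No examples are available this time, so that component is omitted.
prompt = f"{context}\n\n{goal}\n\n{style}"
print(prompt)
```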

10. Synthesizing Across Sources

Let's send this prompt to the model and observe the outcome. One of the first things we notice is the "processing" bar that appears from the moment we send the prompt until the response begins. This happens because the model takes time to reason about our request, which generally produces a better answer. We'll discuss how AI "thinks" in just a moment. After some time, the model provides the table we requested. Another important observation is that the model references only the sources we supplied, which means we can trace every claim and statement back to its origin.
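Programmatically, sending the prompt might look like this sketch, which uses the OpenAI Python SDK as one example of a chat-completions API; the model name is an assumption.

```python
# A sketch of sending the prompt through the OpenAI Python SDK, as one
# example of a chat-completions API. The model name is an assumption;
# substitute whatever your interface uses.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = "..."  # the GSCE prompt assembled in the previous sketch

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the requested comparison table
```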

11. Reasoning Models

Before moving on, let's take a closer look at how "thinking" works in AI systems. Models do not "think" the way humans do, but there is a category of models, known as reasoning models, that approximates this process. These models break complex tasks into simpler intermediate steps and carry them out to reach the final goal. As a result, their outputs are usually more accurate. In some interfaces we can even see the "plan" the model crafts in order to fulfill the task.
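To make the idea concrete, the sketch below approximates reasoning with an ordinary model by asking for the intermediate steps explicitly; a dedicated reasoning model performs this decomposition internally without being asked. The wording and model name are assumptions.

```python
# A sketch that approximates step-by-step reasoning by requesting the
# intermediate steps explicitly. A dedicated reasoning model performs
# this kind of decomposition internally; wording is illustrative.
from openai import OpenAI

client = OpenAI()

plan_prompt = (
    "Before answering, list the intermediate steps you will take to merge "
    "the three sources into a single comparison table. Then carry out "
    "those steps and present the table."
)
response = client.chat.completions.create(
    model="gpt-4o",  # a reasoning model would produce this plan on its own
    messages=[{"role": "user", "content": plan_prompt}],
)
print(response.choices[0].message.content)
```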

12. Checking Source Consistency

Now that we have the data in a single table, we should check whether our sources are consistent. Our sources come from different years: the infographic is from 2021, the website from 2024, and the paper from 2025,

13. Checking Source Consistency

so we could ask the model to check data consistency over time. The model reports that the three documents are temporally consistent and complement one another.
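As a sketch, the consistency check could be phrased as a follow-up turn in the same conversation; the source dates come from the video, while the wording and history handling are illustrative.

```python
# A sketch of the consistency check as a follow-up turn. The source dates
# come from the video; the wording and history handling are illustrative.
consistency_prompt = (
    "Our sources are from different years: the infographic from 2021, the "
    "Trade.gov article from 2024, and the academic paper from 2025. Check "
    "whether the data in the table is consistent over time and flag any "
    "contradictions between the sources."
)
previous_messages = []  # assumed to hold the earlier turns of the chat
messages = previous_messages + [
    {"role": "user", "content": consistency_prompt},
]
```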

14. Response Quality

As consultants, we should not trust AI systems blindly - checking the correctness and quality of their responses is essential. For example, we could ask the model to re-read the scientific paper and verify the data it reported. This mitigates so-called "AI hallucinations."
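A verification pass might be phrased like the following sketch; the exact instruction is an assumption, but the pattern is simply a targeted re-read request.

```python
# A sketch of a verification pass; the instruction wording is an
# assumption. Asking for quoted passages makes each number checkable.
verification_prompt = (
    "Re-read the scientific paper and verify every number you attributed "
    "to it in the comparison table. For each value, quote the passage it "
    "came from, or mark it as 'not found in the source'."
)
```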

15. Hallucinations

Hallucinations happen when a model confidently generates information that is not supported by the sources. We can mitigate them by asking the model to use only the provided sources, requiring citations in outputs, and encouraging the model to be transparent when it's unsure - for example, asking it to answer "Not specified" when it can't determine the answer. We also need to verify any critical facts or numbers that will feed into our downstream analysis.
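These three tactics can be bundled into a standing instruction, for example a system message, as in the minimal sketch below; the wording and the example question are illustrative.

```python
# A sketch bundling the three mitigation tactics into a system message.
# The wording and the example user question are illustrative.
guardrails = (
    "Use only the provided sources; do not draw on outside knowledge. "
    "Cite the source for every claim you make. If the sources do not "
    "specify an answer, reply 'Not specified' instead of guessing."
)
messages = [
    {"role": "system", "content": guardrails},
    {"role": "user", "content": "Summarize the subsidy figures by year."},
]
```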

16. Let's practice!

Time for you to begin synthesizing information across multiple sources!
