We begin with a problem and a question, then integrate the advice of multiple experts to determine the best approach to answering that question. These are often open-ended problems that require us to build processes for knowledge aggregation and engineering that advance both general and domain-specific AI.
AI and machine learning present new opportunities to keep up with evidence production. We train models to answer questions about a body of evidence, and we always work with our expert community to validate the results.
The approach we use to train our models mirrors two human-centric processes that matter to decision-makers: foresight analysis and evidence synthesis. Both methods help us surface relevant insight from dense material and assess its impact. We can then structure those insights in a way that demonstrates cross-disciplinary relevance.
Using natural language processing and our content filters, we work with clients to curate data from many sources, including scientific research, patents, “grey” literature, news, social media, and internal databases.
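To make the curation step concrete, here is a minimal sketch of the kind of content filter this describes. All names, source labels, and keywords are illustrative assumptions, not the actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str  # e.g. "journal", "patent", "grey", "news" (illustrative labels)
    text: str

# Hypothetical whitelist of source types the client wants included
ALLOWED_SOURCES = {"journal", "patent", "grey", "news", "social", "internal"}

def curate(docs, keywords):
    """Keep documents from allowed sources that mention any keyword."""
    kws = [k.lower() for k in keywords]
    return [
        d for d in docs
        if d.source in ALLOWED_SOURCES
        and any(k in d.text.lower() for k in kws)
    ]

corpus = [
    Document("journal", "Heat stress outcomes in urban populations"),
    Document("blog", "Unrelated marketing post"),
    Document("patent", "Cooling apparatus for dense urban housing"),
]
relevant = curate(corpus, ["heat stress", "cooling"])
```

A production system would replace the keyword test with learned relevance models, but the shape of the step is the same: many heterogeneous sources in, a filtered corpus out.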
We use state-of-the-art transformer models designed to operate at different levels of granularity. At one level, we can surface relevant insights from millions of abstracts and classify them in non-traditional ways, such as by outcomes across different populations, which supports evidence-gap screening. At another, we can extract very specific information, such as climate projections drawn from hundreds of individual studies, and format it for downstream analysis.
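The fine-grained extraction step above can be sketched as follows. This is a simplified stand-in using a regular expression rather than a transformer; the pattern, field names, and sample text are assumptions for illustration only:

```python
import re

# Illustrative pattern: temperature-projection statements such as
# "3.2 C by 2100" or "1.5 °C by 2050". Real study language is far
# more varied, which is why a learned extractor is used in practice.
PROJECTION = re.compile(r"(\d+(?:\.\d+)?)\s*°?C\s+by\s+(\d{4})")

def extract_projections(text):
    """Return (degrees_celsius, target_year) pairs found in the text."""
    return [(float(deg), int(year)) for deg, year in PROJECTION.findall(text)]

abstract = ("Under a high-emissions scenario the model projects warming of "
            "3.2 C by 2100, compared with 1.5 C by 2050 under mitigation.")
rows = extract_projections(abstract)
# rows is a list of structured records ready to be written out for analysis
```

The point of the sketch is the output shape: unstructured study text in, uniform structured records out, so results from hundreds of studies can be compared in one table.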