Reflections on Responsible AI

Mar 6, 2025

Jon Reifschneider, Co-founder & CEO


This week the Inquisite team attended the Responsible AI Symposium at Duke University, where Pramod and I are on the faculty. It was a thought-provoking experience to listen to the diverse perspectives on what "Responsible AI" means and how various individuals and organizations are thinking about it.


But we believe that the real value of this kind of event lies in reflecting on the main takeaways and then translating them into action. So we spent some time reflecting on how we are doing as an organization in living our commitment to responsible AI, and on which of the best practices we heard we can implement in our products. We also strongly believe in the open sharing of knowledge and practices, so we've decided to make our reflections and ideas public through this post.


A starting point for our reflection was defining responsible AI in a way that is specific and measurable. The definition we settled on has four key elements:

  • Human-centric - built to augment and support humans, not to devalue or replace them

  • Safe - causes no harm, either directly through actions or through words that are toxic, hurtful or biased

  • Secure - protects the privacy of user data and uses it for no purpose other than each user's direct benefit, with their explicit approval

  • Trustworthy - minimizes the possibility of intentionally or unintentionally providing false information


As we evaluated our current state and improvement opportunities, our ideas mapped to two levels:

  1. The organizational level: how we build and run our company and our products.


  2. The product level: how we design and deliver value for our users and enable them to be responsible with their use of AI.

Responsibility at an organizational level

Our commitment to Responsible AI at an organizational level is reflected in two elements of our foundation as an organization:

  1. We are a mission-oriented organization, and our mission is so central to our existence that it is encoded into our legal structure as a Public Benefit Corporation, so that we are (and always will be) required to place equal emphasis on our societal mission and on earning money as a for-profit company.


  2. We believe in building AI that supports humans in their work, rather than trying to replace them. This is the core philosophy behind the design of the tools we build.


Societal benefit at the core

When we first set up Inquisite, we knew that we wanted it to be a financially self-sustaining organization, but also that we wanted to structure it in a way that would keep its societal mission front and center in decision-making. This led us to establish Inquisite as a Public Benefit Corporation, a legal structure that mandates balancing public benefit and profit in making decisions. By setting the company up this way, we ensure that no matter where things go in the future or who is involved, the company will always be required to place equal emphasis on its societal mission of advancing science through AI and on profit.

Human-in-the-loop

There are other organizations working to automate some or all of the scientific process. While we may share the same goal of accelerating scientific research, our approach is different - we strongly believe in building tools that help human scientists do their best work, rather than replacing them in the process. In the design of our tools, we spend a great deal of time thinking through the right balance of human control and AI efficiency. This balance is an evolving one, and we may not always get it exactly right, but we will keep focusing on it to make sure that our human users stay in control.

Responsibility at a product level

The experience of our users in working with our platform is where the rubber meets the road in terms of responsible AI. After reflecting on how we are doing here, we feel that we have implemented some good practices and we have ideas of other things we will do to improve.

Our current approach to responsible AI products

In building Inquisite, we put in place a number of design principles that we feel are best practices for the responsible use of AI:

  1. We always ground AI-generated text by citing sources. We do not give large language models (LLMs) free rein to generate text on their own from their training data - we use LLMs only to summarize, synthesize and extract information from trusted content sources.


  2. We carefully screen any sources used. When possible, we use the research literature to support text. When a user chooses to include internet sources, we apply a curation and scoring approach developed in-house to filter the internet and surface the most trusted sources (a generic illustration of this kind of scoring follows this list).


  3. We use only the best-performing LLMs (even for our free tier). If our users cannot trust the content they find in Inquisite, they will not use it. Using models that are smaller or cheaper than the current state of the art means a higher hallucination risk and a greater likelihood of presenting factually incorrect information. We do not believe in doing this, regardless of whether a user is paying or not. We will always use the best models for the job, regardless of cost or provider. We use several state-of-the-art models from various companies in our pipeline today, and we continually evaluate and adjust as new models are released.
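Our in-house curation and scoring method is not something we can publish in full, so the following is a generic illustration of the idea rather than our actual implementation: each candidate source is scored on a few simple trust signals, and low-scoring sources are filtered out. All names, weights and signals below (DOMAIN_WEIGHTS, score_source, the per-year freshness decay) are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative trust weights by top-level domain; a real pipeline would
# use richer signals (publisher reputation, citation data, etc.).
DOMAIN_WEIGHTS = {"gov": 1.0, "edu": 0.9, "org": 0.6, "com": 0.4}

@dataclass
class Source:
    domain: str          # e.g. "nih.gov"
    published: date
    peer_reviewed: bool

def score_source(src: Source, today: date) -> float:
    """Blend simple trust signals into a single score in [0, 1]."""
    tld = src.domain.rsplit(".", 1)[-1]
    trust = DOMAIN_WEIGHTS.get(tld, 0.2)
    age_years = (today - src.published).days / 365.0
    freshness = max(0.0, 1.0 - 0.1 * age_years)  # assume trust decays with age
    bonus = 0.2 if src.peer_reviewed else 0.0
    return min(1.0, 0.6 * trust + 0.2 * freshness + bonus)

def filter_sources(sources: list[Source], today: date,
                   threshold: float = 0.5) -> list[Source]:
    """Keep only sources scoring at or above the threshold, highest first."""
    keep = [s for s in sources if score_source(s, today) >= threshold]
    return sorted(keep, key=lambda s: score_source(s, today), reverse=True)
```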


We believe that responsible use of LLMs includes grounding generated content in trusted sources and enabling users to refer to those sources easily. Last week we took a major additional step in the product: we now cite not only the overall source for each statement, but also the specific sentence(s) the model drew on in generating the text. This allows our users to quickly verify important facts.
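As a minimal sketch of how this kind of sentence-level grounding can work (the function names are illustrative, not from our codebase, and we assume a prompt that instructs the model to append an [S#] tag to every statement): each source sentence gets an ID before being passed to the model, and any generated sentence whose tags do not resolve to a known source sentence can be withheld or regenerated.

```python
import re

def number_sentences(passages: dict[str, list[str]]) -> tuple[str, dict[str, str]]:
    """Tag every source sentence with an ID like [S3] so the model can cite it."""
    lines, index, i = [], {}, 1
    for source, sentences in passages.items():
        for sent in sentences:
            sid = f"S{i}"
            index[sid] = f"{source}: {sent}"   # map ID back to its source sentence
            lines.append(f"[{sid}] {sent}")    # context shown to the model
            i += 1
    return "\n".join(lines), index

def unsupported_sentences(answer: str, index: dict[str, str]) -> list[str]:
    """Return generated sentences that cite no known source sentence."""
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", answer.strip()):
        cited = re.findall(r"\[(S\d+)\]", sent)
        if not cited or any(sid not in index for sid in cited):
            flagged.append(sent)
    return flagged
```

With an index like this in hand, a "verify" action in the interface can jump straight from a tagged statement to the exact source sentence behind it.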

How we can keep improving

In our reflection we've also identified areas where we can do a better job. A few of the key ideas for improvement are:

  1. Make it clear when the user is being shown AI-generated text. In some places we use icons or text to let the user know that text is LLM-generated, but we do not do this consistently throughout the product. Transparency on when content is AI-generated is critical for trust.


  2. Greater transparency into the models used and our search process. We currently provide little insight into what models we are using or how we are deploying them within our search pipeline. Providing greater transparency into this will also help increase trust.


  3. Strengthen our guardrails against harmful responses to adversarial prompts. Inquisite's built-in guardrails greatly reduce the ability of our LLMs to provide harmful or biased responses to adversarial prompts from users. This is a quickly evolving area and we will invest more effort in strengthening our guardrails going forward.
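As a simplified, generic sketch of how such guardrails can be layered (this is not our actual implementation; the patterns and threshold below are illustrative, and a production system would rely on trained safety classifiers rather than keyword lists): adversarial prompts are screened on the way in, and responses are moderated on the way out.

```python
import re

# Illustrative patterns only; real deployments layer trained safety
# classifiers on top of (or instead of) checks like these.
INJECTION_PATTERNS = [
    r"ignore (all|any|the|previous) .*instructions",
    r"reveal .*system prompt",
]

def looks_adversarial(prompt: str) -> bool:
    """Cheap first-pass screen for prompt-injection attempts."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def guarded_reply(response: str, toxicity: float, threshold: float = 0.5) -> str:
    """Withhold a response that a (hypothetical) toxicity model has flagged."""
    if toxicity >= threshold:
        return "This response was withheld by our safety filters."
    return response
```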

Building trust through responsible AI

We believe that implementing Responsible AI best practices is not just a good thing to do, but makes good business sense as well. Increased explainability, accuracy and safety lead to higher user trust, which leads to increased usage.


We aspire to lead the way in our area in our commitment to and implementation of responsible AI best practices. Defining exactly what those best practices are, at a time when AI is evolving so quickly, is a significant challenge in itself. We hope to work with other organizations in industry, academia and government to accelerate the definition of both required standards and recommended practices, ensuring that AI tools serve the needs of users rather than the companies who build them.

