Elicit's core bets: Be systematic, transparent, and unbounded

Eventually, there will be many AI assistants for chat, question-answering, search, web browsing, writing, and general computer use. They'll generally aim to be trustworthy, easy-to-use tools for important questions and decisions. What makes Elicit different?

The core bets that distinguish Elicit are that its reasoning is built to be

  • systematic,
  • transparent, and
  • unbounded.

Taken together, these bets will make Elicit more accurate, trustworthy, and high-impact than alternatives.

Systematic

Elicit’s reasoning is systematic if it follows a deliberate step-by-step plan (whether generated ahead of time or on the fly) that arrives at good answers and decisions by construction, not by accident.

A plan is good by construction if it is (among other things):

  • Made out of valid reasoning steps - think logical validity, Bayesian updating, good epistemics
  • Comprehensive - looking at all relevant sources, all plausible arguments, etc.
  • Based on crisp problem decompositions that “carve nature at its joints”

In many cases, good reasoning is defined by its systematicity. Consider long-range forecasting and the policy decisions informed by such forecasts: because the outcomes won't be observable for years, the quality of a forecast or decision can only be judged by the assumptions, evidence, and reasoning used to produce it.

Systematicity helps with accuracy and trustworthiness because it leads to good results more reliably than ad-hoc processes.
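
Here is one way this could look in code - a minimal Python sketch of reasoning as an explicit plan. All names (Step, execute_plan, and the placeholder decompose/research/aggregate helpers) are hypothetical illustrations, not Elicit's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

# Placeholder "model calls" standing in for real subquestion generation,
# retrieval, and aggregation. All names here are hypothetical.
def decompose(question: str) -> list[str]:
    return [f"{question} (sub-question {i})" for i in (1, 2)]

def research(subquestion: str) -> str:
    return f"evidence for: {subquestion}"

def aggregate(evidence: list[str]) -> str:
    return " + ".join(evidence)

@dataclass
class Step:
    name: str                    # what the step does
    rationale: str               # why the step is valid / what it contributes
    run: Callable[[dict], dict]  # state in, state update out

def execute_plan(question: str, plan: list[Step]) -> dict:
    """Run a deliberate step-by-step plan, accumulating state.

    Every contribution to the final answer enters through a named,
    justified step, so the result is good by construction rather
    than by accident.
    """
    state: dict = {"question": question}
    for step in plan:
        state.update(step.run(state))
    return state

plan = [
    Step("decompose", "crisp decomposition that carves the problem at its joints",
         lambda s: {"subquestions": decompose(s["question"])}),
    Step("research", "comprehensive: consult every relevant source",
         lambda s: {"evidence": [research(q) for q in s["subquestions"]]}),
    Step("aggregate", "combine sub-answers with a valid aggregation rule",
         lambda s: {"answer": aggregate(s["evidence"])}),
]

result = execute_plan("Will X happen by 2030?", plan)
print(result["answer"])
```

Because every contribution to the answer enters through a named, justified step, the plan can be checked for validity and comprehensiveness before it ever runs.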

Transparent

Elicit’s reasoning is transparent if users can audit it and understand how Elicit arrived at an answer. Transparency helps with accuracy (spot errors) and trustworthiness (know why and how it’s accurate).

For true transparency, it's not enough to show words generated by the model that merely correlate with the reasoning - the reasoning shown needs to be a faithful representation of the causal structure of the underlying model behavior.
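
One way to get this faithfulness by construction, continuing the hypothetical sketch above, is to make the displayed trace the computation itself: each step can only read what earlier steps recorded, and the answer is read off the trace, so the trace cannot diverge from the causal structure:

```python
from dataclasses import dataclass

@dataclass
class TraceEntry:
    step: str        # which step ran
    rationale: str   # why running it was justified
    inputs: dict     # exactly what the step could see
    output: dict     # exactly what the step produced

def run_traced(question: str, plan: list[Step]) -> tuple[dict, list[TraceEntry]]:
    """Execute a plan so that the trace IS the computation, not a narration.

    Each step reads only the recorded outputs of earlier steps, and the final
    state is assembled from trace entries alone, so the displayed reasoning
    is a faithful record of the causal structure by construction.
    """
    trace: list[TraceEntry] = []
    state = {"question": question}
    for step in plan:
        inputs = dict(state)              # snapshot: all the step can see
        update = step.run(inputs)
        trace.append(TraceEntry(step.name, step.rationale, inputs, update))
        state.update(update)              # next step sees only recorded outputs
    return state, trace

state, trace = run_traced("Will X happen by 2030?", plan)
for entry in trace:
    print(entry.step, "->", entry.output)  # auditable, step by step
```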

Systematicity and transparency together are often enough for trustworthiness.

Unbounded

Elicit’s reasoning is unbounded if Elicit can always keep improving an answer or its confidence in an answer - it isn’t limited to 80/20 answers. This helps with accuracy, trustworthiness, and impact - Elicit isn’t just providing one small input into a decision, but can take on an increasingly large share of it.

Boundedness is a key limitation of all existing AI tools - unlike a human assistant, you can’t tell them to go off, think longer, do more research, and come back with a better answer.
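
A minimal sketch of such an anytime loop, assuming hypothetical improve and confidence helpers (a real system would call models and run research where these stubs are):

```python
import time

# Hypothetical stand-ins: a real system would call models, run searches,
# and gather new evidence here instead of appending a marker.
def improve(question: str, draft: str | None) -> str:
    return (draft or question) + " [refined]"

def confidence(answer: str) -> float:
    return min(0.99, answer.count("[refined]") * 0.2)

def refine_until(question: str, budget_seconds: float, target: float = 0.95) -> str:
    """Anytime loop: given more budget, do more work and return a better answer."""
    answer = improve(question, None)                    # first draft
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline and confidence(answer) < target:
        answer = improve(question, answer)              # think longer, research more
    return answer

print(refine_until("Will X happen by 2030?", budget_seconds=1.0))
```

The loop stops on budget or confidence, but crucially the budget is a dial: given more time, the same loop simply does more work and returns a better answer.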