Elicit’s mission is to scale up good reasoning

Now that Elicit is a standalone company, let’s review our mission.

In brief:

  • Our mission is to scale up good reasoning.
  • Good reasoning is reasoning that reliably arrives at true beliefs and good decisions.
  • Good reasoning is rare but essential for governments, companies, and research.
  • Advanced AI systems are an opportunity to radically scale up good reasoning.
  • Getting this right is crucial for guiding the transition to a society that runs on machine thinking.

What is good reasoning?

We want people, organizations, and machines to arrive at:

  • True beliefs
  • Good decisions

Good reasoning is reasoning that reliably leads to these outcomes.

The status quo

Right now, good reasoning is rare. Bad reasoning is everywhere:

  • Governments allocate resources for pandemics and similar risks based on biases, political pressures, or vested interests
  • Courts judge cases based on selective, fallacious, or unfair use of evidence, arguments, and law
  • Companies decide their strategies based on overconfidence, underestimation, or complacency
  • Investors allocate capital based on herd mentality, hype, or fear
  • Researchers do work that is unimpactful or harmful, and manipulate or hide data

We know that better reasoning is possible. We already see glimpses of it in rare cases:

  • GiveWell’s and OpenPhil’s reports that evaluate the evidence for and cost-effectiveness of charitable interventions
  • Cochrane’s systematic reviews that aggregate the medical literature and assess what the findings show
  • Deep dives by Elizabeth Van Nostrand (on various medical questions), Zvi Mowshowitz (on Covid), and Scott Alexander and Holden Karnofsky (on miscellaneous topics), who dig up the raw data, build mental models, and figure out what’s actually true
  • Superforecasters, who start with outside-view base rates and then apply corrections based on the particulars of the situation (sketched in code after this list)
  • The IPCC’s synthesis of the scientific consensus on climate change and its impacts, and its policy recommendations
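
To make the superforecasting recipe concrete, here is a minimal sketch in Python of a single forecast: start from an outside-view base rate, then fold in situation-specific evidence via Bayes’ rule. The base rate and likelihood ratios below are hypothetical, purely for illustration.

    # A toy Bayesian update: start from an outside-view base rate,
    # then fold in situation-specific evidence as likelihood ratios.

    def update(prior: float, likelihood_ratio: float) -> float:
        """Apply one piece of evidence to a probability.

        likelihood_ratio = P(evidence | event) / P(evidence | no event)
        """
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # Outside view: suppose events in this reference class resolve
    # "yes" about 30% of the time (hypothetical base rate).
    p = 0.30

    # Inside view: two hypothetical observations, one favoring "yes"
    # (LR = 2.0), one cutting against it (LR = 0.7).
    for lr in (2.0, 0.7):
        p = update(p, lr)

    print(f"forecast: {p:.0%}")  # ~38%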

The better world

We want to live in a world where good reasoning is applied much more widely:

  • Researchers do work that addresses important and neglected problems, with high standards for evidence and conclusions
  • Governments allocate resources for pandemics, AI risk, and other risks based on rigorous, evidence-based analyses
  • Courts judge cases based on careful, logical, and fair evaluation of evidence, arguments, and law
  • Companies decide their strategies based on accurate, reasoned assessment of customers, competitors, and the market
  • Investors allocate capital based on financial analysis, growth potential, risk profile, and investment goals and strategies

Good reasoning should be the default, not the exception.

Scaling up good reasoning with AI

We think it’s likely that, over the next 2–20 years, AI systems will have the capacity to do human-like reasoning at superhuman quality and quantity. This could lead to a radical shift in our society, with most thinking happening in machines instead of human brains.

This presents both an opportunity and a need:

  • Opportunity: Automation could make it extremely cheap to apply good reasoning, enabling a massive scale-up.
  • Need: It’s not obvious that all this automated thinking will be deployed well by default. It’s easier to train AI systems to generate outputs that look good (e.g. persuasive, impressive, entertaining, helpful-seeming) than outputs that are well-reasoned and robustly helpful. This is especially bad because good decision-making is unusually important during the transition to a society that runs on machine thinking.

Our mission is to direct machine thinking towards supporting true beliefs and beneficial decisions, seizing the opportunity and addressing the need. Join us!