
SynthBoard Decision Intelligence Platform
© 2026 SynthBoard AI



Decision Intelligence Glossary

Key terms and definitions for AI-powered decision intelligence, multi-agent AI systems, and structured strategic analysis.

Terms covered: Decision Intelligence, AI-Powered Decision Intelligence, Multi-Agent AI, AI Sycophancy, Consensus Scoring, Adversarial Analysis, Structured Disagreement, Synth, OCEAN Model (Big Five), Cognitive Framework, Anti-Sycophancy Protocol, Minority Opinion, Session Mode, Multi-LLM Architecture, Cognitive Diversity, Claim Extraction, Decision Fatigue, Red Teaming, Pre-Mortem Analysis, Ensemble Intelligence, Prompt Framing Effect, Agent Orchestration, Confidence Calibration, Strategic Fork, Boardroom Session, Synthesis Layer, Decision Quality, Reasoning Framework, Position Tracking, Scenario Planning, First Principles Thinking, Bayesian Reasoning, Game Theory Analysis, Systems Thinking, Model Fingerprint, Intellectual Monoculture, Conviction Score, Decision Audit Trail, Cognitive Load, Divergent Thinking, Convergent Synthesis

Decision Intelligence

A discipline that applies data science, social science, and artificial intelligence to systematically improve the quality of decisions. Popularized by Cassie Kozyrkov at Google, decision intelligence bridges the gap between having data and making good decisions by structuring how options, risks, and tradeoffs are analyzed. Unlike business intelligence (which tells you what happened), decision intelligence helps you decide what to do next.

Complete guide

AI-Powered Decision Intelligence

The application of AI systems — particularly multi-agent architectures, large language models, and consensus mechanisms — to the decision intelligence discipline. AI-powered decision intelligence uses multiple AI agents with competing viewpoints to analyze strategic questions from diverse perspectives, producing synthesized recommendations with confidence scores rather than single-perspective answers.

What is AI Decision Intelligence?

Multi-Agent AI

An AI architecture that uses multiple specialized agents — each with distinct expertise, reasoning frameworks, and objectives — to analyze problems from different perspectives rather than relying on a single model. In decision intelligence, multi-agent systems create cognitive diversity that catches blind spots, biases, and risks that any single model would miss. Research in collective intelligence consistently shows that diverse perspectives outperform uniform expertise.

Multi-LLM architecture explained

AI Sycophancy

The tendency of AI language models to agree with users rather than challenge them, caused by reinforcement learning from human feedback (RLHF) that optimizes for user satisfaction. Sycophantic AI tells you what you want to hear instead of what you need to hear. Research from Anthropic has shown that models will change correct answers to incorrect ones when users express doubt. For strategic decisions, sycophancy produces confirmation bias with AI-generated confidence — the worst possible combination.

Why sycophancy kills decisions

Consensus Scoring

A structured method for quantifying the degree of agreement and disagreement across multiple AI agents after they independently analyze the same decision. Rather than forcing agents to a single answer, consensus scoring maps clusters of agreement, points of genuine conflict, and the confidence levels behind each stance. High consensus with high confidence is a strong signal. Low consensus with high confidence indicates a genuine strategic fork that needs empirical validation.

How consensus scoring works
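As a concrete illustration, a minimal consensus score can be computed as the confidence-weighted share of the majority stance. The sketch below is illustrative only; the stance labels and the weighting formula are assumptions, not SynthBoard's published method.

```python
from collections import Counter

def consensus_score(stances):
    """Confidence-weighted share of the majority position.

    `stances` is a list of (position, confidence) pairs, one per agent,
    with confidence in 0..1. Returns (majority position, score)."""
    total = sum(conf for _, conf in stances)
    weights = Counter()
    for position, conf in stances:
        weights[position] += conf
    majority, majority_weight = weights.most_common(1)[0]
    return majority, majority_weight / total

# Four agents weigh in on the same claim:
position, score = consensus_score([
    ("support", 0.9), ("support", 0.8), ("support", 0.7), ("oppose", 0.6),
])
# High consensus behind "support", but the dissent stays visible.
```

Note that the dissenting stance is never discarded; it remains in the input and can be surfaced as a minority opinion alongside the score.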

Adversarial Analysis

The practice of deliberately seeking counter-arguments, risks, and flaws in a proposed plan or thesis. In decision intelligence, adversarial analysis is structurally embedded through agents with contrarian roles (e.g., Devil's Advocate, Skeptic) who are architecturally motivated to challenge the prevailing view. Devil's advocate protocols improve decision quality by 18-24% in controlled studies.

Structured Disagreement

A decision-making approach where disagreement is deliberately engineered and preserved as valuable signal rather than suppressed. Research shows that premature consensus is the single biggest predictor of group decision failure. Structured disagreement assigns competing roles, uses different reasoning frameworks, and tracks position changes — ensuring that conflicting perspectives are examined rather than smoothed away.

Synth

SynthBoard's term for an AI advisor. Each Synth has a distinct personality built on the OCEAN Big Five model, a cognitive reasoning framework (e.g., First Principles, Bayesian, Game Theory), and 7 dimensions of behavioral DNA. Synths are not just differently prompted — they have genuine cognitive diversity engineered into their architecture, with built-in conviction to hold their ground rather than agree with other agents.

Meet the 24 Synths

OCEAN Model (Big Five)

The Big Five personality trait model used in psychology: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. In SynthBoard, each Synth has calibrated OCEAN scores that shape their behavior. A high-Openness agent explores creative alternatives; a high-Conscientiousness agent demands structured evidence; a low-Agreeableness agent naturally pushes back on consensus. This produces genuine cognitive diversity rather than cosmetic variation.
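One way to picture how calibrated trait scores shape behavior is as a small profile object. The trait values and the pushback heuristic below are invented for illustration; they are not SynthBoard's actual calibration.

```python
from dataclasses import dataclass

@dataclass
class OceanProfile:
    """Big Five traits on a 0..1 scale (illustrative values only)."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def pushback_tendency(self) -> float:
        # Low agreeableness -> more willingness to challenge consensus.
        return 1.0 - self.agreeableness

# A skeptic-style profile: curious, rigorous, and hard to win over.
skeptic = OceanProfile(openness=0.6, conscientiousness=0.8,
                       extraversion=0.4, agreeableness=0.2, neuroticism=0.3)
```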

Cognitive Framework

A structured reasoning approach assigned to each AI agent. SynthBoard uses six frameworks: First Principles (break problems down to fundamental truths), Bayesian Reasoning (update beliefs based on evidence), Game Theory (model strategic interactions), Adversarial Thinking (seek flaws and counter-arguments), Scenario Planning (model multiple futures), and Systems Thinking (analyze interconnections and feedback loops). Each framework produces qualitatively different analysis of the same question.

Anti-Sycophancy Protocol

Structural safeguards built into multi-agent AI systems that prevent agents from defaulting to agreement bias. These protocols ensure that each agent maintains genuine conviction in its position rather than drifting toward consensus without justification. The goal is to preserve the structured disagreement that makes multi-agent analysis valuable — because in strategic decision-making, genuine challenge produces better outcomes than comfortable agreement.

Minority Opinion

A dissenting view from one or more agents that contradicts the majority consensus. In SynthBoard, minority opinions are explicitly surfaced and preserved with their full reasoning chain — not buried or suppressed. Research on group decision-making consistently shows that minority viewpoints, even when wrong, improve overall decision quality by forcing the majority to examine and articulate their reasoning more carefully.

Session Mode

A preset configuration that shapes how Synths behave during a SynthBoard session. The 10 session modes include: Strategic Analysis (balanced multi-perspective analysis), Devil's Advocate (maximizes contrarian analysis), Red Team (focuses on vulnerabilities and attack vectors), Innovation Lab (emphasizes creative and unconventional approaches), and more. Each mode adjusts agent behavior, synthesis strategy, and the types of claims that get prioritized.

See all session modes

Multi-LLM Architecture

An AI system architecture that uses models from multiple providers (e.g., OpenAI, Anthropic, Google) rather than relying on a single model. Each model has a distinct fingerprint of strengths, weaknesses, and biases from its training data. Multi-LLM architecture creates model diversity that prevents intellectual monoculture — the AI equivalent of planting multiple crop strains to prevent a single disease from wiping out the entire harvest.

Multi-LLM deep dive

Cognitive Diversity

Variation in how individuals (or AI agents) approach problems, process information, and reach conclusions. Research by Scott Page and others shows that cognitive diversity is a stronger predictor of group decision quality than individual expertise. In multi-agent AI, cognitive diversity is engineered through different personality traits, reasoning frameworks, and model providers — ensuring agents genuinely think differently rather than generating surface-level variation.

Claim Extraction

The process of identifying discrete, structured assertions from each agent's analysis. A claim is a specific, evaluable statement like "market timing favors a delayed raise" — not vague advice like "consider your options carefully." Claim extraction transforms narrative analysis into structured components that can be compared across agents, enabling consensus scoring and systematic identification of agreement and disagreement.
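The structured output of claim extraction can be pictured as one small record per assertion. The field names below are hypothetical, not SynthBoard's schema:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    agent: str         # which advisor asserted it
    statement: str     # a discrete, evaluable assertion
    stance: str        # "support" | "oppose" | "neutral"
    confidence: float  # 0..1

# Narrative analysis becomes comparable, structured components:
claim = Claim(
    agent="The Skeptic",
    statement="market timing favors a delayed raise",
    stance="support",
    confidence=0.7,
)
```

Once every agent's analysis is reduced to records like this, agreement and disagreement can be computed mechanically instead of eyeballed.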

Decision Fatigue

The deterioration of decision quality that occurs after a prolonged period of decision-making. As cognitive resources deplete, individuals default to status-quo choices, impulsive actions, or outright avoidance. Research by Roy Baumeister demonstrated that willpower and decision-making draw from the same finite mental reservoir, meaning the tenth decision of the day is measurably worse than the first. In decision intelligence, offloading analytical heavy-lifting to structured AI debate preserves executive cognitive capacity for the judgment calls that truly require human intuition.

Red Teaming

A structured adversarial practice in which a dedicated team deliberately attacks a plan, strategy, or system to expose vulnerabilities before real-world adversaries do. Originating in military wargaming, red teaming has become standard practice in cybersecurity, corporate strategy, and AI safety. Effective red teams operate with full independence and explicit permission to challenge assumptions. In SynthBoard, experts like The Skeptic and The Devil's Advocate serve as permanent red team members, ensuring every strategic recommendation has been rigorously challenged against its strongest counter-arguments.

Meet the Synths

Pre-Mortem Analysis

A decision-making technique developed by psychologist Gary Klein in which a team imagines that a project or decision has already failed, then works backward to identify the most likely causes. Unlike a post-mortem (which examines actual failures), a pre-mortem leverages prospective hindsight to surface risks that optimism bias typically obscures. Studies show that pre-mortems increase the ability to identify reasons for future outcomes by 30%. This technique is especially powerful in multi-expert AI systems where different experts can independently generate failure scenarios from their unique cognitive frameworks.

Ensemble Intelligence

The collective intelligence that emerges when multiple AI experts with diverse training data, reasoning frameworks, and cognitive profiles analyze the same problem independently before their outputs are synthesized. Inspired by ensemble methods in machine learning — where combining multiple weak models produces a strong one — ensemble intelligence in decision-making produces recommendations that are more robust, nuanced, and calibrated than any single expert could achieve alone. The key requirement is genuine diversity: each expert must differ in how they think, not just what they say.

Multi-LLM architecture explained

Prompt Framing Effect

The phenomenon whereby the phrasing, structure, and implicit assumptions embedded in a question systematically bias the responses generated by large language models. Just as framing effects in behavioral economics cause humans to make different choices depending on how options are presented, AI models are highly sensitive to prompt framing — a question framed as "what are the risks?" will produce a fundamentally different analysis than "what are the opportunities?" even when the underlying situation is identical. Decision intelligence platforms mitigate this by assembling experts with different cognitive frames to analyze the same core question.

Agent Orchestration

The coordination layer that manages how multiple AI experts are assembled, sequenced, and synthesized during a structured analysis session. Orchestration determines which experts participate, what reasoning frameworks they apply, how they interact across rounds, and how their outputs are aggregated into actionable recommendations. Effective orchestration balances diversity of perspective with coherence of output — ensuring experts genuinely challenge each other while producing a synthesized result that decision-makers can act on.

See how SynthBoard orchestrates
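A toy orchestration loop shows the shape of the idea: agents answer independently in round one, then see peers' latest claims in later rounds. Everything here (the interfaces, the stub agents) is a sketch, not the real engine:

```python
def run_session(question, agents, rounds=2):
    """Run a multi-round session. `agents` maps a name to a callable
    (question, peer_claims) -> claim string. Each round, agents see
    only the previous round's claims, keeping round one independent."""
    transcript = {name: [] for name in agents}
    for _ in range(rounds):
        peers = {name: claims[-1] for name, claims in transcript.items() if claims}
        for name, agent in agents.items():
            transcript[name].append(agent(question, peers))
    return transcript

# Two stub agents: an optimist, and a skeptic who reacts to peer support.
agents = {
    "optimist": lambda q, peers: "support",
    "skeptic": lambda q, peers: "oppose" if "support" in peers.values() else "undecided",
}
result = run_session("Enter the new market this year?", agents)
```

Because round one sees no peers, each agent's opening position is genuinely independent; interaction only begins in round two.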

Confidence Calibration

The process of ensuring that an AI system's stated confidence in its conclusions accurately reflects the actual probability of those conclusions being correct. A well-calibrated model that claims 80% confidence should be right approximately 80% of the time. Most large language models are poorly calibrated — they express high confidence even when they are wrong. In multi-expert decision intelligence, confidence calibration is improved through independent thinking: when multiple experts with different biases converge on the same conclusion with high conviction, the calibration signal is significantly stronger than any single model's self-assessment.
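Calibration can be measured directly. The sketch below computes a simplified expected calibration error (ECE): the weighted gap between stated confidence and observed accuracy per confidence bucket. The decile bucketing is a common simplification, not a SynthBoard-specific method.

```python
def calibration_gap(predictions):
    """Simplified expected calibration error over confidence deciles.

    `predictions` is a list of (confidence, was_correct) pairs.
    Returns 0.0 for a perfectly calibrated record."""
    buckets = {}
    for conf, correct in predictions:
        buckets.setdefault(min(int(conf * 10), 9), []).append((conf, correct))
    n, gap = len(predictions), 0.0
    for items in buckets.values():
        avg_conf = sum(c for c, _ in items) / len(items)
        accuracy = sum(1 for _, ok in items if ok) / len(items)
        gap += abs(avg_conf - accuracy) * len(items) / n
    return gap

# 90% stated confidence but only 50% accuracy -> a 0.4 calibration gap.
overconfident = calibration_gap([(0.9, True), (0.9, False)] * 5)
```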

Strategic Fork

A decision point where available paths diverge significantly and the choice is difficult or impossible to reverse once committed. Strategic forks are high-stakes by definition — they create path dependency, meaning future options are constrained by today's choice. Examples include entering a new market, choosing a technology platform, or accepting an acquisition offer. These decisions benefit most from structured multi-perspective analysis because the cost of getting them wrong is compounded over time, and the cognitive biases that plague individual decision-makers (anchoring, sunk cost, status quo) are at their most dangerous.

Boardroom Session

A structured multi-expert deliberation in SynthBoard where selected AI experts analyze a strategic question across multiple rounds, producing claims, counter-arguments, and a synthesized recommendation with consensus scoring. Each session configures expert personas, reasoning frameworks, and session modes to match the decision type. The boardroom metaphor reflects the goal: replicating the value of a diverse advisory board that challenges your thinking — without the politics, scheduling conflicts, or information asymmetry of a real boardroom.

See the Boardroom in action

Synthesis Layer

The system component that aggregates, reconciles, and distills individual expert outputs into a unified, actionable recommendation. The synthesis layer identifies clusters of agreement, maps genuine points of disagreement, extracts minority opinions worth preserving, and produces a coherent narrative that decision-makers can act on. Unlike simple averaging or majority voting, sophisticated synthesis preserves the reasoning chains behind each position, enabling users to understand not just what the recommendation is, but why each expert holds their view and where the genuine uncertainties lie.

Decision Quality

A framework for evaluating the rigor of a decision process independent of its outcome. A good decision can produce a bad outcome (and vice versa) due to factors beyond the decision-maker's control. Decision quality is assessed across dimensions including: clarity of objectives, quality of information, range of alternatives considered, soundness of reasoning, and alignment with values. Strategic Decision Group research shows that organizations that measure decision quality systematically outperform those that judge decisions solely by results.

Reasoning Framework

A structured approach to analysis that determines how an agent processes information and reaches conclusions. Common reasoning frameworks include First Principles (decompose to fundamental truths), Bayesian (update beliefs with evidence), Game Theory (model strategic interactions), Systems Thinking (analyze feedback loops and emergent behavior), and Scenario Planning (model multiple futures). Each framework produces qualitatively different insights from the same data. Assigning different frameworks to different experts is a primary mechanism for generating genuine cognitive diversity in multi-expert systems.

How Synths think differently

Position Tracking

The systematic monitoring of how each AI expert's stance on a question evolves — or holds firm — across multiple rounds of deliberation. Position tracking reveals which arguments are persuasive enough to shift an expert's view and which positions remain entrenched despite challenge. An expert that changes position in response to strong evidence demonstrates intellectual honesty; one that holds firm despite mounting counter-evidence may reveal a genuine structural disagreement worth investigating. Position tracking transforms a static set of opinions into a dynamic map of how ideas compete and evolve.
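Mechanically, position tracking reduces to recording each agent's stance per round and flagging the first round in which it changed. A minimal sketch, with illustrative stance labels:

```python
def position_shifts(history):
    """Given {agent: [stance per round]}, return {agent: round of first
    change} for every agent whose stance moved during deliberation."""
    shifts = {}
    for agent, stances in history.items():
        for rnd in range(1, len(stances)):
            if stances[rnd] != stances[rnd - 1]:
                shifts[agent] = rnd + 1  # rounds are 1-indexed
                break
    return shifts

shifts = position_shifts({
    "The Skeptic": ["oppose", "oppose", "oppose"],      # entrenched
    "The Visionary": ["support", "support", "oppose"],  # persuaded in round 3
})
```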

Scenario Planning

A strategic methodology that models multiple plausible futures rather than attempting to predict a single outcome. Developed at Royal Dutch Shell in the 1970s, scenario planning creates internally consistent narratives about how key uncertainties might resolve — best case, worst case, and several realistic alternatives. Decisions are then stress-tested against each scenario to identify strategies that are robust across multiple futures. In AI-powered decision intelligence, different experts can independently develop and advocate for different scenarios, producing richer possibility spaces than any single analyst would construct.

First Principles Thinking

A reasoning approach that breaks complex problems down to their most fundamental, independently verifiable truths, then rebuilds solutions from the ground up rather than reasoning by analogy or convention. Popularized in modern business by Elon Musk, first principles thinking originated with Aristotle and is foundational to the scientific method. This framework is particularly valuable for decisions where conventional wisdom may be outdated or where existing solutions carry accumulated assumptions that no longer hold. It forces the question: "What do we actually know to be true?"

Bayesian Reasoning

A systematic method for updating beliefs based on new evidence, derived from Bayes' theorem in probability theory. Bayesian reasoners start with a prior belief (informed by existing knowledge), observe new data, and calculate a revised posterior probability that incorporates both the prior and the evidence. This framework is particularly powerful for strategic decisions under uncertainty because it provides a principled way to incorporate new information without overreacting to noise or anchoring too heavily on initial assumptions. Bayesian experts in multi-expert systems naturally become more calibrated as evidence accumulates across rounds.
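The update itself is one line of arithmetic. In this worked example the prior and the likelihoods are invented for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) = P(E | H) P(H) / P(E), with P(E) expanded
    over the hypothesis being true or false."""
    numerator = prior * p_evidence_if_true
    evidence = numerator + (1.0 - prior) * p_evidence_if_false
    return numerator / evidence

# Prior: 30% chance a competitor launches this quarter. Evidence: a hiring
# spree, which is likely if they are launching and unlikely otherwise.
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
# Belief roughly doubles (to about 63%) without jumping to certainty.
```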

Game Theory Analysis

The application of mathematical models of strategic interaction to analyze decisions where outcomes depend not just on your actions but on the actions of competitors, partners, regulators, or other actors. Game theory identifies dominant strategies, Nash equilibria, and potential cooperation or defection dynamics. In business strategy, it illuminates questions like pricing wars, market entry timing, partnership negotiations, and competitive response. AI experts using game-theoretic frameworks explicitly model other actors' incentives and likely moves, producing analysis that accounts for competitive dynamics rather than treating decisions in isolation.
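A classic two-player pricing game makes the idea concrete. The payoff numbers below are invented; the point is that "cut" maximizes our payoff whatever the competitor does, which is exactly the kind of dynamic a game-theoretic expert surfaces:

```python
# Payoffs (ours, theirs) for a pricing standoff: rows our move, cols theirs.
PAYOFFS = {
    ("hold", "hold"): (10, 10),
    ("hold", "cut"):  (2, 12),
    ("cut", "hold"):  (12, 2),
    ("cut", "cut"):   (4, 4),
}

def best_response(their_move):
    """Our payoff-maximizing move given the competitor's move."""
    return max(("hold", "cut"), key=lambda ours: PAYOFFS[(ours, their_move)][0])
```

Since "cut" is the best response to either move, both sides cutting is the Nash equilibrium, even though mutual "hold" pays both sides more: a prisoner's-dilemma structure.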

Systems Thinking

An analytical approach that examines interconnections, feedback loops, delays, and emergent behavior within complex systems rather than analyzing components in isolation. Pioneered by Jay Forrester and popularized by Peter Senge, systems thinking reveals how interventions in one part of a system can produce unexpected consequences elsewhere. In strategic decision-making, systems thinking prevents the common failure of optimizing one metric while inadvertently degrading another. AI experts employing this framework map causal relationships and identify leverage points where small changes can produce outsized positive effects.

Model Fingerprint

The unique pattern of strengths, weaknesses, biases, and reasoning tendencies that characterize each large language model, shaped by its training data, architecture, and alignment process. GPT models, Claude models, and Gemini models each exhibit distinct fingerprints — different areas of expertise, different failure modes, and different default assumptions. In multi-LLM architectures, leveraging model fingerprint diversity is analogous to assembling a team with complementary skill sets: the goal is not to find the "best" model but to combine models whose blind spots don't overlap.

Why multi-LLM matters

Intellectual Monoculture

The risk that arises when strategic analysis relies on a single AI model's worldview, training biases, and reasoning patterns. Just as agricultural monoculture creates vulnerability to a single pathogen, intellectual monoculture means every analysis shares the same blind spots, cultural biases, and failure modes. If the model underweights tail risks, every recommendation it produces will underweight tail risks. Multi-agent and multi-LLM architectures are the primary defense against intellectual monoculture, introducing the diversity of perspective that prevents systematic blind spots from compounding into catastrophic decision failures.

Breaking AI monoculture

Conviction Score

A quantified measure of how strongly an AI agent holds its position after being exposed to counter-arguments from other agents in a multi-round deliberation. Unlike simple confidence scores (which reflect initial certainty), conviction scores are measured only after a position has been rigorously challenged: they capture the residual strength of a stance once adversarial scrutiny has been applied. High conviction that survives multiple rounds of debate is a stronger signal than high initial confidence that was never challenged. Conviction scoring helps decision-makers distinguish between positions that are genuinely robust and those that merely sounded confident.
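SynthBoard's actual formula is not given on this page, but the idea can be sketched: start from the final round's confidence and penalize positions that eroded under challenge. The penalty weight below is arbitrary, purely for illustration:

```python
def conviction_score(confidences):
    """Residual conviction after debate: the final round's confidence,
    minus half of the largest confidence drop suffered along the way.
    `confidences` holds one 0..1 value per deliberation round."""
    final = confidences[-1]
    worst_drop = max(
        confidences[i] - min(confidences[i:]) for i in range(len(confidences))
    )
    return max(0.0, final - 0.5 * worst_drop)

# A position that held firm scores above one that wobbled and recovered:
steady = conviction_score([0.8, 0.8, 0.8])   # -> 0.8
wobbly = conviction_score([0.8, 0.4, 0.8])   # -> 0.6
```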

Decision Audit Trail

A comprehensive record of the reasoning, evidence, expert positions, counter-arguments, and consensus evolution at each stage of a structured decision process. Decision audit trails serve multiple purposes: they enable post-decision review to improve future decision quality, provide accountability documentation for regulated industries, and allow stakeholders who were not present during the analysis to understand how and why a recommendation was reached. In AI-powered decision intelligence, audit trails are generated automatically, preserving the full deliberation history that human meetings typically lose.

How SynthBoard tracks decisions

Cognitive Load

The total amount of mental effort required to process information, evaluate options, and reach a decision. Cognitive load theory, developed by John Sweller, distinguishes between intrinsic load (complexity inherent to the problem), extraneous load (unnecessary complexity from poor information design), and germane load (effort spent building understanding). Strategic decisions carry enormous intrinsic cognitive load — multiple variables, uncertain outcomes, competing stakeholders. Decision intelligence platforms reduce extraneous load by structuring the analysis, freeing decision-makers to focus their cognitive resources on the judgment calls that matter most.

Divergent Thinking

The creative process of generating multiple distinct solutions, perspectives, or possibilities before evaluating or narrowing them. Coined by psychologist J.P. Guilford, divergent thinking is the opposite of premature convergence — the tendency to lock onto the first reasonable answer. In strategic decision-making, divergent thinking expands the option space and prevents anchoring bias. Multi-expert AI systems are structurally designed for divergent thinking: each expert independently generates its own analysis, ensuring that the full range of perspectives is explored before synthesis begins.

Convergent Synthesis

The process of combining diverse, sometimes contradictory perspectives into a unified, actionable recommendation that preserves the most valuable insights from each viewpoint. Convergent synthesis is the complement to divergent thinking — it happens after multiple perspectives have been fully explored and challenged. Effective synthesis does not simply average positions or pick the majority view; it identifies the strongest arguments from each side, maps genuine areas of agreement, and produces a recommendation that is more nuanced and robust than any single contributing perspective. In SynthBoard, the synthesis layer performs this convergence automatically after each round of expert deliberation.

How consensus scoring works

See Decision Intelligence in Action

Assemble 24 AI experts and ask them anything about your next strategic decision. Start free with 100 credits every month.

Start Free · Read the Guide