Source: "Intelligence Methodology: Agile Rigor Update" (Gemini analysis)
Focus: Doctrinal evolution, frontier techniques, empirical critiques, Human-Machine Teaming
Reading Time: ~15 minutes | Audience: Advanced practitioners, AI integrators | Prerequisites: 01 — Tradecraft Primer
The most forward-looking document in the corpus. Defines the "Agile Rigor" paradigm — the shift from exhaustive structure to proportional, empirically-validated, AI-augmented methodology.
| Dimension | Old Paradigm (2009–2019) | Modern Paradigm (2020–2026) |
|---|---|---|
| Foundation | Intuition-First: Structure as "add-on" | Structure-First: SATs as "Analytic Operating System" |
| Validation | Face Validity: "looks rigorous" | Empirical Validity: "proven to mitigate specific biases" |
| Execution | Human-Solo: individual expertise | Human-Machine Teaming: hybrid intelligence |
| Depth | Deep Rigor: exhaustive matrices (ACH) | Agile Rigor: Problem Restatement + Inconsistencies Finder |
- Original 3 categories (Diagnostic/Contrarian/Imaginative) described intent but failed to provide a process roadmap
- Led to "tool-kit cherry-picking" — analysts selecting techniques in isolation
- 6 families provide a "structured safety net" at every stage of the analytic lifecycle
| Family | Phase | Strategic Objective | Key Techniques |
|---|---|---|---|
| Getting Organized | Project Inception | Define problem, identify client, plan timeline | Customer Checklist, Issue Redefinition, AIMS |
| Exploration | Data Collection & Scoping | Expand search space, surface initial insights | Brainstorming, Mind Maps, Network Analysis |
| Diagnostic | Evidence Evaluation | Test validity of data and hypotheses | ACH, Inconsistencies Finder, Quality of Information Check |
| Reframing | Mindset Challenging | Expose hidden assumptions and groupthink | Devil's Advocacy, Red Team, Counterfactual Reasoning |
| Foresight | Uncertainty Management | Model futures, identify drivers, establish indicators | Alternative Futures, Quadrant Crunching, Opportunities Incubator |
| Decision Support | Actionable Output | Align findings with decision space | Decision Matrix, Force Field Analysis, Bowtie Analysis |
| Metric | Definition | Logic for Tool Choice |
|---|---|---|
| Type of Uncertainty | Epistemic ("known unknowns") vs. Aleatory (inherent randomness) | Epistemic → Diagnostic tools; Aleatory → Foresight tools |
| Data Volatility | Speed at which information becomes obsolete | High volatility → Lean techniques (Inconsistencies Finder) |
| Bias Susceptibility | Which specific mental shortcuts are most dangerous | High confirmation bias → Reframing tools (Counterfactual Reasoning) |
| Analytic Complexity | Number of interacting variables; degree of non-linearity | High complexity → Cross-Impact Matrices or Bowtie Analysis |
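The triage logic in the table above can be sketched as a simple rule-based selector. The attribute labels and the mapping are taken from the table; the function name and input scheme are illustrative assumptions, not doctrine.

```python
# Illustrative sketch: map the four triage metrics to candidate
# technique families, following the "Logic for Tool Choice" column.
# Labels and thresholds are assumptions for demonstration only.

def select_techniques(uncertainty, volatility, bias_risk, complexity):
    """Return suggested technique families for a problem profile."""
    suggestions = []
    if uncertainty == "epistemic":        # "known unknowns"
        suggestions.append("Diagnostic (e.g., ACH, Inconsistencies Finder)")
    elif uncertainty == "aleatory":       # inherent randomness
        suggestions.append("Foresight (e.g., Alternative Futures)")
    if volatility == "high":              # information goes stale fast
        suggestions.append("Lean techniques (Inconsistencies Finder)")
    if bias_risk == "confirmation":       # dominant mental shortcut
        suggestions.append("Reframing (Counterfactual Reasoning)")
    if complexity == "high":              # many interacting variables
        suggestions.append("Cross-Impact Matrices or Bowtie Analysis")
    return suggestions

print(select_techniques("epistemic", "high", "confirmation", "low"))
```

A real triage step would weigh these attributes jointly rather than independently; the point is that tool choice becomes an explicit, auditable decision instead of habit.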
Counterfactual Reasoning:
Purpose: Look at "what might have been" to better understand "what is"
- Select Pivot Point: Identify a specific past event/decision driving the current state
- Smallest Change: Posit the smallest possible change to that event
- Trace Effects: Rigorously trace subsequent effects of the counterfactual
Use in Disinformation: Deconstructs "historical inevitability" narratives by showing multiple plausible paths not taken, exposing the constructed nature of adversary stories.
Purpose: Optimized for "Post-Truth" environments — analyzes structural logic of competing stories rather than fact-checking individual claims
- Narrative Mapping: Identify primary and competing (adversarial) narratives
- Structural Decomposition: Break into Protagonist, Antagonist, Crisis, Resolution
- Evidence Stress-Testing: Test where each narrative "breaks" against evidence
- Vulnerability Identification: Find elements with high emotional resonance but low factual support
Use in Influence Ops: Map adversary's "narrative arc" → predict future disinformation "beats" → provide "pre-bunking" strategies.
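Steps 2 through 4 above can be sketched as a small data structure plus a vulnerability filter. The roles come from the decomposition step; the 0–1 scoring scales, thresholds, and example claims are illustrative assumptions an analyst would supply.

```python
# Minimal sketch of narrative decomposition and vulnerability
# identification: flag elements with high emotional resonance but
# low factual support. Scores and thresholds are assumed, not doctrinal.

from dataclasses import dataclass

@dataclass
class NarrativeElement:
    role: str          # Protagonist, Antagonist, Crisis, or Resolution
    claim: str
    resonance: float   # emotional pull, 0..1 (analyst-assigned)
    support: float     # factual support, 0..1 (from evidence stress-testing)

def vulnerabilities(elements, resonance_min=0.7, support_max=0.3):
    """Elements that resonate strongly but break against the evidence."""
    return [e for e in elements
            if e.resonance >= resonance_min and e.support <= support_max]

narrative = [
    NarrativeElement("Antagonist", "Outside actors caused the crisis", 0.9, 0.2),
    NarrativeElement("Crisis", "Economic collapse is imminent", 0.8, 0.6),
]
print([e.claim for e in vulnerabilities(narrative)])
```

Only the first element qualifies: it carries emotional weight but little support, which is exactly where a pre-bunking message would aim.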
Bowtie Analysis:
Purpose: Visual framework combining threat and consequence analysis for high-impact/low-probability risks
- Center (Top Event): The catastrophic event being avoided
- Left Side: Potential causes (threats) + preventative measures (barriers)
- Right Side: Potential outcomes (consequences) + reactive measures (mitigations)
- Escalation Factors: Conditions that might weaken barriers on either side
Use in Cyber: Quickly visualize defensive posture health; new vulnerabilities show which barriers weaken and what consequences follow.
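The bowtie layout above maps directly onto a plain data structure: causes and their preventative barriers on the left, consequences and reactive mitigations on the right, with escalation factors alongside. Every name in this sketch (the top event, threats, barriers) is an illustrative assumption for the cyber use case.

```python
# A bowtie diagram as a dictionary, following the structure above.
# All entries are illustrative examples, not a real defensive posture.

bowtie = {
    "top_event": "Domain-admin credential compromise",
    "threats": {  # left side: cause -> preventative barriers
        "phishing": ["mail filtering", "MFA"],
        "unpatched VPN appliance": ["patch cadence", "network segmentation"],
    },
    "consequences": {  # right side: outcome -> reactive mitigations
        "ransomware deployment": ["offline backups", "incident response plan"],
        "data exfiltration": ["egress monitoring"],
    },
    "escalation_factors": ["MFA fatigue attacks", "stale backup media"],
}

def weakened_paths(diagram, failed_barrier):
    """Which threat paths lose protection if one barrier is defeated?"""
    return [threat for threat, barriers in diagram["threats"].items()
            if failed_barrier in barriers]

print(weakened_paths(bowtie, "MFA"))  # -> ['phishing']
```

When a new vulnerability lands, the analyst marks the affected barrier as failed and reads off which threat paths to the top event just opened up.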
Opportunities Incubator:
Purpose: Address the intelligence bias of focusing exclusively on threats while ignoring strategic advantages
- Environmental Scanning: Look for emerging trends in technology, economics, social behavior
- Driver Convergence: Identify where multiple trends converge to create "windows of opportunity"
- Pathfinding: Determine indicators signaling the window is opening
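The driver-convergence step can be sketched as counting how many independent trends intersect in the same domain and flagging domains that cross a threshold. The trend data, the domain tags, and the threshold of three are all illustrative assumptions.

```python
# Sketch of "driver convergence": a domain where several independent
# trends intersect is a candidate "window of opportunity".
# Trends and the threshold are assumed for demonstration.

from collections import Counter

trends = [
    ("cheap launch capacity", "space"),
    ("smallsat miniaturization", "space"),
    ("commercial imagery demand", "space"),
    ("battery density gains", "energy"),
]

def opportunity_windows(trend_list, threshold=3):
    counts = Counter(domain for _, domain in trend_list)
    return [domain for domain, n in counts.items() if n >= threshold]

print(opportunity_windows(trends))  # -> ['space']
```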
Note: The following findings are reported in methodological reviews and meta-analyses of SAT effectiveness. Specific study designs and sample sizes vary; readers should consult original sources for full methodological context.
The Bipolarity Problem (as reported in Dhami et al., 2024):
- Classic training warns against overconfidence
- ACH users become hyper-aware of uncertainty → "under-confidence" or "pseudo-uncertainty"
- Yet the same ritual cuts the other way: confidence in the assessment rises (the process felt rigorous) without a corresponding increase in accuracy
- Miscalibration in both directions (the "calibration gap") gives decision-makers false security
Noise Neglect (as reported in Denzler, 2024):
- Standard ACH: 5 hypotheses × 10 evidence items = 50 individual judgments
- Each judgment carries small margin of error (random noise)
- When aggregated, noise from 50 small judgments outweighs the signal
- Analysts give too much weight to "neutral" evidence that fits multiple hypotheses
- Conclusion: "More structure" is not always "better structure"
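The aggregation arithmetic behind the noise-neglect argument can be shown in a few lines. The per-judgment error size and the number of truly diagnostic items below are assumed numbers for illustration; the structural point, that independent errors add in quadrature and grow with sqrt(n), is standard statistics.

```python
# Back-of-envelope illustration of noise neglect in a 5x10 ACH matrix:
# 50 cell judgments, each carrying a small random error, but (assumed)
# only 5 items contributing real diagnostic signal.

import math

n_judgments = 50
per_judgment_noise_sd = 0.5   # assumption: small error on each cell judgment
diagnostic_items = 5          # assumption: items that actually discriminate
signal_per_item = 1.0         # assumption: score shift per diagnostic item

total_signal = diagnostic_items * signal_per_item                 # 5.0
total_noise_sd = per_judgment_noise_sd * math.sqrt(n_judgments)   # ~3.54

print(f"signal = {total_signal:.2f}, aggregate noise sd = {total_noise_sd:.2f}")
```

With these numbers the aggregate score sits barely 1.4 noise standard deviations away from zero: the many "neutral" cells contribute no signal but keep contributing noise, which is the mechanism behind the "more structure is not always better structure" conclusion.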
- Generative AI assists in evidence synthesis
- But over-reliance leads to "cognitive offloading", which significantly reduces the analyst's baseline critical thinking scores and task-specific self-confidence
IPL: curated structured prompts guiding LLMs through specific SAT protocols.
Mechanism:
- Structural Framing: Instead of general questions, use IPL prompts forcing specific technique execution (e.g., "Conduct an Inconsistencies Finder analysis on these 20 data points against hypothesis X")
- Mitigating Hallucination: Structured template reduces off-target generation
- Traceability: Digital audit trail of logic used by both human and machine
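A structured prompt of this kind can be sketched as a template function: the technique name, the lead hypothesis, and an enumerated evidence list are forced into the prompt, which also makes the exchange auditable. The template wording below is an illustrative assumption, not an official IPL prompt.

```python
# Sketch of an IPL-style structured prompt for the Inconsistencies
# Finder protocol. Wording is an assumed template for demonstration.

def inconsistencies_finder_prompt(hypothesis, evidence):
    numbered = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(evidence))
    return (
        "Conduct an Inconsistencies Finder analysis.\n"
        f"Lead hypothesis: {hypothesis}\n"
        f"Evidence items:\n{numbered}\n"
        "For each item, state whether it is inconsistent with the "
        "hypothesis, citing the item number in your reasoning."
    )

prompt = inconsistencies_finder_prompt(
    "Actor X is preparing an offensive",
    ["Troop movements near the border", "Leadership publicly de-escalating"],
)
print(prompt)
```

Because the evidence items are numbered in the prompt and must be cited in the response, both the human's framing and the machine's reasoning leave a digital trail that can be reviewed later.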
| Family | Human Role | Machine Role | Hybrid Output |
|---|---|---|---|
| Exploration | Define scope; identify "black swan" possibilities | Scan massive datasets for weak signals/anomalies | Prioritized "areas of interest" map |
| Diagnostic | Final judgment on source reliability and intent | Calculate probability coherence; reduce subadditivity | Calibrated probability assessment |
| Foresight | Identify "So What?" for policymaker | Generate 1000s of scenario permutations | Robust early warning system |
Problem Restatement:
- Rewrite the question in at least 3 different ways
- Shift focus: Actor → System, Threat → Vulnerability
- Breaks anchoring bias; prevents "Type III Error" (solving wrong problem)
Inconsistencies Finder:
- Focus ONLY on the "lead" hypothesis
- Search specifically for evidence that contradicts it
- Speed: sidesteps the noise accumulation of large matrices
- Logic: Forces "scientific" mindset — trying to prove yourself wrong
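The lean workflow above reduces to a single pass over the evidence: keep only the lead hypothesis and pull out the items an analyst has judged inconsistent with it. The tagging scheme and the example evidence are illustrative assumptions; the consistency judgments themselves remain human calls.

```python
# Sketch of the Inconsistencies Finder as a single filtering pass.
# Evidence tags ('consistent' / 'neutral' / 'inconsistent') are
# analyst judgments; the data here is illustrative.

def find_inconsistencies(lead_hypothesis, evidence):
    """evidence: list of (item, consistency_with_lead_hypothesis)."""
    hits = [item for item, consistency in evidence
            if consistency == "inconsistent"]
    return {
        "hypothesis": lead_hypothesis,
        "inconsistent_items": hits,
        "survives": not hits,   # hypothesis stands only if nothing contradicts it
    }

result = find_inconsistencies(
    "The outage was caused by a misconfiguration",
    [("Rollback fixed the service", "consistent"),
     ("Logs show external scanning beforehand", "inconsistent"),
     ("Traffic levels were normal", "neutral")],
)
print(result["inconsistent_items"])
```

Note the contrast with a full ACH matrix: one hypothesis and one pass over the evidence, so "neutral" items contribute nothing rather than accumulating noise.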
The future of intelligence analysis lies in mitigating "Cognitive Drag" through selective application of structure. The goal is not elimination of uncertainty — which is impossible — but its management through hybrid human-AI intelligence and empirically validated workflows.
| Document | Relationship |
|---|---|
| 08 — Updates and Optimizations | Complementary post-2009 evolution timeline and technology updates |
| 05 — 66 Techniques Taxonomy | The full 66-technique taxonomy referenced in the 3rd Edition expansion |
| 07 — Axioms and Laws | The foundational principles underlying Agile Rigor |
| 01 — Tradecraft Primer 2009 | The original doctrine that Agile Rigor evolved from |