Causal Mapping

Section 38 Coding a claim about the absence of a causal link

Is coding a claim about the absence of a causal link really the same as coding a link with strength = 0? I think not. An explicit claim that X does not influence Y is positive evidence of absence, whereas a strength of 0 (or simply not coding any link at all) may only mean that nothing was said about the connection, and the two should carry different weights of evidence.
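
A minimal sketch of this distinction, under assumptions of my own: the field names (`claim_type`, `strength`) and the helper `evidence_about_link` are purely illustrative and are not the app's actual data model. The idea is just to keep an explicit "no link" claim separate from a link recorded with zero strength, so that the three situations do not collapse into one number.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical record for one coded causal claim (illustrative only).
@dataclass
class CausalClaim:
    source: str                               # who made the claim
    influence: str                            # influence variable, e.g. "Training"
    consequence: str                          # consequence variable, e.g. "Income"
    claim_type: Literal["link", "no_link"]    # asserts a link, or explicitly denies one
    strength: Optional[float] = None          # only meaningful when claim_type == "link"

def evidence_about_link(claims, influence, consequence):
    """Summarise what the coded claims say about one possible arrow.

    Distinguishes three situations that a single strength number would conflate:
      - evidence FOR a link (possibly with strength 0),
      - explicit evidence AGAINST a link (an absence claim),
      - no evidence either way (nobody mentioned the pair).
    """
    relevant = [c for c in claims
                if c.influence == influence and c.consequence == consequence]
    if not relevant:
        return "no evidence either way"
    if any(c.claim_type == "no_link" for c in relevant):
        return "explicit evidence of absence"
    mean_strength = sum(c.strength or 0 for c in relevant) / len(relevant)
    return f"evidence for a link (mean strength {mean_strength:.2f})"

# Example: one source reports a (zero-strength) link, another explicitly denies any link.
claims = [
    CausalClaim("R1", "Training", "Income", "link", strength=0.0),
    CausalClaim("R2", "Drought", "Income", "no_link"),
]
print(evidence_about_link(claims, "Training", "Income"))  # evidence for a link (mean strength 0.00)
print(evidence_about_link(claims, "Drought", "Income"))   # explicit evidence of absence
print(evidence_about_link(claims, "Prices", "Income"))    # no evidence either way
```

The point of keeping the cases apart is that an absence claim is information in its own right: it should count against the arrow in any weight-of-evidence calculation, which would be lost if it were recoded simply as strength = 0.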