Website | Getting started | Documentation | Blog | Discord | Crates
DeepCausality is a hypergeometric computational causality library that enables fast, context-aware causal reasoning over complex multi-stage causality models. DeepCausality pioneers uniform reasoning across deterministic and probabilistic modalities by implementing the unified effect propagation process.

Computational causality rests on a different foundation than deep learning (AI). Deep learning, at its core, excels at pattern matching and recognition: object detection in computer vision and fraud detection on credit card transactions are two of many examples. Large Language Models (LLMs) like ChatGPT take the idea one step further and predict, for the most part, the next word in a sentence with stunning accuracy. When the prediction is off, you experience what is widely known as hallucination. However, LLMs and deep learning are fundamentally correlation-based methods with no foundational concept of space, time, context, or causality. Computational causality comes in handy when you need:
- Deterministic reasoning: Same input, same output.
- Probabilistic reasoning: What are the odds that X is true? How confident can we be?
- Full explainability: You get a logical line of reasoning for every conclusion.
These properties of causality are valuable in high-stakes and regulated industries such as medicine, finance, robotics, avionics, and industrial control systems. However, the classical methods of computational causality work in a particular way that is important to understand first.
Imagine a simple thermostat.
- Cause: The room temperature drops below 68 degrees Fahrenheit.
- Effect: The furnace turns on.
A classical causal model handles the thermostat well because it relies on three fundamental assumptions:
- Time is a straight line. The temperature always drops before the furnace turns on. There's a clear "happen-before" relationship.
- The causal rules are fixed. The law "if temp < 68, then turn on furnace" is static and unchanging. It will be the same rule tomorrow as it is today.
- Context is implicit. The environment is treated as a fixed, implicit background, so all relevant data are captured in variables defined relative to that context.
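To make the "fixed rules" assumption concrete, the entire thermostat model fits in one pure function. This is a plain-Rust illustration of the classical view, not DeepCausality code:

```rust
/// A classical, fixed causal rule: the mapping from cause to effect
/// never changes, and context stays implicit in the single input.
fn furnace_on(room_temp_f: f64) -> bool {
    // Cause: temperature drops below 68 °F. Effect: the furnace turns on.
    room_temp_f < 68.0
}

fn main() {
    assert!(furnace_on(65.0));  // cold room -> furnace on
    assert!(!furnace_on(72.0)); // warm room -> furnace off
}
```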
Previous computational causality frameworks (like those pioneered by Judea Pearl) are built on these three powerful assumptions. They provide the foundation to discover and reason about this fixed causality in a world where time moves forward predictably, the rules remain the same, and adding some variables captures the implicit context. The problem, however, emerges when these assumptions are no longer true.
Next, imagine a more complex system, like a financial market or a fleet of autonomous wildfire-fighting drones, and you'll see that reality operates differently:
- Time is NOT a straight line. In a trading system, events happen on nanosecond scales, but the market context relies on different time scales, e.g., the hourly high price, the previous day's close price, or the daily trade volume. Time becomes multi-layered, multi-scaled, and complex.
- The rules can change. This is the most important point. During a normal market day, "low interest rates cause stock prices to rise." But during a market crash (a "regime shift"), that rule breaks down entirely, and a new rule like "high fear causes all assets to fall" takes over. The causal relationships within the system have changed dynamically.
- Context changes dynamically. Causal rules may change because a system's context is changing dynamically. For an autonomous drone, navigation by GPS signal might be valid until the moment the drone enters a tunnel: the GPS signal is temporarily lost, and with it, the drone's ability to navigate. This, too, is a regime shift, and it poses a fundamental challenge to all autonomous systems. Context is particularly important here because the computer vision system almost certainly identified the tunnel entrance, but without a workable context, that information cannot be used.
DeepCausality was created from the ground up for dynamic causality where context changes continuously and where the causal rules themselves may change in response to a changing context.
DeepCausality rethinks causality from the ground up based on a single foundation:
"Causality is a spacetime-agnostic functional dependency."
- "Functional dependency": This just means
Effect2 = function(Effect1)
. Instead of "cause and effect," think of a chain reaction where one event triggers a causal function that produces the next event. The focus is on the process of event propagation. - "Spacetime-agnostic": This is the radical part. Time and space are just another piece of contextual data for the causal function.
- "Explicit Context": Because the causal function is independent of spacetime, any time or space-related data needs to be provided via a context. A powerful hypergraph enables flexible context modeling, and DeepCausality enables a model to access and use multiple contexts.
The core of the idea is similar to a ripple in a pond: one ripple (an effect) propagates outward and creates the next ripple (another effect). DeepCausality is a framework for defining the rules of how those ripples spread. For more information about the underlying effect propagation process, see the Deep Dive document.
DeepCausality has three main components to make all this work:

1. The Causaloid
- What it is: A self-contained, single unit of causality.
- What it does: It holds a single causal function (`E2 = f(E1)`). It receives an incoming effect, runs its causal function, and emits a new, outgoing effect.

2. The Context
- What it is: The explicit environment where the Causaloids operate. It holds all the factual data.
- What it does: The Context is a super-flexible data structure (a hypergraph) that holds all the facts about the world: the current time, sensor readings, locations on a map, etc.

3. The Effect Ethos
- What it is: A programmable ethos that encodes and verifies operational rules.
- What it does: A Causaloid might reason, "Based on the data, the most logical action is X." But before action X can be taken, the Effect Ethos steps in and checks it against a set of rules. It answers the question: "Should this happen?"
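The following skeleton shows how the three components relate. All types here are simplified stand-ins invented for illustration; consult the documentation for the library's actual API:

```rust
/// 1. Causaloid (illustrative): one causal function, effect in -> effect out.
struct Causaloid {
    causal_fn: fn(f64, &Context) -> f64,
}

/// 2. Context (illustrative): explicit facts the Causaloid may consult.
struct Context {
    current_temp_f: f64,
}

/// 3. Effect Ethos (illustrative): operational rules checked before acting.
struct EffectEthos {
    max_output: f64,
}

impl EffectEthos {
    /// Answers "should this happen?" before the proposed effect is acted on.
    fn permits(&self, proposed_effect: f64) -> bool {
        proposed_effect <= self.max_output
    }
}

fn main() {
    let ctx = Context { current_temp_f: 65.0 };
    let heater = Causaloid {
        // Heat demand rises as the room falls below 68 °F.
        causal_fn: |effect_in, ctx| effect_in + (68.0 - ctx.current_temp_f).max(0.0),
    };

    let proposed = (heater.causal_fn)(0.0, &ctx);
    let ethos = EffectEthos { max_output: 10.0 };

    if ethos.permits(proposed) {
        println!("act: apply effect {proposed}");
    } else {
        println!("veto: effect {proposed} violates the operational rules");
    }
}
```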
In summary, DeepCausality is a framework for building systems that can reason about cause and effect in complex, dynamic environments. It achieves this by treating causality as a process of effect propagation between simple, composable Causaloids that operate on an explicit, flexible Context, all governed by a verifiable safety layer called the Effect Ethos. DeepCausality is hosted as a sandbox project in the LF AI & Data Foundation.
- DeepCausality is written in Rust with production-grade safety, reliability, and performance thanks to its UltraGraph backend.
- DeepCausality provides recursive causal data structures that concisely express arbitrarily complex causal structures.
- DeepCausality enables context awareness across complex data stored in multiple contexts.
- DeepCausality simplifies modeling of complex tempo-spatial patterns and non-Euclidean geometries.
- DeepCausality supports adaptive reasoning.
- DeepCausality comes with a Causal State Machine (CSM).
- DeepCausality supports programmable ethics via the EffectEthos.
In your project folder, just run in a terminal:

```bash
cargo add deep_causality
```

To run the example code, clone the repository and use the Makefile:

```bash
git clone https://github.com/deepcausality-rs/deep_causality.git
cd deep_causality
make example
```
Cargo works as expected, but in addition to Cargo, a Makefile abstracts over several additional tools used for linting and formatting. To check for and install any missing tools, run:

```bash
make install
```
The install script is located in the script folder. It tests for and tries to install all required developer dependencies. If the automatic install fails, the script shows a link with further installation instructions.
After all dependencies have been installed, the following commands are ready to use.
| Command | Description |
|---------|-------------|
| `make build` | Builds the code base incrementally (fast) for dev. |
| `make bench` | Runs all benchmarks across all crates. |
| `make check` | Checks the code base for security vulnerabilities. |
| `make example` | Runs the example code. |
| `make fix` | Fixes linting issues as reported by clippy. |
| `make format` | Formats all code according to cargo fmt style. |
| `make install` | Tests and installs all make script dependencies. |
| `make start` | Starts the dev day: updates Rust, pulls from the git remote, and builds the project. |
| `make test` | Runs all tests across all crates. |
The scripts called by each make command are located in the script folder.
In addition to Cargo and related tools, the entire mono-repo is configured to build and test with Bazel. Please install bazelisk as it is the only requirement to build the repo with Bazel. For more details on working with Bazel, see the Bazel document.
Contributions are welcome, especially those related to documentation, example code, and fixes. If unsure where to start, open an issue and ask. For more significant code contributions, please run `make test` and `make check` locally before opening a PR.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in deep_causality by you, shall be licensed under the MIT license without additional terms or conditions.
The project took inspiration from several researchers and their projects in the field:
- Judea Pearl at UCLA
- Lucien Hardy at the Perimeter Institute
- Kenneth O. Stanley
- Ilya Shpitser
- Miguel Hernan, Causal Lab at Harvard University
- Elias Bareinboim at Columbia University
- Causality and Machine Learning at Microsoft Research
- Causal ML at Uber
DeepCausality implements the following research publications:
- "Probability Theories with Dynamic Causal Structure"
- "A Defeasible Deontic Calculus for Resolving Norm Conflicts"
- "NWHy: A Framework for Hypergraph Analytics"
- "Uncertain T: A First-Order Type for Uncertain Data"
Finally, inspiration, especially related to the hypergraph structure, was drawn from reading Quanta Magazine.
This project is licensed under the MIT license.
For details about security, please read the security policy.
JetBrains, the premier software development tool provider, has granted a free all-products license to the DeepCausality project under its open-source community support program. The project team expresses its gratitude for JetBrains' generous contribution. Thank you for your commitment to OSS development!