Commit 16de92f

Merge pull request #28 from gamma-opt/interface-update
Interface update
2 parents e46b7fc + ead5718 commit 16de92f

38 files changed: +2902 −2010 lines

CONTRIBUTING.md

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+## Contribution guidelines
+
+First of all, thank you for your interest in this project! Most of us working on this are researchers, not software engineers, so we expect there to be room for improvement in this package. If you find something that is unclear, doesn't work, or could be done more efficiently, please let us know, but remember to be respectful.
+
+If you are new to Julia package development, we strongly recommend reading the well-written guide in [the JuMP documentation](https://jump.dev/JuMP.jl/dev/developers/contributing/#Contribute-code-to-JuMP).
+
+How to proceed when you have:
+- A question about how something works
+  - Ask a question on our [discussion forum](https://github.com/gamma-opt/DecisionProgramming.jl/discussions)
+  - If the reason for your confusion was that something was not properly explained in the documentation, create an issue and/or a pull request.
+- A bug report 🐛
+  - Create an issue with a minimal working example that shows how you encountered the bug.
+  - If you know how to fix the bug, you can create a pull request as well; otherwise we'll see your issue and start working on a fix.
+- An improvement suggestion
+  - It might be a good idea to first discuss your idea with us; you can start by posting on the [discussion forum](https://github.com/gamma-opt/DecisionProgramming.jl/discussions).
+  - Create an issue and start working on a pull request.

README.md

Lines changed: 20 additions & 14 deletions
@@ -14,30 +14,36 @@ We can create an influence diagram as follows:
 
 ```julia
 using DecisionProgramming
-S = States([2, 2, 2, 2])
-C = [ChanceNode(2, [1]), ChanceNode(3, [1])]
-D = [DecisionNode(1, Node[]), DecisionNode(4, [2, 3])]
-V = [ValueNode(5, [4])]
-X = [Probabilities(2, [0.4 0.6; 0.6 0.4]), Probabilities(3, [0.7 0.3; 0.3 0.7])]
-Y = [Consequences(5, [1.5, 1.7])]
-validate_influence_diagram(S, C, D, V)
-sort!.((C, D, V, X, Y), by = x -> x.j)
-P = DefaultPathProbability(C, X)
-U = DefaultPathUtility(V, Y)
+
+diagram = InfluenceDiagram()
+
+add_node!(diagram, DecisionNode("A", [], ["a", "b"]))
+add_node!(diagram, DecisionNode("D", ["B", "C"], ["k", "l"]))
+add_node!(diagram, ChanceNode("B", ["A"], ["x", "y"]))
+add_node!(diagram, ChanceNode("C", ["A"], ["v", "w"]))
+add_node!(diagram, ValueNode("V", ["D"]))
+
+generate_arcs!(diagram)
+
+add_probabilities!(diagram, "B", [0.4 0.6; 0.6 0.4])
+add_probabilities!(diagram, "C", [0.7 0.3; 0.3 0.7])
+add_utilities!(diagram, "V", [1.5, 1.7])
+
+generate_diagram!(diagram)
 ```
 
 Using the influence diagram, we create the decision model as follows:
 
 ```julia
 using JuMP
 model = Model()
-z = DecisionVariables(model, S, D)
-x_s = PathCompatibilityVariables(model, z, S, P)
-EV = expected_value(model, x_s, U, P)
+z = DecisionVariables(model, diagram)
+x_s = PathCompatibilityVariables(model, diagram, z)
+EV = expected_value(model, diagram, x_s)
 @objective(model, Max, EV)
 ```
 
-Finally, we can optimize the model using MILP solver.
+We can optimize the model using a MILP solver.
 
 ```julia
 using Gurobi
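To round out the quick-start flow shown in this README diff, a minimal sketch of solving the model and analyzing the resulting strategy with the updated interface. The `DecisionStrategy`, `UtilityDistribution`, and `StateProbabilities` calls follow the signatures listed in `docs/src/api.md`; the solver setup is standard JuMP usage, with HiGHS swapped in here as an open-source stand-in for Gurobi:

```julia
# Sketch: solving and analyzing the model built above
# (assumes `model`, `diagram`, and `z` are in scope from the README example).
# HiGHS is used as an open-source alternative to Gurobi; any MILP solver
# supported by JuMP should work the same way.
using HiGHS
set_optimizer(model, HiGHS.Optimizer)
optimize!(model)

# Extract the optimal strategy and analyze it with the updated API
# (signatures per docs/src/api.md in this commit).
Z = DecisionStrategy(z)
U_dist = UtilityDistribution(diagram, Z)
S_probs = StateProbabilities(diagram, Z)
```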

docs/make.jl

Lines changed: 2 additions & 1 deletion
@@ -33,5 +33,6 @@ makedocs(
 # See "Hosting Documentation" and deploydocs() in the Documenter manual
 # for more information.
 deploydocs(
-    repo = "github.com/gamma-opt/DecisionProgramming.jl.git"
+    repo = "github.com/gamma-opt/DecisionProgramming.jl.git",
+    push_preview = true
 )

docs/src/api.md

Lines changed: 37 additions & 21 deletions
@@ -5,21 +5,22 @@
 ### Nodes
 ```@docs
 Node
+Name
+AbstractNode
 ChanceNode
 DecisionNode
 ValueNode
 State
 States
-States(::Vector{Tuple{State, Vector{Node}}})
-validate_influence_diagram
 ```
 
 ### Paths
 ```@docs
 Path
-paths(::AbstractVector{State})
-paths(::AbstractVector{State}, ::Dict{Node, State})
 ForbiddenPath
+FixedPath
+paths(::AbstractVector{State})
+paths(::AbstractVector{State}, ::FixedPath)
 ```
 
 ### Probabilities
@@ -33,9 +34,10 @@ AbstractPathProbability
 DefaultPathProbability
 ```
 
-### Consequences
+### Utilities
 ```@docs
-Consequences
+Utility
+Utilities
 ```
 
 ### Path Utility
@@ -44,6 +46,20 @@ AbstractPathUtility
 DefaultPathUtility
 ```
 
+### InfluenceDiagram
+```@docs
+InfluenceDiagram
+generate_arcs!
+generate_diagram!
+add_node!
+ProbabilityMatrix
+add_probabilities!
+UtilityMatrix
+add_utilities!
+index_of
+num_states
+```
+
 ### Decision Strategy
 ```@docs
 LocalDecisionStrategy
@@ -56,15 +72,13 @@ DecisionStrategy
 ```@docs
 DecisionVariables
 PathCompatibilityVariables
-lazy_probability_cut(::Model, ::PathCompatibilityVariables, ::AbstractPathProbability)
+lazy_probability_cut
 ```
 
 ### Objective Functions
 ```@docs
-PositivePathUtility
-NegativePathUtility
-expected_value(::Model, ::PathCompatibilityVariables, ::AbstractPathUtility, ::AbstractPathProbability; ::Float64)
-conditional_value_at_risk(::Model, ::PathCompatibilityVariables{N}, ::AbstractPathUtility, ::AbstractPathProbability, ::Float64; ::Float64) where N
+expected_value(::Model, ::InfluenceDiagram, ::PathCompatibilityVariables; ::Float64)
+conditional_value_at_risk(::Model, ::InfluenceDiagram, ::PathCompatibilityVariables{N}, ::Float64; ::Float64) where N
 ```
 
 ### Decision Strategy from Variables
@@ -76,13 +90,14 @@ DecisionStrategy(::DecisionVariables)
 ## `analysis.jl`
 ```@docs
 CompatiblePaths
+CompatiblePaths(::InfluenceDiagram, ::DecisionStrategy, ::FixedPath)
 UtilityDistribution
-UtilityDistribution(::States, ::AbstractPathProbability, ::AbstractPathUtility, ::DecisionStrategy)
+UtilityDistribution(::InfluenceDiagram, ::DecisionStrategy)
 StateProbabilities
-StateProbabilities(::States, ::AbstractPathProbability, ::DecisionStrategy)
-StateProbabilities(::States, ::AbstractPathProbability, ::DecisionStrategy, ::Node, ::State, ::StateProbabilities)
-value_at_risk(::Vector{Float64}, ::Vector{Float64}, ::Float64)
-conditional_value_at_risk(::Vector{Float64}, ::Vector{Float64}, ::Float64)
+StateProbabilities(::InfluenceDiagram, ::DecisionStrategy)
+StateProbabilities(::InfluenceDiagram, ::DecisionStrategy, ::Name, ::Name, ::StateProbabilities)
+value_at_risk(::UtilityDistribution, ::Float64)
+conditional_value_at_risk(::UtilityDistribution, ::Float64)
 ```
 
 ## `printing.jl`
@@ -96,9 +111,10 @@ print_risk_measures
 
 ## `random.jl`
 ```@docs
-random_diagram(::AbstractRNG, ::Int, ::Int, ::Int, ::Int, ::Int)
-States(::AbstractRNG, ::Vector{State}, ::Int)
-Probabilities(::AbstractRNG, ::ChanceNode, ::States; ::Int)
-Consequences(::AbstractRNG, ::ValueNode, ::States; ::Float64, ::Float64)
-LocalDecisionStrategy(::AbstractRNG, ::DecisionNode, ::States)
+random_diagram!
+random_probabilities!
+random_utilities!
+LocalDecisionStrategy(::AbstractRNG, ::InfluenceDiagram, ::Node)
+DecisionProgramming.information_set(::AbstractRNG, ::Node, ::Int64)
+DecisionProgramming.information_set(::AbstractRNG, ::Vector{Node}, ::Int64)
 ```
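The new `ProbabilityMatrix` and `UtilityMatrix` entries above suggest a name-indexed builder workflow as an alternative to passing raw arrays. A hedged sketch of how they might fit together, reusing the values from the README example; the constructor calls and indexing-by-state-name syntax are assumptions not confirmed by this diff:

```julia
# Hypothetical usage sketch; exact constructor and indexing syntax are
# assumptions inferred from the docstring names listed above.
X_B = ProbabilityMatrix(diagram, "B")   # sized by the states of I(B) and B
X_B["a", "x"] = 0.4                     # P(B = x | A = a)
X_B["a", "y"] = 0.6
X_B["b", "x"] = 0.6
X_B["b", "y"] = 0.4
add_probabilities!(diagram, "B", X_B)

Y_V = UtilityMatrix(diagram, "V")       # utilities indexed by the states of I(V)
Y_V["k"] = 1.5
Y_V["l"] = 1.7
add_utilities!(diagram, "V", Y_V)
```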

docs/src/decision-programming/analyzing-decision-strategies.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 # [Analyzing Decision Strategies](@id analyzing-decision-strategies)
 ## Introduction
-This section focuses on how we can analyze fixed decision strategies $Z$ on an influence diagram $G$, such as ones resulting from the optimization. We can rule out all incompatible and inactive paths from the analysis because they do not influence the outcomes of the strategy. This means that we only consider paths $𝐬$ that are compatible and active $𝐬 \in 𝐒(X) \cap 𝐒(Z)$.
+This section focuses on how we can analyze fixed decision strategies $Z$ on an influence diagram $G$, such as ones obtained by solving the Decision Programming model described in [the previous section](@ref decision-model). We can rule out all incompatible and inactive paths from the analysis because they do not influence the outcomes of the strategy. This means that we only consider paths $𝐬$ that are compatible and active, $𝐬 \in 𝐒(X) \cap 𝐒(Z)$.
 
 
 ## Generating Compatible Paths
@@ -20,7 +20,7 @@ The probability mass function of the **utility distribution** associates each un
 
 $$ℙ(X=u)=∑_{𝐬∈𝐒(Z)∣\mathcal{U}(𝐬)=u} p(𝐬),\quad ∀u∈\mathcal{U}^∗.$$
 
-From the utility distribution, we can calculate the cumulative distribution, statistics, and risk measures. The relevant statistics are expected value, standard deviation, skewness and kurtosis. Risk measures focus on the conditional value-at-risk (CVaR), also known as, expected shortfall.
+From the utility distribution, we can calculate the cumulative distribution, statistics, and risk measures. The relevant statistics are the expected value, standard deviation, skewness, and kurtosis. Risk measures focus on the conditional value-at-risk (CVaR), also known as expected shortfall.
 
 
 ## Measuring Risk
@@ -56,7 +56,7 @@ The above figure demonstrates these values on a discrete probability distributio
 
 
 ## State Probabilities
-We denote **paths with fixed states** where $ϵ$ denotes an empty state using a recursive definition.
+We define **paths with fixed states** recursively, where $ϵ$ denotes an empty state.
 
 $$\begin{aligned}
 𝐒_{ϵ} &= 𝐒(Z) \\

docs/src/decision-programming/computational-complexity.md

Lines changed: 6 additions & 6 deletions
@@ -32,15 +32,15 @@ $$0 ≤ ∑_{i∈D}|𝐒_{I(i)∪\{i\}}| ≤ |D| \left(\max_{i∈C∪D} |S_j|\ri
 
 In the worst case, $m=n$, a decision node is influenced by every other chance and decision node. However, in most practical cases, we have $m < n,$ where decision nodes are influenced only by a limited number of other chance and decision nodes, making models easier to solve.
 
-## Numerical challenges 
+## Numerical challenges
 
-As has become evident above, in Decision Programming the size of the [Decision Model](@ref decision-model) may become large if the influence diagram has a large number of nodes or nodes with a large number of states. In practice, this results in having a large number of path compatibility and decision variables. This may results in numerical challenges.
+As has become evident above, in Decision Programming the size of the [Decision Model](@ref decision-model) may become large if the influence diagram has a large number of nodes or nodes with a large number of states. In practice, this results in a large number of path compatibility and decision variables, which may lead to numerical challenges.
 
 ### Probability Scaling Factor
-In an influence diagram a large number of nodes or some nodes having a large set of states, causes the path probabilities $p(𝐬)$ to become increasingly small. This may cause numerical issues with the solver or inable it from finding a solution. This issue is showcased in the [CHD preventative care example](../examples/CHD_preventative_care.md).
+If an influence diagram has a large number of nodes or some nodes have a large set of states, the path probabilities $p(𝐬)$ become increasingly small. This may cause numerical issues with the solver, and may even prevent it from finding a solution. This issue is showcased in the [CHD preventative care example](../examples/CHD_preventative_care.md).
 
-The issue may be helped by multiplying the path probabilities with a scaling factor $\gamma > 0$ in the objective function.
+The issue can be mitigated by multiplying the path probabilities by a scaling factor $\gamma > 0$. For example, the objective function becomes
 
-$$\operatorname{E}(Z) = ∑_{𝐬∈𝐒} x(𝐬) \ p(𝐬) \ \gamma \ \mathcal{U}(𝐬)$$
+$$\operatorname{E}(Z) = ∑_{𝐬∈𝐒} x(𝐬) \ p(𝐬) \ \gamma \ \mathcal{U}(𝐬).$$
 
-The conditional value-at-risk function can also be scaled so that it is compatible with an expected value objective function that has been scaled.
+The path probabilities should also be scaled in other objective functions and constraints, including the conditional value-at-risk function and the probability cut constraint $∑_{𝐬∈𝐒}x(𝐬) p(𝐬) = 1$.
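A sketch of how the scaling factor might be applied through the package interface. The keyword name `probability_scale_factor` is an assumption, inferred only from the trailing `::Float64` keyword argument in the `expected_value` signature listed in `docs/src/api.md` in this commit:

```julia
# Assumed keyword name (not confirmed by this diff): probability_scale_factor.
# γ only scales the objective linearly, so the optimal strategy is unchanged
# while the coefficients stay in a numerically comfortable range.
γ = 1000.0   # e.g. on the order of 1 / (smallest path probability)
EV = expected_value(model, diagram, x_s; probability_scale_factor = γ)
@objective(model, Max, EV)
```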
