Symbolic learning is machine learning based on symbolic logic. Its distinguishing feature is that the learned models embed an explicit knowledge representation, which opens up several opportunities:
- Verifying that the model's thought process is adequate for a given task;
- Learning of new insights by simple inspection of the model;
- Manual refinement of the model at a later time.
These levels of transparency (or interpretability) are generally not available with standard machine learning methods; thus, as AI permeates more and more aspects of our lives, symbolic learning is becoming increasingly popular. In spite of this, implementations of symbolic algorithms (e.g., for extracting decision trees or rules) are mostly scattered across different languages and machine learning frameworks.
Enough with this! The no-longer-so-niche theory of symbolic learning deserves a programming framework of its own!
graph TD
SX[<font color="black">SoleExplorer.jl</font>]
subgraph Group1[ ]
MAR[<font color="black">ModalAssociationRules.jl</font>]
MDT[<font color="black">ModalDecisionTrees.jl</font>]
MDL[<font color="black">ModalDecisionLists.jl</font>]
end
S[<font color="black">Sole.jl</font>]
subgraph Group2[ ]
SF[<font color="black">SoleFeatures.jl</font>]
SD[<font color="black">SoleData.jl</font>]
MD[<font color="black">MultiData.jl</font>]
end
subgraph Group3[ ]
PHC[<font color="black">SolePostHoc.jl</font>]
SM[<font color="black">SoleModels.jl</font>]
end
subgraph Group4[ ]
SL[<font color="black">SoleLogics.jl</font>]
SR[<font color="black">SoleReasoners.jl</font>]
end
SB[<font color="black">SoleBase.jl</font>]
SX --> MDL
SX --> MDT
SX --> MAR
SX --> S
SX --> PHC
SL --> SB
SD --> SL
SD --> MD
SM --> SL
S --> SD
PHC --> SM
S --> SM
SF --> SD
MDL --> S
MDT --> S
MAR --> S
SR --> SL
style SX fill:#FFFFFF,stroke:#000000
style SB fill:#FFFFFF,stroke:#000000
style SL fill:#9558B2,stroke:#000000
style SD fill:#4063D8,stroke:#000000
style SM fill:#389824,stroke:#000000
style SF fill:#4063D8,stroke:#000000
style S fill:#FFFFFF,stroke:#000000
style MDL fill:#D56B3D,stroke:#000000
style MDT fill:#D56B3D,stroke:#000000
style MAR fill:#D56B3D,stroke:#000000
style PHC fill:#389824,stroke:#000000
style SR fill:#9558B2,stroke:#000000
style MD fill:#4063D8,stroke:#000000
Sole is a collection of Julia packages for symbolic learning and reasoning. Although still at an embryonic stage, Sole.jl covers a range of functionality of interest to the symbolic community, and it also fills a few gaps in standard machine learning pipelines. At the time of writing, the framework comprises three core packages:
- SoleLogics.jl provides the logical layer for symbolic learning: a codebase for computational logic featuring easy manipulation of (see the short sketch after this list):
- Propositional and (multi)modal logics (atoms, logical constants, alphabet, grammars, fuzzy algebras);
- Logical formulas (random generation, parsing, minimization);
- Logical interpretations (or models, e.g., Kripke structures);
- Algorithms for model checking (that is, checking that a formula is satisfied by an interpretation).
- SoleData.jl provides the data layer for representing logisets, that is, the logical counterpart to machine learning datasets:
- Optimized data structures, useful when learning models from datasets;
- Support for multimodal data.
- SoleModels.jl defines the building blocks of symbolic modeling, featuring:
- Definitions for (logic-agnostic) symbolic models (mainly, decision rules/lists/trees/forests);
- Basic rule extraction, evaluation and inspection algorithms;
- Conversion from DecisionTree.jl and XGBoost.jl;
- Support for mixed, neuro-symbolic computation.
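For a quick taste of the logical layer, here is a minimal SoleLogics.jl sketch (propositional only; exact constructors and keyword arguments may differ slightly across versions):

using SoleLogics

# Parse a propositional formula from its syntax string
φ = parseformula("¬p ∧ (q → r)")

# A propositional interpretation: a truth assignment for the atoms
I = TruthDict(["p" => false, "q" => true, "r" => true])

# Model checking: does the interpretation satisfy the formula?
check(φ, I)  # true: ¬p holds, and q → r holds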
Additional packages include:
- ModalDecisionTrees.jl, which lets you learn decision trees based on temporal logics on time-series datasets and spatial logics on (small) image datasets (a sketch follows this list);
- ModalDecisionLists.jl, which implements a sequential covering algorithm for learning decision lists.
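As a rough illustration of the modal workflow, the sketch below trains a modal decision tree on a toy table whose cells contain short time series. It is only a sketch: the toy data is made up, the min_samples_leaf keyword is an assumption, and, depending on the MLJ interface exposed by the package, a deterministic predict may be needed instead of predict_mode.

using MLJ
using ModalDecisionTrees
using DataFrames, Random

# Toy temporal dataset: each cell holds a short time series (a Vector{Float64})
rng = MersenneTwister(1)
make_series(trend) = [trend * t + 0.1 * randn(rng) for t in 1:10]
n = 40
X = DataFrame(
    temperature = [make_series(isodd(i) ? 0.5 : -0.5) for i in 1:n],
    humidity    = [make_series(isodd(i) ? -0.2 : 0.2) for i in 1:n],
)
y = categorical([isodd(i) ? "up" : "down" for i in 1:n])

# Fit a modal decision tree: its splits are (temporal-)logical conditions on the series
mach = machine(ModalDecisionTree(; min_samples_leaf = 4), X, y)
fit!(mach)
yhat = predict_mode(mach, X)  # assumes a probabilistic MLJ model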
Ever wondered what to do with a trained DecisionTree? Convert it to Sole and inspect its knowledge in terms of logical formulas! You can extract the rules underlying the tree and then:
- Evaluate them in terms of:
- accuracy (e.g., confidence, lift),
- relevance (e.g., support),
- interpretability (e.g., syntax height, number of atoms);
- Modify them;
- Merge them (a merging sketch follows the walkthrough below).

The following end-to-end example trains a decision tree on the iris dataset via MLJ and DecisionTree.jl, converts it to a Sole model, and inspects the rules it encodes:
using MLJ
using MLJDecisionTreeInterface
using DataFrames
using Sole
# Load the iris dataset and split it into training and test sets
X, y = @load_iris
X = DataFrame(X)
train, test = partition(eachindex(y), 0.8, shuffle=true);
X_train, y_train = X[train, :], y[train];
X_test, y_test = X[test, :], y[test];
# Train a model
learned_dt_tree = begin
Tree = MLJ.@load DecisionTreeClassifier pkg=DecisionTree
model = Tree(max_depth=-1)
mach = machine(model, X_train, y_train)
fit!(mach)
fitted_params(mach).tree
end
# Convert to Sole model
sole_dt = solemodel(learned_dt_tree)
# Make the test instances flow into the model, so that test metrics can then be computed
julia> apply!(sole_dt, X_test, y_test);

# Print the Sole model
julia> printmodel(sole_dt; show_metrics = true);
▣ V4 < 0.8
├✔ setosa : (ninstances = 7, ncovered = 7, confidence = 1.0, lift = 1.0)
└✘ V3 < 4.95
├✔ V4 < 1.65
│├✔ versicolor : (ninstances = 10, ncovered = 10, confidence = 1.0, lift = 1.0)
│└✘ V2 < 3.1
│ ├✔ virginica : (ninstances = 2, ncovered = 2, confidence = 1.0, lift = 1.0)
│ └✘ versicolor : (ninstances = 0, ncovered = 0, confidence = NaN, lift = NaN)
└✘ V3 < 5.05
├✔ V1 < 6.5
│├✔ virginica : (ninstances = 0, ncovered = 0, confidence = NaN, lift = NaN)
│└✘ versicolor : (ninstances = 0, ncovered = 0, confidence = NaN, lift = NaN)
└✘ virginica : (ninstances = 11, ncovered = 11, confidence = 0.91, lift = 1.0)
# Extract rules that are at least as good as a random baseline model
julia> interesting_rules = listrules(sole_dt, min_lift = 1.0, min_ninstances = 0);
julia> printmodel.(interesting_rules; show_metrics = true);
▣ (V4 < 0.8) ∧ (⊤) ↣ setosa : (ninstances = 30, ncovered = 7, coverage = 0.23, confidence = 1.0, natoms = 1, lift = 4.29)
▣ (¬(V4 < 0.8)) ∧ (V3 < 4.95) ∧ (V4 < 1.65) ∧ (⊤) ↣ versicolor : (ninstances = 30, ncovered = 10, coverage = 0.33, confidence = 1.0, natoms = 3, lift = 2.73)
▣ (¬(V4 < 0.8)) ∧ (V3 < 4.95) ∧ (¬(V4 < 1.65)) ∧ (V2 < 3.1) ∧ (⊤) ↣ virginica : (ninstances = 30, ncovered = 2, coverage = 0.07, confidence = 1.0, natoms = 4, lift = 2.5)
▣ (¬(V4 < 0.8)) ∧ (¬(V3 < 4.95)) ∧ (¬(V3 < 5.05)) ∧ (⊤) ↣ virginica : (ninstances = 30, ncovered = 11, coverage = 0.37, confidence = 0.91, natoms = 3, lift = 2.27)
# Simplify rules while extracting and prettify result
julia> interesting_rules = listrules(sole_dt, min_lift = 1.0, min_ninstances = 0, normalize = true);
julia> printmodel.(interesting_rules; show_metrics = true, syntaxstring_kwargs = (; threshold_digits = 2));
▣ V4 < 0.8 ↣ setosa : (ninstances = 30, ncovered = 7, coverage = 0.23, confidence = 1.0, natoms = 1, lift = 4.29)
▣ (V4 ∈ [0.8,1.65)) ∧ (V3 < 4.95) ↣ versicolor : (ninstances = 30, ncovered = 10, coverage = 0.33, confidence = 1.0, natoms = 2, lift = 2.73)
▣ (V4 ≥ 1.65) ∧ (V3 < 4.95) ∧ (V2 < 3.1) ↣ virginica : (ninstances = 30, ncovered = 2, coverage = 0.07, confidence = 1.0, natoms = 3, lift = 2.5)
▣ (V4 ≥ 0.8) ∧ (V3 ≥ 5.05) ↣ virginica : (ninstances = 30, ncovered = 11, coverage = 0.37, confidence = 0.91, natoms = 2, lift = 2.27)
# Directly access rule metrics
julia> readmetrics.(listrules(sole_dt; min_lift=1.0, min_ninstances = 0))
4-element Vector{NamedTuple{(:ninstances, :ncovered, :coverage, :confidence, :natoms, :lift), Tuple{Int64, Int64, Float64, Float64, Int64, Float64}}}:
(ninstances = 30, ncovered = 7, coverage = 0.23333333333333334, confidence = 1.0, natoms = 1, lift = 4.285714285714286)
(ninstances = 30, ncovered = 10, coverage = 0.3333333333333333, confidence = 1.0, natoms = 3, lift = 2.7272727272727275)
(ninstances = 30, ncovered = 2, coverage = 0.06666666666666667, confidence = 1.0, natoms = 4, lift = 2.5)
(ninstances = 30, ncovered = 11, coverage = 0.36666666666666664, confidence = 0.9090909090909091, natoms = 3, lift = 2.2727272727272725)
# Show rules with an additional metric (syntax height of the rule's antecedent)
julia> printmodel.(sort(interesting_rules, by = readmetrics); show_metrics = (; round_digits = nothing, additional_metrics = (; height = r->SoleLogics.height(antecedent(r)))));
▣ (V4 ≥ 1.65) ∧ (V3 < 4.95) ∧ (V2 < 3.1) ↣ virginica : (ninstances = 30, ncovered = 2, coverage = 0.06666666666666667, confidence = 1.0, height = 2, lift = 2.5)
▣ V4 < 0.8 ↣ setosa : (ninstances = 30, ncovered = 7, coverage = 0.23333333333333334, confidence = 1.0, height = 0, lift = 4.285714285714286)
▣ (V4 ∈ [0.8,1.65)) ∧ (V3 < 4.95) ↣ versicolor : (ninstances = 30, ncovered = 10, coverage = 0.3333333333333333, confidence = 1.0, height = 1, lift = 2.7272727272727275)
▣ (V4 ≥ 0.8) ∧ (V3 ≥ 5.05) ↣ virginica : (ninstances = 30, ncovered = 11, coverage = 0.36666666666666664, confidence = 0.9090909090909091, height = 1, lift = 2.2727272727272725)
# Pretty table of rules and their metrics
julia> metricstable(interesting_rules; metrics_kwargs = (; round_digits = nothing, additional_metrics = (; height = r->SoleLogics.height(antecedent(r)))))
┌────────────────────────────────────────┬────────────┬────────────┬──────────┬───────────┬────────────┬────────┬─────────┐
│ Antecedent │ Consequent │ ninstances │ ncovered │ coverage │ confidence │ height │ lift │
├────────────────────────────────────────┼────────────┼────────────┼──────────┼───────────┼────────────┼────────┼─────────┤
│ V4 < 0.8 │ setosa │ 30 │ 7 │ 0.233333 │ 1.0 │ 0 │ 4.28571 │
│ (V4 ∈ [0.8,1.65)) ∧ (V3 < 4.95) │ versicolor │ 30 │ 10 │ 0.333333 │ 1.0 │ 1 │ 2.72727 │
│ (V4 ≥ 1.65) ∧ (V3 < 4.95) ∧ (V2 < 3.1) │ virginica │ 30 │ 2 │ 0.0666667 │ 1.0 │ 2 │ 2.5 │
│ (V4 ≥ 0.8) ∧ (V3 ≥ 5.05) │ virginica │ 30 │ 11 │ 0.366667 │ 0.909091 │ 1 │ 2.27273 │
└────────────────────────────────────────┴────────────┴────────────┴──────────┴───────────┴────────────┴────────┴─────────┘
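Finally, regarding the "merge them" point above: rules sharing the same consequent can be merged into a single rule with a disjunctive antecedent. A minimal sketch, assuming SoleModels provides a joinrules helper for this (check the SoleModels documentation for the exact name and signature):

# Merge rules predicting the same class (e.g., the two virginica rules above)
# into one rule with a disjunctive antecedent, then print them with their metrics
merged_rules = joinrules(interesting_rules)
printmodel.(merged_rules; show_metrics = true);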
The formal foundations of the framework are given in giopaglia's PhD thesis: Modal Symbolic Learning: from theory to practice, G. Pagliarini (2024).
Additionally, there's a 10-hour PhD course on YouTube, as well as material for it (including Jupyter Notebooks displaying symbolic AI workflows with Sole).
The package is developed and maintained by the ACLAI Lab @ University of Ferrara.
Long live transparent modeling!