**README.md** (+17)

<pre>
[ASCII-art banner]

Logical Markov Random Fields.
</pre>
# LoMRF: Logical Markov Random Fields
LoMRF is an open-source implementation of [Markov Logic Networks](https://en.wikipedia.org/wiki/Markov_logic_network) (MLNs), written in the [Scala programming language](http://scala-lang.org).

---

**doc/2_inference.md** (+14, -20)

# Inference
In brief, inference in Markov Logic Networks (MLNs) is the process of estimating the marginal probability or the most probable truth state of the groundings of some query atoms, given an MLN theory (i.e., a collection of weighted formulas) and a collection of input ground evidence atoms (i.e., input ground predicates with known truth state).
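
For instance, using the smoking example that appears later in this documentation, the ingredients of an inference task can be sketched as follows (the weight value and file contents are purely illustrative):

```lang-none
// Theory (a weighted formula; the weight 1.5 is illustrative):
1.5 Smokes(x) => Cancer(x)

// Evidence (ground atoms with known truth state):
Smokes(Anna)

// Inference then estimates P(Cancer(Anna)) (marginal inference),
// or the most probable truth state of Cancer(Anna) (MAP inference).
```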

## Types of inference in LoMRF

[...]

In order to perform inference, we have to define the following:

[...]
### Inference using the `lomrf` command-line tool

To demonstrate the usage of LoMRF from the command-line interface, assume that we have one knowledge base file, named `theory.mln`, and one evidence file, named `evidence.db`.

In our example knowledge base we have the following predicates:

[...]

The results from MAP inference are stored in the file `map-out.result` (see parameter `-r`).
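
For instance, such a MAP inference run can be written as the sketch below; the `-i` and `-r` parameters are documented here, while using `-e` for the evidence file and `-q` for the query atoms is an assumption:

```lang-none
lomrf -i theory.mln -e evidence.db -q <query atoms> -r map-out.result
```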
## Probabilistic Inference Examples

See Sections [Probabilistic Inference Examples](2_1_inference_examples.md) and [Temporal Probabilistic Inference Examples](2_2_temporal_inference_examples.md). The sources of these examples are located in the LoMRF-data project (follow the instructions in [Download Example Data](6_2_download_example_data.md)).
## Command-line Interface Options ##

By executing the ```lomrf -h``` (or ```lomrf --help```) command from the command-line interface, we get a listing of all available parameters. Below we explain all LoMRF inference command-line interface parameters:

[...]

* `-dynamic, --dynamic-implementations <string>` **[Optional]** Comma-separated paths to search recursively for dynamic predicates/functions implementations (*.class and *.jar files).
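
For instance, assuming custom implementations compiled under the hypothetical paths `/opt/lomrf/dynamic` and `custom-functions.jar`, the option could be supplied as follows:

```lang-none
lomrf ... -dynamic /opt/lomrf/dynamic,custom-functions.jar
```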
## References

* Bart Selman, Henry Kautz, and Bram Cohen. (1993) Local Search Strategies for Satisfiability Testing. Final version appears in Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge. In David S. Johnson and Michael A. Trick (Eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 26, AMS. ([link](http://www.cs.cornell.edu/selman/papers/pdf/dimacs.pdf))
* Henry Kautz, Bart Selman and Yueyen Jiang. (1996) A General Stochastic Approach to Solving Problems with Hard and Soft Constraints. In Gu, D., Du, J. and Pardalos, P. (Eds.), The Satisfiability Problem: Theory and Applications, Vol. 35 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pp. 573–586. AMS. ([link](https://cs.rochester.edu/u/kautz/papers/maxsatDIMACSfinal.ps))
* Tuyen N. Huynh and Raymond J. Mooney. (2009) Max-Margin Weight Learning for Markov Logic Networks. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD-09). ([link](http://www.ai.sri.com/~huynh/papers/huynh_mooney_ecmlpkdd09.pdf))
* Poon, Hoifung and Domingos, Pedro. (2006) Sound and Efficient Inference with Probabilistic and Deterministic Dependencies. In Proceedings of the 21st National Conference on Artificial Intelligence (pp. 458-463), 2006. Boston, MA: AAAI Press. ([link](http://homes.cs.washington.edu/~pedrod/papers/aaai06a.pdf))

---

**doc/3_1_weight_learning_examples.md** (+20, -9)

# Weight Learning Examples

Below we provide simple example models in order to demonstrate weight learning in LoMRF.
## Social Network Analysis

We would like to model a simple social network of friends, smoking, and cancer.

*Predicate schema:*

```lang-none
// Input predicates:
Friends(person, person)

// Query/non-evidence predicates:
Smokes(person)
Cancer(person)
```

[...]

```lang-none
Smokes(x) => Cancer(x)

// People having friends who smoke also smoke, and those having friends
// who don't smoke don't smoke.
Friends(x, y) => (Smokes(x) <=> Smokes(y))
```

Since this cannot hold for all smokers with absolute certainty, in Markov Logic we can associate a weight value with each logical formula, or use weight learning in order to estimate the weights from training data.

Please note that both example formulas are not hard-constrained (i.e., they do not end with a full-stop character). Although they are soft-constrained, they are not (yet) associated with a weight value. The absence of a weight indicates that it must be estimated by the weight learning algorithm.
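
To make the distinction concrete, the fragment below contrasts the three cases (the numeric weight is purely illustrative):

```lang-none
// Hard-constrained formula: ends with a full-stop character.
Smokes(x) => Cancer(x).

// Soft-constrained formula with a known weight (1.5 is illustrative):
1.5 Smokes(x) => Cancer(x)

// Soft-constrained formula without a weight: the weight will be
// estimated by the weight learning algorithm.
Smokes(x) => Cancer(x)
```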
### Training data (smoking-train.db)

In the following training data we give example relations between friends, e.g., the fact that the persons `Anna` and `Bob` are friends (using the true ground fact `Friends(Anna, Bob)`). Furthermore, we state who is a smoker, e.g., `Anna` is a smoker, therefore we give the true ground fact `Smokes(Anna)`. Similarly, we state which persons have been diagnosed with cancer, e.g., `Cancer(Anna)`. Please note that, due to the closed-world assumption, we do not need to explicitly state which ground facts are false, e.g., the fact that `Bob` is not a smoker (i.e., `!Smokes(Bob)`). Below we give the full example of our training data:

```lang-none
Friends(Anna, Bob)
Friends(Bob, Anna)
[...]
Cancer(Edward)
```
***Weight learning execution***

In order to perform weight learning for this example, we execute a command of the following form.
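
This is a sketch reconstructed from the parameters explained below; the `lomrf-wlearn` entry-point name is an assumption and may differ between LoMRF versions:

```lang-none
lomrf-wlearn -i smoking.mln -t smoking-train.db -o smoking-learned.mln -ne Smokes/1,Cancer/1
```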

Here the parameter `-i smoking.mln` is the input MLN theory, `-o smoking-learned.mln` is the resulting output theory with the estimated weights, `-t smoking-train.db` is the training data, and the parameter `-ne Smokes/1,Cancer/1` specifies which predicates are the non-evidence predicates. After the execution of this example, the resulting file `smoking-learned.mln` is an MLN knowledge base with the learned weights. Using this file along with the test data, we can compute the truth value of each person smoking and getting cancer.
## Car traffic modelling

In the following example we are going to demonstrate weight learning using a naive implementation of a [Hidden Markov Model](https://en.wikipedia.org/wiki/Hidden_Markov_model) for modelling car traffic (see the [original example](http://alchemy.cs.washington.edu/tutorial/7Hidden_Markov_Models.html)).

We assume that each day a car may take one of the following actions: (1) stopped, (2) driving, or (3) slowing down. Furthermore, we assume that these actions depend on the state of the stoplight in front of the car, which can be either red, green or yellow.

In a Markov process we need to model `states` and `observations` at certain points in `time`. Using a first-order logic representation, we can model a `state` and an `observation` using predicates, while time, car actions and traffic-light observations are represented as variables in each of these predicates.
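
For illustration, such a representation could declare the following predicates; the predicate and constant names in this sketch are assumptions, not necessarily those used in the actual example files:

```lang-none
// The state of the stoplight at each time-point, e.g., State(Red, 1):
State(state, time)

// The observed car action at each time-point, e.g., Obs(Stopped, 1):
Obs(action, time)
```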

Please find below the example knowledge base and training data:

[...]

This produces the file `traffic-learned.mln` with the learned weights. Using the resulting trained MLN model along with the test data, we can compute the truth value of each state.
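
For instance, inference with the trained model could be invoked as in the sketch below; the evidence file name `traffic-test.db` is hypothetical, the `State/2` signature follows the illustrative schema sketched above, and using `-e` and `-q` for the evidence and query atoms is an assumption:

```lang-none
lomrf -i traffic-learned.mln -e traffic-test.db -q State/2 -r traffic-out.result
```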

---

**doc/3_2_temporal_weight_learning_examples.md** (+14, -11)

Below we provide examples that demonstrate LoMRF weight learning capabilities in [...]
## Activity Recognition

In this example we demonstrate how to perform weight learning for activity recognition, using a small fragment of the first set of the [CAVIAR dataset](http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/). We use the same Probabilistic Event Calculus formalism as presented in the [Quick Start](0_quick_start.md) section and the same knowledge base as the one defined in the [Temporal Inference Examples](2_2_temporal_inference_examples.md).

### Training data

[...]

The files of this example are the following:
* Knowledge base files:
  * Main MLN file in CNF: [theory_cnf.mln](../Data/Examples/Weight_Learning/Activity_Recognition/theory.mln)
  * Definitions of moving activity: [definitions/moving.mln](../Data/Examples/Weight_Learning/Activity_Recognition/definitions/moving.mln)
  * Definitions of meeting activity: [definitions/meeting.mln](../Data/Examples/Weight_Learning/Activity_Recognition/definitions/meeting.mln)
* Training file for batch learning: [training.db](../Data/Examples/Weight_Learning/Activity_Recognition/training/batch/training.db)
* Training files for online learning: [micro-batches](../Data/Examples/Weight_Learning/Activity_Recognition/training/online/)

Parameters (combined into a full command sketch after the list):
* Non-evidence predicates: `-ne HoldsAt/2`
* Input MLN theory: `-i theory_cnf.mln`
* Input training data: `-t training.db`
* Resulting output MLN theory: `-o learned.mln`
* Enable loss-augmented inference (also known as a separation oracle), which uses the Hamming loss function to add loss terms to the objective function during inference: `-lossAugmented`
* Specify the learning algorithm, i.e., Max-Margin (default), AdaGrad or CDA: `-alg`
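
Putting these parameters together, a complete invocation might look like the sketch below; the `lomrf-wlearn` entry-point name is an assumption and may differ between LoMRF versions, while `-lossAugmented` and `-alg` are optional:

```lang-none
lomrf-wlearn -i theory_cnf.mln -t training.db -o learned.mln -ne HoldsAt/2 -lossAugmented
```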