docs/src/functions.md (+14 −1)
@@ -1,5 +1,10 @@
 # Functions and constraints
 
+Once an expression is created, it is possible to create the `Term`s that define the optimization problem.
+
+These can consist of either [Smooth functions](@ref), [Nonsmooth functions](@ref), [Inequality constraints](@ref)
+or [Equality constraints](@ref).
+
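For concreteness, here is a minimal sketch of what such terms can look like. It assumes the expression API shown in the [Quick tutorial guide](@ref) (`ls`, `norm`, and the comparison operators as term constructors); the data below is purely illustrative.

```julia
using StructuredOptimization

x = Variable(10)                  # optimization variable (zeros by default)
A, y = randn(5, 10), randn(5)     # illustrative problem data

t_smooth    = ls(A*x - y)         # smooth term: 1/2*||A*x - y||^2
t_nonsmooth = norm(x, 1)          # nonsmooth term: l1-norm
t_ineq      = norm(x, 2) <= 1.0   # inequality constraint term
t_eq        = x[1:2] == zeros(2)  # equality constraint term
```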
 ## Smooth functions
 
 ```@docs
@@ -20,7 +25,7 @@ sumpositive
 hingeloss
 ```
 
-## Inequalities constraints
+## Inequality constraints
 
 ```@docs
 <=
@@ -34,12 +39,20 @@ hingeloss
 
 ## Smoothing
 
+Sometimes the optimization problem might involve only nonsmooth terms that do not lead to efficient proximal mappings. It is possible to *smooth* these terms by means of the *Moreau envelope*.
+
 ```@docs
 smooth
 ```
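A hedged sketch of how this can look, assuming `smooth` accepts a term (and optionally the envelope parameter) and that the problem data is purely illustrative:

```julia
using StructuredOptimization

x = Variable(10)
A, b = randn(5, 10), randn(5)

# Both terms below are nonsmooth; wrapping the first in `smooth` replaces it
# by its Moreau envelope, a differentiable surrogate that a proximal gradient
# method can treat as the smooth part of the cost.
@minimize smooth(norm(A*x - b, Inf)) + norm(x, 1)
```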
 
 ## Duality
 
+In some cases it is more convenient to solve the *dual problem* instead of the primal problem.
+
+It is possible to convert the primal problem into its dual form by means of the *convex conjugate*.
+
+See the Total Variation demo for an example of such a procedure.
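A minimal sketch of the idea, assuming `conj` returns the term associated with the convex conjugate (the Total Variation demo shows the complete procedure):

```julia
using StructuredOptimization

x = Variable(5)
t  = norm(x, 1)  # primal nonsmooth term: l1-norm
tc = conj(t)     # its convex conjugate: the indicator of the l-infinity ball
```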
docs/src/index.md (+7 −4)
@@ -14,14 +14,17 @@ three different packages:
 
 * [`ProximalAlgorithms.jl`](https://github.com/kul-forbes/ProximalAlgorithms.jl) is a library of proximal (aka splitting) algorithm solvers.
 
-`StructuredOptimization.jl` can handle large-scale convex and nonconvex problems with nonsmooth cost functions: see ? for a set of demos.
+`StructuredOptimization.jl` can handle large-scale convex and nonconvex problems with nonsmooth cost functions. It supports complex variables as well. See the demos and the [Quick tutorial guide](@ref).
+
+## Citing
+
+If you use `StructuredOptimization.jl` for published work, we encourage you to cite:
+
+* N. Antonello, L. Stella, P. Patrinos, T. van Waterschoot, “Proximal Gradient Algorithms: Applications in Signal Processing,” [arXiv:1803.01621](https://arxiv.org/abs/1803.01621) (2018).
 
 # Credits
 
 `StructuredOptimization.jl` is developed by
 [Lorenzo Stella](https://lostella.github.io) and
 [Niccolò Antonello](https://nantonel.github.io)
 at [KU Leuven, ESAT/Stadius](https://www.esat.kuleuven.be/stadius/).
 [[2]](http://epubs.siam.org/doi/abs/10.1137/080716542) Beck, Teboulle, *A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems*, SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
+
+[[3]](https://arxiv.org/abs/1606.06256) Themelis, Stella, Patrinos, *Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms*, arXiv:1606.06256 (2016).
+
+[[4]](https://doi.org/10.1109/CDC.2017.8263933) Stella, Themelis, Sopasakis, Patrinos, *A simple and efficient algorithm for nonlinear model predictive control*, 56th IEEE Conference on Decision and Control (2017).
-Here the squared norm $\tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2$ is a *smooth* function while the $l_1$-norm is a *nonsmooth* function.
+Here the squared norm $\tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2$ is a *smooth* function $f$, whereas the $l_1$-norm is a *nonsmooth* function $g$.
 
-This can be solved using `StructuredOptimization.jl` using only few lines of code:
+This problem can be solved with `StructuredOptimization.jl` in only a few lines of code:
 
 ```julia
 julia> using StructuredOptimization
@@ -46,11 +46,11 @@ It is possible to access to the solution by typing `~x`.
 
 By default variables are initialized with `Array`s of zeros.
 
-It is possible to set different initializations during construction `x = Variable( [1.; 0.; ...] )` or by assignement `~x .= [1.; 0.; ...]`.
+Different initializations can be set during construction, `x = Variable( [1.; 0.; ...] )`, or by assignment, `~x .= [1.; 0.; ...]`.
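A short sketch of the two initialization routes described above (the concrete values are illustrative):

```julia
using StructuredOptimization

x = Variable(3)                # initialized with zeros(3)
y = Variable([1.0, 0.0, 2.0])  # initialized during construction
~x .= [1.0, 0.0, 2.0]          # re-initialized by assignment
~x                             # access the underlying array
```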
 
 ## Constrained optimization
 
-Constraint optimization is also ecompassed by [Standard problem formulation](@ref):
+Constrained optimization is also encompassed by the [Standard problem formulation](@ref):
 
 for a nonempty set $\mathcal{S}$ the constraint of
@@ -61,7 +61,7 @@ for a nonempty set $\mathcal{S}$ the constraint of
 Currently `StructuredOptimization.jl` supports only *Proximal Gradient (aka Forward-Backward) algorithms*, which require specific properties of the nonsmooth functions and constraints in order to be applicable.
 
+In particular, the nonsmooth functions must lead to an *efficiently computable proximal mapping*.
+
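For instance, the proximal mapping of the $l_1$-norm is the closed-form soft-thresholding operator. A hedged sketch using `ProximalOperators.jl`, the package `StructuredOptimization.jl` builds on:

```julia
using ProximalOperators

g = NormL1(1.0)          # g(x) = ||x||_1
x = [1.5, -0.2, 0.7]
y, gy = prox(g, x, 1.0)  # y[i] = sign(x[i]) * max(abs(x[i]) - 1.0, 0)
```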
 If we express the nonsmooth function $g$ as the composition of
 a function $\tilde{g}$ with a linear operator $A$:
 ```math
 g(\mathbf{x}) = \tilde{g}(A \mathbf{x})
 ```
-than the problem can be solved when $g$ satisifies the following properties:
+then the proximal mapping of $g$ is efficiently computable if it satisfies the following properties:
 
 1. the mapping $A$ must be a *tight frame*, namely it must satisfy $A A^* = \mu Id$, where $\mu \geq 0$, $A^*$ is the adjoint of $A$, and $Id$ is the identity operator.
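For example, an orthonormal transform such as the DCT is a tight frame with $\mu = 1$, so composing it with the $l_1$-norm keeps the proximal mapping efficiently computable. A sketch, assuming the `dct` mapping can be applied to variables in expressions (as the demos do with other mappings):

```julia
using StructuredOptimization

x = Variable(8)
g = norm(dct(x), 1)  # tilde-g = l1-norm, A = DCT with A*A' = Id (mu = 1)
```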
@@ -184,3 +189,6 @@ julia> @minimize ls( A*x - y ) + λ*norm(x[1:div(n,2)], 1) st x[div(n,2)+1:n] >=
 ```
 as now the optimization variables $\mathbf{x}$ are partitioned into non-overlapping groups.
 
+!!! note
+
+    When the problem is not accepted it might still be possible to solve it: see [Smoothing](@ref) and [Duality](@ref).