Commit 6f58a6c: format markdown

Parent: f35f510

32 files changed, +1151 -1028 lines

.JuliaFormatter.toml (+1)

````diff
@@ -1 +1,2 @@
 style = "sciml"
+format_markdown = true
````
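
This config is what drove the rest of the commit: with `format_markdown = true`, JuliaFormatter rewrites the repository's Markdown files (including the Julia snippets inside them) under the SciML style. As a minimal sketch, assuming JuliaFormatter.jl is installed, reapplying it locally looks like:

```julia
using JuliaFormatter

# format(path) discovers .JuliaFormatter.toml in the directory tree, so
# style = "sciml" and format_markdown = true are applied automatically.
format(".")
```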

.gitignore (+1)

````diff
@@ -1,3 +1,4 @@
 .DS_Store
 /Manifest.toml
 /dev/
+/docs/build/
````

README.md (+30 -27)

````diff
@@ -6,7 +6,7 @@
 [![codecov](https://codecov.io/gh/SciML/Optimization.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/SciML/Optimization.jl)
 [![Build Status](https://github.com/SciML/Optimization.jl/workflows/CI/badge.svg)](https://github.com/SciML/Optimization.jl/actions?query=workflow%3ACI)
 
-[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
+[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
 [![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)
 
 Optimization.jl is a package with a scope that is beyond your normal global optimization
@@ -23,25 +23,27 @@ Assuming that you already have Julia correctly installed, it suffices to import
 Optimization.jl in the standard way:
 
 ```julia
-import Pkg; Pkg.add("Optimization")
+using Pkg
+Pkg.add("Optimization")
 ```
+
 The packages relevant to the core functionality of Optimization.jl will be imported
 accordingly and, in most cases, you do not have to worry about the manual
 installation of dependencies. Below is the list of packages that need to be
 installed explicitly if you intend to use the specific optimization algorithms
 offered by them:
 
-- OptimizationBBO for [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl)
-- OptimizationEvolutionary for [Evolutionary.jl](https://github.com/wildart/Evolutionary.jl) (see also [this documentation](https://wildart.github.io/Evolutionary.jl/dev/))
-- OptimizationGCMAES for [GCMAES.jl](https://github.com/AStupidBear/GCMAES.jl)
-- OptimizationMOI for [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) (usage of algorithm via MathOptInterface API; see also the API [documentation](https://jump.dev/MathOptInterface.jl/stable/))
-- OptimizationMetaheuristics for [Metaheuristics.jl](https://github.com/jmejia8/Metaheuristics.jl) (see also [this documentation](https://jmejia8.github.io/Metaheuristics.jl/stable/))
-- OptimizationMultistartOptimization for [MultistartOptimization.jl](https://github.com/tpapp/MultistartOptimization.jl) (see also [this documentation](https://juliahub.com/docs/MultistartOptimization/cVZvi/0.1.0/))
-- OptimizationNLopt for [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) (usage via the NLopt API; see also the available [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/))
-- OptimizationNOMAD for [NOMAD.jl](https://github.com/bbopt/NOMAD.jl) (see also [this documentation](https://bbopt.github.io/NOMAD.jl/stable/))
-- OptimizationNonconvex for [Nonconvex.jl](https://github.com/JuliaNonconvex/Nonconvex.jl) (see also [this documentation](https://julianonconvex.github.io/Nonconvex.jl/stable/))
-- OptimizationQuadDIRECT for [QuadDIRECT.jl](https://github.com/timholy/QuadDIRECT.jl)
-- OptimizationSpeedMapping for [SpeedMapping.jl](https://github.com/NicolasL-S/SpeedMapping.jl) (see also [this documentation](https://nicolasl-s.github.io/SpeedMapping.jl/stable/))
+  - OptimizationBBO for [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl)
+  - OptimizationEvolutionary for [Evolutionary.jl](https://github.com/wildart/Evolutionary.jl) (see also [this documentation](https://wildart.github.io/Evolutionary.jl/dev/))
+  - OptimizationGCMAES for [GCMAES.jl](https://github.com/AStupidBear/GCMAES.jl)
+  - OptimizationMOI for [MathOptInterface.jl](https://github.com/jump-dev/MathOptInterface.jl) (usage of algorithm via MathOptInterface API; see also the API [documentation](https://jump.dev/MathOptInterface.jl/stable/))
+  - OptimizationMetaheuristics for [Metaheuristics.jl](https://github.com/jmejia8/Metaheuristics.jl) (see also [this documentation](https://jmejia8.github.io/Metaheuristics.jl/stable/))
+  - OptimizationMultistartOptimization for [MultistartOptimization.jl](https://github.com/tpapp/MultistartOptimization.jl) (see also [this documentation](https://juliahub.com/docs/MultistartOptimization/cVZvi/0.1.0/))
+  - OptimizationNLopt for [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) (usage via the NLopt API; see also the available [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/))
+  - OptimizationNOMAD for [NOMAD.jl](https://github.com/bbopt/NOMAD.jl) (see also [this documentation](https://bbopt.github.io/NOMAD.jl/stable/))
+  - OptimizationNonconvex for [Nonconvex.jl](https://github.com/JuliaNonconvex/Nonconvex.jl) (see also [this documentation](https://julianonconvex.github.io/Nonconvex.jl/stable/))
+  - OptimizationQuadDIRECT for [QuadDIRECT.jl](https://github.com/timholy/QuadDIRECT.jl)
+  - OptimizationSpeedMapping for [SpeedMapping.jl](https://github.com/NicolasL-S/SpeedMapping.jl) (see also [this documentation](https://nicolasl-s.github.io/SpeedMapping.jl/stable/))
 
 ## Tutorials and Documentation
 
@@ -54,36 +56,34 @@ the documentation, which contains the unreleased features.
 
 ```julia
 using Optimization
-rosenbrock(x,p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
+rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
-p = [1.0,100.0]
+p = [1.0, 100.0]
 
-prob = OptimizationProblem(rosenbrock,x0,p)
+prob = OptimizationProblem(rosenbrock, x0, p)
 
 using OptimizationOptimJL
-sol = solve(prob,NelderMead())
-
+sol = solve(prob, NelderMead())
 
 using OptimizationBBO
-prob = OptimizationProblem(rosenbrock, x0, p, lb = [-1.0,-1.0], ub = [1.0,1.0])
-sol = solve(prob,BBO_adaptive_de_rand_1_bin_radiuslimited())
+prob = OptimizationProblem(rosenbrock, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
+sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
 ```
 
 Note that Optim.jl is a core dependency of Optimization.jl. However, BlackBoxOptim.jl
 is not and must already be installed (see the list above).
 
 *Warning:* The output of the second optimization task (`BBO_adaptive_de_rand_1_bin_radiuslimited()`) is
-currently misleading in the sense that it returns `Status: failure
-(reached maximum number of iterations)`. However, convergence is actually
+currently misleading in the sense that it returns `Status: failure (reached maximum number of iterations)`. However, convergence is actually
 reached and the confusing message stems from the reliance on the Optim.jl output
-struct (where the situation of reaching the maximum number of iterations is 
+struct (where the situation of reaching the maximum number of iterations is
 rightly regarded as a failure). The improved output struct will soon be
 implemented.
 
 The output of the first optimization task (with the `NelderMead()` algorithm)
 is given below:
 
-```julia
+```
 * Status: success
 
 * Candidate solution
@@ -100,17 +100,19 @@ is given below:
 Iterations: 60
 f(x) calls: 118
 ```
+
 We can also explore other methods in a similar way:
 
 ```julia
 using ForwardDiff
 f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
 prob = OptimizationProblem(f, x0, p)
-sol = solve(prob,BFGS())
+sol = solve(prob, BFGS())
 ```
+
 For instance, the above optimization task produces the following output:
 
-```julia
+```
 * Status: success
 
 * Candidate solution
@@ -134,9 +136,10 @@ For instance, the above optimization task produces the following output:
 ```
 
 ```julia
-prob = OptimizationProblem(f, x0, p, lb = [-1.0,-1.0], ub = [1.0,1.0])
+prob = OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
 sol = solve(prob, Fminbox(GradientDescent()))
 ```
+
 The examples clearly demonstrate that Optimization.jl provides an intuitive
 way of specifying optimization tasks and offers a relatively
 easy access to a wide range of optimization algorithms.
````
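
Since only Optim.jl ships as a core dependency, a backend such as OptimizationBBO has to be added before the bounded example above will run. A minimal end-to-end sketch of that workflow, with package names as in the list above:

```julia
using Pkg
Pkg.add(["Optimization", "OptimizationBBO"])  # solver wrappers are separate packages

using Optimization, OptimizationBBO

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
# BBO's methods are global and box-constrained, so bounds are required:
prob = OptimizationProblem(rosenbrock, zeros(2), [1.0, 100.0],
    lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```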

docs/src/API/FAQ.md (+7 -7)

````diff
@@ -3,7 +3,7 @@
 ## The Solver Seems to Violate Constraints During the Optimization, Causing `DomainError`s, What Can I Do About That?
 
 During the optimization, optimizers use slack variables to relax the solution to the constraints. Because of this,
-there is no guarantee that for an arbitrary optimizer the steps will all satisfy the constraints during the 
+there is no guarantee that for an arbitrary optimizer the steps will all satisfy the constraints during the
 optimization. In many cases, this can cause one's objective function code throw a `DomainError` if it is evaluated
 outside of its acceptable zone. For example, `log(-1)` gives:
 
@@ -16,15 +16,15 @@ log will only return a complex result if called with a complex argument. Try log
 To handle this, one should not assume that the variables will always satisfy the constraints on each step. There
 are three general ways to handle this better:
 
-1. Use [NaNMath.jl](https://github.com/JuliaMath/NaNMath.jl)
-2. Process variables before domain-restricted calls
-3. Use a domain transformation
+ 1. Use [NaNMath.jl](https://github.com/JuliaMath/NaNMath.jl)
+ 2. Process variables before domain-restricted calls
+ 3. Use a domain transformation
 
 NaNMath.jl gives alternative implementations of standard math functions like `log` and `sqrt` in forms that do not
 throw `DomainError`s but rather return `NaN`s. The optimizers will be able to handle the NaNs gracefully and recover,
 allowing for many of these cases to be solved without further modification. Note that this is done [internally in
 JuMP.jl, and thus if a case is working with JuMP and not Optimization.jl
-](https://discourse.julialang.org/t/optimizationmoi-ipopt-violating-inequality-constraint/92608/) this may be the 
+](https://discourse.julialang.org/t/optimizationmoi-ipopt-violating-inequality-constraint/92608/) this may be the
 reason for the difference.
 
 Alternatively, one can pre-process the values directly. For example, `log(abs(x))` is guaranteed to work. If one does
@@ -68,7 +68,7 @@ example, the ModelingToolkit integration with Optimization.jl will do many simpl
 is called. One of them is tearing on the constraints. To understand the tearing process, assume that we had
 nonlinear constraints of the form:
 
-```julia
+```
 0 ~ u1 - sin(u5) * h,
 0 ~ u2 - cos(u1),
 0 ~ u3 - hypot(u1, u2),
@@ -86,7 +86,7 @@ u3 = f3(u1, u2)
 u4 = f4(u2, u3)
 ```
 
-and thus if the objective function was the function of these 5 variables and 4 constraints, ModelingToolkit.jl will 
+and thus if the objective function was the function of these 5 variables and 4 constraints, ModelingToolkit.jl will
 transform it into system of 1 variable with no constraints, allowing unconstrained optimization on a smaller system.
 This will both be faster and numerically easier.
 
````
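
The three strategies in the FAQ above can be sketched side by side. The quadratic objectives below are illustrative stand-ins, not from the docs:

```julia
using NaNMath

# 1. NaNMath: out-of-domain calls return NaN instead of throwing,
#    which optimizers can generally recover from.
NaNMath.log(-1.0)  # NaN, not a DomainError
f1(x, p) = NaNMath.log(x[1]) + x[1]^2

# 2. Pre-process the variable so the call stays in-domain.
f2(x, p) = log(abs(x[1])) + x[1]^2

# 3. Domain transformation: optimize over u with x = exp(u), so the
#    argument of log is positive for every real u.
f3(u, p) = log(exp(u[1])) + exp(u[1])^2
```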

docs/src/API/modelingtoolkit.md (+19 -19)

````diff
@@ -1,19 +1,19 @@
-# ModelingToolkit Integration
-
-Optimization.jl is heavily integrated with the ModelingToolkit.jl
-symbolic system for symbolic-numeric optimizations. It provides a
-front-end for automating the construction, parallelization, and
-optimization of code. Optimizers can better interface with the extra
-symbolic information provided by the system.
-
-There are two ways that the user interacts with ModelingToolkit.jl.
-One can use `OptimizationFunction` with `AutoModelingToolkit` for
-automatically transforming numerical codes into symbolic codes. See
-the [OptimizationFunction documentation](@id optfunction) for more
-details.
-
-Secondly, one can generate `OptimizationProblem`s for use in
-Optimization.jl from purely a symbolic front-end. This is the form
-users will encounter when using ModelingToolkit.jl directly, and it is
-also the form supplied by domain-specific languages. For more information,
-see the [OptimizationSystem documentation](https://docs.sciml.ai/ModelingToolkit/stable/systems/OptimizationSystem/).
+# ModelingToolkit Integration
+
+Optimization.jl is heavily integrated with the ModelingToolkit.jl
+symbolic system for symbolic-numeric optimizations. It provides a
+front-end for automating the construction, parallelization, and
+optimization of code. Optimizers can better interface with the extra
+symbolic information provided by the system.
+
+There are two ways that the user interacts with ModelingToolkit.jl.
+One can use `OptimizationFunction` with `AutoModelingToolkit` for
+automatically transforming numerical codes into symbolic codes. See
+the [OptimizationFunction documentation](@id optfunction) for more
+details.
+
+Secondly, one can generate `OptimizationProblem`s for use in
+Optimization.jl from purely a symbolic front-end. This is the form
+users will encounter when using ModelingToolkit.jl directly, and it is
+also the form supplied by domain-specific languages. For more information,
+see the [OptimizationSystem documentation](https://docs.sciml.ai/ModelingToolkit/stable/systems/OptimizationSystem/).
````
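
As a minimal sketch of the symbolic front-end described in this file, following the ModelingToolkit `OptimizationSystem` docs of roughly this vintage (the keywords `grad` and `hess` are assumptions that may differ in later releases):

```julia
using ModelingToolkit, Optimization, OptimizationOptimJL

@variables x y
@parameters a b
loss = (a - x)^2 + b * (y - x^2)^2

@named sys = OptimizationSystem(loss, [x, y], [a, b])

# Gradients and Hessians are generated symbolically from the system.
prob = OptimizationProblem(sys, [x => 0.0, y => 0.0], [a => 1.0, b => 100.0];
    grad = true, hess = true)
sol = solve(prob, Newton())
```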

docs/src/API/optimization_function.md (+6 -7)

````diff
@@ -8,12 +8,12 @@ SciMLBase.OptimizationFunction
 
 The choices for the auto-AD fill-ins with quick descriptions are:
 
-- `AutoForwardDiff()`: The fastest choice for small optimizations
-- `AutoReverseDiff(compile=false)`: A fast choice for large scalar optimizations
-- `AutoTracker()`: Like ReverseDiff but GPU-compatible
-- `AutoZygote()`: The fastest choice for non-mutating array-based (BLAS) functions
-- `AutoFiniteDiff()`: Finite differencing, not optimal but always applicable
-- `AutoModelingToolkit()`: The fastest choice for large scalar optimizations
+  - `AutoForwardDiff()`: The fastest choice for small optimizations
+  - `AutoReverseDiff(compile=false)`: A fast choice for large scalar optimizations
+  - `AutoTracker()`: Like ReverseDiff but GPU-compatible
+  - `AutoZygote()`: The fastest choice for non-mutating array-based (BLAS) functions
+  - `AutoFiniteDiff()`: Finite differencing, not optimal but always applicable
+  - `AutoModelingToolkit()`: The fastest choice for large scalar optimizations
 
 ## Automatic Differentiation Choice API
 
@@ -27,4 +27,3 @@ Optimization.AutoZygote
 Optimization.AutoTracker
 Optimization.AutoModelingToolkit
 ```
-
````
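
A short sketch of how these AD backends plug into `OptimizationFunction`, reusing the README's Rosenbrock setup and assuming the relevant AD packages are installed:

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0, p = zeros(2), [1.0, 100.0]

# Forward-mode AD: listed above as the fastest choice for small problems.
f_fwd = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
sol = solve(OptimizationProblem(f_fwd, x0, p), BFGS())

# Finite differencing: not optimal, but always applicable.
f_fin = OptimizationFunction(rosenbrock, Optimization.AutoFiniteDiff())
sol = solve(OptimizationProblem(f_fin, x0, p), BFGS())
```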
