|
51 | 51 | "\n",
|
52 | 52 | "- Expected yearly returns:\n",
|
53 | 53 | "\n",
|
54 |
| - " Share $A$: $r_{A} = 7\\%$\n", |
| 54 | + " - Share $A$: $r_{A} = 7\\%$\n", |
55 | 55 | "\n",
|
56 |
| - " Share $B$: $r_{B} = 9\\%$\n", |
| 56 | + " - Share $B$: $r_{B} = 9\\%$\n", |
57 | 57 | " \n",
|
58 | 58 | " \n",
|
59 | 59 | "- Volatilities:\n",
|
60 | 60 | "\n",
|
61 |
| - " Share $A$: $\\sigma_{A} = 20\\%$\n", |
| 61 | + " - Share $A$: $\\sigma_{A} = 20\\%$\n", |
62 | 62 | "\n",
|
63 |
| - " Share $B$: $\\sigma_{B} = 30\\%$\n", |
| 63 | + " - Share $B$: $\\sigma_{B} = 30\\%$\n", |
64 | 64 | " \n",
|
65 | 65 | " \n",
|
66 | 66 | "- Correlation:\n",
|
67 | 67 | "\n",
|
68 |
| - " $\\rho_{A, B} = 0.7$\n", |
| 68 | + " - $\\rho_{A, B} = 0.7$\n", |
69 | 69 | "\n",
|
70 | 70 | "The investor wants to invest her money in such a way, that the expected return is $5\\%$, and where risk (volatility) is minimized.\n",
|
71 | 71 | "A portfolio consisting of $x_{1} \\times 10\\,000$ € in Share $A$ und $x_{2} \\times 10\\,000$ € in share $B$ can be expressed as\n",
|
72 | 72 | "\n",
|
73 |
| - "$\\begin{align}\n", |
| 73 | + "$$\n", |
| 74 | + "\\begin{align}\n", |
74 | 75 | "f (x_{1}, x_{2})\n",
|
75 | 76 | "=\n",
|
76 | 77 | "\\sqrt{ \\sigma_{A}^2 x_1^2 + \\sigma_{B}^2 x_2^2 + 2 \\sigma_{A} x_1 \\sigma_{B} x_2 \\rho_{A, B}}.\n",
|
77 |
| - "\\end{align}$\n", |
| 78 | + "\\end{align}\n", |
| 79 | + "$$\n", |
78 | 80 | "\n",
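For concreteness, the objective can also be written directly in Python. The following cell is a sketch that is not part of the original notebook; it only uses the figures given above ($\sigma_{A} = 0.2$, $\sigma_{B} = 0.3$, $\rho_{A,B} = 0.7$).

```python
import numpy as np

# Sketch (not a cell of the original notebook): the portfolio volatility
# f(x1, x2) from the formula above, with the data given in the problem.
sigma_A, sigma_B, rho = 0.20, 0.30, 0.7

def portfolio_volatility(x1, x2):
    """Volatility of x1 * 10 000 EUR in Share A and x2 * 10 000 EUR in Share B."""
    return np.sqrt(sigma_A**2 * x1**2 + sigma_B**2 * x2**2
                   + 2 * sigma_A * x1 * sigma_B * x2 * rho)

print(portfolio_volatility(0.5, 0.5))  # e.g. half a unit in each share
```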
|
79 | 81 | "Our goal is\n",
|
80 | 82 | "\n",
|
|
211 | 213 | "source": [
|
212 | 214 | "## **Intuition:** The gradient always points in the direction of steepest ascent.\n",
|
213 | 215 | "\n",
|
214 |
| - "Example 1: $f(x_{1}, x_{2}) = x_1$, $\\nabla f(x_{1}, x_{2}) = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}$." |
| 216 | + "**Example 1:** $f(x_{1}, x_{2}) = x_1$, $\\nabla f(x_{1}, x_{2}) = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}$." |
215 | 217 | ]
|
216 | 218 | },
|
217 | 219 | {
|
|
233 | 235 | "id": "13632e34-1665-4598-8873-2bc804728365",
|
234 | 236 | "metadata": {},
|
235 | 237 | "source": [
|
236 |
| - "Example 2: $f(x_{1}, x_{2}) = \\tfrac{1}{2} (x_{1} + x_{2})$, $\\nabla f(x_{1}, x_{2}) = \\begin{pmatrix} \\tfrac{1}{2} \\\\ \\tfrac{1}{2} \\end{pmatrix}$." |
| 238 | + "**Example 2:** $f(x_{1}, x_{2}) = \\tfrac{1}{2} (x_{1} + x_{2})$, $\\nabla f(x_{1}, x_{2}) = \\begin{pmatrix} \\tfrac{1}{2} \\\\ \\tfrac{1}{2} \\end{pmatrix}$." |
237 | 239 | ]
|
238 | 240 | },
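This claim can be checked numerically: among all unit directions, the one that increases $f$ the most is (up to normalization) the gradient direction. The following is a small check for Example 2, not part of the original notebook; `f`, `grad`, `x` and `eps` are chosen here only for illustration.

```python
import numpy as np

# Numerical check of "the gradient points in the direction of steepest ascent"
# for f(x1, x2) = (x1 + x2) / 2, whose gradient is (1/2, 1/2) everywhere.
f = lambda x: 0.5 * (x[0] + x[1])
grad = np.array([0.5, 0.5])

x, eps = np.array([1.0, -2.0]), 1e-3
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit vectors
increase = np.array([f(x + eps * d) - f(x) for d in directions])

best = directions[np.argmax(increase)]
print(best, grad / np.linalg.norm(grad))   # best direction ~ normalized gradient
```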
|
239 | 241 | {
|
|
271 | 273 | "\\end{pmatrix}\n",
|
272 | 274 | "\\end{equation}\n",
|
273 | 275 | "\n",
|
274 |
| - "Example: \n", |
| 276 | + "**Examples:**\n", |
| 277 | + "\n", |
275 | 278 | "\\begin{equation}\n",
|
276 | 279 | "\\nabla f(-4, -2) =\n",
|
277 | 280 | "\\begin{pmatrix}\n",
|
|
392 | 395 | "metadata": {},
|
393 | 396 | "outputs": [],
|
394 | 397 | "source": [
|
395 |
| - "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=1, Iterationen=i, color = \"#636EFA\")\n", |
396 |
| - "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=1, Iterationen=i, color = \"#636EFA\")\n", |
| 398 | + "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=1, iterations=i, color = \"#636EFA\")\n", |
| 399 | + "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=1, iterations=i, color = \"#636EFA\")\n", |
397 | 400 | "show_plot(contour_plot, surface_plot)\n",
|
398 | 401 | "i += 1"
|
399 | 402 | ]
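The plotting helpers above only visualize the iterates. Stripped of all plotting, the underlying update rule $x_{k+1} = x_{k} - \gamma \nabla f(x_{k})$ can be sketched as follows; this cell is an illustration, not the notebook's own implementation, and the commented usage line assumes the notebook's `grad_f`.

```python
import numpy as np

def gradient_descent(grad, x0, gamma=1.0, iterations=30):
    """Iterate x_{k+1} = x_k - gamma * grad(x_k); return the final point and the path."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(iterations):
        x = x - gamma * np.asarray(grad(x))
        path.append(x.copy())
    return x, path

# Hypothetical usage with the notebook's grad_f:
# x_min, path = gradient_descent(grad_f, x0=[-4, -2], gamma=1, iterations=30)
```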
|
|
436 | 439 | "source": [
|
437 | 440 | "# Gradient descent with learning rate gamma = 0.1:\n",
|
438 | 441 | "\n",
|
439 |
| - "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=0.1, Iterationen=i, color = \"#EF553B\")\n", |
440 |
| - "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=0.1, Iterationen=i, color = \"#EF553B\")\n", |
| 442 | + "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=0.1, iterations=i, color = \"#EF553B\")\n", |
| 443 | + "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=0.1, iterations=i, color = \"#EF553B\")\n", |
441 | 444 | "show_plot(contour_plot, surface_plot)\n",
|
442 | 445 | "i+=1"
|
443 | 446 | ]
|
|
481 | 484 | "source": [
|
482 | 485 | "# Gradient descent with learning rate gamma = 2:\n",
|
483 | 486 | "\n",
|
484 |
| - "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=2, Iterationen=i, color = \"#00CC96\")\n", |
485 |
| - "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=2, Iterationen=i, color = \"#00CC96\")\n", |
| 487 | + "contour_plot.add_gradient_descent(x0=[-4, -2], function = f, grad=grad_f, gamma=2, iterations=i, color = \"#00CC96\")\n", |
| 488 | + "surface_plot.add_gradient_descent_surface(x0=[-4, -2], function = f, grad=grad_f, gamma=2, iterations=i, color = \"#00CC96\")\n", |
486 | 489 | "show_plot(contour_plot, surface_plot)\n",
|
487 | 490 | "i += 1"
|
488 | 491 | ]
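Why a learning rate this large can fail is easiest to see in one dimension (an illustration, independent of the notebook's particular $f$): for $f(x) = x^{2}$ with $f'(x) = 2x$, one gradient step gives

\begin{equation}
x_{k+1} = x_{k} - \gamma \cdot 2 x_{k} = (1 - 2\gamma)\, x_{k},
\end{equation}

so the iterates contract only if $|1 - 2\gamma| < 1$, i.e. $0 < \gamma < 1$. For $\gamma = 2$ every step multiplies the distance to the minimum by $3$, and the iteration diverges.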
|
|
537 | 540 | "\n",
|
538 | 541 | "x0 = [random.uniform(-5, 6), random.uniform(-3, 3)]\n",
|
539 | 542 | "\n",
|
540 |
| - "contour_plot.add_gradient_descent(x0=x0, function = f, grad=grad_f, gamma=1, Iterationen=30)\n", |
541 |
| - "surface_plot.add_gradient_descent_surface(x0=x0, function = f, grad=grad_f, gamma=1, Iterationen=30)\n", |
| 543 | + "contour_plot.add_gradient_descent(x0=x0, function = f, grad=grad_f, gamma=1, iterations=30)\n", |
| 544 | + "surface_plot.add_gradient_descent_surface(x0=x0, function = f, grad=grad_f, gamma=1, iterations=30)\n", |
542 | 545 | "show_plot(contour_plot, surface_plot)"
|
543 | 546 | ]
|
544 | 547 | },
|
|
547 | 550 | "id": "39148f36-835e-4ef2-9073-5764c20f03f1",
|
548 | 551 | "metadata": {},
|
549 | 552 | "source": [
|
550 |
| - "## Nonlinear programming with Scipy\n", |
| 553 | + "## Nonlinear programming with `Scipy`\n", |
551 | 554 | "\n",
|
552 | 555 | "The method presented so far is too simple for practical use; real applications involve many additional issues.\n",
|
553 | 556 | "Therefore, one usually relies on an existing implementation.\n",
|
554 |
| - "The Python package Scipy with the function `minimize` is very suitable for this.\n", |
| 557 | + "The Python package `scipy` (module `scipy.optimize`) provides the function `minimize`, which is well suited for this.\n", |
555 | 558 | "It suffices to provide the function to be minimized and an initial guess:"
|
556 | 559 | ]
|
557 | 560 | },
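For illustration, such a call could look roughly as follows (a sketch that assumes an objective `f` and the start point used above; the notebook's own cell may differ in detail):

```python
from scipy.optimize import minimize

# Unconstrained minimization: only the objective and an initial guess are required.
result = minimize(f, x0=[-4, -2])
print(result.x, result.fun)   # approximate minimizer and the value of f there
```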
|
|
584 | 587 | "An example of a constraint of the form $g(x) \\leq 0$ is\n",
|
585 | 588 | "\n",
|
586 | 589 | "\\begin{equation}\n",
|
587 |
| - "-(x_{1}+2)^{2} + x_{2}^{3} \\leq 0\n", |
| 590 | + "-(x_{1}+2)^{2} + x_{2}^{3} \\leq 0.\n", |
588 | 591 | "\\end{equation}\n",
|
589 | 592 | "\n",
|
590 | 593 | "This inequality defines the region in which the solution must lie."
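In `scipy.optimize.minimize`, inequality constraints are specified as functions that must be non-negative, so a condition $g(x) \leq 0$ is passed as $-g(x) \geq 0$. A sketch for the example above, assuming an objective `f` and start point as before:

```python
from scipy.optimize import minimize

# SciPy's "ineq" constraints require fun(x) >= 0, so g(x) <= 0 becomes -g(x) >= 0:
constraint = {"type": "ineq", "fun": lambda x: (x[0] + 2)**2 - x[1]**3}

# Hypothetical usage:
# result = minimize(f, x0=[-4, -2], constraints=[constraint])
```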
|
|
756 | 759 | "\\begin{pmatrix}\n",
|
757 | 760 | "4x_{1}^{3} + 36x_{1}^{2} + 108x_{1} + 108 - 4x_{1}x_{2} - 12x_{2} \\\\\n",
|
758 | 761 | "-2x_{1}^{2} -12x_{1} + 2x_{2} - 18\n",
|
759 |
| - "\\end{pmatrix}\n", |
| 762 | + "\\end{pmatrix}.\n", |
760 | 763 | "\\end{equation}\n",
|
761 | 764 | "\n",
|
762 | 765 | "The gradient of $f_{\\text{pen}}$ is\n",
|
|
775 | 778 | "outputs": [],
|
776 | 779 | "source": [
|
777 | 780 | "contour_plot.add_gradient_descent(x0=contour_plot.result, function=f, grad=lambda x : (grad_f(x) + alpha*grad_h_sq(x)), gamma=gamma,\n",
|
778 |
| - " Iterationen=100, Nebenbedingung = h)\n", |
| 781 | + " iterations=100, Nebenbedingung = h)\n", |
779 | 782 | "\n",
|
780 | 783 | "contour_plot.show()\n",
|
781 | 784 | "\n",
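The `lambda` passed as `grad` above is the gradient of the penalised objective: the constraint $h(x) = 0$ is folded into the function being minimised. A short sketch of that idea, assuming `f`, `grad_f`, `h`, `alpha` as defined in the notebook and `grad_h_sq` as the gradient of $h^{2}$ (as its name suggests):

```python
# Penalty method in a nutshell: minimize f_pen instead of f subject to h(x) = 0.
f_pen = lambda x: f(x) + alpha * h(x)**2
grad_f_pen = lambda x: grad_f(x) + alpha * grad_h_sq(x)   # the grad used in the cell above
```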
|
|
892 | 895 | "We optimize the function\n",
|
893 | 896 | "\n",
|
894 | 897 | "\\begin{equation}\n",
|
895 |
| - "f(x) = \\exp \\left(- \\sum_{i=1}^{n} i \\cdot x_{i} \\right) + \\sum_{i=1}^{n} x_{i}^{2}\n", |
| 898 | + "f(x) = \\exp \\left(- \\sum_{i=1}^{n} i \\cdot x_{i} \\right) + \\sum_{i=1}^{n} x_{i}^{2}.\n", |
896 | 899 | "\\end{equation}"
|
897 | 900 | ]
|
898 | 901 | },
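A direct NumPy version of this objective for general $n$ could look as follows (an illustrative sketch, not necessarily identical to the notebook's own definition):

```python
import numpy as np

def f(x):
    """f(x) = exp(-(1*x_1 + 2*x_2 + ... + n*x_n)) + x_1^2 + ... + x_n^2."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.exp(-np.sum(i * x)) + np.sum(x**2)

print(f(np.zeros(5)))   # exp(0) + 0 = 1.0
```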
|
|