<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="generator" content="jemdoc, see http://jemdoc.jaboc.net/" />
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" href="jemdoc.css" type="text/css" />
<title>Approximating a non-submodular function by bounding the function marginals with submodular functions</title>
</head>
<body>
<div id="layout-content">
<div id="toptitle">
<h1>Approximating a non-submodular function by bounding the function marginals with submodular functions</h1>
</div>
<table class="imgtable"><tr><td>
<img src="pics/appSMF.png" alt="aSMF" width="250px" height="220px" /> </td>
<td align="left"></td></tr></table>
<p>Discrete optimization appears in a multitude of problems, including the steering of complex networks, sensor selection, and game theory. The feasible set of such problems grows exponentially with the ground-set size, so some structure in the optimization cost is essential. The maximization of submodular functions has gained wide popularity because it comes with guaranteed performance bounds: a simple greedy selection achieves a constant-factor optimality guarantee. However, in many problems, empirical experiments suggest that even some non-submodular functions perform well. Popular examples arising in complex networks and control theory include the negative trace of the Gramian inverse and the minimum eigenvalue of the Gramian. We have proposed a novel technique of approximate submodularity to better explain the surprisingly good performance of such non-submodular functions. </p>
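<p>As a minimal sketch (not the implementation behind this work), the greedy selection mentioned above simply adds, at each step, the element with the largest marginal gain. The coverage function used for illustration is a hypothetical example of a monotone submodular function.</p>

```python
def greedy_max(f, ground_set, k):
    """Greedily pick k elements, each time adding the element
    with the largest marginal gain f(S + x) - f(S)."""
    selected = set()
    for _ in range(k):
        candidates = ground_set - selected
        best = max(candidates, key=lambda x: f(selected | {x}) - f(selected))
        selected.add(best)
    return selected

# Hypothetical example: a coverage function, which is monotone and submodular.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a"}}

def f(S):
    """Number of distinct items covered by the chosen sets."""
    return len(set().union(*(covers[x] for x in S))) if S else 0

selection = greedy_max(f, set(covers), 2)  # covers 3 items, which is optimal here
```

<p>For monotone submodular functions this greedy rule is guaranteed to achieve at least a (1 - 1/e) fraction of the optimal value; the approximate-submodularity framework described above extends this style of guarantee to suitable non-submodular objectives.</p>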
<p>Approximate submodularity is defined through a divergence <img class="eq" src="eqs/3890647692052902744-130.png" alt="d(f,g)" style="vertical-align: -5px" /> between two functions <img class="eq" src="eqs/3969662385307339931-130.png" alt="f" style="vertical-align: -4px" /> and <img class="eq" src="eqs/6614627337783883364-130.png" alt="g" style="vertical-align: -4px" />: a function <img class="eq" src="eqs/3969662385307339931-130.png" alt="f" style="vertical-align: -4px" />, possibly non-submodular, is called approximately submodular if there exists a non-decreasing submodular function <img class="eq" src="eqs/6614627337783883364-130.png" alt="g" style="vertical-align: -4px" /> such that <img class="eq" src="eqs/5238905856476866789-130.png" alt="d(f, g) leq delta" style="vertical-align: -5px" />. Intuitively, approximate submodularity captures how close a function is to being submodular, measured through the divergence. The proposed concept can not only yield tighter performance bounds for metrics that frequently arise in applications on complex networks, but also provide interesting and unexpected insights into the structure of the identified hidden submodular function. Approximate submodularity is best understood through the region of submodularity (ROS) of a given function, defined as</p>
<div class="eqwl"><img class="eqwl" src="eqs/4890797011777845898-130.png" alt=" ROS(f, delta) = {g vert frac{1-gamma_{f}}{1+gamma_{f}}leq d(f,g) leq delta} " />
<br /></div><p>where <img class="eq" src="eqs/4026154642656269638-130.png" alt="gamma_{f}" style="vertical-align: -6px" /> is the submodularity ratio of the given function <img class="eq" src="eqs/3969662385307339931-130.png" alt="f" style="vertical-align: -4px" />. Intuitively, the ROS is the collection of all feasible submodular functions that are close to the given function <img class="eq" src="eqs/3969662385307339931-130.png" alt="f" style="vertical-align: -4px" /> under the chosen divergence. </p>
<p>The proposed analysis of important non-submodular functions establishes that greedy selection does offer guaranteed near-optimal performance. This enables the use of such metrics in important problems, including sensor selection, Gramian-based observability, and Bayesian A-optimality. The output of the greedy algorithm is otherwise difficult to certify, since the number of candidate solutions grows exponentially with the ground-set size. With the proposed concepts, the performance of the greedy algorithm applied to large complex networks can be quantified.</p>
<div id="footer">
<div id="footer-text">
Page generated 2018-10-28 02:03:58 PDT, by <a href="http://jemdoc.jaboc.net/">jemdoc</a>.
</div>
</div>
</div>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-121309974-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-121309974-1');
</script>
</body>
</html>