Fix image URLs in vacuum_world.ipynb by removing 'files/' prefix #1302

Open · wants to merge 2 commits into master
8 changes: 4 additions & 4 deletions vacuum_world.ipynb
@@ -563,7 +563,7 @@
"A model-based reflex agent maintains some sort of **internal state** that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. In addition to this, it also requires a **model** of the world, that is, knowledge about \"how the world works\".\n",
"\n",
"The schematic diagram shown in **Figure 2.11** of the book will make this more clear:\n",
"<img src=\"files/images/model_based_reflex_agent.jpg\">"
"<img src=\"images/model_based_reflex_agent.jpg\">"
]
},
{
@@ -650,7 +650,7 @@
"A goal-based agent needs some sort of **goal** information that describes situations that are desirable, apart from the current state description.\n",
"\n",
"**Figure 2.13** of the book shows a model-based, goal-based agent:\n",
"<img src=\"files/images/model_goal_based_agent.jpg\">\n",
"<img src=\"images/model_goal_based_agent.jpg\">\n",
"\n",
"**Search** (Chapters 3 to 5) and **Planning** (Chapters 10 to 11) are the subfields of AI devoted to finding action sequences that achieve the agent's goals.\n",
"\n",
Expand All @@ -659,7 +659,7 @@
"A utility-based agent maximizes its **utility** using the agent's **utility function**, which is essentially an internalization of the agent's performance measure.\n",
"\n",
"**Figure 2.14** of the book shows a model-based, utility-based agent:\n",
"<img src=\"files/images/model_utility_based_agent.jpg\">"
"<img src=\"images/model_utility_based_agent.jpg\">"
]
},
{
Expand All @@ -673,7 +673,7 @@
"A learning agent can be divided into four conceptual components. The **learning element** is responsible for making improvements. It uses the feedback from the **critic** on how the agent is doing and determines how the performance element should be modified to do better in the future. The **performance element** is responsible for selecting external actions for the agent: it takes in percepts and decides on actions. The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. It is necesaary because the percepts themselves provide no indication of the agent's success. The last component of the learning agent is the **problem generator**. It is responsible for suggesting actions that will lead to new and informative experiences. \n",
"\n",
"**Figure 2.15** of the book sums up the components and their working: \n",
"<img src=\"files/images/general_learning_agent.jpg\">"
"<img src=\"images/general_learning_agent.jpg\">"
]
}
],
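For reference, the model-based reflex agent described in the first changed cell above can be sketched in a few lines of Python for the two-location vacuum world. This is a minimal illustration only; the function name, the `'A'`/`'B'` squares, and the rule structure are assumptions made for the example, not code taken from this notebook.

```python
# Minimal sketch of a model-based reflex agent for the two-location vacuum
# world.  The agent keeps an internal model of the last known status of each
# square, so it can stop acting once (as far as it knows) everything is clean.

def model_based_vacuum_agent():
    model = {'A': None, 'B': None}    # internal state: last known status per square

    def program(percept):
        location, status = percept
        model[location] = status       # update the internal model from the current percept
        if model['A'] == model['B'] == 'Clean':
            return 'NoOp'              # the model says the whole world is clean
        if status == 'Dirty':
            return 'Suck'
        return 'Right' if location == 'A' else 'Left'

    return program

# Usage: feed the agent a stream of (location, status) percepts.
agent = model_based_vacuum_agent()
print(agent(('A', 'Dirty')))   # -> 'Suck'
print(agent(('A', 'Clean')))   # -> 'Right'
print(agent(('B', 'Clean')))   # -> 'NoOp'
```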
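Similarly, the utility-based agent in the third changed cell simply picks the action whose predicted outcome scores highest under a utility function. A rough sketch follows; the toy transition model, the utility function, and all names here are assumptions for illustration, not this repository's API.

```python
# Rough sketch of a utility-based vacuum agent: simulate each action on an
# internal model of the world and choose the one whose predicted state has
# the highest utility.

ACTIONS = ['Suck', 'Left', 'Right', 'NoOp']

def predict(state, action):
    """Tiny transition model; a state is (location, {square: status})."""
    location, statuses = state
    statuses = dict(statuses)          # copy so the caller's state is untouched
    if action == 'Suck':
        statuses[location] = 'Clean'
    elif action == 'Left':
        location = 'A'
    elif action == 'Right':
        location = 'B'
    return location, statuses

def utility(state):
    """Internalised performance measure: +10 for every clean square."""
    _, statuses = state
    return 10 * sum(status == 'Clean' for status in statuses.values())

def utility_based_agent(state):
    return max(ACTIONS, key=lambda action: utility(predict(state, action)))

print(utility_based_agent(('A', {'A': 'Dirty', 'B': 'Clean'})))   # -> 'Suck'
```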