This repository has been archived by the owner on Dec 8, 2022. It is now read-only.
forked from annikc/MEMRL

PhD Thesis work -- computational model of learning and memory in decision making in reinforcement learning tasks


jeremyforan/MEMRL


Memory-Assisted Reinforcement Learning

By Annik Carson (Last updated July 2018)

This code is used to solve reinforcement learning tasks using a variety of modules. The repository is organized as follows:

Environments

Gridworld or OpenAI Gym environments that define the tasks to be solved by the RL network.
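As an illustration of the environment interface, here is a minimal gridworld sketch following the Gym-style `reset`/`step` convention. The grid size, goal location, and action set are illustrative choices, not the actual environments used in this repository.

```python
class GridWorld:
    """Minimal gridworld sketch: the agent moves on a grid toward a goal cell."""
    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

    def __init__(self, rows=5, cols=5, goal=(4, 4)):
        self.rows, self.cols, self.goal = rows, cols, goal
        self.state = (0, 0)

    def reset(self):
        self.state = (0, 0)
        return self.state

    def step(self, action):
        # Move, clipping to the grid boundaries.
        dr, dc = self.ACTIONS[action]
        r = min(max(self.state[0] + dr, 0), self.rows - 1)
        c = min(max(self.state[1] + dc, 0), self.cols - 1)
        self.state = (r, c)
        done = self.state == self.goal
        reward = 1.0 if done else 0.0
        return self.state, reward, done
```

A run loop interacts with the environment only through `reset` and `step`, so gridworlds and Gym tasks can be swapped without changing the agent.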

RL Network

The standard RL architecture we develop is an actor-critic network; alternatives such as Q-learning can also be used.
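A tabular sketch of the actor-critic idea: a TD error computed by the critic drives updates to both the state-value estimates and the actor's action preferences. The learning rates and the tabular (rather than neural-network) form are simplifying assumptions for illustration.

```python
import math
import random

class ActorCritic:
    """Tabular one-step actor-critic sketch (illustrative, not the repo's network)."""

    def __init__(self, n_states, n_actions, alpha=0.1, beta=0.1, gamma=0.9):
        self.values = [0.0] * n_states                              # critic: V(s)
        self.prefs = [[0.0] * n_actions for _ in range(n_states)]   # actor: preferences
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def policy(self, s):
        # Softmax over action preferences for state s.
        exps = [math.exp(p) for p in self.prefs[s]]
        z = sum(exps)
        return [e / z for e in exps]

    def select_action(self, s):
        probs = self.policy(s)
        return random.choices(range(len(probs)), weights=probs)[0]

    def update(self, s, a, reward, s_next, done):
        # TD error: how much better/worse the outcome was than expected.
        target = reward + (0.0 if done else self.gamma * self.values[s_next])
        td_error = target - self.values[s]
        self.values[s] += self.alpha * td_error     # critic update
        self.prefs[s][a] += self.beta * td_error    # actor update (simplified)
        return td_error
```

The same `update` signature works for Q-learning-style agents, which makes the learning rule swappable behind a common interface.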

Memory

An episodic caching system used to assist the RL network.
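One way such a cache can be sketched: a capacity-limited store that maps a state key to remembered action values, with least-recently-used eviction. The keying scheme, capacity, and eviction policy here are illustrative assumptions, not the repository's actual design.

```python
from collections import OrderedDict

class EpisodicCache:
    """Illustrative episodic cache: recall per-state action values seen during episodes."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = OrderedDict()   # state key -> list of action values

    def write(self, key, action_values):
        if key in self.store:
            self.store.move_to_end(key)          # refresh recency
        self.store[key] = list(action_values)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used

    def recall(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)              # recall also counts as use
        return self.store[key]
```

At decision time, a recalled entry can bias or replace the network's action values for familiar states.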

Sensory

Networks that build efficient representations of incoming state information, which can supplement the RL network. These may be modified autoencoders or similar architectures.
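To make the autoencoder idea concrete, here is a linear autoencoder trained by gradient descent on the reconstruction error, compressing an input vector into a smaller code. The dimensions, initialization, and learning rate are illustrative assumptions; the repository's sensory networks are more elaborate.

```python
class LinearAutoencoder:
    """Illustrative linear autoencoder: encode to a low-dimensional code, decode back."""

    def __init__(self, n_input, n_hidden, init=0.1):
        # Untied encoder (n_hidden x n_input) and decoder (n_input x n_hidden) weights.
        self.We = [[init] * n_input for _ in range(n_hidden)]
        self.Wd = [[init] * n_hidden for _ in range(n_input)]

    def encode(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in self.We]

    def decode(self, z):
        return [sum(w * zi for w, zi in zip(row, z)) for row in self.Wd]

    def loss(self, x):
        # Squared reconstruction error ||decode(encode(x)) - x||^2.
        xhat = self.decode(self.encode(x))
        return sum((xh - xi) ** 2 for xh, xi in zip(xhat, x))

    def train_step(self, x, lr=0.01):
        z = self.encode(x)
        xhat = self.decode(z)
        err = [xh - xi for xh, xi in zip(xhat, x)]
        # Encoder gradient uses the decoder weights from before this step's update.
        back = [sum(self.Wd[i][j] * err[i] for i in range(len(err)))
                for j in range(len(z))]
        # Decoder gradient: 2 * err * z^T
        for i in range(len(self.Wd)):
            for j in range(len(z)):
                self.Wd[i][j] -= lr * 2.0 * err[i] * z[j]
        # Encoder gradient: 2 * (Wd^T err) * x^T
        for j in range(len(self.We)):
            for k in range(len(x)):
                self.We[j][k] -= lr * 2.0 * back[j] * x[k]
```

The low-dimensional code from `encode` would stand in for the raw observation as input to the RL network.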

Notebooks

Jupyter notebooks used for running code

Data

Storage of data from runs for later analysis

Example Code
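A minimal, self-contained stand-in for the kind of run script the notebooks drive: an agent takes random actions on a small chain task, and episode returns are collected for later analysis. The task, function names, and parameters are hypothetical illustrations, not code from this repository.

```python
import random

def run_episode(n_states=10, max_steps=50, seed=None):
    """Random-policy episode on a 1-D chain; reward 1.0 for reaching the last state."""
    rng = random.Random(seed)
    state, total_reward = 0, 0.0
    for _ in range(max_steps):
        action = rng.choice([-1, 1])              # step left or right
        state = min(max(state + action, 0), n_states - 1)
        if state == n_states - 1:                 # goal at the end of the chain
            total_reward += 1.0
            break
    return total_reward

# Collect returns across episodes, e.g. to store under Data for analysis.
returns = [run_episode(seed=s) for s in range(20)]
```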

Languages

  • Jupyter Notebook 98.1%
  • Python 1.9%