Replicating Results With dqn_atari.py #469
Gwendls110303 asked this question in Q&A
I am trying to replicate the results in Table 9 of the appendix of this paper; however, I am getting different results despite using the same parameters.
These are my results (standard deviation in brackets):

| Game | Mean episodic return |
| --- | --- |
| ALE/Enduro-v5 | 214.8 (17.4) |
| ALE/SpaceInvaders-v5 | 573.5 (8.9) |
| ALE/Pitfall-v5 | -94.6 (12.1) |

while the paper reports:

| Game | Mean episodic return |
| --- | --- |
| Enduro | 473.2 (22.3) |
| SpaceInvaders | 211.6 (14.8) |
| Pitfall | -24.9 (14.8) |
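For clarity, this is roughly how I'm computing the mean and standard deviation for my runs (a minimal sketch; the returns below are placeholders, not my actual data):

```python
import numpy as np

# Final episodic return of each run, one entry per seed.
# These numbers are placeholders, not my real results.
returns_per_seed = np.array([200.0, 220.0, 224.4])

mean = returns_per_seed.mean()
std = returns_per_seed.std(ddof=1)  # sample standard deviation across seeds
print(f"{mean:.1f} ({std:.1f})")
```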
I selected version 5 of the environments since it seemed to match most of their configuration; however, some parameters from the paper don't seem to have an obvious counterpart in this implementation of DQN, such as:

- Gradient momentum
- Squared gradient momentum
- Min squared gradient
- Replay start size
- Agent history length
- Frame pooling (boolean)
- No. of different colours
Any suggestions on how to set these parameters, or do they have different names within the code?
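For context, below is my best guess at how these settings might map onto standard Gymnasium/PyTorch pieces, with the Nature-DQN values filled in as placeholders. The wrapper names, the RMSprop mapping, and `learning_starts` are my own assumptions (written against Gymnasium < 1.0), not necessarily what dqn_atari.py actually does:

```python
import gymnasium as gym
import torch


def make_env(env_id: str = "ALE/Enduro-v5") -> gym.Env:
    # v5 environments apply a frameskip of 4 internally, so disable it here
    # and let the preprocessing wrapper handle skipping and pooling instead.
    env = gym.make(env_id, frameskip=1)
    # "Frame pooling": AtariPreprocessing max-pools the last two raw frames
    # when frame_skip > 1.  "No. of different colours": grayscale_obs=True
    # collapses the RGB screen to a single luminance channel.
    env = gym.wrappers.AtariPreprocessing(
        env, frame_skip=4, screen_size=84, grayscale_obs=True
    )
    # "Agent history length" = 4: stack the last four processed frames.
    env = gym.wrappers.FrameStack(env, 4)
    return env


# "Gradient momentum", "squared gradient momentum" and "min squared gradient"
# read like the (centered) RMSprop hyperparameters from the Nature DQN paper.
q_network = torch.nn.Linear(4 * 84 * 84, 9)  # placeholder for the real Q-network
optimizer = torch.optim.RMSprop(
    q_network.parameters(),
    lr=2.5e-4,
    momentum=0.95,   # gradient momentum
    alpha=0.95,      # squared gradient momentum
    eps=0.01,        # min squared gradient
    centered=True,   # the Nature paper uses a centered RMSprop variant
)

# "Replay start size": number of environment steps collected before learning
# begins; I assume this corresponds to a learning_starts-style argument.
learning_starts = 50_000
```

If any of these guesses are wrong, or the values are exposed under different flags in dqn_atari.py, I'd appreciate a pointer to the right place.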