
Commit 09f4552

Updated hyper parameters and added README.
1 parent 97f3823 commit 09f4552

File tree

3 files changed: +45 -3915 lines


.gitignore

+6
@@ -0,0 +1,6 @@
+
+*.trace
+*.v2
+*.profile-empty
+*.pyc
+.ipynb_checkpoints/mario_dqn-checkpoint.ipynb

README

+12
@@ -0,0 +1,12 @@
+A description of the current approach, from my WhatsApp conversation with N.:
+
+I have two networks: a policy network and a target network. This is called Double DQN and is supposed to reduce overestimation bias by splitting action selection and action evaluation across two networks. I'm not sure whether the target network is technically a "value network", because there is another approach, called dueling DQN, where the Q estimate is split into two parts: the value [of the state] and the advantage [of each action]. That has further benefits; there the network is split into two streams, and one stream is responsible for the value part... On the other hand, in my configuration the target network is responsible for the Q values (what is the reward of an action?), so it could be called a value network.
+
+The target network is another instance of the policy network. It is not trained; instead, the weights of the policy network are periodically copied into the target network.
+
+The target network is used inside the Bellman target calculation, where it updates the reward for the last action and needs to ask "how good is the best move from this new state?" [cell 10]
+
+The inputs and outputs can be seen in cell 14.
+env is the Mario environment. It has a step() method which takes an action. Currently there are 5 actions: 0 is a no-op, and 1-4 are right and right + A/B/A+B. step() returns four things: the new state, the reward of the last action, whether Mario is done (or died), and extra info that is not used yet.
+
+The outputs are put into the replay buffer. After every step() call, the program takes 16 random samples and performs the training: 1) for each of the 16 states, take the predicted Q values (the rewards for the 5 actions); 2) for the action taken, update the reward estimate (1 of the 5 numbers) using the target network; 3) train the policy network with x = the state and y = the updated 5 numbers.
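
To make the policy/target split and the Bellman target calculation described above concrete, here is a minimal sketch. Nothing in it comes from the notebook itself (which is not shown in this diff): the use of Keras is only a guess based on the TensorFlow profiler artifacts in .gitignore, and the architecture, input shape, GAMMA, and optimizer are placeholder assumptions, not the author's actual values.

import numpy as np
from tensorflow import keras

NUM_ACTIONS = 5            # 0 = no-op, 1-4 = right and right + A / B / A+B (from the README)
GAMMA = 0.99               # discount factor -- assumed, not stated in the commit
STATE_SHAPE = (84, 84, 4)  # assumed stacked-frame input; the real shape is in the notebook

def build_q_network():
    # Placeholder architecture: the real policy network lives in mario_dqn.ipynb.
    return keras.Sequential([
        keras.layers.Input(shape=STATE_SHAPE),
        keras.layers.Conv2D(32, 8, strides=4, activation="relu"),
        keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        keras.layers.Flatten(),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(NUM_ACTIONS),   # one Q value per action
    ])

policy_net = build_q_network()
policy_net.compile(optimizer="adam", loss="mse")

# "Target network is another instance of the policy network. It is not trained,
#  rather periodically the weights of the policy network are copied into it."
target_net = build_q_network()

def sync_target():
    target_net.set_weights(policy_net.get_weights())

def bellman_targets(states, actions, rewards, next_states, dones):
    # Step 1: predicted Q values (the rewards for the 5 actions) for each state.
    q_values = policy_net.predict(states, verbose=0)
    # Step 2: for the action taken, replace that one number with the Bellman target,
    # asking the target network "how good is the best move from this new state?" [cell 10]
    best_next = target_net.predict(next_states, verbose=0).max(axis=1)
    q_values[np.arange(len(actions)), actions] = rewards + GAMMA * best_next * (1.0 - dones)
    return q_values

As written, this matches the README's description, where the target uses the maximum of the target network's Q values; a textbook Double DQN target would instead select the argmax action with the policy network and evaluate it with the target network.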
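
Continuing the sketch above, one environment interaction step as described in the README: env.step(action) returns (new state, reward, done, info), and the result goes into the replay buffer. The epsilon-greedy action choice, the buffer capacity, and EPSILON are assumptions added for illustration; the commit does not describe how actions are actually selected.

import numpy as np
from collections import deque

replay_buffer = deque(maxlen=100_000)  # assumed capacity; not given in the commit
EPSILON = 0.1                          # assumed exploration rate; not mentioned in the README

def play_one_step(env, state):
    # Pick one of the 5 actions: 0 = no-op, 1-4 = right and right + A / B / A+B.
    if np.random.rand() < EPSILON:
        action = np.random.randint(NUM_ACTIONS)                  # explore (assumed strategy)
    else:
        q = policy_net.predict(state[np.newaxis], verbose=0)[0]
        action = int(q.argmax())                                 # exploit current Q estimates
    # step() returns four things: the new state, the reward of the last action,
    # whether Mario is done (or died), and extra info that is not used yet.
    next_state, reward, done, info = env.step(action)
    replay_buffer.append((state, action, reward, next_state, done))
    return next_state, done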
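
Finally, the training update performed after every step() call, following the three numbered steps in the README: sample 16 transitions, build the updated Q targets with bellman_targets() from the first sketch, and fit the policy network on them. BATCH_SIZE = 16 comes from the README; SYNC_EVERY is an assumed interval for the periodic weight copy.

import numpy as np

BATCH_SIZE = 16     # "takes random 16 outputs" (from the README)
SYNC_EVERY = 1_000  # assumed interval for copying policy weights into the target network

def train_step(step):
    if len(replay_buffer) < BATCH_SIZE:
        return
    # Sample 16 random transitions from the replay buffer.
    idx = np.random.choice(len(replay_buffer), BATCH_SIZE, replace=False)
    states, actions, rewards, next_states, dones = map(
        np.array, zip(*(replay_buffer[i] for i in idx)))
    # Steps 1 and 2 from the README happen inside bellman_targets();
    # step 3: train the policy network with x = the states and y = the updated 5 numbers.
    targets = bellman_targets(states, actions, rewards, next_states, dones)
    policy_net.train_on_batch(states, targets)
    # Periodically refresh the target network with the policy network's weights.
    if step % SYNC_EVERY == 0:
        sync_target()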
