Dear Developers,
I'm getting the following error when running the code below
pearl/neural_networks/common/value_networks.py", line 262, in get_q_values
x = torch.cat([state_batch, action_batch], dim=-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Tensors must have same number of dimensions: got 4 and 2
Am I doing something wrong, or is there some limitation (for instance, that the action and observation spaces must have the same dimension)?
Best regards, Markus
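For reference, the error can be reproduced outside Pearl: `torch.cat` requires all input tensors to have the same number of dimensions, so concatenating a 4-D image-shaped state batch with a 2-D action batch fails. The shapes below are illustrative, not taken from the original code:

```python
import torch

# Hypothetical batched image observation: (batch, channels, height, width) -> 4-D
state_batch = torch.zeros(32, 3, 84, 84)
# Hypothetical batched action representation: (batch, action_dim) -> 2-D
action_batch = torch.zeros(32, 4)

try:
    # This mirrors the failing line in get_q_values
    torch.cat([state_batch, action_batch], dim=-1)
except RuntimeError as e:
    print(e)  # Tensors must have same number of dimensions: got 4 and 2
```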
I think the error occurs because you are using a VanillaQValueNetwork, which requires the state and the action to have the same number of dimensions. For image inputs, you want to use CNNQValueNetwork as the network type (we need to enable that for deep Q-learning).
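As a rough illustration of why a vanilla MLP-style Q-network needs matching dimensions: once the state is flattened to 2-D, the concatenation succeeds. This is only a sketch of the shape mechanics with made-up sizes, not how CNNQValueNetwork is actually implemented:

```python
import torch

# Hypothetical shapes: 4-D image state batch and 2-D action batch
state_batch = torch.zeros(32, 3, 84, 84)
action_batch = torch.zeros(32, 4)

# Flattening the state to (batch, features) makes both tensors 2-D,
# so they can be concatenated along the last dimension.
flat_state = state_batch.flatten(start_dim=1)  # shape: (32, 3*84*84)
x = torch.cat([flat_state, action_batch], dim=-1)
print(x.shape)  # torch.Size([32, 21172])
```

A CNN-based Q-network instead passes the 4-D input through convolutional layers first, flattening only before the fully connected head.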