
Neuron Poker: OpenAI gym environment for Texas Hold'em poker

This is an environment for training neural networks to play Texas Hold'em. Please try to model your own players and create a pull request, so we can collaborate and create the best possible player.

For more advanced users: running main.py selfplay dqn_train -c will start training the deep Q agent with the C++ Monte Carlo equity calculator for faster computation. To enable it, pass the -c option when running main.py. To use the C++ version of the equity calculator, you will also need to install Visual Studio 2019 (gcc under Cygwin may work as well). At the end of an episode, the performance of the players can be observed via the summary plot.
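To illustrate the Monte Carlo equity idea behind the faster C++ calculator, here is a minimal Python sketch. It is not the project's actual equity calculator: the `estimate_equity` function, its parameters, and the simplified high-card hand strength (instead of full poker hand ranking) are all illustrative assumptions.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]


def hand_strength(cards):
    # Simplified strength: highest card rank only.
    # A real equity calculator would evaluate full 5-card poker hands.
    return max(RANKS.index(c[0]) for c in cards)


def estimate_equity(hole, n_opponents=1, iterations=10_000, seed=0):
    """Monte Carlo equity estimate for a hole hand against random opponents.

    Illustrative sketch only: deals random boards and opponent hands,
    then compares simplified strengths. Ties count as half a win.
    """
    rng = random.Random(seed)
    remaining = [c for c in DECK if c not in hole]
    wins = 0.0
    for _ in range(iterations):
        deck = remaining[:]
        rng.shuffle(deck)
        board = deck[:5]
        my_strength = hand_strength(hole + board)
        idx = 5
        opp_strengths = []
        for _ in range(n_opponents):
            opp_strengths.append(hand_strength(deck[idx:idx + 2] + board))
            idx += 2
        best_opp = max(opp_strengths)
        if my_strength > best_opp:
            wins += 1.0
        elif my_strength == best_opp:
            wins += 0.5
    return wins / iterations


if __name__ == "__main__":
    # Pocket aces vs. one random opponent under the simplified metric.
    print(round(estimate_equity(["As", "Ah"]), 2))
```

Tightening the same inner loop in C++ is what buys the speedup mentioned above, since the simulation is embarrassingly parallel and dominated by cheap per-iteration work.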
