How To Install and Play DOOM 2 on Windows 10
Sep 19, 2018
Introduction
Old PC games were always a blast to play back in the days of DOS and the early versions of Windows. The graphics weren't the greatest, but they were great for their time. In this post we will relive the past and get DOOM 2: Hell on Earth running on Windows 10.
So how are we going to achieve this? In this case we are going to use Chocolate Doom, a source port that aims to recreate the original DOS experience as faithfully as possible.
Download Chocolate Doom
Chocolate Doom supports the following games:
- Doom (including the shareware and registered versions, and the Ultimate Doom expansion pack)
- Doom II
- Final Doom (TNT: Evilution and The Plutonia Experiment)
- Chex Quest
- Heretic
- Hexen
- Hexen: Deathkings of the Dark Citadel (expansion pack)
- Strife
It is also possible to play these expansion packs and commercial games, each of which requires one of the above:
- The Master Levels for Doom II
- Hacx
Open your trusty browser, go to https://www.chocolate-doom.org/wiki/index.php/Chocolate_Doom, and download the latest Chocolate Doom release. It comes as a zip file; extract it once the download finishes.
Install DOOM 2
Now we need to install DOOM 2 from your CD.
Copy the extracted Chocolate DOOM contents into the DOOM 2 installation directory.
Starting Up
Now we are ready to play! Double-click on chocolate-doom.exe and DOOM 2 will start in fullscreen mode.
If fullscreen mode doesn't look right, you can change the settings by running chocolate-doom-setup.exe. Select Configure Display, make your selections, and save the changes.
Hope this brings back memories and many hours of playing a classic.
Building a Doom AI with Deep Reinforcement Learning
With recent scientific advancements in Deep Learning, Artificial Intelligence and Neural Networks, as well as steadily evolving tools such as Tensorflow, Pytorch and Keras, writing, testing and optimizing your own Neural Networks is now easier than ever before. Amazed by the results of the Vizdoom competition, I decided to implement my own simple Doom AI from scratch. Having no practical knowledge of Deep Learning, I decided to share my journey from an empty Python project to an (at least decently) working Doom AI.
Prerequisites
First of all, I want to cover this topic more from an engineering than a Data Science perspective, so don't expect any deep elaborations on the theory behind Neural Networks. A great introduction to this topic is the Artificial Intelligence A-Z course on Udemy. If you prefer books, check out Hands-On Machine Learning with Scikit-Learn and TensorFlow on Amazon.
Additionally, feel free to check out the following papers:
– AI handbook
– Reinforcement Learning I: Introduction
– Asynchronous Methods for Deep Reinforcement Learning
Software
To train the Doom agent, we will use the Pytorch library. Since we have to move a lot of data through the Artificial Neural Network (ANN) we are going to create, training on a GPU is mandatory. To set up your PC, check out Tuatini's amazing blog post on setting up an environment for deep learning.
Besides that, you will need Python and the pip package manager, as well as Vizdoom to play the game and interact with the ANN. To install Vizdoom, please follow the instructions on their GitHub page.
Setting up the project
To run pre-configured scenarios for training an AI, Vizdoom provides a bunch of .cfg files we can use right away. To make use of them, I copied the Vizdoom scenarios directory into an empty Python project. Among these scenarios are some fairly easy ones (Basic, Deadly Corridor) and some more challenging ones (My Way Home, Deathmatch). We will try to create a generic model so we can easily switch between all these scenarios.
Next, I created a `src` directory that will contain all the source code for the project. In there, I split the project into three further parts: `doom`, which contains our client for Vizdoom and at the same time provides an interface to be used by our AI; `models`, which will contain the different Deep Learning models; and `utils`, which holds code for logging, argument parsing and preprocessing.
In the `src` directory itself, we will need an empty `__init__.py` and a `main.py` file as the starting point for our project.
If you followed along until this point, you should have a project structure that looks like this:
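```
doom-ai/                  # reconstructed from the description above;
├── scenarios/            # exact names may differ -- .cfg files from Vizdoom
└── src/
    ├── __init__.py
    ├── main.py
    ├── doom/
    ├── models/
    └── utils/
```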
You can also clone the full repository from doom-ai.
Playing the game
Before we start creating our AI, let's get used to the Vizdoom API. To do so, we'll create a simple client that allows us to play the game ourselves! To keep our interface to the Vizdoom API in one place, I created a `doom_trainer.py` file inside the `doom` directory:
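Something along these lines, as a minimal sketch assuming the plain Vizdoom Python API (the method names follow this post; the exact bodies may differ):

```python
# doom/doom_trainer.py -- sketch; the bodies are assumptions based on the
# Vizdoom API, the method names follow the article.
from vizdoom import DoomGame, Mode


def create_actions(scenario):
    # Will later generate all actions the AI can choose from.
    # For human play we can leave it empty for now.
    return []


class DoomTrainer:
    def __init__(self, params):
        self.game = DoomGame()
        self.game.load_config(params.scenario_path)  # scenario .cfg from params
        if params.model == 'human':
            # Hand all control over the game to the player.
            self.game.set_mode(Mode.SPECTATOR)
        self.actions = create_actions(params.scenario)

    def start_game(self):
        self.game.init()

    def new_episode(self):
        self.game.new_episode()

    def play_human(self):
        # Retrieve all user input and forward it to Vizdoom until the episode ends.
        while not self.game.is_episode_finished():
            self.game.advance_action()
```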
The `create_actions` method will later be used to generate all the actions our AI can choose from. To play the game ourselves, we can leave this method empty for now. The `DoomTrainer` class contains methods to initialize Vizdoom, play as a human, create new episodes and let our AI take actions. I will go into more detail about these methods later. For now, let's focus on the `__init__` and `play_human` methods.
`__init__` takes a params object that contains all information required to train the AI. The first two lines of this method create a new `DoomGame` instance and load the scenario based on the path from the params object. Then, if we pass the 'human' model, we set the game mode to `Mode.SPECTATOR`, which hands all control over the game to the player.
The `play_human` method sets up a loop that retrieves all user input and forwards it to Vizdoom.
To use these new methods and hook up our future AI with the `doom_trainer`, let's add a new `game.py` file and add a `play` method to it:
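A sketch of the dispatcher; the two torch lines are assumptions matching the description below ("initialize Pytorch to run on our GPU"):

```python
# game.py -- sketch of the dispatcher described in the text.
import torch

from doom.doom_trainer import DoomTrainer


def play(params):
    torch.manual_seed(params.seed)         # make runs reproducible
    torch.cuda.set_device(params.gpu_id)   # select the GPU to train on
    if params.model == 'human':
        play_human(params)
    elif params.model == 'a3c':
        play_a3c(params)  # defined later in this post
```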
The first two lines will initialize Pytorch to run on our GPU. After that, we check the `model` value from the provided parameters and call the corresponding method. Besides the `human` model we will implement now, I added a model for A3C; we will look at that later.
With that said, let's implement the `play_human` method, so we can finally start interacting with Vizdoom:
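Assuming the `DoomTrainer` interface sketched above, it is only a few lines:

```python
# game.py (continued)
def play_human(params):
    trainer = DoomTrainer(params)
    trainer.start_game()
    trainer.play_human()
```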
That's all the code we need! We instantiate our new class and call its `start_game` and `play_human` methods.
We will call this new method from a new `main.py` file defined in the root directory of the project:
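A sketch; the module path of `parse_arguments` is an assumption based on the `utils` directory described earlier:

```python
# main.py -- sketch; parse_arguments is assumed to live in the utils package.
from utils.arguments import parse_arguments
from game import play

if __name__ == '__main__':
    params = parse_arguments()  # e.g. --model human --scenario deadly_corridor
    play(params)
```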
Here, we simply get the command line parameters via the `parse_arguments` method and pass them on to the `play` method we just created. Since we are playing ourselves, we only have to worry about the `scenario` and `model` parameters for now. As you can see, I set the model to "human" and the scenario to "deadly_corridor".
If you run the `main.py` file, a screen should open with the `deadly_corridor` scenario loaded. The goal of this scenario is to reach the green armor at the end of the corridor. Below, you can see one of my attempts at solving this challenge 🙂
If you want to challenge yourself, go to the scenarios directory and open the `deadly_corridor.cfg` file. In there, set `doom_skill = 5` for the deadliest enemies!
Image preprocessing
Before we can start to work on our AI, we have to consider the huge amount of image data we have to process. For each frame, we have to analyze three matrices (one per RGB channel) of size 320×240, 640×480 or larger, depending on the screen resolution. We can divide the amount of data by 3 simply by turning the input image into a grayscale image. While we lose some detail with this method, I find it a good compromise between performance and effectiveness of the AI.
In addition to that, we can reduce the input size even further by scaling down the input images. For example, we can turn one 320×240 grayscale image into a 160×120 grayscale image, halving each dimension and keeping only a quarter of the pixels. Of course, this also results in some loss of detail, so we have to carefully evaluate how practical this approach is.
Below you can see the result of applying both of these "optimizations": first, the color channels are combined into one grayscale image, then each dimension is halved.
Using the `imresize` function from the `scipy.misc` package, implementing these transformations is very straightforward. In the utils directory, I created an `image_preprocessing.py` file that contains a single `scale` method:
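A sketch of the method, assuming the screen buffer arrives in Vizdoom's channels-first layout (note that `imresize` has since been removed from newer SciPy releases; it was available at the time of writing):

```python
# utils/image_preprocessing.py -- sketch of the scale method described above.
import numpy as np
from scipy.misc import imresize


def scale(screen_buffer, width, height, gray):
    # Vizdoom delivers frames as (channels, height, width); move the channel
    # axis last so standard image functions can handle the array.
    img = np.moveaxis(screen_buffer, 0, -1)
    if gray:
        img = img.mean(axis=-1)  # collapse the RGB channels into one
    return imresize(img, (height, width))
```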
It takes an input `screen_buffer`, the target width and height, and a `gray` flag as parameters, and applies the transformations accordingly.
As you have already seen above, we will use this method in the `get_screen` method of the `DoomTrainer` class:
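A plausible body for it, returning a tensor shaped for the convolution layers we will define next (the normalization is an assumption):

```python
# doom/doom_trainer.py (continued) -- assumed body of DoomTrainer.get_screen;
# needs `import torch` and `from utils.image_preprocessing import scale`.
    def get_screen(self):
        buffer = self.game.get_state().screen_buffer
        img = scale(buffer, width=160, height=120, gray=True)
        # (1, height, width) float tensor, normalized to [0, 1].
        return torch.from_numpy(img).float().unsqueeze(0) / 255.0
```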
Creating the AI
With that out of the way, it's time to create our actual AI! There are a lot of different approaches to training an efficient AI using Reinforcement Learning. Since it is one of the most recent and effective ones, I decided to start with the A3C algorithm. It utilizes the power of Deep Neural Networks (DNN) by running multiple training agents at the same time, each of which shares its results with the others. Since every agent makes different decisions, this approach reduces the chance of the AI getting stuck in a local minimum. Additionally, it drastically reduces the average training time required to perform decently well at any given task.
Besides the training agents, we will have one test agent that we use for evaluation and that won't touch the model's parameters. We will heavily base our implementation on the pytorch-a3c project on GitHub, which in turn is based on the Asynchronous Methods for Deep Reinforcement Learning research paper.
To implement the A3C model, we will create four files inside the models directory: `A3C.py`, `optimizers.py`, `test.py` and `train.py`. Let's start with the `A3C.py` file:
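A condensed sketch of the network in the style of the pytorch-a3c model; the exact convolution parameters the author settled on are not reproduced here, so the values below are assumptions, sized for the 160×120 grayscale input:

```python
# models/A3C.py -- condensed actor-critic sketch, following pytorch-a3c.
import torch.nn as nn
import torch.nn.functional as F


class ActorCritic(nn.Module):
    def __init__(self, num_actions):
        super(ActorCritic, self).__init__()
        # Four stride-2 convolutions shrink the 1x120x160 input to 32x8x10.
        self.conv1 = nn.Conv2d(1, 32, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        self.conv4 = nn.Conv2d(32, 32, 3, stride=2, padding=1)
        # The LSTM lets the agent memorize previous states.
        self.lstm = nn.LSTMCell(32 * 8 * 10, 256)
        self.critic_linear = nn.Linear(256, 1)           # state-value head
        self.actor_linear = nn.Linear(256, num_actions)  # policy head

    def forward(self, inputs):
        x, (hx, cx) = inputs
        x = F.elu(self.conv1(x))
        x = F.elu(self.conv2(x))
        x = F.elu(self.conv3(x))
        x = F.elu(self.conv4(x))
        hx, cx = self.lstm(x.view(x.size(0), -1), (hx, cx))
        return self.critic_linear(hx), self.actor_linear(hx), (hx, cx)
```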
This implementation is pretty much the same as the one in the mentioned GitHub repository. The only differences are the parameters of the convolution layers, which yielded better results for me in the tested scenarios. As you can see, we are also using an LSTM layer, so our AI can memorize previous states and choose the optimal next action based on them.
The `optimizers.py` file is identical to the one from the GitHub repo, so I won't go into any detail about it here.
Next, let's look at the `train.py` file:
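Rather than the full file, here is a condensed, simplified sketch of the training worker (the repository version additionally uses Generalized Advantage Estimation and gradient clipping; the `DoomTrainer` method signatures are assumptions):

```python
# models/train.py -- condensed A3C training worker, adapted from pytorch-a3c.
import torch
import torch.nn.functional as F

from doom.doom_trainer import DoomTrainer
from models.A3C import ActorCritic


def ensure_shared_grads(model, shared_model):
    # Point the shared parameters' gradients at the local worker's gradients.
    for param, shared_param in zip(model.parameters(), shared_model.parameters()):
        if shared_param.grad is not None:
            return
        shared_param._grad = param.grad


def train(rank, params, shared_model, optimizer):
    torch.manual_seed(params.seed + rank)
    trainer = DoomTrainer(params)
    trainer.start_game()
    model = ActorCritic(len(trainer.actions)).cuda()  # .cuda() keeps it on the GPU

    trainer.new_episode()
    state = trainer.get_screen().cuda()
    done = True

    while True:
        model.load_state_dict(shared_model.state_dict())  # sync with shared model
        if done:  # reset the LSTM memory at the start of each episode
            hx = torch.zeros(1, 256).cuda()
            cx = torch.zeros(1, 256).cuda()
        else:
            hx, cx = hx.detach(), cx.detach()

        values, log_probs, rewards = [], [], []

        # The key inner loop: feed the screen plus the LSTM hidden state
        # through the model, sample an action and collect the reward.
        for step in range(params.num_steps):
            value, logit, (hx, cx) = model((state.unsqueeze(0), (hx, cx)))
            prob = F.softmax(logit, dim=-1)
            action = prob.multinomial(num_samples=1)
            log_probs.append(F.log_softmax(logit, dim=-1).gather(1, action))
            values.append(value)

            reward, done = trainer.make_action(action.item())
            rewards.append(reward)
            if done:
                trainer.new_episode()
            state = trainer.get_screen().cuda()
            if done:
                break

        # Backpropagate: discounted returns drive the policy and value losses.
        R = torch.zeros(1, 1).cuda()
        if not done:
            R = model((state.unsqueeze(0), (hx, cx)))[0].detach()
        policy_loss, value_loss = 0, 0
        for i in reversed(range(len(rewards))):
            R = params.gamma * R + rewards[i]
            advantage = R - values[i]
            value_loss = value_loss + 0.5 * advantage.pow(2)
            policy_loss = policy_loss - log_probs[i] * advantage.detach()

        optimizer.zero_grad()
        (policy_loss + 0.5 * value_loss).backward()
        ensure_shared_grads(model, shared_model)  # update the shared state
        optimizer.step()
```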
Here, I had to replace all the calls to the envs / Atari API with our `DoomTrainer` class. I also added `.cuda()` to all torch variables, which ensures that our AI runs on the GPU instead of the CPU. The key part of this method is the inner for loop, which reads the hidden state from the LSTM layer and feeds it into our model, together with the input screen received from the `DoomTrainer` class. The remaining lines are used to backpropagate through the DNN, calculate the loss and update the DNN parameters using the optimizer. Additionally, we update the shared state for the other agents running at the same time.
To tell the agent to make a particular action, we use the following code:
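```python
# From the training sketch above: `action` is the index sampled from the policy.
reward, done = trainer.make_action(action.item())
```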
In our `DoomTrainer` class, this method is implemented like this:
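A minimal version, assuming Vizdoom's `make_action` and `is_episode_finished` calls:

```python
# doom/doom_trainer.py (continued) -- assumed body of DoomTrainer.make_action.
    def make_action(self, action_index):
        # Map the model's action index to a Vizdoom button vector and step the game.
        reward = self.game.make_action(self.actions[action_index])
        done = self.game.is_episode_finished()
        return reward, done
```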
To fill the `self.actions` array, we use the `create_actions` method, which we left out earlier:
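A sketch of one way to build the action lists: each action is a one-hot vector over the buttons enabled in the scenario's .cfg file (the button counts below are assumptions based on the standard Vizdoom scenario configs):

```python
# doom/doom_trainer.py -- assumed create_actions implementation.
def create_actions(scenario):
    if scenario == 'basic':
        num_buttons = 3  # MOVE_LEFT, MOVE_RIGHT, ATTACK
    elif scenario == 'deadly_corridor':
        num_buttons = 7  # movement, turning and ATTACK, per the scenario .cfg
    else:
        raise ValueError('Unknown scenario: ' + scenario)
    # One one-hot action per available button.
    return [[int(i == j) for j in range(num_buttons)] for i in range(num_buttons)]
```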
As you can see, we simply check the scenario passed by the caller and return an array of actions based on that. Passing one of these actions to Vizdoom makes our agent perform it, and the game continues with the next frame. Back in the `train` method, we can check whether the episode is done (the agent won or died) and, based on that, either create a new episode or continue with the next screen:
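```python
# From the training sketch above:
if done:
    trainer.new_episode()
state = trainer.get_screen().cuda()
```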
With the hardest part covered, let's take a look at the `test.py` file:
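Again a condensed sketch, with the same caveat that the `DoomTrainer` signatures are assumptions:

```python
# models/test.py -- condensed evaluation agent; it only reads the shared
# parameters and never updates them.
import time
import torch
import torch.nn.functional as F

from doom.doom_trainer import DoomTrainer
from models.A3C import ActorCritic


def test(rank, params, shared_model):
    torch.manual_seed(params.seed + rank)
    trainer = DoomTrainer(params)
    trainer.start_game()
    model = ActorCritic(len(trainer.actions)).cuda()

    trainer.new_episode()
    state = trainer.get_screen().cuda()
    reward_sum, episode_length, done = 0, 0, True

    while True:
        if done:  # reload the latest shared weights and reset the LSTM state
            model.load_state_dict(shared_model.state_dict())
            hx = torch.zeros(1, 256).cuda()
            cx = torch.zeros(1, 256).cuda()
        with torch.no_grad():
            value, logit, (hx, cx) = model((state.unsqueeze(0), (hx, cx)))
        action = F.softmax(logit, dim=-1).argmax(dim=-1)  # act greedily
        reward, done = trainer.make_action(action.item())
        reward_sum += reward
        episode_length += 1

        if done:  # the agent succeeded, ran out of time, or died
            print('episode reward {}, length {}'.format(reward_sum, episode_length))
            reward_sum, episode_length = 0, 0
            trainer.new_episode()
            time.sleep(15)  # let the training agents catch up
        state = trainer.get_screen().cuda()
```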
Again, we instantiate our `DoomTrainer` and model classes and hook them up. Then, in the while loop, we run through our model and get the reward after each action taken. If we reach the `if done:` case, it means that our AI either succeeded, ran out of time or died from enemy fire. In that case, we log the reward of that episode as well as the episode length, reset all variables and continue with the next screen. The `time.sleep(15)` is used to allow the training agents to catch up.
Now, the last step before we can watch our AI learn to play Doom is to add a method in `game.py` that sets up the training and test processes and runs them asynchronously. To do so, I created the following `play_a3c` method:
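A sketch using `torch.multiprocessing`; `SharedAdam` is the shared optimizer class from the pytorch-a3c `optimizers.py` mentioned earlier:

```python
# game.py (continued) -- sketch of play_a3c.
import torch.multiprocessing as mp

from doom.doom_trainer import DoomTrainer
from models.A3C import ActorCritic
from models.optimizers import SharedAdam
from models.test import test
from models.train import train


def play_a3c(params):
    mp.set_start_method('spawn', force=True)  # required when workers use CUDA
    trainer = DoomTrainer(params)
    shared_model = ActorCritic(len(trainer.actions)).cuda()
    shared_model.share_memory()  # share the parameters across processes

    optimizer = SharedAdam(shared_model.parameters(), lr=params.lr)
    optimizer.share_memory()

    processes = []
    # One process runs the test method...
    p = mp.Process(target=test, args=(params.num_processes, params, shared_model))
    p.start()
    processes.append(p)
    # ...and several others run the training method.
    for rank in range(params.num_processes):
        p = mp.Process(target=train, args=(rank, params, shared_model, optimizer))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```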
Here, we first create a new `DoomTrainer` instance and use it to create the A3C model that we will share across the processes. Then we instantiate our shared optimizer, which will be used for training the AI agents.
After that, we create one process that will run the test method and several others that will run the training method. For me, setting `params.num_processes` to 6 worked very well; anything more and I ran out of GPU memory.
Running the AI
If we have done everything right, we should now be able to finally run our AI agent! Before you start the program, make sure to set the screen resolution in the `deadly_corridor.cfg` file to 320×240:
`screen_resolution = RES_320X240`
Otherwise, we would have to recalculate the input size of the LSTM layer. Set the input parameters so that the model is "a3c" and the scenario is "deadly_corridor"; you can leave the other values as they are for now. If you run the `main.py` file, you should see 7 windows pop up (they open on top of each other, so to see all of them, simply drag them apart). Six of them contain training agents, and one contains the test agent, which also prints the reward of each episode:
On the left side you see the six training agents, and on the right side the test agent. Since it sleeps for 15 s after each run, it is standing still most of the time. You can also see that, while all training agents behave a bit differently, the AI quickly figures out that the best way to reach the green armor is to run straight towards it. This only works because I set the difficulty to 4. If we set it to 5, the enemies kill our agent before it reaches the armor, so it has to work much harder in that case. With the current approach, it didn't manage to reach the goal even after 12 hours of training.
Results
While this model performs well for the easier Vizdoom challenges, it didn’t produce any decent results for the more advanced scenarios. This shows that there is a lot of potential for improvement and parameter tuning. It also shows that while frameworks like Pytorch make it very easy to train your own AI, you still need a lot of background knowledge and a deep understanding of Deep Learning to create efficient AIs for complex problems.
Further resources
I hope I got you interested in the topic of Deep Learning and Reinforcement Learning. To dive deeper into both theoretical and technical applications of Machine Learning and Deep Learning, check out the following Udemy courses:
Artificial Intelligence A-Z
Deep Learning A-Z
I also highly recommend the book Hands-On Machine Learning with Scikit-Learn and TensorFlow.
That's it! I really hope you enjoyed this tutorial. Let me know if you have any feedback, problems or questions!