Road Rage – n900 game

The Lappeenranta University of Technology (LUT) organized its 20th summer school on telecommunications last week. I was there arranging it and running a 24-hour codecamp. The theme of the summer school was “Supporting independent living with technology”. We had a number of seminars related to the theme, and at the end of the week we had the 24-hour codecamp. During the codecamp the students were introduced to the Nokia N900 smartphone, which runs the newest Maemo operating system. The good side of the OS is that you can use Python to quickly develop and test applications. That was exactly the point of the one-day codecamp – program something that supports independent living at home and do it quickly (and dirty :-P).

Even though I was helping people out most of the time and programming some of the harder parts (like the very evil how-to-send-an-SMS), I still had time to do something of my own. As most of my readers know, I am a big game aficionado. Thus, I had to make a game 😛 All I had was a few dark hours of the night. I decided to use pygame as the library to program with and away I went!

An Idea?

Okay… where do I start? Well, I had a mobile device in my hands. The first thing that came to my mind was to set myself a certain limitation (in addition to the small amount of time I had). Thus, I decided that the only control would be the phone’s accelerometer – no touching, no keyboard buttons. Just tilting the phone.
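For the curious, here is a minimal sketch of how the tilt can be read in Python on the N900. On Maemo the accelerometer shows up as a sysfs file containing three integers; the exact path below is from my memory of Maemo examples, so treat it as an assumption and check it on a real device.

# Hypothetical accelerometer reader for the N900; the sysfs path
# is an assumption, verify on the device
ACCEL_PATH = "/sys/class/i2c-adapter/i2c-3/3-001d/coord"

def read_accelerometer():
    # Return (x, y, z) tilt values, or (0, 0, 0) if the read fails
    try:
        f = open(ACCEL_PATH)
        try:
            x, y, z = [int(v) for v in f.read().split()]
        finally:
            f.close()
        return x, y, z
    except (IOError, ValueError):
        return 0, 0, 0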

Fine, I had a limitation and the controls. Now, the game? I wanted something fast-paced: quick to play, quick to fail and quick to start again. Thus, I arrived at the idea of a car game. Just your old, not-so-breakthrough driving game.

So, now we have the game idea: A car driving forwards that moves according to the tilting of the phone. Nice 😀

Gameplay

Well, something easy and fast-paced. This was quite easy to come up with. Imagine a car driving very fast on the wrong lane. Other cars are coming towards you. The player needs to tilt the phone to avoid colliding with the law-abiding citizens. As this would be too easy by itself, I added oil spills. If the car hits an oil spill, it loses maneuverability – the player cannot steer the car for a few seconds, forcing it to just keep going straight. This added some nice challenge to the game.

Also, the game keeps track of the player’s score. The further you get, the more points you get. In addition, if the player tilts the phone forwards the car goes faster, and if tilted backwards the car brakes. Going slow is easier, but the player earns a lot fewer points in exchange. So, either higher speed, higher gain and higher risk, or the slow grandma way 😛 Now there is a simple gameplay logic and challenge to the game.
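As a rough sketch, one frame of this logic might look something like the code below. The constants, the car object and the axis signs are all invented for illustration (and read_accelerometer is the little helper sketched above), not the actual game code.

STEER_FACTOR = 0.4   # sideways pixels per tilt unit per second (made up)
ACCEL_FACTOR = 0.05  # speed change per tilt unit per second (made up)
MIN_SPEED, MAX_SPEED = 100.0, 600.0

def update_car(car, dt):
    x, y, z = read_accelerometer()
    if car.oil_timer <= 0:              # oil spills disable steering
        car.x += x * STEER_FACTOR * dt  # sideways tilt steers the car
    # Tilting forwards accelerates, tilting backwards brakes
    new_speed = car.speed - y * ACCEL_FACTOR * dt
    car.speed = max(MIN_SPEED, min(MAX_SPEED, new_speed))
    car.score += car.speed * dt         # higher speed, higher gain

All that is missing is…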

Grrrraphics

Urmm… yeah. This is something that is needed. I guess. As I suck at drawing and making graphics, I had to go with some really cheap stuff. While some can go the whole nine yards, I decided not to even go to the start line: no fancy programs and pixel graphics. Instead, I drew the car, the oil spills, the road and the oncoming cars by hand. Some really ugly and nasty pencil drawings. BUT! After editing them a little bit (just changing temperature, saturation, hue, brightness, etc.) I came up with pretty decent-looking graphics.

Here are the graphics I created and edited (all in a short amount of time):

The Car:  

Enemy car:

Oil spill:  

Explosion:

The road:  

Ugly? Well, it still works surprisingly well.

Putting it all together

Okay, so now I had everything in place: easy controls, cheap graphics and simple gameplay. The creation of this whole game from scratch took me about 4 hours, which I am quite proud of. Here is a screenshot of the game (yeah, capturing a screenshot of a fast-paced game is not easy, but you get the idea):

If you are still interested and would like to try out the game (I think it works only on the n900), get the source from the following page: http://www.codecamp.fi/doku.php/ssotc_2011/grp1/start

It’s completely free and open source. Do whatever you want with it, but please at least give some credit for my work. Also, if you happen to use something I have made, please let me know. I would be more than happy to hear about it 😀


Using Emergent Behavior in Video Games

As some of my readers might know, I am currently doing my exchange year in Madrid, studying Artificial Intelligence. One of my courses was about Intelligent Agents and Multi-Agent Systems. As a short test project I wanted to create an emergent enemy AI for a simple game. Just making the game and keeping everything to myself might have been easier, but I decided to share some of my thoughts with you. Below is a shortened version of everything, but you can also download the complete .pdf article >> Using Emergent Behavior to Improve AI in Video Games << [all rights reserved] :-D

Intro

As most know, the video game industry has been competing on graphics for a long time. Now that making good graphics keeps getting easier, there has to be a new hook. Some say it is the story (I totally agree), but another thing that has not evolved much is non-player character intelligence. This is rather hard to do, and experimenting with new methods is something the triple-A studios are not very eager to take on. I am no top-notch researcher, so I just decided to try to implement a simple game AI using emergent behavior.

Ummm… Emergent Behavior?

Yeah, what is emergent behavior? It is behavior that emerges from a simple set of rules, yet can be considered intelligent. The creator only specifies some simple rules that together result in the wanted behavior. Insects, for example, are studied by computer scientists in search of this kind of emergent behavior. Ants and bees display it when finding the shortest route to their goal – not because they actively search for one, but because their behavior ends up marking the fastest route between two points.

The Game

The game was implemented using the Python programming language and the pygame library. Those not familiar with them can read an introduction in my previous post. The game was programmed completely from scratch, just to practice my programming skills a little more. The game itself features a player-controlled character that can move around the screen and shoot a gun. The enemies come in the form of green aliens that try to find the player and attack him by biting. They communicate by leaving pheromones (blue dots) in the game area that contain information about the location of the player character. The rest below is taken straight from my paper.

The AI

The idea of the Artificial Intelligence is to create a simple emergent behavior for the aliens. They are considered to be quite simple, and the method used is somewhat based on ideas presented by Steels (1990) and Lewis and Bekey (1992). One important thing to notice here is that the aliens cannot use direct communication to talk with each other. Because they are simple critter-like creatures, they cannot have telepathic abilities, as that could feel like cheating to the player. Thus the aliens should leave information in the environment as pheromones and use this indirect way of communicating instead of telepathy. This approach also makes the AI easier to implement and probably less resource intensive.

The aliens have three modes: Search, Flee and Attack. In search mode the alien is trying to find the player in the game area. The alien has no knowledge of the environment around it and does not gather any information about the level. Because of this, the alien is implemented to walk randomly until it sees anything of interest. If an alien in the search state sees pheromones on the ground, it will start to follow the trail. In case the alien sees the player, it will attack or flee, depending on the situation. There is a threshold related to the attack/flee behavior (as seen in figure 1, points 3 and 4). If the alien sees other aliens nearby it will attack the player, but if it is alone, it will flee. The idea behind this kind of behavior is to make the aliens attack in pairs or groups. This way the player will have to fight a bigger bunch of aliens and gets a sense of challenge.

In flee mode the alien is running away from the player. This might be triggered, for example, by being outnumbered or by a heavy loss of health (in the implementation here, fleeing depends solely on the number of aliens nearby). In flee mode the alien leaves pheromones behind, so that other aliens might find the player (or so the alien itself can find its way back to the player). The pheromones contain a direction pointing towards the player, so the aliens know which way to follow the trail. As the alien is running away from the player, the pheromones it leaves always point in the direction opposite to the one it is facing.

In attack mode the alien charges towards the player, trying to attack it. It does this by getting close enough to the player to bite and deal damage that way. The algorithm shown in figure 1 portrays the behavior of the aliens.

The Initial Algorithm For the AI
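To make the rules concrete, here is a minimal sketch of the mode selection in Python. The names, the helper methods and the group-size threshold are my own illustration of the algorithm in figure 1, not the exact code of the implementation.

SEARCH, FLEE, ATTACK = "search", "flee", "attack"
GROUP_SIZE = 2  # how many aliens must be nearby before daring to attack

def choose_mode(alien, player, aliens):
    # Pick the next behavior mode for a single alien
    if alien.can_see(player):
        # Attack in groups, flee when alone (figure 1, points 3 and 4)
        if alien.count_nearby(aliens) >= GROUP_SIZE:
            return ATTACK
        return FLEE
    # No player in sight: keep searching, following any pheromone trail
    return SEARCH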

The initial level is shown in figure 2. It is just a plain level, with only the player and the aliens. The red circle around the aliens portrays their field of view. However, as can be seen from figure 2, the player actually has too much room to move around, and the search space is too great a challenge for the aliens. It takes a significant amount of time for the aliens to find the player (though because of the randomness, this is not always true). Also, the pheromone trails stayed alive for a very long time; by the time the aliens found them, they no longer pointed to the current location of the player, and they tended to be quite long.

The Initial Version of the Game

As a solution to these problems, there had to be a time-out on the pheromones. A timer was added that makes a pheromone disappear after a certain number of seconds has passed. This way the pheromone trails no longer led to empty places, nor were they dozens of pheromones long. Another addition, which can be seen in figure 3, was the walls. Because these kinds of shooter games usually happen in tight, closed spaces, the playing area was divided into smaller rooms. The rooms reduce the search space for the aliens significantly, so they were able to find the player within a room quite quickly. The player’s running is also restricted by the rooms, and there are only a few doorways leading out of each room.

The Game After Adding the Walls
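Sketch-wise, a pheromone with a direction and a limited lifetime only takes a few lines. The class below is my own illustration of the idea; the lifetime value is a guess rather than the one used in the game.

import time

class Pheromone(object):
    # A pheromone dot carrying a direction and a limited lifetime
    LIFETIME = 5.0  # seconds before the dot disappears (invented value)

    def __init__(self, x, y, direction, kind="flee"):
        self.x, self.y = x, y
        self.direction = direction  # which way to go to reach the player
        self.kind = kind            # "flee" (blue) - a second kind comes later
        self.born = time.time()

    def expired(self):
        return time.time() - self.born > self.LIFETIME

Each frame, the world simply filters the dead dots out, for example with pheromones = [p for p in pheromones if not p.expired()].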

The changes led to faster search times for the aliens and a better challenge for the player. However, as an alien fled from one room to another leaving a trail of pheromones behind, the player was still able to run away, and once the aliens came back, the trail was a dead end. Also, when some aliens attacked the player, other aliens nearby just kept searching and did not react to the attack happening next to them.

To fix the problem, another type of pheromone was added to the game. These red pheromones are left when the alien switches to attack behavior. In attack mode it leaves pheromones that point in the direction it is moving (as opposed to the fleeing pheromones). Now when an alien goes into attack mode, it gathers other aliens along its route to also attack the player. If the player decides to run away, the enemies keep following and leave a trail that gathers even more aliens to join the attack. This means that if the player runs around the whole game level with only one alien pursuing in the beginning, the player will end up with all the aliens in the level following him. The final version of the implementation can be seen in figure 4.

The Final Version of the Game

Conclusion

A good aspect of emergent behavior is its simplicity to implement. With just a couple of simple rules it is possible to create “intelligently” performing agents. However, as can be seen in this paper, the design stage is somewhat complicated. Even though the end goal was clear from the beginning, the behavior rules had to be adjusted a little, because the aliens behaved less smartly than they were supposed to. The positive thing was that adjusting the behavior was quite easy, thanks to the simplicity of the behavior rules.

Another good side of emergent behavior and its simplicity is the low amount of processing power required. Dozens of aliens can run on the screen at the same time without loss of frame rate (one thing to remember, though, is that the graphics are not that complicated either). With some optimization it would probably be possible to gain even better performance.

However, there remain more tweaking possibilities for the aliens. They don’t seem to recognize the pheromones very well, so the pheromones could have a bigger area of effect. Another possible improvement would be to make the aliens scream when they attack the player. The scream might attract other aliens nearby (for example, within the same room) to join the attackers.

References

If you are interested, check for example the following scientific papers for more on Artificial Intelligence.

S. Cass, Mind Games [computer game AI]. IEEE Spectrum 39(12), 40–44 (2002). doi:10.1109/MSPEC.2002.1088444
F. Dignum, J. Westra, W.A. van Doesburg, M. Harbers, Games and Agents: Designing Intelligent Gameplay. International Journal of Computer Games Technology 2009, 1–19 (2009). doi:10.1155/2009/837095. http://www.hindawi.com/journals/ijcgt/2009/837095.html
M.A. Lewis, G.A. Bekey, The Behavioral Self-organization of Nanorobots Using Local Rules, in Proceedings of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, 1992, pp. 1333–1338. doi:10.1109/IROS.1992.594558
J. Orkin, Three States and a Plan: The AI of F.E.A.R. Game Developers Conference, 1–18 (2006)
L. Steels, Cooperation Between Distributed Agents Through Self-organisation, in Proceedings of the IEEE International Workshop on Intelligent Robots and Systems ’90, ‘Towards a New Frontier of Applications’ (IROS ’90), 1990, pp. 8–14. doi:10.1109/IROS.1990.262534

The Source Code

The source for everything I have written above is available for download. Get it from here >> Swarm Game Source << You are free to use the code for your own purposes; just please give credit to whom it belongs. It is a .rar archive. All you need is Python and the PyGame library to get it working; the images and everything else are included.

Hope this helped someone or gave new ideas. Until next time!

Removing Red Eyes with OpenCV and Python

As I wrote in my blog yesterday, I started working on automatic red eye removal with Python and OpenCV. Today I finally managed to get it working (despite a slight fever, which led to some very interesting solutions :-P). The system works on the few pictures I have tested. Of course, it is not perfect, but it should be at least somewhat acceptable.

The first part of the program is from yesterday’s post (eye detection) and needs to be run first in order to get the information required for the red eye removal (yeah, you really need to have the eyes recognized for this to work!). I have included the detection function here, because I made some modifications to it.

Detecting the Eyes

I’m not going to comment on this much. Just note that I have removed the drawing of rectangles around the eyes and replaced it with appending the information to an eyesList, which contains the coordinates of the eyes in the big picture. I will skip to the changed eyes part…

def DetectEyes(image, faceCascade, eyeCascade):

    # ... face detection and region-of-interest setup as in the
    # previous post, skipped here ...
    # (point1 and point2 below are the top-left corner of the face area)

    # Detect the eyes
    eyes = cv.HaarDetectObjects(image, eyeCascade,
                                cv.CreateMemStorage(0),
                                haar_scale, min_neighbors,
                                haar_flags, (20, 15))

    eyesList = []

    # If eyes were found
    if eyes:

        # For each eye found,
        for eye in eyes:

            # we save the locations on the BIG picture
            # and append them to the eyesList
            eyeX1 = eye[0][0]
            eyeY1 = eye[0][1]
            eyeX2 = eye[0][0] + eye[0][2]
            eyeY2 = eye[0][1] + eye[0][3]

            correctPosition = (eyeX1 + point1,
                               eyeY1 + point2,
                               eyeX2 + point1,
                               eyeY2 + point2)

            eyesList.append(correctPosition)

    # Finally, reset the image region of interest
    # (otherwise the result won't be drawn correctly)
    cv.ResetImageROI(image)

    return image, eyesList

Removing Red Eyes

def RemoveRedEyes(image, eyes, threshold=2.1):

    # Check for existence of the eyes list
    if eyes:

        # Go through the eyes list [one eye at a time]
        for position in eyes:

            # Rename the positions, just for code readability
            eyeX1 = position[0]
            eyeY1 = position[1]
            eyeX2 = position[2]
            eyeY2 = position[3]

            # Set the image region of interest to the eye area
            # [this reduces processing time]
            cv.SetImageROI(image, (eyeX1, eyeY1,
                                   eyeX2 - eyeX1, eyeY2 - eyeY1))

Okay, so first to the function: RemoveRedEyes(image, eyes, threshold=2.1). The image is the image loaded into OpenCV, and eyes is the list produced by DetectEyes(). Then we start going through the eyes list. The eyeX1, eyeY1, etc. are just there to make the code easier to read and understand. On the last rows, we set the image region of interest to the area of the eye in question. This reduces the processing time notably, because only a small part of the image gets processed (not the complete 5+ megapixels).

            # cv.Get2D(image, i, j) returns the pixel at row i, column j,
            # with the channel values in B, G, R order (pixel[n]).
            # Get the size of the image (here "image" means only the region
            # of interest, which is "cropped" to the eye at the moment)
            sizeX, sizeY = cv.GetSize(image)

            # Go through all the pixels in the region
            # (cv.Get2D indexes rows first, so rows run to sizeY, the height)
            i = 0
            while i < sizeY:
                j = 0
                while j < sizeX:

                    # This gets the pixel in question
                    pixel = cv.Get2D(image, i, j)

                    # Calculate the red intensity compared to the blue and
                    # green average (note: a completely black pixel would
                    # divide by zero here; real code should guard for that)
                    redIntensity = pixel[2] / ((pixel[1] + pixel[0]) / 2)

                    # If the red intensity exceeds the threshold, lower it
                    if redIntensity >= threshold:

                        newRedValue = (pixel[1] + pixel[0]) / 2
                        # Insert the new red value for the pixel
                        cv.Set2D(image, i, j,
                                 cv.RGB(newRedValue, pixel[1], pixel[0]))

                    j += 1

                i += 1

            # Reset the image region of interest back to the full image
            cv.ResetImageROI(image)

    # Return the resulting image
    return image

Now, the hard part was here. After all, understanding this is the key to SUCCESS! First, we get the size of the region with cv.GetSize(image). Then we use this information to go through all the pixels of the selected region.

pixel = cv.Get2D(image, i, j)

redIntensity = pixel[2] / ((pixel[1] + pixel[0]) / 2)

Here we load the pixel at coordinate (i, j) and then calculate the intensity of red in it. The pixel values in (at least the current version of) OpenCV come in the order Blue, Green, Red. So pixel[0] is the blue value, pixel[1] the green value and pixel[2] the red value.
We calculate the red intensity by dividing the red value by the average of blue and green. If the result is much greater than 1, the pixel is clearly dominated by red. For example, a typical red-eye pixel of (B, G, R) = (40, 60, 200) gives 200 / ((60 + 40) / 2) = 4.0, well above the threshold, while a skin tone like (120, 150, 200) gives only about 1.5. (And because we are talking about eyes here, strong red shouldn’t be normal – at least I don’t know any red-eyed people.)

if redIntensity >= threshold:
    newRedValue = (pixel[1] + pixel[0]) / 2
    cv.Set2D(image, i, j,
             cv.RGB(newRedValue, pixel[1], pixel[0]))

Here, if the red intensity is greater than the threshold given at the beginning (default = 2.1), we should do something. The new red value is set to the average of the blue and green values, so it is scaled down to their level. On the last row we just insert the pixel with the new red value back into the image.

Now, just run the whole thing:

if __name__ == "__main__":

    image, cascade, eyeCascade = Load()
    image, eyes = DetectEyes(image, cascade, eyeCascade)
    image2, cascade, eyeCascade = Load()
    RemoveRedEyes(image2, eyes)
    cv.ShowImage("Changed image", image2)
    Display(image)

Some Results

So, here are some results of the changes I have managed to get. In this first picture we can see that the baby still has a little bit of red eye left, because the threshold level is so high. If we lowered it a little, say to 1.5, the result would be the following:

However, this causes trouble with other pictures. For example, the picture of me, which I used as an example yesterday, has too much red color overall (well, it does not have red eyes, but it could have!). When we drop the threshold down to 1.5 here, the whole region of interest gets a little bit messed up.

But again, if we return to 2.1 (just my hunch), we don’t get this error.

Okay, so have fun testing this out. The source code, with all the image files and haarcascades included, is here:

http://dl.dropbox.com/u/4692161/red_eye_removal.tar.gz

Detecting Eyes With Python & OpenCV

Lately, I have been working on an image processing application for Nokia’s n900. One part of the idea is to have automatic red eye detection and removal, to make everything easier for the user. My previous post was about getting the latest OpenCV version to work with Python bindings on the n900. I suggest checking it out in case you are also interested in mobile development. A big thanks for all this goes to http://nashruddin.com/OpenCV_Eye_Detection

Starting out

First, let’s start from the top. Import OpenCV and create a loader for the data.

import cv

def Load():

    image = cv.LoadImage("images/dallas.jpg")
    faceCascade = cv.Load("../haarcascades/haarcascade_frontalface_alt.xml")
    eyeCascade = cv.Load("../haarcascades/haarcascade_eye.xml")
    return (image, faceCascade, eyeCascade)

So, here we load the image, a haarcascade for face recognition and a haarcascade for eye recognition, and then return them all for later use. After this, let’s write the display function and test that everything works fine.

def Display(image):

    cv.NamedWindow("Red Eye Test")
    cv.ShowImage("Red Eye Test", image)
    cv.WaitKey(0)
    cv.DestroyWindow("Red Eye Test")

Okay, so Display takes an image as a parameter (which has to have been loaded into cv beforehand). Then we just create a display window and show the image until the user presses any key. You should now see an image of me when you call these two functions.

image, faceCascade, eyeCascade = Load()
Display(image)

Into the REAL business

Okay, so now it is time to get down to the actual business! First, we start by creating the detection function.

def DetectRedEyes(image, faceCascade, eyeCascade):

    min_size = (20, 20)
    image_scale = 2
    haar_scale = 1.2
    min_neighbors = 2
    haar_flags = 0

So far nothing very special has happened. Here we just define the minimum size of the detected area (this relates to faces, so if a face is smaller than 20×20 it shouldn’t be detected), how much to scale the image down, and so on.

    # Allocate the temporary images
    gray = cv.CreateImage((image.width, image.height), 8, 1)
    smallImage = cv.CreateImage((cv.Round(image.width / image_scale),
                                 cv.Round(image.height / image_scale)), 8, 1)

    # Convert color input image to grayscale
    cv.CvtColor(image, gray, cv.CV_BGR2GRAY)

    # Scale input image for faster processing
    cv.Resize(gray, smallImage, cv.CV_INTER_LINEAR)

    # Equalize the histogram
    cv.EqualizeHist(smallImage, smallImage)

In the first rows here we create a grayscale image and a small image to make processing less resource intensive (the computer is still pretty good at recognizing stuff from smaller images!).

    # Detect the faces
    faces = cv.HaarDetectObjects(smallImage, faceCascade, cv.CreateMemStorage(0),
                                 haar_scale, min_neighbors, haar_flags, min_size)

Now, here we detect the faces in the picture. We use HaarDetectObjects to find the faces in the smaller image we created beforehand. For more information about HaarDetectObjects(), check http://opencv.willowgarage.com/documentation/python/object_detection.html#haardetectobjects

    # If faces are found
    if faces:

        for ((x, y, w, h), n) in faces:
            # the input to cv.HaarDetectObjects was resized, so scale the
            # bounding box of each face and convert it to two CvPoints
            pt1 = (int(x * image_scale), int(y * image_scale))
            pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
            cv.Rectangle(image, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

Here we just go through the found faces and draw rectangles around them.

            # Estimate the eyes' position
            # First, set the image region of interest
            # The multiplication by 0.6 drops the lower part of the face
            # to lower the probability of false detections
            cv.SetImageROI(image, (pt1[0],
                                   pt1[1],
                                   pt2[0] - pt1[0],
                                   int((pt2[1] - pt1[1]) * 0.6)))

Okay, so after finding the faces, we create a region of interest where the eyes should be found. The area is actually the same as the rectangle drawn, EXCEPT that the lower part of the face (40% of it, normally the mouth area) is left out. This is just to make the recognition a little faster and to reduce the number of falsely detected eyes. Underneath is a screenshot of what the image looks like after setting the region of interest.

Now, all that is left is to repeat the same kind of operation that was done on the whole picture to find the faces, but this time finding eyes within the region of interest.

            # Detect the eyes
            eyes = cv.HaarDetectObjects(image, eyeCascade,
                                        cv.CreateMemStorage(0),
                                        haar_scale, min_neighbors,
                                        haar_flags, (20, 15))

            # If eyes were found
            if eyes:

                # For each eye found
                for eye in eyes:

                    # Draw a rectangle around the eye
                    cv.Rectangle(image,
                                 (eye[0][0],
                                  eye[0][1]),
                                 (eye[0][0] + eye[0][2],
                                  eye[0][1] + eye[0][3]),
                                 cv.RGB(255, 0, 0), 1, 8, 0)

            # Finally, reset the image region of interest
            # (otherwise the result won't be drawn correctly)
            cv.ResetImageROI(image)

This is the final and important part. If you do not call ResetImageROI, the only thing shown will be the small cropped (region of interest) area of the picture with the red rectangles drawn in it. So, it is important to remember to call ResetImageROI(). All that is left now is to return the image.

    return image

Now, we should just add all these function calls to main and things should go just fine.

if __name__ == "__main__":

    image, faceCascade, eyeCascade = Load()
    image = DetectRedEyes(image, faceCascade, eyeCascade)
    Display(image)

This is what you should get as a result:

So, this concludes the eye detection part. In the next part I will add the ability to remove any red eyes that are found (in my example picture there are none).

OpenCV 2.1.0 with Python bindings on Maemo (n900)

Howdy How!

Lately I’ve been really busy with my job at the university. However, I decided to give a short moment of my time to share some fruits of my work. I have always been interested in playing with Augmented Reality stuff, so I decided to challenge myself a little and get some tools to work on the n900. I found a version of OpenCV in the repository (2.0.something), but it was missing the Python bindings. Since I have lately been working with Python (and still am), I really want those bindings to work!

Well, it seemed there was only one option: to compile and configure everything manually. And that is exactly what I did. Figuring all this out was pretty hard, so I decided to post everything here on my blog. There are probably some strange things in here, because I am no real Linux expert (and this is the first time I have ever created .deb packages), but by following my instructions you should get OpenCV to work on Maemo (with the newest PR1.2 firmware). So, keep on reading.

How to get OpenCV to work on n900

OpenCV needs to be built inside scratchbox and then deployed to the real n900. The installation is based on the OpenCV installation instructions from the wiki: http://opencv.willowgarage.com/wiki/InstallGuide%20:%20Debian

I have successfully built the whole system from the stable package, not from the SVN (for some reason I wasn’t able to get cmake to work with the SVN version).

NOTE: This can be built for both X86 and ARMEL. Just change to a different scratchbox environment with sb-menu.

If you want to try things on the emulator, you should compile for X86. Otherwise use ARMEL (so you can deploy it to the n900). It might be best to do both.

Working with the scratchbox

If you want to use the latest package, get subversion.

1. Install subversion

fakeroot apt-get install subversion

2. Install the rest of the prerequisites

apt-get install build-essential

apt-get install cmake

apt-get install pkg-config

apt-get install libpng12-0 libpng12-dev libpng++-dev libpng3

apt-get install libpnglite-dev libpngwriter0-dev libpngwriter0c2

apt-get install zlib1g-dbg zlib1g zlib1g-dev

apt-get install libjasper-dev libjasper-runtime libjasper1

apt-get install pngtools libtiff4-dev libtiff4 libtiffxx0c2 libtiff-tools

apt-get install libjpeg8 libjpeg8-dev libjpeg8-dbg libjpeg-prog

apt-get install ffmpeg libavcodec-dev libavcodec52 libavformat52 libavformat-dev

apt-get install libgstreamer0.10-0-dbg libgstreamer0.10-0 libgstreamer0.10-dev

apt-get install libxine1-ffmpeg libxine-dev libxine1-bin

apt-get install libunicap2 libunicap2-dev

apt-get install libdc1394-22-dev libdc1394-22 libdc1394-utils

apt-get install swig

apt-get install libv4l-0 libv4l-dev

apt-get install python-numpy

3. Get the files and save them to your MyDocs scratchbox directory

(if the default installation was done, this can be found in Ubuntu at /scratchbox/users/&lt;username&gt;/home/&lt;username&gt;/MyDocs/)

RECOMMENDED: download the latest stable Unix version

http://sourceforge.net/projects/opencvlibrary/files/

untar the file for example to OpenCV-2.1.0/ (the version I have tested this with)

or if you want to try your luck, from svn

https://code.ros.org/svn/opencv/tags/latest_tested_snapshot/

4. Create a build directory (the only difference to the source directory is that the o is not a capital letter)

mkdir opencv-2.1.0

cd opencv-2.1.0

5. Now, let’s use cmake to generate the makefile

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON -D BUILD_EXAMPLES=ON ../OpenCV-2.1.0

This generates the makefile with the options above.

6. Now, we must create the .deb package for OpenCV

export DEBFULLNAME="Your Name"

dh_make -e your.name@example.org --createorig

When asked for what kind of binary, just type s (= single)

After this you can edit the installation info etc. at ./debian (for example nano ./debian/control and just write a short description here)

7. Now, build the package (you have to be in the opencv-2.1.0/ folder)

dpkg-buildpackage -rfakeroot

This will display a lot of output, and you should see the build progress in percentages running towards 100% (this will take some time, so grab a cup of coffee).

8. If everything went okay, now you should be able to install the application from the .deb package

cd ..

fakeroot dpkg -i opencv-2.1.0-1_armel.deb

OR

fakeroot dpkg -i opencv-2.1.0-1_i386.deb

Now you should have the whole thing installed on the machine (the scratchbox environment for now). The problem is that Python is not yet configured properly.

9. Configure library path

Because the .deb installer does not install all the libraries correctly (at least with these instructions), you have to configure the path manually.

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

sudo ldconfig (or fakeroot ldconfig, if inside scratchbox)

10. Configure Python

Now, finally, only one part is still missing: the thing we went through all this trouble for, configuring Python to work. On the n900 you can replace the python2.5 command below with the plain python command.

python2.5 [in scratchbox the plain python command launches version 2.3]

>>> import sys

>>> print sys.path

>>> sys.path.append("/usr/local/lib/python2.5/site-packages/")
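Note that sys.path.append only lasts for the current interpreter session. To make the path stick between sessions, one option (standard Python behavior, though I have not verified the exact site-packages location on Maemo, so treat the target path as an assumption) is to drop a .pth file into the default site-packages directory:

echo "/usr/local/lib/python2.5/site-packages" > /usr/lib/python2.5/site-packages/local.pth

(run as root, or with fakeroot inside scratchbox)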

After this everything should work just fine. Test it:

11. Test OpenCV with python

>>>import cv

If no error comes up, it works 😀 You can test the installation in scratchbox by going to the samples folder OpenCV-2.1.0/samples/python. Normally, when running on a PC, the test could be done for example by running python delaunay.py. If you try to run

python2.5 delaunay.py

you will get no error, but nothing will show on the screen (at least when run solely inside scratchbox without a virtual machine). If you run the same thing on the n900, it should display some nice stuff 😀

Okay, that was it. Have fun tinkering with OpenCV and Python. If you hit any errors/bugs/missing things, please post them here so we can try to fix them.

-Japskua

StarSystem Lander

Today I am going to share the design of the game I am going to make during the coming days. As I mentioned in my previous post, I am learning some more Python and game programming at the same time. My plan is to make a few small games before summer really starts and to learn at least a little bit of actual game programming. It is actually a sad thing that they don’t teach game programming at the university, because it is fun. 🙂

The Idea

I am not planning to do anything very complicated, and I didn’t come up with a truly awesome idea for a small game. So, I will be programming a lunar lander clone, which I decided to call StarSystem Lander. The idea is to land safely on a planet without destroying the spacecraft or running out of fuel. Those who don’t know the game can check out for example this YouTube link: http://www.youtube.com/watch?v=X34MB_P37jM

My plan is to make somewhat nice graphics, although I actually think the wireframe look is nice and nostalgic. I just want to learn to make nicer graphics, because that is what games nowadays are like. I probably will not reach a cool-looking level of drawing, but at least I am giving it a shot.

Features

Okay, so the features for the game are supposed to be something like this.

  1. Movable spacecraft that rotates 360 degrees
  2. Fuel meter showing how much fuel is left
  3. A high-score list based on how little fuel and how fast the user can land the ship
  4. Different planets with different gravities (Moon, Earth, Mars, etc.)
  5. Graphics
  6. Some sounds (space is an empty place, so no music)

These are probably all the features that the game should have. If someone comes up with other good features, please let me know, so I can try to add them too. But this is what I will try to accomplish in the coming week or two.

Sketch

When designing a game, it is important to make some kind of sketches, graphs, etc., to shorten the actual development time. I drew a sketch of the game with pen and paper (that really is THE FASTEST way to prototype) while sitting on a morning train from Lappeenranta to Joensuu today. Because I don’t have a scanner at hand, I will make a Paint sketch here to give an idea of how I see the game looking.

This Paint sketch looks pretty horrible, but hey, it is something I made up in a few minutes. The one I drew in my sketchpad looks a lot nicer (maybe I should have taken a picture of it with a camera… well, next time). All in all, I really have to admit I am no graphics artist, but hopefully I will manage with all these ”oh-so-hard” graphics 😛

The game objects

Now, to the part I am a little better at – designing the architecture of the software. I will be doing this in a prototyping way, without giving too much thought to every part (the game is not that big; I can easily change things when needed, as long as I program it properly). The main components are SpaceShip, Level, LandingPad, Timer and FuelMeter. These are the pieces the game is made of (drawn on the screen). The game engine beneath is of course its own case, but for now I will just document some ideas about the game parts.

SpaceShip
location
speed
direction
fuel
Rotate(direction)
Throttle()
Move()
getFuel()
getDirection()
getLocation()
getSpeed()
checkLanded()

This is probably the most complicated of the classes. The spaceship has a location (where it is on the screen), speed (how fast it is moving), direction (which way it is moving) and fuel (how much fuel is left).

The methods of the spaceship are Rotate(direction), which turns the spaceship either left or right; Throttle(), which is used to gain more speed in the direction the spaceship is facing; and Move(), which is used to actually move the spaceship on the screen (this is for drawing reasons, which I will explain a little bit later). The get functions are used to fetch specific information about the spaceship (like the current location). There is also a checkLanded() function inside the spaceship class, but I will probably move it somewhere else later.
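As a rough sketch of how this class could start out in Python (the physics constants, the coordinate conventions and the exact signatures here are invented placeholders, not the final design):

import math

class SpaceShip(object):

    THRUST = 40.0     # acceleration while throttling (made-up value)
    BURN_RATE = 10.0  # fuel used per second of throttle (made-up value)

    def __init__(self, x, y):
        self.location = [x, y]
        self.speed = [0.0, 0.0]  # velocity vector (x, y)
        self.direction = 0.0     # degrees, 0 = pointing straight up
        self.fuel = 100.0

    def Rotate(self, amount):
        self.direction = (self.direction + amount) % 360

    def Throttle(self, dt):
        if self.fuel <= 0:
            return
        rad = math.radians(self.direction)
        # Accelerate in the facing direction (screen y grows downwards)
        self.speed[0] += math.sin(rad) * self.THRUST * dt
        self.speed[1] -= math.cos(rad) * self.THRUST * dt
        self.fuel -= self.BURN_RATE * dt

    def Move(self, dt, gravity):
        self.speed[1] += gravity * dt  # gravity pulls the ship down
        self.location[0] += self.speed[0] * dt
        self.location[1] += self.speed[1] * dt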

Level
groundLocation
gravity
GetGroundLocation()
getGravity()

The Level object will probably contain the picture of the ground (how it is formed) and the gravity value of the planet in question.
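Feature 4 from the list above (different planets) could then be as simple as a lookup table of real surface gravities, scaled to whatever units the game ends up using:

# Real surface gravities in m/s^2; the game would scale these
# to pixels per second squared
GRAVITIES = {
    "Moon": 1.62,
    "Mars": 3.71,
    "Earth": 9.81,
}

getGravity() would then just return the value for the chosen planet.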

LandingPad
location
getLocation()

LandingPad is an object that will be placed on top of the actual level. This way the location of the landing pad does not need to be programmed into the level object; instead it is an individual object that can be queried to check whether the ship has landed in the correct place.

Timer
time
showTime()

Timer is used to track how long the game has been running. It just displays the current time in the top-right corner of the screen.

FuelMeter
fuelLeft
DisplayFuel()

FuelMeter displays the amount of fuel left. Every time the user throttles the spaceship, a small portion of the fuel is used up. The fuel meter draws the current fuel situation on the right side of the screen, under (or next to) the Timer.

Next Time

I have now presented a very short and brutal design of the game. As I program more and more, I will probably change the classes a little when needed, but my goal is to get something done quickly. My next post will be about programming the StarSystem Lander game. The coming weekend will be pretty busy for me, because I will be participating in a seminar, so expect the next update around next week. And please, feel free to leave comments.

-Japskua

Programming Python

Hello everyone!

…and sorry for not posting anything in the last few weeks. To make up for it, I am going to make a few blog posts this week.


So, straight to the topic. I have received a nice summer trainee position at my university. I will be working on something called Cyber Foraging (you can check out more about this by googling). I will get back to this later, once I have started the work and studied the area more. My job is to program something using Python. I have done a few projects with Python, but not for almost a year, so I decided it is time to recap some of my skills (and learn more, of course!). For those of you who are not familiar with Python, I recommend reading the website and getting to know the language. It is fast, efficient and powerful (although the whitespace system sometimes gives some headaches).

As I have probably mentioned before, I am very much interested in game programming but haven’t had much luck with it (I have tried to start with a WAY TOO HARD approach). I have had a hard time getting Ogre3D to work with C++ (I really don’t know why it doesn’t work correctly, plus I haven’t had much time). Now, with my renewed Python interest, I found out that Ogre also has a Python version, and I was able to get this Python-Ogre running. YAY! However, making 3D games takes a lot of time, which I don’t have too much of, so I decided I needed something a little less complicated. What I found was this: PyGame.

Huh? PyGame? What’s that?

PyGame is a Python extension library that wraps the SDL library and its helpers. SDL (short for Simple DirectMedia Layer) is a cross-platform multimedia library that provides low-level access to audio, keyboard, mouse, joystick and 3D hardware. SDL is written in C, but through PyGame it works with the Python language as well. PyGame is actually really easy to program, and even a programmer without much experience can pretty quickly start coding nice 2D games. Why 2D? Simply put, 3D games require a lot more time, even more experience and more graphical skills than a 2D game (at least usually). A nice introduction to PyGame can be read from here.
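To give a feel for how little it takes to get started, here is a minimal PyGame skeleton – just an empty black window with an event loop (the window size and frame rate are arbitrary choices of mine):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("Hello PyGame")
clock = pygame.time.Clock()

running = True
while running:
    # Handle events (closing the window stops the game)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # Clear the screen and show the new frame
    screen.fill((0, 0, 0))
    pygame.display.flip()
    clock.tick(30)  # cap the game at 30 frames per second

pygame.quit()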

How to get started?

1. Download Python. I am using version 2.6.5, which is currently the newest of the 2.x series. A note here: Python also has a 3.x branch, but its syntax is different from the more widely used 2.x, and 2.x has more third-party software available.

2. Download PyGame. Here it is important that you download the build matching your Python version.

3. Pick an editor, if you don’t want to use IDLE (the editor that comes with Python). The problem with IDLE is that it has no code completion or other “little-more-sophisticated” functions, although you can debug and program with it just fine. I recommend installing Eclipse (the Classic version is a good one) and the Python add-on for Eclipse.

The PyGame website actually has a pretty good collection of tutorials. I really recommend checking them out! I myself started going through them last week and have already programmed some nice and simple games (for example the classic Worm game). From my experience I recommend at least the following tutorials by Lorenzo E. Danielsson. Another one worth checking out can be found on Eli Bendersky’s website.

Here is a screenshot of a simple worm game I created according to Lorenzo’s tutorials:

The links I have supplied in this blog entry:

Lappeenranta University of Technology – http://www.lut.fi
Cyber Foraging by Google search – http://www.google.fi/search?hl=fi&q=cyberforaging&btnG=Haku&meta=&aq=f&aqi=&aql=&oq=&gs_rfai=
Python – http://www.python.org/
Ogre3D – http://www.ogre3d.org/
Python-Ogre – http://www.python-ogre.org/
PyGame – http://www.pygame.org/
SDL – http://www.libsdl.org/
PyGame Introduction – http://www.pygame.org/docs/tut/intro/intro.html
Download Python – http://www.python.org/download/
Download PyGame – http://www.pygame.org/download.shtml
Download Eclipse – http://www.eclipse.org/downloads/
Python add-on for eclipse – http://pydev.org/
PyGame Tutorials – http://www.pygame.org/wiki/tutorials
Lorenzo’s Worm game tutorial – http://en.wordpress.com/tag/pygame-tutorial/
Eli Bendersky’s PyGame tutorials – http://eli.thegreenplace.net/category/programming/python/pygame-tutorial/