SDP2011-Robotniks

SDP2011-Robotniks is a free-software project written mainly in Java and Python.

This is the code repository for System Design Project group 9 (Robotniks), for the 2010-2011 course at the University of Edinburgh.

Most of the code uses Python 2.6, OpenCV 2.2 and Pygame. The simulator also uses Pymunk, and the communication server uses the included Lejos libraries.

This is the final version of our codebase and has been refactored to be friendly to reuse (consider it a sort of "SDP startup kit"). The PDF of our final report is included here, and reading it is highly recommended before attempting to use any of this codebase.

The codebase is made freely available under the MIT licence, as follows:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

To run the code

The vision code assumes the use of DICE machines (the MPlayer capture method is a hack for bypassing OpenCV limitations on DICE). Modification for generic camera input using OpenCV should be easy.
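
As a rough illustration of that modification (not part of this codebase; the device index and loop structure are assumptions), generic camera input with the OpenCV 2.2 Python bindings could look like:

    # Minimal sketch: grab frames straight from a camera with the old
    # OpenCV 2.2 Python API, instead of the MPlayer snapshot hack.
    import cv

    capture = cv.CaptureFromCAM(0)       # 0 = first camera; adjust as needed
    while True:
        frame = cv.QueryFrame(capture)   # returns an IplImage, or None
        if frame is None:
            break
        # ... hand the frame to the vision pipeline here ...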

Pygame v1.9 is included, as is OpenCV 2.2. Before running code like vision2.py, set up the environment variables by sourcing: $ . env.sh

This will make the code use the supplied software.
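
A quick sanity check (illustrative only, not part of the repository) that the environment took effect:

    # Run in the same shell after sourcing env.sh; both imports should
    # resolve to the bundled copies rather than system-wide installs.
    import cv, pygame
    print pygame.version.ver   # expect 1.9.x
    print cv.__file__          # should point into the repository tree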

When running the simulator, check out the options it takes by executing: $ ./simulator2.py --help

The main system requires a number of steps to set up. These should be done in separate terminals:

  1. Run the server: $ ./start-server.sh

  2. Start the MPlayer process used for video capture. This will record snapshots of the video feed to a temporary directory; the files only get deleted once the vision system is reading them (see the sketch after this list). The MPlayer capture must always be started /before/ the vision system, but can be restarted afterwards as many times as wanted: $ ./start-mplayer.sh

  3. Start the main control system: $ ./gui.py
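
For context, the snapshot-consumption pattern described in step 2 amounts to something like the following sketch (the directory path and file pattern are assumptions for illustration, not the project's actual code):

    # Consume MPlayer's snapshots in order, deleting each file after it
    # has been read so the temporary directory does not fill up.
    import glob, os, time
    import cv

    SNAPSHOT_DIR = '/tmp/mplayer-capture'   # assumed dump location

    while True:
        frames = sorted(glob.glob(os.path.join(SNAPSHOT_DIR, '*.png')))
        if not frames:
            time.sleep(0.01)                # no new snapshot yet; poll again
            continue
        frame = cv.LoadImage(frames[0])     # oldest snapshot first
        os.unlink(frames[0])                # delete it once read
        # ... hand the frame to the vision system here ...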

If things go wrong, you can stop the robot using: $ ./stop.py

If you don't want to run the AI GUI, you can omit the first step and, making sure the environment variables are set, just execute: $ ./vision2.py [files..]

If using files for vision input, the MPlayer process won't need to be started either.
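
For example, using the test clip shipped with the repository: $ ./vision2.py vision2/test/prim1.mpg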

How to use the vision system GUI:

Keymap:

    t      - cycle through thresholded images
    tab    - display trackbars for editing thresholds
    s      - save screenshot
    r      - display raw/uncropped image
    space  - switch everything back to normal
    0      - display all channels
    1/2/3  - display only the blue/green/red channel
    o      - toggle GUI overlay
    h      - toggle histogram display

Dragging the mouse to form a rectangle further crops the image down to a set threshold size.
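
As a rough sketch of that interaction (assuming a Pygame event loop; the names and window size here are invented, not the GUI's actual code):

    # Track a mouse drag and turn it into a crop rectangle.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    drag_start = None
    crop_rect = None
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                drag_start = event.pos
            elif event.type == pygame.MOUSEBUTTONUP and drag_start is not None:
                x0, y0 = drag_start
                x1, y1 = event.pos
                # Normalise so the rect is valid whichever way the drag went
                crop_rect = pygame.Rect(min(x0, x1), min(y0, y1),
                                        abs(x1 - x0), abs(y1 - y0))
                drag_start = None
                # crop_rect would then be applied to the incoming images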

Testing the AI with static video: $ python test_ai.py blue final vision2/test/prim1.mpg