For our project with Lego Mindstorms, we will design and implement a maze-solving algorithm on a Mindstorms robot. We believe we can implement a reasonably intelligent algorithm to solve the maze. This is similar to an organism having to navigate an environment: we will design an artificial life form that must interact with its surroundings. How advanced this life form can become is limited by the capabilities of the RCX, which supports only three sensors and three motors and has limited memory available.
To do this we must design a robot capable of interacting with its environment. Our current plan is to use a touch sensor and a light sensor. The touch sensor will detect walls in front of the robot, while the light sensor will track grid markings on the maze floor. The robot will use these grid markings to keep track of its current position within the area of the maze it has already explored. The robot will move on two rear wheels, and turning will be accomplished by cutting power to one wheel, so that the robot pivots on an axis between the centers of the two wheels. This allows for a much smaller maze.
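The position tracking described above can be sketched as a small state update: each time the light sensor crosses a grid marking, the robot advances one cell in its current heading, and a pivot turn changes only the heading. The sketch below is in C rather than NQC, and all names (`Pose`, `advance`, `turn_right`) are illustrative assumptions, not our actual robot code.

```c
#include <assert.h>

/* Compass heading of the robot; a pivot turn rotates this by 90 degrees. */
enum Dir { NORTH, EAST, SOUTH, WEST };

/* Hypothetical pose: the robot's current grid cell and heading. */
struct Pose {
    int x, y;       /* current cell in the explored area */
    enum Dir dir;   /* current heading */
};

/* Called when the light sensor detects that a grid line was crossed:
 * the robot has moved one cell forward in its current heading. */
void advance(struct Pose *p) {
    switch (p->dir) {
    case NORTH: p->y++; break;
    case EAST:  p->x++; break;
    case SOUTH: p->y--; break;
    case WEST:  p->x--; break;
    }
}

/* A pivot turn (power cut to one wheel) changes only the heading,
 * not the cell, since the robot rotates about the wheel axis. */
void turn_right(struct Pose *p) { p->dir = (p->dir + 1) % 4; }
void turn_left(struct Pose *p)  { p->dir = (p->dir + 3) % 4; }
```

On the real robot the `advance` update would run inside the NQC task that watches the light sensor, with the pose kept in global variables since NQC subroutines take no parameters.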
The algorithm will be written in NQC, which gives us greater flexibility in programming the robot to interact with its environment. We will use a recursive maze-solving algorithm converted to an iterative form, since NQC, to our knowledge, supports neither recursion nor functions. The maze is also limited to 4x4 due to the limited memory available for remembering the path traveled. If we remove this path memory, the algorithm becomes much less intelligent at solving the maze.
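The recursive-to-iterative conversion amounts to managing the search stack by hand: instead of recursive calls, the robot pushes each newly entered cell onto an explicit array-based stack and pops a cell when it hits a dead end. A minimal C sketch of this idea follows; it assumes a hypothetical `walls` array standing in for what the robot actually discovers with its touch sensor, and the fixed 4x4 size reflects the RCX memory limit mentioned above.

```c
#include <assert.h>
#include <string.h>

#define SIZE 4   /* maze limited to 4x4 by available RCX memory */

/* Neighbor offsets in N, E, S, W order. */
static const int dx[4] = { 0, 1, 0, -1 };
static const int dy[4] = { 1, 0, -1, 0 };

/* Iterative depth-first search with an explicit stack, replacing the
 * recursion NQC cannot express.  walls[y][x] == 1 marks a blocked cell
 * (a stand-in for touch-sensor readings).  Returns 1 if a path from
 * (0,0) to (gx,gy) exists, 0 otherwise. */
int solve(const int walls[SIZE][SIZE], int gx, int gy) {
    int stack_x[SIZE * SIZE], stack_y[SIZE * SIZE];
    int visited[SIZE][SIZE];
    int top = 0;

    memset(visited, 0, sizeof visited);
    stack_x[0] = 0;
    stack_y[0] = 0;
    visited[0][0] = 1;

    while (top >= 0) {
        int x = stack_x[top], y = stack_y[top];
        if (x == gx && y == gy)
            return 1;                        /* goal reached */

        int moved = 0;
        for (int d = 0; d < 4 && !moved; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx >= 0 && nx < SIZE && ny >= 0 && ny < SIZE &&
                !walls[ny][nx] && !visited[ny][nx]) {
                visited[ny][nx] = 1;         /* remember path traveled */
                ++top;
                stack_x[top] = nx;           /* "recurse": push new cell */
                stack_y[top] = ny;
                moved = 1;
            }
        }
        if (!moved)
            --top;                           /* dead end: backtrack */
    }
    return 0;                                /* maze has no solution */
}
```

The `stack_x`/`stack_y` arrays and `visited` grid are exactly the memory cost that caps the maze at 4x4; on the robot, each push would correspond to driving into a new cell and each pop to physically backtracking.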
We have been exploring other options for interacting with the environment, but so far this approach seems the most feasible given the current constraints of the RCX and NQC. We have dabbled with the idea of using the RCX to send out an infrared signal and measure the time before it is reflected; however, the signal is not very directional. We have also considered placing markings on the maze floor at choice points, but this requires more movements and more sensors. Another idea is to mount light sensors on the sides, like whiskers, in addition to our touch sensor and grid sensor. These would serve to detect alternative pathways, but would require one more light sensor, for a total of three, plus one touch sensor in the front. This seems like the best solution so far, but we would need an additional RCX and would need to make the two RCXs talk to each other.
The movement mechanism has also been a subject of debate. We originally used tractor treads, but the robot slipped too much during turns and ended up off center. The current wheel mechanism also has some minor problems turning.