CSE 151 Final Project

Development History

     The main goal of our project was to create an artificial life form and have it interact with its environment in an intelligent manner. We believed that placing a robot built out of Lego Mindstorms in a maze and having it find a solution would represent this experiment. For us, the level of intelligence we wished the robot to achieve was the ability to map the maze and therefore know where it has been, which ways it cannot go, and to remember new possible paths while exploring a current path. This means the robot must reach a cognitive level of intelligence rather than a merely reactionary one. However, as it turned out, the reactionary level was harder to implement than the cognitive level.

    To tackle the problem of implementing an "intelligent" maze-solving algorithm, we decided to first use actual walls. We thought this would be a more "real world" maze than one made only of dead-end lines. However, in order to map the environment we decided to place grid lines on the ground so the robot could know its relative position in the maze. This maze representation remained the same throughout the experiment, with the exception of the addition of gray lines whose purpose will be explained later.

    The next phase was to decide what type of programming language we would use. At the start of the project we only had knowledge of the Lego Brick Language and Not Quite C (NQC). We decided to use NQC to allow us greater programming freedom. However, NQC has a number of limitations, including a limited number of variables, no recursion, and no arguments in function calls. We had to redesign our maze-solving algorithm as an iterative algorithm and limit the maze size to 4x3. The larger the maze, the greater the potential for backtracking to untried paths, which requires a larger stack.
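    The sketch below illustrates the approach, assuming RCX2 firmware so that NQC arrays are available; the names and the heading encoding are only illustrative, not our exact code. Since NQC has no recursion, the depth-first exploration keeps its own fixed-size stack of moves, and since NQC subroutines take no arguments, data is passed through globals.

    // Sketch only: replacing recursion with an explicit, fixed-size stack in NQC.
    #define MAX_DEPTH 12          // a 4x3 maze has 12 cells, which bounds the path length

    int path[MAX_DEPTH];          // heading taken out of each visited cell (0=N, 1=E, 2=S, 3=W)
    int top;                      // stack pointer
    int heading;                  // subs take no arguments, so data is passed in globals

    sub push_move()               // record the heading taken out of the current cell
    {
        path[top] = heading;
        top++;
    }

    sub pop_move()                // backtrack: recover the heading we came in on
    {
        top--;
        heading = path[top];
    }

    task main()
    {
        top = 0;
        heading = 1;              // e.g. record a move east out of the start cell
        push_move();
        pop_move();               // and undo it when that path dead-ends
    }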

    We decided to decompose the problem into smaller tasks and build on top of them. The first step was to design the means of locomotion. Since the robot needed to turn 90° accurately, our first inclination was to use tractor treads. However, this failed miserably. The robot not only failed to turn 90°, but also failed to travel straight. We came to the conclusion that too much slippage occurred when using the tractor treads. Therefore, we needed to redesign the robot. This time we used two large wheels in the rear and one swivel wheel in the front. To our dismay, this also failed. However, when we replaced the front swivel wheel with a skid plate, we saw much better results. After playing with the motor speed and turning time we obtained a robot that turned close to 90° and went fairly straight, or so we thought.
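    As a rough illustration, the timed pivot turn came down to a few lines of NQC like the sketch below; the power and timing constants shown are placeholders for values we found by trial and error.

    // Sketch of the hand-tuned 90-degree pivot (NQC). OUT_A and OUT_C drive the
    // left and right wheels; the constants are illustrative, not our final values.
    #define TURN_SPEED 3          // motor power, 0-7
    #define TURN_TIME  85         // Wait() counts in hundredths of a second

    task main()
    {
        SetPower(OUT_A + OUT_C, TURN_SPEED);
        OnFwd(OUT_A);             // spin the wheels in opposite directions
        OnRev(OUT_C);             // so the robot pivots in place
        Wait(TURN_TIME);
        Off(OUT_A + OUT_C);
    }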

    At this point we had finished the maze algorithm and needed to debug it. To do this we needed a robot that could interact with its environment. This was our biggest and most challenging task. Our initial idea was to have one light sensor pointing down to read the gridlines and a touch sensor in the front to detect walls. However, we could not turn 90° reliably enough, nor could we travel straight enough, to accomplish this. After doing some research on the web and in class, we discovered the language legOS, which seemed to be more powerful than NQC. We decided to switch to legOS to eliminate the restrictions on memory and recursion. However, we ended up wasting a lot of time on legOS, far more than we should have. A large part of this was due to the failure of the web-based compilers and having to set up our own compiler on our machine.

    We were still having problems interacting with the environment, and after further web searching we came across the idea of using the IR port as a radar to detect objects. We thought this could be a valuable tool and decided to try to implement it in legOS. After failing many times, even with example IR code, we decided it would not work. However, we stumbled upon a radar robot on the web built using NQC. We tried this code and it worked great. We did some further testing with legOS and found that many simple processes were failing as well. After a series of experiments, we isolated a problem: in legOS, if you have multiple processes running and one enters a tight loop, it soon takes over and becomes the only process running. We solved this by inserting sleep statements in certain parts of the code so that no process was starved out. After encountering this problem, which other students had encountered as well, and finding that IR did work in NQC, we decided to switch back to NQC despite its limitations.

    We still needed to make the robot interact with the environment in a fairly predictable manner. We designed a program that used IR to navigate and avoid objects. In our design we used three light sensors: the middle one to detect close objects, and the left and right ones to determine the best direction to turn in order to avoid the object. When the robot approached an object well off center, it was able to determine which way to turn. Our hope was to use this to turn the robot when it reached the end of a corridor and an exit existed on one of the sides. However, we soon learned that if the robot was traveling straight toward a wall, it would not choose the right direction. This brought our attention to another problem: sometimes, when turning, the robot would be too close to an object for the IR to detect it. Also, more often than not, the sides of the robot would hit the wall. Therefore, we decided to add a front touch sensor and two side touch sensors angled 45° to the left and right of the robot's front center. After adding the appropriate code, we could use the IR and the three touch sensors to have the robot wander around the environment without getting stuck. It could navigate the maze in a random manner, and given enough time it would find its way out. This was a reactionary organism, the first step toward our cognitive organism, or so we thought.
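    The touch-sensor part of that wandering behavior boiled down to a reactive loop along the lines of the NQC sketch below; the sensor port assignments and the timing constants are illustrative rather than our exact wiring and values.

    // Sketch of the reactive "wander and avoid" behavior using the touch sensors (NQC).
    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);   // front bumper
        SetSensor(SENSOR_2, SENSOR_TOUCH);   // left 45-degree bumper
        SetSensor(SENSOR_3, SENSOR_TOUCH);   // right 45-degree bumper

        while (true)
        {
            OnFwd(OUT_A + OUT_C);            // default behavior: drive forward

            if (SENSOR_1 == 1)               // hit something head on: back up and turn
            {
                OnRev(OUT_A + OUT_C);
                Wait(50);
                OnFwd(OUT_A);
                OnRev(OUT_C);
                Wait(85);
            }
            else if (SENSOR_2 == 1)          // clipped the left side: veer right
            {
                OnRev(OUT_A + OUT_C);
                Wait(25);
                OnFwd(OUT_A);
                OnRev(OUT_C);
                Wait(40);
            }
            else if (SENSOR_3 == 1)          // clipped the right side: veer left
            {
                OnRev(OUT_A + OUT_C);
                Wait(25);
                OnFwd(OUT_C);
                OnRev(OUT_A);
                Wait(40);
            }
        }
    }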

    At this time, J.P. was talking to his uncle about our robot, and his uncle said it sounded like we were attempting to use subsumption architecture to build our robot. After looking into it, we determined that it was in fact the approach we wanted to use, along with some other theories of building autonomous robots by Rodney Brooks. The idea behind subsumption architecture is that there are levels of competence for an autonomous robot. These are, in order: avoid contact with objects; wander aimlessly around without hitting things; explore the world; build a map of the environment and plan routes from one position to another; and some others which are not relevant to our project. (Brooks, pp. 351-2) In subsumption architecture, you build control layers that correspond to the competence levels and stack the control structures on top of each other. The higher control structures can take control of the lower ones: "Control is layered with higher level layers subsuming the roles of the lower level layers when they wish to take control." (Brooks, p. 353) Therefore, a good way to achieve this is to start in a complex environment and perform simple tasks, slowly increasing their complexity. This was the approach we were taking, and we therefore decided to use subsumption architecture.
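    In NQC terms, the layering can be approximated with separate tasks, where an upper layer stops and restarts a lower one when it wants the motors. The sketch below is only a loose illustration of that mechanism, not a faithful rendering of Brooks's levels or of our final program.

    // Sketch of one control layer subsuming another in NQC (ports and timings illustrative).
    task main()
    {
        SetSensor(SENSOR_1, SENSOR_TOUCH);
        start wander;
        start bump_layer;
    }

    task wander()                      // lower layer: wander forward aimlessly
    {
        while (true)
        {
            OnFwd(OUT_A + OUT_C);
            Wait(100);
        }
    }

    task bump_layer()                  // upper layer: subsumes wander on contact
    {
        while (true)
        {
            until (SENSOR_1 == 1);
            stop wander;               // take control of the motors
            OnRev(OUT_A + OUT_C);
            Wait(50);
            OnFwd(OUT_A);
            OnRev(OUT_C);
            Wait(85);
            start wander;              // hand control back to the lower layer
        }
    }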

    However, we still had the problem of mapping the environment. We had reached the second level of competence and needed to achieve the third. We needed to move one of the light sensors so it could read the gridlines on the ground. Since the left-right detection with IR was not working very well at all, we decided to scrap it and go strictly with the angled touch sensors. However, the front IR was not working well either; it was very inconsistent. The stopping distance from objects varied greatly and was affecting the turning time of the robot. Since the maze was very tight, it was critical that the robot turn 90°. We decided we could not live with this amount of uncertainty and had to reduce it, using a combination of the "fight it" and "accommodate it" strategies: we would bring the uncertainty down some and accommodate the rest. To do this we decided to use a second set of gridlines, which would help the robot turn 90°. This allowed us to eliminate the angled touch sensors and also allowed us to lower the walls. Lowering the walls was important, since we used a light sensor to detect the gridlines: the higher the walls, the larger the shadow and thus the greater the range of light values across the maze.
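    The idea behind the second set of gridlines was to end the turn on a sensor reading rather than on a timer, roughly as in the NQC sketch below; the sensor port and the threshold value are illustrative, since the real threshold depended on the lighting.

    // Sketch of squaring up a turn against a gridline (NQC).
    #define LINE_THRESHOLD 40            // raw light reading; below this we call it "line"

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);   // down-facing light sensor

        OnFwd(OUT_A);                        // pivot in place
        OnRev(OUT_C);
        until (SENSOR_2 < LINE_THRESHOLD);   // stop when the turn line appears
        Off(OUT_A + OUT_C);
    }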

    The front IR was still very inconsistent and was not helping, so we decided to eliminate it altogether and go with only a front touch sensor. This meant we were back to our original plan of using one light sensor to detect the gridlines and a touch sensor to detect objects.

    At this point we decided our sensor and robot designs were good and the environment was appropriate, so we finalized the environment: we painted the gridlines and cut out the walls. However, as it turned out, the paint was a problem. It was very sticky, and the robot did not have enough power to move along it. In addition, we had used masking tape to keep the gridlines straight while painting, and the glue from the tape melted onto the board, causing the robot to get stuck. We decided to use clear tape to cover the sticky areas, but this changed the reflectivity of the board and paint, so we could not cover all of the sticky spots.

    In addition to getting stuck in the glue, the robot tended to get stuck while turning, because it was too big for the squares of the maze. We decided to redesign the robot. To make it shorter, we needed to move the motors on top of the RCX in order to shorten the distance from the axis of rotation to the front. After redesigning the gear ratio and connections, we discovered new problems: a lack of power and too much speed. The large gear also lacked support, so it shook violently, sometimes causing the RCX to shut down. We redesigned it again and still had the power and speed problems, so we redesigned the robot once more. This time the robot was wider but incredibly sturdy, and it had good power and a slower speed. We also added a restart-button touch sensor and modified the code so we could restart the robot from its current position instead of having to send it back to the start and wait for it to get back to where it was. Now that the robot could map and interact with its environment, we were finally able to start testing its "brains".
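    One plausible reading of how the restart button fit into the main loop is sketched below in NQC: pressing it stops the motors so we can free the robot, and a second press resumes the run from the current position with the internal state intact. The port assignment and the loop structure are illustrative only, not our exact code.

    // Sketch of the restart button (NQC).
    task main()
    {
        SetSensor(SENSOR_3, SENSOR_TOUCH);   // restart button

        while (true)
        {
            // ... a normal maze-solving step would go here ...

            if (SENSOR_3 == 1)               // robot is stuck and the button was pressed
            {
                Off(OUT_A + OUT_C);          // stop the motors while we reposition it
                until (SENSOR_3 == 0);       // wait for the button to be released
                until (SENSOR_3 == 1);       // second press: continue from here
                until (SENSOR_3 == 0);
            }
        }
    }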

    During debugging, we discovered that the walls were still casting too much of a shadow. We decided to lower the touch sensor so the walls could be made lower. With our new design we were able to reduce the walls to one sixth of their original height. This greatly improved the light readings, but there were still problems reading the gray lines. We decided to repaint some of the gray lines, this time without spray paint. This improved our results tremendously, so we repainted all of the gray lines. After this we only needed to use the restart button about once out of every five maze runs, usually when the robot got stuck in the crack connecting the two pieces of the maze board.
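    Reading the board reliably came down to splitting the light readings into bands, roughly as in the NQC sketch below; the cutoff values are illustrative and had to be recalibrated whenever the lighting or the wall shadows changed.

    // Sketch of classifying the down-facing light readings (NQC).
    #define GRAY_MAX  55     // at or below this (and above BLACK_MAX): gray line
    #define BLACK_MAX 40     // at or below this: dark grid line

    int reading;
    int surface;             // 0 = bare board, 1 = gray line, 2 = dark line

    task main()
    {
        SetSensor(SENSOR_2, SENSOR_LIGHT);

        while (true)
        {
            reading = SENSOR_2;
            if (reading <= BLACK_MAX)      surface = 2;
            else if (reading <= GRAY_MAX)  surface = 1;
            else                           surface = 0;
        }
    }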

    After final debugging and increasing the stack size, we obtained a robot that could solve the maze. In fact, it can solve any 4x3 maze, provided it starts in the upper left square facing down or the lower right square facing down (due to the way we designed the maze-solving algorithm).
 

