Mindstorms - Final Project

 

Stephen Bowman

James Lewis

 

CSE 151 Artificial Intelligence

Prof. Rik Belew

Spring 1999

 

 

 

 

Introduction

 

We used LEGO's "Mindstorms Robotics Invention System" to build a robot that pushed balls around on a playing field, applying heuristic techniques. The robot had two motor-powered wheels, two light sensors, a bumper bar and a control unit. We programmed the robot using the Not Quite C programming language (hereafter NQC). We also used a graphical interface tool called RCX Command Center (hereafter RCXCC). The robot operated in a marked-off playing field containing several aluminum-wrapped tennis balls. Its task was to find the balls and push them over the edge of the playing field. After developing a program that performed the task successfully, we studied the robot's success under varying environmental conditions and equipment configurations.

 

Appendix A contains the initial proposal we submitted for this assignment. We have largely achieved the goals we set out in that document.

 

 

The Environment

 

The car operated in an artificial environment. We manipulated the precise configuration of the environment during our experiments. Figure 1 shows one configuration of the environment. Each configuration we used had the following elements:

 

         A carpeted surface.

         Edges marked with 18mm wide white tape.

         Dimensions: 1290mm by 2010mm.

         Controlled light level: either almost completely dark or bright, overhead light.

         5 aluminum foil covered tennis balls.

         A robot.

 

Figure 1 - Robot performing its task in a dark environment.

 

 

The Task

 

The robot's task was to move around, find the 5 balls and eliminate them from the environment. A ball was eliminated when the robot pushed it outside the playing field. The robot had to determine the position of a ball by moving around and scanning for changes in light intensity. The robot itself was not allowed out of the playing field. A photograph of the robot and its environment (the taped-off region) can be seen in figure 2.

 

Figure 2 - Searching for a ball in the environment.

 

 

Difficulty of the Task

 

There were a number of factors that made this task difficult. In this section we consider some of these general factors. In the section on 'tips, tricks and quirks' we outline the specific problems we faced and discuss how we solved them.

 

The first difficulty relates to the complexity of the environment. Russell and Norvig (1995, p 46) classify environments along 5 dimensions: accessibility, determinism, episodes, dynamics, and discreteness. In our scenario the robot operates in an inaccessible environment: it only has access to its immediate vicinity, a circle at most 450mm in radius. The environment is non-deterministic: the inaccuracy of the sensors and the unpredictability of the motors' speed mean that the robot cannot be certain of the results of a particular decision in response to given inputs. The environment is nonepisodic: the robot's task consists of moving around on a two-dimensional plane, so its actions depend upon those preceding the current state. The environment is static, because the robot creates all changes that occur; it is able to stop and decide what to do before performing the next action, without looking for changes in the environment while it is deciding. The environment is continuous: as the robot moves, its percepts sweep through a range of continuous values, and at any point there are an infinite number of possible actions, e.g. turn right x degrees or go forward for y seconds. Against these measures of complexity this apparently simple environment scores 4 out of 5.

 

The sensors that come with the Mindstorms kit are not very accurate, sensitive or focused. Our greatest problem was how to apply general-purpose sensors to specific problems. The light sensors are sensitive to both light color and intensity, have an unfocused field-of-view of 180 degrees and do not separate input into discrete time intervals or values. To achieve the task it was necessary to adapt the sensors, robot, program and environment to work around these limitations.

 

There were a number of problems relating to the programming language NQC. This language is not as expressive as general-purpose programming languages: for example, there is no floating point arithmetic, and there are limits on the number of variables and tasks. This restricted the types of solutions that could be attempted, e.g. sophisticated neural nets and complex algorithms. Also, NQC is not very well documented. The help files contain little information about tasks: for example, how they interact, whether they die when they reach the end, or whether multiple instances of the same task can run concurrently.

 

 

The Robot

 

We built the robot using pieces from 4 Mindstorm kits: 2 main 'Robotics Invention Kits' and 2 'Expansion kits'. Pictures of the finished robot can be seen in figures 3 and 4. The key features of the robot include:

 

         2 motor powered wheels, used for movement and turning.

         A flashlight, used to illuminate the aluminum balls. The flashlight had a focused beam, diverging 150mm over the length of the playing field.

         The RCX, which runs the program and processes data received from the environment via the sensors.

         A chassis to hold the flashlight, RCX, motors and sensors.

         A bumper-bar connected to two touch sensors.

         A forward-facing light sensor mounted on the flashlight; if the light intensity increases by a certain amount above the initial value, that indicates a ball.

         A floor-facing light sensor used to detect the edge of the playing field.

         Skids at the front and back to reduce friction with the floor.

         Dimensions: 270mm long, 260mm wide and 120mm high.

         An optional helmet to protect the forward facing light sensor from ambient light.

 

Figure 3 - Our robot.

 

Figure 4 - Our robot front view.

 

 

The Light Sensor

 

The most difficult task faced by our robot was detecting the balls in the playing field. Critical to this task is the effectiveness of the light sensor. In order to improve its accuracy, we attached a flashlight to our robot that shone in the direction that the light sensor was taking readings, which better illuminated the reflective balls. We also added a hood over the light sensor to shield it from ambient light.

 

There were many distractions in our environment that caused problems for our robot in completing its task. Objects outside the playing field were also reflective. Since a reflective surface is what distinguishes a ball from other objects, the robot would sometimes head towards an object outside the playing field.

 

In order to determine the best threshold value for sensing a ball, we staged several experiments. The first was to position all the balls at various distances from the robot. We then rotated the robot through a full 360-degree circle and recorded the light intensity at various angles. Balls were placed at the following angles and distances:

 

Ball #1 - 5 Degrees, 71 cm

Ball #2 - 40 Degrees, 26 cm

Ball #3 - 50 Degrees, 75 cm

Ball #4 - 65 Degrees, 184 cm

Ball #5 - 100 Degrees, 112 cm

 

We could then determine what the robot was most likely to react to in its environment. We ran this experiment for four different situations:

 

  1. The light in the room was on and the light sensor was shielded with the hood.

  2. The room was dark and the light sensor was shielded with the hood.

  3. The light in the room was on and no shield was used.

  4. The room was dark and no shield was used.

 

 

As seen from the graphs, the fourth situation should provide the best results, because the robot was able to recognize balls that were placed further away. It also showed a large variance between the average light intensity and the readings taken when pointed at the balls. From these readings we decided that the threshold for detecting a ball should be four units above the initial reading from the light sensor. Using this threshold value we determined that the robot should be able to detect a ball approximately 45cm away.
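In the program this calibration amounts to only a few lines. The following sketch shows how such a threshold could be set up in NQC; the names initial_in2 and DELTA_IN2 match those used later in this report, but the fragment is illustrative rather than the exact code in Appendix B, and it assumes higher sensor readings mean brighter light:

#define DELTA_IN2 4              // a ball reads at least 4 units above ambient

int initial_in2;                 // ambient light level, sampled at start-up

task main{
    Sensor(IN_2, IN_LIGHT);      // forward facing light sensor
    initial_in2 = IN_2;          // record the initial (ambient) reading
    // from here on, IN_2 > initial_in2 + DELTA_IN2 indicates a ball
}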

 

Although we had originally thought that shielding the light sensor from ambient light would increase its effectiveness, this did not prove to be the case. We believe the hood was catching stray light from the flashlight. This can be seen in the overall lower light intensity values for situations three and four, where no hood was used.

 

 

The Program

 

The program to control the robot is written in NQC. Due to the coarseness of the sensors and memory limitations we implemented what Russell and Norvig (1995, p 40-42) describe as a "simple reflex agent with state". Given inputs and state information, such an agent effectively performs a form of lookup to determine the next action. We maintain state by remembering past changes to motor direction, whether a ball has been detected, whether a line has been touched and how many balls have been found. When making a decision the robot has no real notion of a goal. It does not consider various options for how to achieve its task of removing all the balls; rather it keeps acting as coded until it has removed all the balls or until it times out.

 

At first we implemented some low-level learning, for example, undertaking an initial setup phase to determine the ambient light level and the robot's turning speed. However, we found the gains were negligible when compared to hard-coded rules of thumb. For instance, we trialled an initial learning period to determine the light level reflected by a ball versus the ambient light level or the light reflected by an object close to the playing field. We found that this learning made assumptions that were not valid, for instance that the amount of light reflected by a ball remains the same as the robot moves further away.

 

Appendix B contains the source code for the robot program.

 

Tasks in NQC are a little like threads in Java. It is possible to run tasks concurrently, stop and pause tasks (through the use of semaphores), and to have tasks interact. The program we implemented contains 6 interacting tasks, each ensuring that the robot performs the task correctly.

 

main: Sets up sensors, initializes variables, starts initial tasks and waits until either: all balls have been removed or the operation has timed out, then it stops the robot.

 

spin: Makes the robot spin clockwise until either:

1. it sees a ball, or

2. the spinning times out without a ball being seen.

If 1 occurs, task get_ball is started to get the ball; if 2 occurs, task rand_move is started to move the robot randomly.
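In outline, spin can be sketched as the following NQC fragment. The motor commands and the constant SPINTIME are illustrative, not the exact source in Appendix B, and the fragment assumes higher readings on IN_2 mean brighter light:

task spin{
    ClearTimer(0);
    Fwd(OUT_A);                  // drive the wheels in opposite
    Rev(OUT_C);                  // directions to spin on the spot
    On(OUT_A + OUT_C);
    wait( (IN_2 > initial_in2 + DELTA_IN2) || (Timer(0) > SPINTIME) );
    Off(OUT_A + OUT_C);
    if (IN_2 > initial_in2 + DELTA_IN2)
        start get_ball;          // case 1: a ball was seen
    else
        start rand_move;         // case 2: the spin timed out
}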

 

hit_edge: Detects when the edge has been hit. When an edge is detected then the motors are stopped and turn_around is called.

 

hit_ball: Detects when the robot has collided with a ball. This task can run concurrently with other tasks, detecting when they cause the robot to hit a ball; hit_ball does not assume that a ball has been spotted first. When a ball is hit, the motors are set to forward, and hit_edge detects when the robot reaches the edge with the ball.

 

rand_move: Started when the robot is unsuccessful in finding a ball while spinning on the spot. As implemented it is not really a random movement; it does not even cause the robot to change direction. Because the robot has just stopped spinning, and spins at an unpredictable rate, we did not find it helpful to spin it again.

 

get_ball: Started when the robot has spun and detected a light source above a certain threshold. The robot simply starts moving straight towards the light source. hit_ball is used to detect if it hits a ball; hit_edge is used to detect when the edge is reached.

 

turn_around: Started when the edge is detected; it ensures that the robot remains inside the playing field. turn_around uses information about the direction the motors are turning to decide what action to take: if the motors are stopped or going forward, the car is reversed; if the car is reversing, the motors are set to go forward; if the robot is spinning, it spins in the opposite direction. This task is designed so that if these movements cause the robot to touch another edge, the robot can recover. How much the robot turns in each of these situations is generated randomly.
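The decision in turn_around reduces to a small lookup on the remembered motor state. A sketch follows; the variable motor_state and its constants are our own illustrative names, not the exact code in Appendix B:

task turn_around{
    Off(OUT_A + OUT_C);
    if ((motor_state == STOPPED) || (motor_state == FORWARD))
    {
        Rev(OUT_A + OUT_C);          // back away from the edge
    }
    else if (motor_state == REVERSING)
    {
        Fwd(OUT_A + OUT_C);          // drive back into the field
    }
    else                             // spinning: reverse the spin
    {
        Toggle(OUT_A + OUT_C);       // swap both motor directions
    }
    On(OUT_A + OUT_C);
    Sleep( 50 + Random(100) );       // turn for a random amount of time
}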

 

The Trials

 

In order to test the ability of our robot to perform its task we ran several experiments. For each experiment the balls were placed in the same positions in the playing area, and the robot was started in the same location, facing the same initial direction. (See Figure 5.) We ran the experiments for the same four situations as when we tested the light sensor. We used time as our performance metric: a timer started when the robot was activated and stopped when the robot had either removed all five balls or timed out after failing to find a ball for 90 seconds. We ran only three trials per situation so that battery drain would not make the later trials take longer than the earlier ones. The results are as follows:

 

Situation #1: Lighted room with a hood on the light sensor.

Trial #1: 75 sec.

Trial #2: 140 sec. (Timed out after removing 4 balls)

Trial #3: 127 sec. (Timed out after removing 3 balls)

 

Situation #2: Darkened room with a hood on the light sensor.

Trial #1: 127 sec.

Trial #2: 125 sec. (Timed out after removing 4 balls)

Trial #3: 138 sec. (Timed out after removing 4 balls)

 

Situation #3: Lighted room with no hood.

Trial #1: 97 sec.

Trial #2: 107 sec. (Timed out after removing 4 balls)

Trial #3: 123 sec.

 

Situation #4: Darkened room with no hood.

Trial #1: 72 sec.

Trial #2: 73 sec.

Trial #3: 62 sec.

 

As predicted, the fourth situation proved the most reliable. It was the only one in which all of the balls were removed in every trial.

 

Figure 5 - The initial positioning of the robot and balls.

 

 

Tips, Tricks, Information and Quirks.

 

This section contains tips, tricks, information and quirks of NQC that we wish we had known before we started development.

 

Expect the Unexpected and Assume Nothing.

 

This is the most important tip to remember when working with NQC. A robot can work (near) perfectly for many repetitions in a given situation, then suddenly behave erratically in that very same situation. On one occasion the robot began executing a program that we had not used in days, even though we had cleared the RCX's memory several times.

 

Reduce Concurrent Tasks.

 

We found that it was best to reduce the number of tasks running concurrently. We achieved this primarily by explicitly stopping and merging tasks. Initially we had 10 tasks, which is the maximum allowed in NQC. We designed the program so that each distinct phase of movement had an "initiator and a monitor". Task A would initiate some actions and task B would monitor task A and the inputs. When some condition was satisfied or violated, B would stop A and initiate some further actions, or another task. For example, task spin would start the motors in opposite directions and initiate task "stop_spin", which would monitor the input and a timer in order to stop "spin" at the correct time.
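In outline, an initiator/monitor pair looked something like the following; the names, conditions and motor commands are illustrative rather than our original source:

task spin{                       // initiator: begins the action
    Fwd(OUT_A);
    Rev(OUT_C);
    On(OUT_A + OUT_C);
    ClearTimer(0);
    start stop_spin;             // hand control to the monitor
}

task stop_spin{                  // monitor: watches inputs and a timer
    wait( (IN_2 > initial_in2 + DELTA_IN2) || (Timer(0) > SPINTIME) );
    stop spin;
    Off(OUT_A + OUT_C);
    // ...start the next initiator task here
}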

 

Although this structure seemed the most general and modifiable, it made the interactions between the tasks too complex and unpredictable. The robot would work for a while, then become erratic when a "non-standard" condition was encountered, either within the program or in the environment. Furthermore, because instructions from concurrent tasks are interleaved, debugging becomes very difficult. Datalogging was ineffective because monitoring the active tasks required an entry for each statement in the program, which made for a very large datalog.

Sleep( ) Uses a Different Scale to Timer( ).

 

Sleep( ) counts time in 1/100ths of a second; Timer( ) counts in 1/10ths of a second. The RCXCC help indicates that both count in 1/100ths of a second. We used two timers to "timeout" operations after periods of inactivity. It seemed that the robot was entering an infinite loop, or had stopped a task without stopping its motors and without entering another task. It turned out that the robot was simply waiting 10 times longer than we thought.
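The practical consequence: to Sleep( ) for the same period a Timer( ) measures, the argument must be ten times larger. For example:

// Timer( ) ticks every 1/10th of a second,
// Sleep( ) counts in 1/100ths of a second.
wait( Timer(0) > 30 );   // continues after roughly 3 seconds
Sleep( 300 );            // also pauses for roughly 3 seconds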

 

Use Two Wheels and Skids to Turn on the Spot.

 

It was difficult designing a car that could spin on the spot and didn't turn too fast for the sensors. Tank tracks were not suitable because there is too much friction for the motors to drive the tracks in the opposite direction. We found that directly driven wheels, positioned halfway along the robot, were the best means of turning.

 

Don't use Values in Conditionals that are Updated or Changed by other Tasks

 

For example, consider the following code fragment with two tasks: update_value, which repeatedly updates "value", and stop_update_value, which waits until "value" equals 4, stops update_value, then writes the value to the datalog.

 

int value;

task update_value{

    start stop_update_value;
    while (true)
    {
        value = get_new_value;
    }
}

task stop_update_value{

    wait( value == 4 );   // 1
    stop update_value;    // 2
    Datalog( value );     // 3

}

 

You might expect that "4" would appear in the datalog, but this is not always the case. Although the condition on line 1 holds when that line is executed, value may be updated by update_value before that task is stopped at line 2. Hence, the value that appears in the datalog may not be "4".
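One way to make the fragment safe is to stop the writer first and only then test and use the shared variable; once update_value is stopped, the value can no longer change underneath us. A sketch of the corrected monitor task:

task stop_update_value{

    wait( value == 4 );      // value was 4 at this instant...
    stop update_value;       // ...stop the writer before using it
    if( value == 4 )         // re-check: the wait result may be stale
    {
        Datalog( value );    // now guaranteed to log 4
    }
}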

 

We encountered a similar problem when we wrote a version of "spin" that used a wait until either the light intensity was above the threshold or the operation timed out. In this case it was the input IN_2, rather than a variable updated by another task, that changed. It looked a little like this:

 

wait( ( IN_2 > initial_in2 + DELTA_IN2 ) || ( Timer(0) > SPINTIME ) );

if ( IN_2 > initial_in2 + DELTA_IN2 )
{
    // get ball
}
else
{
    // random move
}

 

The value of "IN_2" may differ between the evaluation of the "wait" conditional and the evaluation of the "if" conditional.

 

Spend Most Time and Code Avoiding Failure Conditions.

 

Given the limited amount of memory available in the RCX, we found it necessary to address the most likely risks of failure for the robot. For some tasks we traded optimality for simplicity, for instance, moving towards the first light source above a certain threshold rather than detecting the brightest source of light in a full 360-degree sweep. We found it necessary to use such gross rules of thumb in some situations to conserve memory for more critical areas, such as edge detection.

 

The most sophisticated part of the program handles movement when an edge is reached. Our early versions of the code simply reversed the car and spun it around when an edge was reached. This proved largely successful, but sometimes the robot reversed or spun onto the edge. In these situations the robot tended to reverse outside the playing field and move around the room, treating the outside of the border as the inside of the playing field. We found it necessary to address this risk properly: no matter how well the robot performed the rest of the task, its chance of failure was high if it approached an edge at an oblique angle or went towards a corner.

 

Use Incremental Development

 

The initial version of the program contained 10 tasks. When it ran it was impossible to understand what was happening. We solved this problem by starting again and adding one task at a time; we could do this because we had reduced the number of concurrent tasks. If the behavior was not as we expected, we were able to isolate the problem areas without having to consider the effects of many concurrent tasks. For example, we first developed a program that simply started the robot moving; when it hit an edge it turned around and kept moving.

 

Announce or Log Events rather than Displaying Inputs

Development requires frequent "tweaking" of parameters and testing to ensure the soundness of ideas. We soon discovered that rapid prototyping and event announcing or logging were the most effective methods. At one stage it appeared that the car was not detecting balls when they struck the front sensors. We were displaying the input level from the touch sensors, then running the robot along and looking to see if the value changed from 0 to 1 on the display. The car was hitting the ball, but no change was registering on the display. After spending time trying to work out why the sensors were not detecting the ball, we constructed a small program that started the motors and then beeped when a ball was detected. From this we found that the balls were being detected, but that the detection was not being displayed. The code for this test program is given below.

 

// test_touch - test touch sensor sensitivity

task main{

    Sensor(IN_1, IN_SWITCH);
    PlaySound( 1 );
    while( true )
    {
        if( IN_1 == 1 )
        {
            PlaySound( 1 );
            Sleep( 5 );
        }
    }
}

 

We later incorporated part of this code into the final program.

 

 

References

 

Russell, Stuart and Norvig, Peter (1995). Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River, New Jersey.

 

 

Appendix A - Initial Proposal

 

 

Appendix B - Source Code (NQC)