CSE 260 Homework Assignment #3: Connected Component Labeling

Due: Friday 11/1/02 at 2PM

In this laboratory you'll experiment with an existing connected component labeling algorithm supplied to you, and enhance it to improve performance. You'll also conduct some parameter studies to understand the performance issues.  For this assignment you'll run in single processor mode only.  But as you enhance the code, keep in mind that you will parallelize the code in assignment #4.

Refer to the Leighton handout for reference.

Revised Sat Oct 26 00:16:10 PDT 2002

Connected component labeling

A  basic serial implementation of the connected component labeling algorithm described in the handout is available on valkyrie in ~/../public/examples/hw3.   This code has significant performance bugs, and won't scale to large meshes with many thousands of points. 

A random number generator is included.  Note the command line options, which are handled in the source file cmdLine.C. In particular, you can seed the random number generator with  an arbitrary value, or you can use the time of day.  The program outputs the seed so that you will be able to reproduce a given run.
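For illustration only (this is not the assignment's actual cmdLine.C, and it assumes the POSIX drand48 family), seeding from either a supplied value or the time of day, and echoing the seed, might look like:

```cpp
#include <cstdio>
#include <cstdlib>
#include <ctime>

// Hypothetical sketch: pick a seed either from a user-supplied value
// (as the -s flag does) or from the time of day when none is given,
// seed the generator, and echo the seed so that any run can later be
// reproduced by passing the printed value back in.
long applySeed(long requested) {
    long seed = (requested != 0) ? requested : (long) time(NULL);
    srand48(seed);                       // seed the drand48 family
    printf("Random seed: %ld\n", seed);  // echo for reproducibility
    return seed;
}
```

Two runs that report the same seed then draw identical random sequences, and hence build identical meshes, which is what makes timing runs reproducible.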

The code considers points to be adjacent only if they are nearest neighbors on the Manhattan coordinate directions-- left, right, up, and down. This is different from the stencil used in the book, which includes corners. If you like, experiment with the 9-point stencil, but be sure to present results for the Manhattan stencil.
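As a sketch (the function name is hypothetical, not from the code provided), the Manhattan adjacency test reduces to checking that two points differ by one step in exactly one coordinate direction:

```cpp
#include <cstdlib>

// Hypothetical sketch of the Manhattan stencil: two mesh points are
// adjacent only if they are nearest neighbors to the left, right, up,
// or down -- a Manhattan distance of exactly 1.  Diagonal (corner)
// neighbors, which the book's stencil would admit, are excluded.
bool manhattanAdjacent(int i1, int j1, int i2, int j2) {
    int di = abs(i1 - i2), dj = abs(j1 - j2);
    return di + dj == 1;
}
```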

The assignment

Determining the critical point

First determine pc, the critical value of the independent probability p that maximizes the running time. It may be easier to pinpoint pc by looking at tabular data; sample p more closely in the neighborhood of criticality so that you can determine pc to 3 decimal places. Include the tabular data with your report, and also plot the running time as a function of p.

Since the initial conditions are randomized, you'll need to take steps to ensure that your timings are statistically significant. For each value of p, make several (5 to 10) runs, each with a different random number seed. Plot the mean value with a curve, but also plot the maximum and minimum values for each value of p you measured. You should also repeat a few of your runs using the same random number seed, to see whether your timings are consistent. To do this, use the -s flag to set the random number seed to the value reported by the reference run you wish to reproduce.

To determine an appropriate value of N, experiment by starting with N=100 and doubling N until the run completes in about 10 seconds at criticality. If you are using Valkyrie, observe the following protocol, which may help improve the consistency of your results. If you see that others are on the machine, try to use a node that appears unoccupied. Stick with that node, and make all your timing runs from a single script. That way, if someone sees that the node is busy, they'll stay away for the duration of your runs.

Performance Optimization

Before you implement the depth first algorithm, attempt to improve the running time of the code provided. To save time, run for only a few values of p, but be sure to include a few points in the neighborhood of criticality.

Depth first search

Now implement the depth first search algorithm.  You may find that  labeling time has improved to the extent that you will need to attempt much larger runs.  Fixing p at criticality, scale N until your runs take about 5 seconds, and plot your results.  Repeat the criticality plot of the previous section, and note any differences. 
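One possible shape for the depth first labeling pass is sketched below (the names and mesh representation are assumptions, not the code provided). An explicit stack is used rather than recursion, since at criticality a single cluster can span most of an N x N mesh and overflow the call stack:

```cpp
#include <stack>
#include <utility>
#include <vector>

// Hypothetical sketch: label the connected components of occupied
// sites (mesh[i][j] == 1) by depth first search under the Manhattan
// stencil.  Each unvisited occupied site starts a new cluster, and an
// explicit stack floods that label to every reachable neighbor.
std::vector<std::vector<int>>
labelComponents(const std::vector<std::vector<int>> &mesh) {
    int n = (int) mesh.size();
    std::vector<std::vector<int>> label(n, std::vector<int>(n, 0));
    const int di[] = {1, -1, 0, 0}, dj[] = {0, 0, 1, -1};
    int next = 0;
    for (int si = 0; si < n; si++)
        for (int sj = 0; sj < n; sj++) {
            if (!mesh[si][sj] || label[si][sj]) continue;
            ++next;                                // new cluster label
            std::stack<std::pair<int,int>> s;
            label[si][sj] = next;
            s.push({si, sj});
            while (!s.empty()) {
                auto [i, j] = s.top();
                s.pop();
                for (int d = 0; d < 4; d++) {      // Manhattan stencil
                    int ni = i + di[d], nj = j + dj[d];
                    if (ni < 0 || ni >= n || nj < 0 || nj >= n) continue;
                    if (!mesh[ni][nj] || label[ni][nj]) continue;
                    label[ni][nj] = next;
                    s.push({ni, nj});
                }
            }
        }
    return label;
}
```

Each site is pushed at most once, so the pass runs in time linear in the number of mesh points, independent of p.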

Next, use your code to determine some statistics about the cluster populations. Report the number of clusters as a function of the independent probability p, as well as the mean and standard deviation of cluster size. Generate these statistics for a few different values of N. Do you observe any trends as you vary N?
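Given a labeled mesh, the statistics can be tallied in a single pass over the labels. A sketch (assuming labels run 1..k, with 0 marking empty sites; the names are hypothetical):

```cpp
#include <cmath>
#include <vector>

struct ClusterStats { int count; double mean, stddev; };

// Hypothetical sketch: from a labeled mesh (0 = empty, labels 1..k),
// tally the size of each cluster, then report the cluster count and
// the mean and standard deviation of the size distribution.
ClusterStats clusterStats(const std::vector<std::vector<int>> &label) {
    int maxLabel = 0;
    for (const auto &row : label)
        for (int v : row)
            if (v > maxLabel) maxLabel = v;
    std::vector<long> size(maxLabel + 1, 0);       // per-cluster sizes
    for (const auto &row : label)
        for (int v : row)
            if (v > 0) size[v]++;
    double sum = 0, sumSq = 0;
    for (int c = 1; c <= maxLabel; c++) {
        sum   += (double) size[c];
        sumSq += (double) size[c] * size[c];
    }
    double mean = maxLabel ? sum / maxLabel : 0;
    double var  = maxLabel ? sumSq / maxLabel - mean * mean : 0;
    return { maxLabel, mean, std::sqrt(var > 0 ? var : 0) };
}
```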

Things you should turn in

You should document your work in a well-written report of about 5 pages, not including code listings or appendices.   Include two appendices. The first  should contain the performance data for the clustering algorithm. Any plotted data should also be included in tabular form. In the second appendix, submit a listing of your software. You should include both the enhanced iterative method as well as the depth first search method.

Your writeup should provide sample output demonstrating correct operation of your code (Note that the code provided will print out the clusters for smaller domains). It should also present a clear evaluation of the performance, including bottlenecks of the implementation, and describe any special coding or tuned parameters.   What factors will limit performance as you increase N? 

You should also turn in an electronic copy of your report and code.

Copyright © 2002 Scott B. Baden. 10/25/02 11:20 PM