Week 2 (Jun 13 - Jun 17)
After analyzing and categorizing the existing issues in the vision system and highlighting the need to improve court detection, our goal this week was to dive deep into the court detection architecture and look for optimizations that could improve its performance. I spent a few days analyzing the code and generating intermediate results so that I could isolate each function from the next and methodically hunt for optimizations. I found that the existing system uses an off-the-shelf pixel intensity threshold to split the image into black and white pixels, but the white lines at the far end of the court were being classified as black instead of white.
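To make that failure mode concrete, here is a minimal sketch of that kind of fixed intensity threshold (the use of OpenCV, the function name, and the threshold value are my assumptions for illustration, not the pipeline's actual code):

```python
import cv2

# Hypothetical cutoff; the real pipeline's value is not shown here.
WHITE_THRESHOLD = 200

def binarize_frame(frame_bgr):
    """Split a frame into white (court-line candidates) and black pixels
    using a single fixed intensity threshold."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than the cutoff become white (255), everything else black (0).
    # Far-court lines that appear dimmer than the cutoff are lost at this stage.
    _, binary = cv2.threshold(gray, WHITE_THRESHOLD, 255, cv2.THRESH_BINARY)
    return binary
```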
The system then uses policy-based logic to build lines from the white pixels (e.g. max line gap, max line length, min line length), and classifies those lines as horizontal or vertical based on their gradients. I decided to relax the white pixel intensity threshold so that more pixels would be classified as white, in the hope of picking up the back line. My concern was that this might produce too many white lines, but since the system later uses a Hough transform to compare the detected lines against the standardized 2D court lines, I suspected that the excess lines would be filtered out in later stages of processing.
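As a rough sketch of how those pieces could fit together, the snippet below builds line segments from the white pixels and splits them by gradient; the use of OpenCV's probabilistic Hough transform and the specific parameter values are placeholders I chose for illustration, not the pipeline's actual settings:

```python
import numpy as np
import cv2

def extract_candidate_lines(binary):
    """Turn white pixels into line segments, then classify each segment
    as horizontal or vertical from its gradient."""
    segments = cv2.HoughLinesP(
        binary,
        rho=1,
        theta=np.pi / 180,
        threshold=80,        # minimum votes for a segment
        minLineLength=100,   # drop segments shorter than this
        maxLineGap=20,       # bridge small breaks within a line
    )
    horizontal, vertical = [], []
    if segments is None:
        return horizontal, vertical
    for x1, y1, x2, y2 in segments[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        # Shallow gradient -> horizontal line, steep gradient -> vertical line.
        if abs(dy) <= abs(dx):
            horizontal.append((x1, y1, x2, y2))
        else:
            vertical.append((x1, y1, x2, y2))
    return horizontal, vertical
```

In a setup like this, relaxing the white threshold is a one-line change to the binarization step (lowering the cutoff), while the downstream matching against the court template stays untouched and can prune any extra segments.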
Thankfully, after re-running the model on the dataset, I found that this optimization yielded a significant improvement in court detection: many of the frames that were previously misdetected are now handled correctly!