Looking at Camera Data
To retrieve camera data from the virtual sensor, we can use the following RACECAR
function:
image = rc.camera.get_color_image()
To display the camera data from the virtual sensor, we can use the following RACECAR
function:
rc.display.show_color_image(image)
This will open up a display window when the program runs. For Windows users, a very
common error may pop up: qt.qpa.xcb: could not connect to display localhost:42.0
To solve this issue, open XLaunch and set the display number to 42.
Camera frames are structured as a 3D array, with the dimensions representing the row,
column, and channel respectively. Camera images come in the BGR color space, which
means that the first channel holds the blue value, the second channel holds the green
value, and the third channel holds the red value.
To find the shape of the image, use the command:
dimension = image.shape
which returns the height, width, and channel depth of the numpy array.
We can extract the blue, green, and red pixel values using array indexing. Three sets of
square brackets represent the row, column, and channel of the value that we are grabbing.
# For example, let's find the pixel in the middle of the screen
row = dimension[0] // 2
col = dimension[1] // 2
# Extract the blue, green, and red values
blue = image[row][col][0]
green = image[row][col][1]
red = image[row][col][2]
# Display the color to the screen
BGR_color = (blue, green, red)
BGR_image = np.zeros((300, 300, 3), np.uint8)
BGR_image[:] = BGR_color
cv.namedWindow('BGR Color Display', cv.WINDOW_NORMAL)
cv.imshow('BGR Color Display', BGR_image)
# Draw a circle at the location of (row, col) on the screen
cv.circle(image, (col, row), 5, (0, 255, 255), -1)
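As a quick sanity check of the BGR channel order, the sketch below builds a synthetic "camera frame" with plain numpy (no simulator needed) and reads its channels back with the same indexing pattern shown above:

```python
import numpy as np

# Synthetic 300x300 "camera frame" filled with pure blue, in BGR order
image = np.zeros((300, 300, 3), np.uint8)
image[:] = (255, 0, 0)  # BGR: full blue, no green, no red

# Index the pixel in the middle of the frame
row = image.shape[0] // 2
col = image.shape[1] // 2

blue = image[row][col][0]
green = image[row][col][1]
red = image[row][col][2]

print(blue, green, red)  # → 255 0 0
```

Because the first channel is blue, a pure-blue BGR pixel reads back as (255, 0, 0), the reverse of what RGB intuition might suggest.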
Image Segmentation by Color (HSV)
https://math.hws.edu/graphicsbook/demos/c2/rgb-hsv.html
RGB is a color scheme that represents all colors on the color spectrum using a blend of red,
green, and blue values from 0-255. This is similar to mixing paint: depending on the
amount of each color from 0-255, you end up with a different color when everything is
mixed together.
- For example: Red + Blue = Purple, or Red + Green = Yellow
In this lab, we want to detect an object using its specific color in the simulation. With RGB,
it might be difficult to deduce the exact color of an object (or create a threshold that
surrounds the object), since doing so requires a good understanding of how color mixes
across the three channels.
HSV is a color space that decouples these properties into separate axes, making it easier
to identify raw colors:
● Hue: Hue is the raw color of an object, with a numeric range of 0-180 (OpenCV halves
the usual 0-360° hue scale so the value fits in a single byte). Low numbers represent
reds, oranges, and yellows, and higher numbers represent blues and purples.
● Saturation: Saturation represents the intensity of the color, with a numeric range of
0-255. At 0 saturation, the color is completely washed out (white at full brightness), and
at 255 saturation, the color is most intense and matches the raw hue.
● Value: Value represents the brightness of the color, with a numeric range of 0-255.
At 0 value, the color is completely dark/black, and at 255 value, the color is at its
brightest and matches the raw hue.
By observing the HSV color cylinder, we can split the choice of color range for an object
into two distinct decisions: picking a hue range, and picking saturation/value ranges.
Hue represents the raw color from 0 - 180. On the color cylinder, this is represented by
degrees. If we unwrap the cylinder, the raw colors fall into these approximate ranges:
red near 0-10 (wrapping around again near 170-180), orange around 10-25, yellow around
25-35, green around 35-85, blue around 85-130, and purple around 130-170.
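To get a feel for where familiar colors land on the 0-180 hue scale, here is a small sketch using Python's standard colorsys module. This is only an illustration of the scale; on real camera frames you would convert whole images at once with cv.cvtColor:

```python
import colorsys

def opencv_hue(r, g, b):
    """Convert an RGB color (0-255 per channel) to an OpenCV-style hue (0-180)."""
    # colorsys returns hue in [0, 1); OpenCV scales hue to [0, 180)
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 180)

print(opencv_hue(255, 0, 0))    # red    → 0
print(opencv_hue(255, 255, 0))  # yellow → 30
print(opencv_hue(0, 255, 0))    # green  → 60
print(opencv_hue(0, 0, 255))    # blue   → 120
```

Note how the full color wheel compresses into half the range: pure blue sits at 120 rather than the 240° you would expect on a standard hue wheel.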
Saturation and Value are closely linked and can be mapped together on one graph, with
saturation (0-255) on the x-axis and value (0-255) on the y-axis. From this graph, we can
read off the following results from changing saturation/value:
● If saturation is low and value is low, the color will be black
● If saturation is high and value is low, the color will be black
● If saturation is low and value is high, the color will be white
● If saturation is high and value is high, the color will be the most intense color as
selected by hue.
Using this logic, we can specify a “range” of colors that match our detected object. If
the pixel color of our object falls within the hue, saturation, and value thresholds we
set, we can say with high certainty that it is the object we want to detect.
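The threshold test itself is simple: a pixel passes when each of its H, S, and V components falls between the lower and upper bounds. Below is a standalone numpy sketch of that logic (mirroring what cv.inRange computes, using made-up bounds and a tiny two-pixel "image"):

```python
import numpy as np

# Hypothetical HSV bounds for an orange-ish object
hsv_lower = np.array([10, 50, 50])
hsv_upper = np.array([20, 255, 255])

# A tiny 1x2 "HSV image": first pixel inside the range, second outside
hsv_pixels = np.array([[[15, 100, 200], [90, 100, 200]]], np.uint8)

# A pixel matches only when every channel lies within [lower, upper]
in_range = np.logical_and(hsv_pixels >= hsv_lower, hsv_pixels <= hsv_upper).all(axis=2)
mask = (in_range * 255).astype(np.uint8)

print(mask)  # → [[255   0]]
```

The result is a single-channel mask where matching pixels become 255 (white) and everything else becomes 0 (black), which is exactly the format cv.inRange produces.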
To test this out, use the file hsv_tuner.py.
WINDOWS/LINUX USERS
wget https://raw.githubusercontent.com/MITRacecarNeo/racecar-neo-oneshot-labs/refs/heads/main/labs/lab2/hsv_tuner.py
MAC USERS
wget https://raw.githubusercontent.com/MITRacecarNeo/racecar-neo-oneshot-labs/refs/heads/main/labs/lab2/hsv_tuner_non_gui.py
Programming a Stoplight Detector
# Grab a frame from the color camera
image = rc.camera.get_color_image()
# Define lower and upper hsv bounds
hsv_lower = (10, 50, 50)
hsv_upper = (20, 255, 255)
# Change color space from BGR to HSV
image = cv.cvtColor(image, cv.COLOR_BGR2HSV)
# Create a mask based on the hsv threshold
mask = cv.inRange(image, hsv_lower, hsv_upper)
# Display the frame to the screen
rc.display.show_color_image(mask)
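If you want to see which parts of the original frame survived the threshold, rather than the bare black-and-white mask, a common trick is to black out every pixel the mask rejected. The numpy sketch below shows the idea with made-up stand-ins for the frame and mask (cv.bitwise_and(image, image, mask=mask) does the same job on real frames):

```python
import numpy as np

# Made-up stand-ins for a 2x2 BGR frame and its binary mask
image = np.array([[[10, 20, 30], [40, 50, 60]],
                  [[70, 80, 90], [100, 110, 120]]], np.uint8)
mask = np.array([[255, 0],
                 [0, 255]], np.uint8)

# Keep pixels where the mask is nonzero; zero out the rest
masked = np.where(mask[:, :, None] > 0, image, 0)

print(masked[0, 0])  # kept unchanged → [10 20 30]
print(masked[0, 1])  # blacked out   → [0 0 0]
```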
# Crop the image
image = rc_utils.crop(image, (180, 0), (rc.camera.get_height(), rc.camera.get_width()))
# To find contours:
contours, _ = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(image, contours, -1, (0, 255, 0), 3)
# To remove small contours (filter):
CONTOUR_MIN = 30
contours_filtered = []
for contour in contours:
    if cv.contourArea(contour) > CONTOUR_MIN:
        contours_filtered.append(contour)

# Draw the filtered contours to the screen (-1 draws every contour in the list)
cv.drawContours(image, contours_filtered, -1, (0, 255, 0), 3)
# To find the largest contour (modify previous script)
CONTOUR_MIN = 30
max_contour = contours[0]
for contour in contours:
    if cv.contourArea(contour) > CONTOUR_MIN:
        if cv.contourArea(contour) > cv.contourArea(max_contour):
            max_contour = contour

# Draw the largest contour to the screen
cv.drawContours(image, [max_contour], 0, (0, 255, 0), 3)
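The filter-then-pick-largest loop can also be written more compactly with Python's built-in max. The sketch below uses a small shoelace-formula helper as a stand-in for cv.contourArea so it runs without OpenCV; on real contour arrays you would pass key=cv.contourArea directly:

```python
def contour_area(points):
    """Shoelace formula: area of a polygon given as [(x, y), ...] vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

CONTOUR_MIN = 30

# Two made-up square contours: 5x5 (area 25) and 10x10 (area 100)
contours = [
    [(0, 0), (5, 0), (5, 5), (0, 5)],
    [(0, 0), (10, 0), (10, 10), (0, 10)],
]

# Keep only contours above the minimum area, then take the largest
big_enough = [c for c in contours if contour_area(c) > CONTOUR_MIN]
max_contour = max(big_enough, key=contour_area, default=None)

print(contour_area(max_contour))  # → 100.0
```

The default=None argument guards against the case where no contour passes the filter, which the explicit loop above would otherwise need a separate None check for.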
# Find the center of the contour and draw it to the screen
contour_center = rc_utils.get_contour_center(max_contour)
cv.circle(image, (contour_center[1], contour_center[0]), 6, (0, 255, 255), -1)
racecar_utils function equivalent
# [FUNCTION] Update the contour_center and contour_area each frame
# and display the image
def update_contour(image):
    global contour_center
    global contour_area

    # Crop the image to the floor directly in front of the car
    # image = rc_utils.crop(image, CROP_FLOOR[0], CROP_FLOOR[1])

    if image is None:
        contour_center = None
        contour_area = 0
    else:
        # Find all of the contours of the saved color
        contours = rc_utils.find_contours(image, COLOR_THRESH[0], COLOR_THRESH[1])

        # Select the largest contour
        contour = rc_utils.get_largest_contour(contours, MIN_CONTOUR_AREA)

        if contour is not None:
            # Calculate contour information
            contour_center = rc_utils.get_contour_center(contour)
            contour_area = rc_utils.get_contour_area(contour)

            # Draw contour onto the image
            rc_utils.draw_contour(image, contour)
            rc_utils.draw_circle(image, contour_center)
        else:
            contour_center = None
            contour_area = 0

    # Display the image to the screen
    rc.display.show_color_image(image)