The Python source code below in Figures 1.5(a)-(d) shows our initial object detection
algorithm for detecting a green ball. Each of the three video sources will use its own instance of
this object detection algorithm. Since the cameras will be placed in a triangle, each camera
will use unique parameters to detect the green ball. This is explained further in the
next section, Object Localization Triangulation Algorithm. In the final implementation of the
algorithm, we plan to detect a wider array of objects; for this initial implementation, we
designed our algorithm to detect only a green ball. Our detection algorithm supports
movement of the green ball in the X, Y, and Z planes. Figure 1.5(a) shows how we defined
parameters for one camera. Figures 1.5(b), 1.5(c), and 1.5(d) show how we used
the defined parameters to detect the green ball.
Multiple parameters need to be defined for each camera. Figure 1.5(a) shows these
parameters. The parameter KNOWN_DISTANCE defines the distance from the camera, in
inches, at which the object will be detected. The parameter KNOWN_WIDTH defines the
approximate width of the object, in inches. The parameter marker defines the detected
object’s region, which will be bounded by a box. The parameter focalLength is then
calculated from these values to determine the depth at which the algorithm will detect the
object. The parameters greenLower and greenUpper define the range of green colors on
the HSV spectrum to detect. The variable counter keeps track of how many frames the
algorithm has computed. The variables dX, dY, and dZ store the differences between the
X-, Y-, and Z-coordinates of the object in the current frame and its coordinates in a
previously calculated frame. The variable direction stores the current direction in which
the object is moving. The next few lines of code define the video source for the algorithm;
this video source is supplied by the code previously discussed in Video Source Data
Collection.
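
A minimal sketch of these definitions is shown below, assuming the triangle-similarity
calibration commonly used with OpenCV. The numeric values, the buffer sizes, and the
find_marker calibration helper are placeholders for illustration, not the exact code from
the figure:

    import cv2
    import numpy as np
    from collections import deque

    # Calibration constants (placeholder values): the ball is KNOWN_WIDTH
    # inches wide and was photographed KNOWN_DISTANCE inches from the camera.
    KNOWN_DISTANCE = 24.0
    KNOWN_WIDTH = 2.5

    # Range of green on the HSV spectrum used to build the color mask
    # (placeholder bounds).
    greenLower = (29, 86, 6)
    greenUpper = (64, 255, 255)

    # Bounding region of the ball in a calibration image; marker[1][0] is
    # the perceived width in pixels. find_marker is a hypothetical helper
    # that returns cv2.minAreaRect() of the detected ball.
    marker = find_marker(cv2.imread("calibration.png"))

    # Triangle similarity: focal length = (perceived width in pixels *
    # known distance) / known width.
    focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH

    counter = 0                    # frames processed so far
    (dX, dY, dZ) = (0, 0, 0)       # per-axis deltas between frames
    direction = ""                 # current direction of motion
    pts = deque(maxlen=32)         # recent (x, y) centers (assumed buffer)
    depths = deque(maxlen=32)      # recent depth estimates for dZ (assumed)

    camera = cv2.VideoCapture(0)   # stand-in for the Video Source Data
                                   # Collection feed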
After defining the initial parameters and the video source, we supply these parameters
to OpenCV algorithms. Figure 1.5(b) below shows how we defined more parameters using
OpenCV functions. The first few lines of code make sure a video was supplied to the algorithm
before continuing. We then use OpenCV functions to apply a Gaussian blur to the frame,
which smooths the image and reduces noise, and to convert the frame to the HSV color
spectrum. Next, we use OpenCV functions to construct a “mask” for the color green and
perform a series of dilations and erosions to remove any small discrepancies in the mask.
Finally, we contour the mask’s outline.
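
Continuing the sketch from the previous block, the preprocessing stage maps directly onto
standard OpenCV calls; the kernel size and the erosion/dilation iteration counts are
assumptions:

    while True:
        # Grab the next frame; stop if the video source supplied nothing.
        (grabbed, frame) = camera.read()
        if not grabbed:
            break

        # Blur the frame to smooth the image and reduce noise, then
        # convert it from BGR to HSV.
        blurred = cv2.GaussianBlur(frame, (11, 11), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

        # Build a mask for the color green, then erode and dilate to
        # remove small discrepancies in the mask.
        mask = cv2.inRange(hsv, greenLower, greenUpper)
        mask = cv2.erode(mask, None, iterations=2)
        mask = cv2.dilate(mask, None, iterations=2)

        # Contour the mask's outline (OpenCV 4.x return signature).
        (contours, _) = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)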
We then perform calculations based on the contours that were previously computed.
Figure 1.5(c) below shows how we perform these calculations. First, we make sure that at least
one contour was found. If the object (the green ball) was detected, we find the largest
contour by area. We then compute the minimum enclosing circle and the
center of the object. We require that the object have at least a 5-pixel radius in order to track it.
If it does, the minimum enclosing circle surrounds the object, marks the center, and
updates the coordinates of the ball.
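
A sketch of this step, continuing inside the per-frame loop above; the drawing colors are
assumptions, and the depth estimate reuses the focalLength calibration sketched earlier:

    if len(contours) > 0:
        # The largest contour by area is taken to be the green ball.
        c = max(contours, key=cv2.contourArea)
        ((x, y), radius) = cv2.minEnclosingCircle(c)

        # Center of the object from its image moments.
        M = cv2.moments(c)
        center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

        # Require at least a 5-pixel radius before tracking the object.
        if radius > 5:
            # Surround the object with its minimum enclosing circle,
            # mark the center, and update the ball's coordinates.
            cv2.circle(frame, (int(x), int(y)), int(radius),
                       (0, 255, 255), 2)
            cv2.circle(frame, center, 5, (0, 0, 255), -1)
            pts.appendleft(center)

            # Depth (Z) by triangle similarity: the perceived width in
            # pixels is roughly the circle's diameter.
            depths.appendleft((KNOWN_WIDTH * focalLength) / (2 * radius))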
We then loop over the X, Y, and Z coordinates that have been calculated. Figure 1.5(d)
below shows how this is done. We compute the direction the green ball is moving by checking
previous X, Y, and Z coordinates: we compute dX, dY, and dZ between the current frame and a
previously calculated frame. We use an earlier frame, rather than the frame immediately
preceding the current one, because comparing consecutive frames would introduce unwanted
noise and produce inaccurate results. We then use the magnitudes of dX, dY, and dZ to
determine the direction in which the object is moving. The rest of the code places the
calculated coordinates and direction onto the GUI. After runtime, all of the values for dX, dY,
and dZ are displayed on a graph.
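
A sketch of the direction computation, still inside the per-frame loop; the ten-frame
look-back, the 20-pixel and 2-inch thresholds, and the compass-style labels are assumed
values, not taken from the figure:

    counter += 1

    # Compare the newest center, pts[0], with one ten frames back,
    # pts[9]; an older frame is used so frame-to-frame jitter does not
    # register as motion.
    if counter >= 10 and len(pts) >= 10 and len(depths) >= 10:
        dX = pts[0][0] - pts[9][0]
        dY = pts[0][1] - pts[9][1]
        dZ = depths[0] - depths[9]

        (dirX, dirY, dirZ) = ("", "", "")
        if abs(dX) > 20:                  # assumed pixel threshold
            dirX = "East" if np.sign(dX) == 1 else "West"
        if abs(dY) > 20:                  # image Y grows downward
            dirY = "South" if np.sign(dY) == 1 else "North"
        if abs(dZ) > 2:                   # assumed depth threshold, inches
            dirZ = "Away" if np.sign(dZ) == 1 else "Toward"
        direction = " ".join(d for d in (dirX, dirY, dirZ) if d)

    # Place the calculated coordinates and direction onto the GUI frame.
    cv2.putText(frame, direction, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 3)
    cv2.putText(frame, "dX: %d, dY: %d, dZ: %.1f" % (dX, dY, dZ),
                (10, frame.shape[0] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)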

Object Localization Triangulation Algorithm and Database:
The triangulation algorithm and database will be implemented in the Final Design. Here,
we will be able to localize a moving object based on the data collected from each camera. Since
each camera will be placed in a triangle around a room, the triangulation algorithm will give
unique parameters to each of the cameras in order to detect objects accordingly. Each camera
will continuously send data to a central database. The triangulation algorithm will use the data
from the database to determine if an object detected in one camera was correctly detected in
the other cameras. If the object was detected on all three cameras, data from all three will be
combined to determine the object’s final X-, Y-, and Z-coordinates within our system.
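
Since this algorithm is reserved for the Final Design, the following is only a minimal
sketch of the consensus step described above; the database access is omitted, and the
combination rule (a simple average) is a placeholder:

    def localize(detections):
        # detections: the latest (x, y, z) estimate pulled from the
        # central database for each of the three cameras, or None for a
        # camera that did not detect the object.
        if len(detections) != 3 or any(d is None for d in detections):
            return None    # require detection on all three cameras
        # Placeholder combination step: average the three estimates.
        (xs, ys, zs) = zip(*detections)
        return (sum(xs) / 3.0, sum(ys) / 3.0, sum(zs) / 3.0)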
