From 9683385ddf517a0e95f15b0aa61d22a82cbf1f43 Mon Sep 17 00:00:00 2001
From: lonkaars
Date: Thu, 8 Jun 2023 16:14:00 +0200
Subject: final edits

---
 assets/blob_invpers.pdf | Bin 344612 -> 427895 bytes
 doc/dui.md              | 21 +++++++++++----------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/assets/blob_invpers.pdf b/assets/blob_invpers.pdf
index 0c8b071..3de2cf1 100644
Binary files a/assets/blob_invpers.pdf and b/assets/blob_invpers.pdf differ
diff --git a/doc/dui.md b/doc/dui.md
index 1d5e8aa..b8a28e8 100644
--- a/doc/dui.md
+++ b/doc/dui.md
@@ -1,6 +1,8 @@
+
 # Problem statement
@@ -245,12 +247,12 @@ naive approach where the car drives towards where 'the most road' is
 could suffice for our road detection needs.

 A simple prototype for this approach was made using Matlab, shown in figure
-\ref{fig:matlab-roaddetect}. The top part of the figure shows the raw camera
+\ref{fig:matlab-roaddetect}. The left part of the figure shows the raw camera
 image (flipped), with a gray line down the middle, and a red arrow showing the
 steering value. The red arrow is the only 'output' of this algorithm.

-The bottom part of the figure shows the detected blobs (green bounding boxes)
-on a copy of the original top image with the following transforms:
+The right part of the figure shows the detected blobs (green bounding boxes)
+on a copy of the original left image with the following transforms:

 1. Reverse perspective-transform
 2. Gaussian blur (3x3 kernel) to smooth out any noise caused by the floor
@@ -259,10 +261,11 @@ on a copy of the original top image with the following transforms:

 The steering value (red arrow) is calculated by averaging the horizontal
 screen position (normalized to between -1 and 1) using a weight factor calculated by
-using each blobs bounding box area. The weight factor has a minimum 'base'
+using each blob's bounding box area. The weight factor has a minimum 'base'
 value that is added, and has a maximum value so large blobs don't 'overpower'
-smaller blobs. This is so the inside road edge of a turn doesn't get lost
-because the outer edge has a larger bounding box.
+smaller blobs. This is so the inner edge of a turn doesn't get ignored because
+the outer edge has a larger bounding box, which could otherwise result in the
+robot following a single line.

 ![Road detection prototype in Matlab](../assets/blob_invpers.pdf){#fig:matlab-roaddetect}

@@ -320,7 +323,7 @@ In conclusion, line detection offers various possibilities, and through testing

 Later in the project, we discovered a new image processing technique called image correction, also known as bird's eye view. This approach allowed us to visualize more lines and provided greater opportunities when combined with the Hough transform. Although blobbing also utilized bird's eye view, the Hough transform offered more precision. In summary, we chose for blobbing due to its robust and simplified algorithm, resulting in higher frames per second (fps). However, the Hough transform demonstrated greater precision when combined with bird's eye view. Considering the time constraints, achieving optimal integration between these techniques proved to be challenging.
-}.
+}

 \roadConclusion

 ## Communication between the Nicla and Zumo
@@ -385,9 +388,7 @@ In the case the Nicla module crashes or fails to detect the road or roadsigns,
 it will stop sending commands. If the Zumo robot would naively continue at
 it's current speed, it could drive itself into nearby walls, shoes, pets, etc.
 To make sure the robot doesn't get 'lost', it needs to slow down once it hasn't
-received commands for some time. As mentioned in section \ref{TODO}, the Nicla
-module is able to process at about 10 frames per second, so 2 seconds is a
-reasonable time-out period.
+received commands for some time.

 \def\communicationConclusion{
 The complete protocol consists of single byte commands. A byte can either
--
cgit v1.2.3
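
As an illustration of the blob-weighted steering average described in the patched section, here is a minimal Python sketch. The function name, the blob tuple format, and the `base_weight`/`max_weight` values are hypothetical stand-ins; the project's actual prototype was written in Matlab.

```python
def steering_from_blobs(blobs, frame_width, base_weight=50.0, max_weight=2000.0):
    """Average normalized blob positions, weighted by clamped bounding-box area.

    `blobs` is a list of (x, y, w, h) bounding boxes in pixels.
    Returns a steering value in [-1, 1]; 0 means straight ahead.
    """
    if not blobs:
        return 0.0
    total = 0.0
    weight_sum = 0.0
    for (x, y, w, h) in blobs:
        # Normalize the blob's horizontal center to [-1, 1].
        center = (x + w / 2.0) / frame_width * 2.0 - 1.0
        # Area-based weight with a minimum 'base' value and a cap, so a
        # large outer-edge blob can't overpower the inner edge of a turn.
        weight = min(base_weight + w * h, max_weight)
        total += center * weight
        weight_sum += weight
    return total / weight_sum
```

With two equal-sized blobs placed symmetrically around the image center, the weighted contributions cancel and the steering value is zero, matching the "drive towards where the most road is" intent.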
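
The slow-down-on-command-loss behaviour could look roughly like the sketch below. The class name, the decay factor, and the update interface are assumptions standing in for the Zumo's actual drive code; the 2-second figure comes from the explanation this patch removes (about 10 Nicla frames per second).

```python
import time

class CommandWatchdog:
    """Gradually brake the robot when no commands arrive for `timeout` seconds,
    so a crashed or blinded Nicla doesn't send the Zumo into walls or pets."""

    def __init__(self, timeout=2.0, decay=0.9):
        self.timeout = timeout
        self.decay = decay  # per-update speed multiplier once timed out
        self.last_command = time.monotonic()
        self.speed = 0.0

    def on_command(self, speed):
        # Every received command refreshes the deadline.
        self.last_command = time.monotonic()
        self.speed = speed

    def update(self):
        # Called periodically from the main loop: decay the speed
        # instead of continuing blindly at the last commanded value.
        if time.monotonic() - self.last_command > self.timeout:
            self.speed *= self.decay
        return self.speed
```

Decaying the speed rather than stopping instantly is one possible design choice; a hard stop after the timeout would be an equally valid reading of "slow down once it hasn't received commands for some time".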