author     heavydemon21 <nielsstunnebrink1@gmail.com>  2023-06-08 11:41:58 +0200
committer  heavydemon21 <nielsstunnebrink1@gmail.com>  2023-06-08 11:41:58 +0200
commit     b072aa16c3808523aaefc1dfacc8703f9c635fb5 (patch)
tree       434a2111ae98df8882042642607141d670acf65a
parent     8e9bf59c200c28e225b0e430e0a82541332d4375 (diff)
road conclusion
-rw-r--r--  doc/dui.md     | 12
-rw-r--r--  nicla/road.py  |  6
2 files changed, 14 insertions, 4 deletions
diff --git a/doc/dui.md b/doc/dui.md
index 6260fb5..a2e6f99 100644
--- a/doc/dui.md
+++ b/doc/dui.md
@@ -281,6 +281,16 @@ As you can see there is quite a lot of difference between them. This function ne
All the above algorithms could be used with OpenCV, but the Radon transform needs more work than the others, given the amount of information available in the documentation.
+
+\def\roadConclusion{
+In conclusion, line detection offers various possibilities. Through testing and experimentation we settled on the blobbing method using OpenMV, both because of the limitations of the WiFi communication and because blobbing is a robust and straightforward algorithm. I still believe that the Hough transform is generally superior in terms of the diversity and consistency of the lines it detects, but in our specific project it presented challenges: the varying floors and lighting conditions required additional image processing.
+
+Later in the project we discovered a new image processing technique called image correction, also known as bird's eye view. This correction made more lines visible and offered greater opportunities when combined with the Hough transform. Although blobbing also used the bird's eye view, the Hough transform offered more precision on top of it.
+
+In summary, we chose blobbing for its robust and simple algorithm, which resulted in a higher frame rate (fps). The Hough transform, however, demonstrated greater precision when combined with the bird's eye view. Given the time constraints, achieving an optimal integration of these techniques proved challenging.
+}
+\roadConclusion
+
## Communication between the Nicla and Zumo
In order to make the Zumo robot both detect where it is on a road, and steer to
@@ -523,4 +533,4 @@ solution (this requires testing).
\signDetectionColorConclusion
\signDetectionShapeConclusion
\signRecognitionConclusion
-
+\roadConclusion
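
For reference, the blobbing approach mentioned in the conclusion above roughly corresponds to OpenMV's built-in blob detection. The sketch below only illustrates that idea and is not the project's nicla/road.py: the grayscale threshold, region of interest and error mapping are assumed values.

```python
# Minimal OpenMV-style blob-based line-following sketch (illustrative only).
# The threshold, ROI and steering mapping are assumptions, not project values.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Grayscale range for a dark line on a lighter floor (assumed values).
LINE_THRESHOLD = [(0, 60)]

while True:
    img = sensor.snapshot()
    # Look for the line only in the lower part of the frame (bottom 80 rows of QVGA).
    blobs = img.find_blobs(LINE_THRESHOLD,
                           roi=(0, 160, 320, 80),
                           pixels_threshold=100,
                           area_threshold=100,
                           merge=True)
    if blobs:
        line = max(blobs, key=lambda b: b.pixels())
        # Horizontal offset of the blob centre from the image centre; this error
        # would be turned into a steering command for the Zumo.
        error = line.cx() - (img.width() // 2)
        print("steering error:", error)
```

In the actual firmware the result would be sent to the Zumo over UART (as with uart.uart_buffer in road.py) rather than printed.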
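For comparison, the bird's eye view correction combined with the Hough transform could look roughly like the OpenCV sketch below. The corner points and Hough parameters are assumptions chosen only to show the shape of the pipeline, not values used in the project.

```python
# Illustrative bird's-eye-view warp followed by a probabilistic Hough transform (OpenCV).
# The trapezoid corners and Hough parameters are assumed, untuned values.
import cv2
import numpy as np

def birds_eye_hough(frame):
    h, w = frame.shape[:2]

    # Map a trapezoid on the road surface to a full-frame rectangle (assumed corners).
    src = np.float32([[w * 0.1, h], [w * 0.9, h], [w * 0.6, h * 0.6], [w * 0.4, h * 0.6]])
    dst = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    warp_matrix = cv2.getPerspectiveTransform(src, dst)
    top_down = cv2.warpPerspective(frame, warp_matrix, (w, h))

    # Edge detection followed by line extraction on the corrected image.
    gray = cv2.cvtColor(top_down, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(top_down, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return top_down, lines
```

Warping to a top-down view first makes the road lines roughly parallel and vertical, which is what lets the subsequent Hough transform locate them more precisely than on the raw perspective image.
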
diff --git a/nicla/road.py b/nicla/road.py
index 7edd871..2c4e84a 100644
--- a/nicla/road.py
+++ b/nicla/road.py
@@ -91,6 +91,6 @@ while(True):
if data is not None:
uart.uart_buffer(data)
- #drive_img = sensor.snapshot()
- #drive(drive_img)
- #uart.uart_buffer(0x1f)
+ drive_img = sensor.snapshot()
+ drive(drive_img)
+ uart.uart_buffer(0x1f)