Mapping Robot



Source Code

Idea:

I designed this robot as my entry in the 2015 Seattle Public Schools Science and Engineering Fair, in 8th grade. At the fair, I was proud to win the "Best Overall Engineering Project" award for all of Seattle. This project aimed to provide detailed maps of real-world settings that could then be used for home-remodel projects. One feature I was particularly interested in was building a model of a room and then letting people easily simulate changes by adding different items of furniture to it. During the demonstration the project only functioned in 2D, although 3D mapping is supported by the hardware and would only require software changes.

This robot works as a team with a standard computer on the same wireless network. The computer visualizes the data and allows for manual control if needed during the automated scan. It also allows the robot's own programming to be a simple 300-line Python script that listens to a socket. The main computer sends simple text commands such as "SERVER/TURN_RIGHT/59", which tells the robot to turn right 59 degrees. On the computer, a much larger Java program runs my graphics tool, which provides a simple visualization of the data along with large amounts of information. The resulting maps can be stored as .csv files, which Google SketchUp will happily render, and which let you add in other SketchUp items such as furniture, people, etc. This allows real developers to see the map in 3D while using tools they are already familiar with.

One particularly interesting aspect of this project is that I learned the difference between theoretical computing and actual engineering. The LIDAR sensor I was using would often give spurious false readings. If it gave a false close reading, the robot would decide that a wall was there and would not explore that part of the room. For that reason, I built a "confidence score" algorithm into the program, in which physical points could lose confidence of being a wall if the laser saw points behind them. Because of this, the program was able to deal with LIDAR-, feline-, and sibling-induced inaccuracies in the map, slowly erasing the bad data while keeping all the good data. As I have explored the interface between the digital and real worlds, I have learned that this type of engineering approach is essential to success.
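The robot-side listener might look roughly like the following minimal Python sketch. Only the "SERVER/TURN_RIGHT/59" message format comes from the project; the command names, port number, and handler bodies here are invented for illustration:

```python
import socket

def parse_command(line):
    """Split a message like 'SERVER/TURN_RIGHT/59' into a
    command name and an integer argument."""
    _sender, name, arg = line.strip().split("/")
    return name, int(arg)

# Hypothetical handlers: the real 300-line script would drive
# the motors here instead of printing.
HANDLERS = {
    "TURN_RIGHT": lambda deg: print(f"turn right {deg} deg"),
    "TURN_LEFT": lambda deg: print(f"turn left {deg} deg"),
    "DRIVE": lambda cm: print(f"drive {cm} cm"),
}

def serve(port=9000):
    """Accept one connection and execute each newline-delimited command."""
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                name, arg = parse_command(line)
                HANDLERS[name](arg)
```

The appeal of this split is that all the heavy logic lives on the computer, so the robot only needs a dumb parse-and-dispatch loop.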

Simple (but functional) Mapping Algorithm:


* Take a data sample from -160 degrees to 160 degrees relative to the robot, every 2 degrees

* If there exists a point on the raytrace that is closer than the current point, lower that point's "credibility" value

* If there exists a point right next to or at the location of the new point, increase the credibility of that point

* Divide the world into eight 45-degree sectors (0-45, 46-90, 91-135, ...)

* Find the sector with the lowest average credibility

* Turn to that sector and scan it in 1-degree increments

* Drive one-third of the way to the nearest point in the sector, stopping every 15 cm to check for obstacles.

* Repeat from beginning.
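The credibility bookkeeping in the steps above could be sketched like this in Python. The 1-cell-per-unit grid, the +1/-1 scores, and the function names are all assumptions for illustration; the real program's data structures surely differed:

```python
import math
from collections import defaultdict

# Credibility map keyed by integer grid cell; one cell per distance unit
# is an assumed discretization.
credibility = defaultdict(float)

def update_from_reading(robot_x, robot_y, angle_deg, dist):
    """Apply one LIDAR reading: penalize previously seen points the ray
    passed through, reward cells at or adjacent to the detected point."""
    rad = math.radians(angle_deg)
    hit = (round(robot_x + dist * math.cos(rad)),
           round(robot_y + dist * math.sin(rad)))
    # Any known point closer along the ray loses credibility, because
    # the laser saw *past* it this time.
    for step in range(1, int(dist)):
        cell = (round(robot_x + step * math.cos(rad)),
                round(robot_y + step * math.sin(rad)))
        if cell in credibility:
            credibility[cell] -= 1.0
    # The detected point and its immediate neighbours gain credibility.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            credibility[(hit[0] + dx, hit[1] + dy)] += 1.0

def least_credible_sector(robot_x, robot_y):
    """Divide the world into eight 45-degree sectors and return the index
    (0..7) of the sector with the lowest average credibility; sectors with
    no points at all score zero, so unexplored directions win."""
    totals = [0.0] * 8
    counts = [0] * 8
    for (x, y), score in credibility.items():
        angle = math.degrees(math.atan2(y - robot_y, x - robot_x)) % 360
        sector = int(angle // 45)
        totals[sector] += score
        counts[sector] += 1
    averages = [totals[i] / counts[i] if counts[i] else 0.0
                for i in range(8)]
    return min(range(8), key=lambda i: averages[i])
```

A spurious close reading inserted by this code gets erased naturally: the moment a later ray sees past that cell, its score drops, exactly the "confidence score" behaviour described earlier.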


Images:


A view of the in-progress mapping screen.

Note that the console in the top left has its own command parsing and execution system, with built-in commands to do everything from sending commands to the robot to moving GUI elements live. I am quite happy with it, and it saved many hours during the project: I could make modifications and updates during each room-mapping run rather than having to restart.
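A console like that often boils down to a table of named commands. Here is a sketch of the idea in Python; the real console lived inside the Java tool, and every command name and handler below is made up for illustration:

```python
# Hypothetical console commands; the real set manipulated the robot
# connection and GUI layout.
def send_to_robot(args):
    """Forward a raw command to the robot over the socket (stubbed)."""
    return "sent: " + "/".join(args)

def move_widget(args):
    """Reposition a named GUI element live (stubbed)."""
    name, x, y = args
    return f"moved {name} to ({x}, {y})"

COMMANDS = {
    "send": send_to_robot,
    "move": move_widget,
}

def run_console_line(line):
    """Parse one console line and execute its command."""
    name, *args = line.split()
    if name not in COMMANDS:
        return f"unknown command: {name}"
    return COMMANDS[name](args)
```

Because new commands are just new entries in the table, this style of console is cheap to extend mid-project, which matches why it saved time during the room-mapping runs.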

The physical robot



The resulting image in SketchUp

Awesome videos of it in progress:


What the controls look like as the map gets created:

A video of the physical world:

Plotting a path for the robot was a key part of the project. The initial algorithm, and the one used in the project, was a brute-force approach based on which directions were "most mapped" and "least mapped": the robot would simply be told to head toward unmapped areas. During the project, however, I determined that another approach would be to use location seeds and costs to determine the most efficient routes to the least explored areas. An initial proof of concept of this algorithm is shown below:
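One common way a seed-and-cost planner works is a wavefront flood fill: unexplored "seed" cells get cost zero, costs grow outward through free space, and the robot steps downhill toward the nearest seed. This Python sketch is only my reading of the idea from the description above; the grid encoding and helper names are invented:

```python
from collections import deque

# Assumed grid encoding: 0 = free, 1 = wall, 2 = unexplored seed.
def flood_costs(grid):
    """Breadth-first flood fill: each free cell gets its distance to the
    nearest unexplored seed; walls remain unreachable (None)."""
    rows, cols = len(grid), len(grid[0])
    costs = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:        # seed: cost zero
                costs[r][c] = 0
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 1 and costs[nr][nc] is None):
                costs[nr][nc] = costs[r][c] + 1
                queue.append((nr, nc))
    return costs

def next_step(costs, r, c):
    """Greedily step to the neighbouring cell with the lowest cost,
    which walks the robot along an efficient route to the nearest seed."""
    best = (r, c)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(costs) and 0 <= nc < len(costs[0])
                and costs[nr][nc] is not None
                and costs[nr][nc] < costs[best[0]][best[1]]):
            best = (nr, nc)
    return best
```

Unlike the brute-force sector heuristic, the flood fill accounts for walls automatically: costs flow around obstacles, so the downhill walk never tries to drive through one.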


Please contact me if you are curious how any of this was implemented / what parts I used.

All content on this page belongs to Zachary Porter. You may use, reproduce, or modify anything from this website, provided that you give credit to zackporter.com in your usage.