I was not aware of those documents, so thank you! They are relatively high level, though; I was hoping for a bit more detail. For example: the entrance angle is specified, but it "varies according to distance" -- varies how? What is "Fast Distance sensing"? The output resolution is given (1 mm), but what is known about the accuracy? Also, if the resolution is 1 mm, why do I only get integer output in cm?
Fair point. At the moment I am just exploring how far I can push the provided hardware. I have just coded up a routine that spins the robot and takes measurements as fast as it can, storing them into a list of 360 elements -- one distance measurement (the smallest, if there are multiple) for each degree of yaw. Then I segment the array by looking for discontinuities -- say a jump of > 5 cm between adjacent readings (all the objects in the scene are at least 10 cm apart). Each span of similar readings should be "one object"; roughly the logic in the sketch below. What's puzzling is that when I do consecutive runs without moving the robot in between, I may find 5 objects, or 4, or 3, or ... I was naively hoping that even with noise, at least the number of detections would be robust. I wonder if scene complexity is a problem (echoes?). I was doing the experiments on my dining room floor, with a number of chair and table legs as the objects. Maybe I'll try a less cluttered area.
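For concreteness, here is roughly what the scan-and-segment routine looks like. It's only a sketch: set_heading() and get_distance_cm() are stand-ins for whatever the robot's SDK actually exposes, and the thresholds are just the ones mentioned above.

```python
# Sketch of the scan-and-segment routine (placeholder robot API, thresholds as above).

def scan_360(robot, samples_per_degree=3):
    """Spin in 1-degree steps, keeping the smallest of a few readings per degree."""
    readings = []
    for yaw in range(360):
        robot.set_heading(yaw)                              # placeholder SDK call
        samples = [robot.get_distance_cm() for _ in range(samples_per_degree)]
        readings.append(min(samples))                       # smallest reading wins
    return readings

def segment(readings, jump_cm=5):
    """Split the circular scan into spans separated by jumps of more than jump_cm."""
    objects = []
    current = [readings[0]]
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) > jump_cm:
            objects.append(current)                         # discontinuity: close span
            current = []
        current.append(readings[i])
    # Wrap-around: if the last span continues into the first, merge them;
    # otherwise the last span is its own object.
    if objects and abs(readings[-1] - readings[0]) <= jump_cm:
        objects[0] = current + objects[0]
    else:
        objects.append(current)
    return objects

# objects = segment(scan_360(robot))
# print(len(objects), "objects found")
```

Note the wrap-around from 359 back to 0 degrees: without merging those two spans, an object straddling the start of the scan gets counted twice.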
I would think lidar -- being a distance-measuring device -- would have similar limitations? Or do you get a distance map instead of a single distance?
Oh, that camera addition looks very cool!! I was wondering how I could get a camera on board - this might be the way to go. Thank you!!