Object Mapping and Avoidance on RPi

Hi there,
I’m planning to build a semi-humanoid robot that will stay outdoors for extended periods, and I wondered whether any of you have suggestions or experience with obstacle avoidance?

I’ve used sensors like the HC-SR04 before, and they’re pretty crude when used on their own: they often return noisy, slow readings when confronted with complicated objects such as plant life, and the ping can reflect away entirely from flat surfaces like walls when they’re hit at an angle.
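
For reference, here’s roughly how I’ve been taming the HC-SR04 so far: a minimal sketch using gpiozero with a median filter to throw away outlier pings (the pin numbers are just my wiring, adjust to yours):

```python
# Minimal sketch: median-filtering an HC-SR04 to tame noisy readings.
# Pins (trigger=23, echo=24) are my wiring -- adjust to yours. Note the
# echo pin needs a voltage divider down to 3.3 V on a Pi.
from statistics import median

from gpiozero import DistanceSensor

sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)

def filtered_distance(samples=5):
    """Return the median of several readings, in metres.

    A single ping can miss entirely on foliage or angled walls;
    taking the median of a few discards those outliers cheaply.
    """
    return median(sensor.distance for _ in range(samples))

if __name__ == "__main__":
    print(f"distance: {filtered_distance():.2f} m")
```

Even with the filter, each reading is slow, which is part of why I’m looking at the alternatives below.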

From what I have found, the two main options are visual avoidance, where a camera and an image-processing package are used, or a sonar-like system using a panning sensor such as a laser rangefinder.

Pimoroni stock what looks like a very nice time-of-flight laser sensor from Adafruit here. However, the range is quite limited at 1.2 m, and it would also have to be mounted on some form of servo rig (which isn’t too bad, considering that a pan-tilt HAT will be mounted on the platform). The other issue with such a sensor is that it would have to take a considerable number of readings per second to correctly map complex shapes like trees, or moving objects such as pedestrians.
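
To make the refresh-rate concern concrete, here’s a rough sketch of the kind of scanning loop I have in mind. I’m assuming the sensor is Adafruit’s VL53L0X breakout (driven via their CircuitPython library) and a pan servo on GPIO 17; both are guesses until I’ve picked the parts:

```python
# Rough sketch of a panning ToF scan. Assumes an Adafruit VL53L0X
# breakout (adafruit-circuitpython-vl53l0x library) on I2C and a pan
# servo on GPIO 17 -- both assumptions, adjust to your hardware.
import time

import board
import busio
import adafruit_vl53l0x
from gpiozero import AngularServo

i2c = busio.I2C(board.SCL, board.SDA)
tof = adafruit_vl53l0x.VL53L0X(i2c)
pan = AngularServo(17, min_angle=-90, max_angle=90)

def sweep(step=5):
    """Pan across the field of view, returning (angle, distance_mm) pairs."""
    scan = []
    for angle in range(-90, 91, step):
        pan.angle = angle
        time.sleep(0.05)                 # let the servo settle
        scan.append((angle, tof.range))  # .range reads in millimetres
    return scan

for angle, mm in sweep():
    print(f"{angle:+4d} deg -> {mm} mm")
```

Even at 5° steps, a sweep like this takes a couple of seconds once the servo is allowed to settle, so a pedestrian could cross the whole field of view between scans.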

The other option, optical avoidance, seems much harder to implement, although from what I can see it is substantially more accurate. Pi Robot have an example of what I mean by optical avoidance here: http://www.pirobot.org/blog/0004/ where they use a spherical mirror to capture the environment around the platform, then process the image to find clear space for the robot to manoeuvre in.
This design, although accurate, seems like it could take up not only a lot of processing power (which means fewer scans per second) but a lot of space as well, which is a disadvantage for such a delicate piece of machinery. There’s also the issue of prolonged exposure for the mirror, i.e. dirt may obscure its view and cause inaccuracy over longer periods of time.
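
To be clear about what I mean by “finding clear space”, here’s a toy illustration (definitely not Pi Robot’s actual code): grab a frame, mark ground-coloured pixels, and steer toward the image column with the most clear space. The colour bounds are placeholders you’d have to tune for your own surface:

```python
# Toy illustration of optical free-space finding: mask "floor-like"
# pixels by colour and steer toward the most open image column.
# The HSV bounds below are placeholder assumptions, not tuned values.
import cv2
import numpy as np

LOWER = np.array([0, 0, 80])      # assumed HSV bounds for "clear ground"
UPPER = np.array([180, 60, 255])

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read from camera")

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
free = cv2.inRange(hsv, LOWER, UPPER)    # 255 where ground-like

# Sum free pixels per column; the peak column is the most open heading.
column_scores = free.sum(axis=0)
best_column = int(np.argmax(column_scores))
width = frame.shape[1]
print(f"steer toward column {best_column} of {width} "
      f"({'left' if best_column < width // 2 else 'right'})")
```

Even this crude version needs a full-frame colour conversion and mask per decision, which is where I suspect the processing cost comes from.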

Pi Robot also used another system in which they took readings from a distance sensor and “mapped” its environment: http://www.pirobot.org/blog/0015/. In theory, this looks very appealing; however, that particular example spends its time indoors, where not many changes occur. I believe the constant changes in an outdoor environment due to pedestrians and other obstacles could pose issues with this type of design.
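
For what it’s worth, my rough mental model of how a panning sensor could still cope with a changing environment is an occupancy grid that decays old evidence, so transient obstacles like pedestrians fade out between sweeps. A minimal sketch (grid size, resolution, and decay rate are arbitrary guesses):

```python
# Minimal sketch: fold (angle, distance) readings into a coarse
# occupancy grid, decaying old cells each sweep so that transient
# obstacles fade instead of polluting the map. All constants are
# arbitrary assumptions.
import math

import numpy as np

CELL = 0.05          # metres per cell
SIZE = 80            # 80 x 80 cells = 4 m x 4 m, robot at the centre
DECAY = 0.9          # how quickly stale evidence fades per sweep

grid = np.zeros((SIZE, SIZE))

def update(scan):
    """scan: iterable of (angle_deg, distance_m) pairs from one sweep."""
    global grid
    grid *= DECAY                      # forget old evidence gradually
    for angle, dist in scan:
        theta = math.radians(angle)
        x = int(SIZE // 2 + (dist * math.cos(theta)) / CELL)
        y = int(SIZE // 2 + (dist * math.sin(theta)) / CELL)
        if 0 <= x < SIZE and 0 <= y < SIZE:
            grid[y, x] = min(grid[y, x] + 0.5, 1.0)  # add hit evidence

# Example: one fake sweep with an obstacle dead ahead at about 1 m.
update([(0, 1.0), (5, 1.02), (-5, 1.01)])
print("occupied cells:", np.argwhere(grid > 0.4))
```

I don’t know whether this would hold up outdoors; that’s exactly the kind of experience I’m hoping someone here has.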

TL;DR: Which method of obstacle avoidance is the most efficient for such a design: a panning laser distance sensor, or an optical processing system?

If anybody has any examples, I would love to see other projects similar to my own that might give me some insight!