I am taking the Udacity full stack developer Nanodegree program to refresh my memory. Ads for the autonomous cars Nanodegree are all over the website, so I can't stop thinking about our autonomous driving future.
Here are some of those thoughts.
As important as it is to make a car drive itself and follow the rules, it is just as important for the car to understand the human signals that cannot be inferred directly from sensors: someone waving a pedestrian across, a driver signaling to another driver that they are about to pass, or a driver telling another, "I am waiting for you."
If you want to build an autonomous car that can drive on the streets of Cairo, one of the most important inputs is sound. Egyptians are crazy about using the horn.
In Cairo, some drivers use the horn as much as they use the gas pedal. We have a tone to tell someone to cross, another to tell them to get lost, tones for different swear words (yes, we do), a tone to celebrate a new marriage (crazy?), and a tone to tell other drivers or pedestrians thank you!
For a car to recognize this in such a noisy environment, it has to not only interpret the different tones, but also filter them by distance and decide which ones are relevant to the car's situation.
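The filtering step could look something like this toy sketch. Everything here is invented for illustration: the `HornEvent` fields, the tone labels, and the distance and bearing thresholds are all assumptions, not anyone's real pipeline.

```python
from dataclasses import dataclass

@dataclass
class HornEvent:
    tone: str           # e.g. "cross", "get_lost", "thank_you" (made-up labels)
    distance_m: float   # estimated distance to the honking car
    bearing_deg: float  # direction of the source, 0 = dead ahead

def relevant_events(events, max_distance_m=30.0, max_bearing_deg=90.0):
    """Keep only horns that are close enough and roughly facing us."""
    return [
        e for e in events
        if e.distance_m <= max_distance_m and abs(e.bearing_deg) <= max_bearing_deg
    ]

events = [
    HornEvent("cross", 12.0, 10.0),     # nearby and ahead: relevant
    HornEvent("thank_you", 80.0, 5.0),  # too far away to matter
    HornEvent("get_lost", 15.0, 170.0), # aimed at someone behind us
]
print([e.tone for e in relevant_events(events)])  # ['cross']
```

The hard part, of course, is not this filter but producing reliable tone, distance, and bearing estimates from raw audio in Cairo traffic.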
Along with horn sounds, there are visual cues that carmakers never designed for, like drivers sticking a hand out of the window to signal a left turn; sometimes the passenger next to the driver does the same to signal a right turn.
Luckily, no one is trying to build an Egyptian autonomous car.
Another thought came to mind while pushing my wheelchair up the snowy street where I work: what inputs do we need to build an autonomous wheelchair?
My interest in autonomous wheelchairs comes from the need to text while walking like everyone else. It is not a matter of comfort or laziness but rather of satisfying my smartphone addiction. That's why I hate having to tap and hold to record video: I can't push my wheelchair while recording.
As a start, we need a camera to identify humans, cars, and obstacles. We also need radar that can build a 3D image of what's in front of the chair and measure distances.
We then need another radar at the front bottom to estimate the heights of obstacles, so the chair can decide whether to climb a step or descend one without its user falling.
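That climb-or-descend decision could be sketched as a simple threshold rule. The thresholds below are pure guesses on my part; a real chair would also have to factor in speed, tilt, and the user's weight.

```python
def step_decision(height_cm, max_climb_cm=8.0, max_drop_cm=5.0):
    """Decide what to do about a height change measured ahead.

    Positive height means a step up, negative means a drop-off.
    The limits are hypothetical placeholders, not tested values.
    """
    if height_cm >= 0:
        return "climb" if height_cm <= max_climb_cm else "reroute"
    return "descend" if -height_cm <= max_drop_cm else "reroute"

print(step_decision(4.0))   # climb
print(step_decision(12.0))  # reroute: curb too high
print(step_decision(-3.0))  # descend
```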
I think one of the challenges will be identifying crossing points. After all, the lowest point on a sidewalk isn't always in the same location: sometimes it is to the right, other times to the left; sometimes it is visually distinct so that visually impaired users can find it with their canes, other times it is not.
Sometimes the lowest point is blocked by a car, or two cars, and the chair has to decide whether the gap between them is wide enough to fit through.
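The fits-or-not check itself is trivial once you have a width estimate; the numbers below (chair width and safety margin) are invented for the sketch.

```python
def gap_fits(gap_width_cm, chair_width_cm=65.0, margin_cm=10.0):
    """True if the chair plus a safety margin on each side fits the gap."""
    return gap_width_cm >= chair_width_cm + 2 * margin_cm

print(gap_fits(100.0))  # True: 100 >= 65 + 20
print(gap_fits(80.0))   # False: too tight to risk scraping a car
```

As with the crossing points, the real challenge is perception, measuring the gap accurately, not the arithmetic.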
Another thing I am obsessed with is which route to take across a square to reach the opposite corner: do I go straight then left, or left then straight? My heuristic is to go with whichever light turns green first, but how would a chair see that and decide?
Indoor navigation will be a challenge. GPS has limited accuracy and won't work well inside malls. However, the constraints are simpler: keep going straight, avoid colliding with people and objects, avoid falling down stairs, and when there is a decision whether to go right or left, just ask me.
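Those indoor rules are simple enough to write down as a priority-ordered policy. This is just the paragraph above restated as code; the boolean sensor inputs are assumptions, and a real system would obviously need much richer perception behind them.

```python
def next_action(obstacle_ahead, stairs_ahead, fork_ahead):
    """Rule-based indoor policy, highest priority first."""
    if stairs_ahead:
        return "stop"        # never risk a fall
    if fork_ahead:
        return "ask_user"    # right or left? let the rider choose
    if obstacle_ahead:
        return "steer_around"
    return "go_straight"

print(next_action(False, False, False))  # go_straight
print(next_action(False, False, True))   # ask_user
print(next_action(True, True, False))    # stop: stairs beat everything
```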
Welcome to the future.