Connected/Automated vehicles and infrastructure in Boston

But if you're going to limit connected cars to maneuvers that would still be safe with cameras, what do you gain?
A lidless eye, staring out in all directions, never distracted by texts or the entertainment system.
 
The sensors on self-driving cars are much more than "cameras". They use LIDAR (light detection and ranging), SONAR, and RADAR. They are not just looking around in visible light.

Which, by the way, is much more than a human can do. And I again point out we allow humans to drive these machines.
 
If it's not unhackable, then there won't be much benefit. People won't put up with a system where a single hack could result in a multi-car pile-up.

I would trust the computer with a very low probability of being hacked above the idiot behind the wheel next to me. It's all just probability.
 
How about 400ft hovercraft buses flying from one location to another through the air?
We need to start innovating---traffic has become so bad for all the surrounding communities 30+ miles from Boston.
 
Humans have a key advantage over driverless cars' artificial intelligence at this point:
The ability to creatively make sense of and resolve unexpected sensory inputs...for instance, a plastic garbage bag stuck in the tailgate of a truck, flapping in the wind, was known to cause one of the test cars in California to assume something was going wrong in front of it.

The algorithms have fail-safes...so if they encounter something unexpected, you're more likely to endure the inconvenience of the car stopping or pulling over when it's a non-issue. But that's still annoying.

If you think about it, there's an almost infinite array of visual or aural inputs that could look completely foreign to the computer, but that a human could make sense of in a split second.

A friend working in the industry tells me this is the last major technical hurdle for which there is still much work to do.
 

Agreed, sense-making of obstacles is a big hurdle still. The sensors on self-driving cars can "see" more than a human, but cannot differentiate what they are seeing very well yet. Human ability at pattern recognition from pretty subtle inputs is really amazing.
 
This much I agree with, but keep in mind that humans have quite a bit of i/o latency. Once computers are able to recognize these situations they will have drastically faster reaction times. There's still work to be done, but much of this work is being done by the computers "themselves" thanks to machine learning.
 
Knowing the distance, speed, & direction of all objects (and being able to plot their likely paths and intercepts/crashes) is a huge advantage bots already have (as foreshadowed in the adaptive cruise and automatic emergency braking we already have).
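That plotting advantage is easy to make concrete: given constant-velocity estimates of position and speed, the time and distance of closest approach between any two objects falls out of a couple of dot products. A minimal sketch (function names are illustrative, not from any real AV stack):

```python
import numpy as np

def time_of_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity objects are nearest.

    p1, p2: positions (m); v1, v2: velocities (m/s), as 2-vectors.
    Returns 0 if the gap is constant or already growing.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = dv @ dv
    if denom < 1e-9:  # identical velocities: separation never changes
        return 0.0
    return max(0.0, -(dp @ dv) / denom)

def miss_distance(p1, v1, p2, v2):
    """Separation (m) at the time of closest approach."""
    t = time_of_closest_approach(p1, v1, p2, v2)
    gap = (np.asarray(p2, float) + t * np.asarray(v2, float)) \
        - (np.asarray(p1, float) + t * np.asarray(v1, float))
    return float(np.hypot(*gap))

# Two cars closing head-on in adjacent lanes 3 m apart: they pass,
# they never collide, so no evasive action is needed.
print(miss_distance([0, 0], [15, 0], [100, 3], [-15, 0]))  # 3.0
```

A planner can run this pairwise over every tracked object and flag any miss distance under a safety margin, which is essentially what adaptive cruise and automatic emergency braking already do in one dimension.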

The next level, as you all point out, is interpreting what the objects are and what their "intentions" are (like that plastic bag caught on the tailgate). But isn't that part both subtle AND secondary (much less important) if you've already done a good job of plotting all the objects?

Seems to me we'll very quickly be training cars to be "good enough" at things that people aren't particularly good at either (which way to swerve when something makes an unexpected move, like stuff ejected from the bed of a pickup, a truck re-tread, or a deer, bike, or ped).
 
Many good points made above, particularly that AVs have many, many more inputs than human drivers, but aren't as good at making sense of them. A great advantage of connected cars could be their ability to communicate with one another. My radar or lidar can see a car across the intersection; we say hi, exchange introductions, and then tell each other what we are doing (e.g. we are both turning left, so nobody needs to yield). All that can happen in milliseconds with dozens of vehicles at a time.

AVs have to be able to drive the way humans drive as a last resort, but they can be significantly augmented with capabilities that humans don't have, like the ability to precisely measure distance and speed or to communicate complex messages with dozens of other vehicles nearly instantaneously. Overlooking all these enhancements would be a terrible waste.
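The introduce-and-exchange-intentions idea can be sketched as a tiny message format. Everything below is invented for illustration; real V2V stacks use standardized formats (e.g. the SAE J2735 basic safety message) rather than ad-hoc JSON:

```python
import json
import time

def make_intent_msg(vehicle_id, lat, lon, speed_mps, heading_deg, maneuver):
    """Build a toy V2V broadcast: who I am, where I am, what I intend."""
    return json.dumps({
        "id": vehicle_id,
        "t": time.time(),       # timestamp, so stale messages can be dropped
        "pos": [lat, lon],
        "speed": speed_mps,
        "heading": heading_deg,
        "maneuver": maneuver,   # e.g. "left_turn", "merge", "straight"
    })

def needs_yield(msg_a, msg_b):
    """Toy rule from the example above: two opposing left turns
    don't cross paths, so neither car needs to yield."""
    a, b = json.loads(msg_a), json.loads(msg_b)
    return not (a["maneuver"] == "left_turn" and b["maneuver"] == "left_turn")

a = make_intent_msg("car-1", 42.353, -71.044, 5.0, 0.0, "left_turn")
b = make_intent_msg("car-2", 42.353, -71.044, 5.0, 180.0, "left_turn")
print(needs_yield(a, b))  # False: both turning left, nobody waits
```

The point is less the rule itself than the latency: a few hundred bytes exchanged over radio resolves the negotiation in milliseconds, versus the seconds of creeping and eye contact human drivers need.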
 

Vehicle-to-vehicle communication is what gives you the ability to do advanced coordination of vehicle movements that will create greater efficiencies over today's driving.

That said... Driving with just cameras is not a last resort. It is the primary way autonomous vehicles can and should drive. Lidar and radar can't be relied on 100% as there are some common materials/paints that won't reflect the beams. You could end up with accidentally stealth objects that are otherwise visible to a visible spectrum camera sensor.

Nor can active communications or GPS be relied on, because both are going to be prone to latency and accuracy issues. Not to mention hacking and jamming. In all cases you want a car that sees another car to rely on its cameras (and supplemental Lidar/Radar) over some other car telling you where it thinks it is based on its best time-stamped communications data. The other sensor or communications data will just be supplemental, and in some cases will have to be disregarded if the cameras don't agree.

And just as with human eyes, you get good, accurate distance information by using two cameras offset from one another by a very precisely measured distance. Better than a human, a car can see in all directions at once.
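That two-camera setup is just stereo triangulation: depth is focal length times baseline divided by pixel disparity, Z = f*B/d, which is also why the baseline has to be measured so precisely. A toy calculation with made-up numbers:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth (m) from stereo disparity: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras; disparity_px: horizontal pixel shift of the same point
    between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 30 cm baseline.
print(stereo_depth(1000, 0.30, 20))  # 15.0 -> object ~15 m away
print(stereo_depth(1000, 0.30, 2))   # 150.0 -> small disparity = far away
```

Note the trade-off this implies: a one-pixel disparity error matters far more at 150 m than at 15 m, so stereo depth degrades quickly with range, which is part of why lidar and radar remain useful supplements.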

Also, don't forget we have an entire infrastructure based on signs that are human readable... construction zones, speed limit changes, accidents, cones, flares etc etc. No way as a society will we suddenly pay huge amounts of money to create additional radio communications based infrastructure for what would initially be a very small elite.

The way you get widespread adoption is by making autonomous cars drive like people first. And before widespread adoption of fully autonomous vehicles, we should get widespread adoption of autonomous braking.

With currently available technology, it should be considered a safety defect and design flaw that cars will allow their drivers to hit things. Autonomous braking should be as standard as airbags within the next few years.

Once people are confident that most cars won't hit things or people, or drive off a cliff, then navigation becomes more about pointing the car in the right direction.
 
My understanding is that autonomous vehicles never rely on just one kind of input, but rather overlay the multiple types of inputs to create a comprehensive picture of the surroundings (camera, LIDAR, RADAR, and SONAR for close approaches like parking). There are situations where each input can be fooled or "blinded" -- cameras, for example, lack the exceptional dynamic range of the human eye and can end up blind in extreme light conditions.
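The overlay idea can be caricatured as weighted evidence from each sensor, so that one blinded input can't hide an object the others see. The weights and threshold below are invented; production systems fuse at the track level with probabilistic filters, not a one-line sum:

```python
# Hypothetical per-sensor trust weights (chosen to sum to 1.0 here).
SENSOR_WEIGHT = {"camera": 0.5, "lidar": 0.3, "radar": 0.2}
DETECTION_THRESHOLD = 0.4

def fused_score(detections):
    """Combine per-sensor confidences in [0, 1] into one score."""
    return sum(SENSOR_WEIGHT[s] * c for s, c in detections.items())

def object_present(detections):
    """Accept a detection only when combined evidence clears the bar."""
    return fused_score(detections) >= DETECTION_THRESHOLD

# Camera blinded by glare, but lidar and radar both see the object:
print(object_present({"camera": 0.0, "lidar": 0.9, "radar": 0.8}))  # True
# Only a weak radar blip, nothing else: treated as noise.
print(object_present({"camera": 0.0, "lidar": 0.0, "radar": 0.3}))  # False
```

The design point is redundancy: no single sensor can veto or fabricate an object on its own, so a failure mode that blinds one modality degrades confidence instead of erasing the obstacle.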
 
The internet/networked connection shouldn't be used for any real-time driving at all; that should all be on-board sensors and cameras. The connectivity piece should handle the overall routing/GPS work that analyzes traffic and traffic patterns and optimally routes it as a whole. The actual automated driving should be on a separate, completely non-networked (or at least not open to the WAN/net the way the GPS/routing/entertainment/etc. systems are) and reasonably self-contained system.

Thanks for these explanations. I think the issue I was having is that people automatically jump to things like eliminating stop lights when discussing connected vehicles, so I had trouble starting to think about what advantages could come from using communication between cars as a tool rather than as the primary method of driving. Things like highway merges, or letting traffic lights adapt to current conditions, come to mind.

Which, by the way, is much more than a human can do. And I again point out we allow humans to drive these machines.

I was referring to the advantages of connected cars over non-connected AVs, not AVs over humans.
 
Your scenario where people are living in their cars seems pretty hellish and pessimistic to me.
Living in vehicles as a way to arbitrage cheap streetspace into otherwise-expensive residential plots is already reality in Silicon Valley, where the public street is the only underpriced real estate in town:

Low-income workers who live in RVs are being 'chased out' of Silicon Valley streets

# of RVs parked on streets:
200+ in Mountain View
45 on just one street in Palo Alto

Santa Barbara banned RVs from all city streets in 2016, but also runs a social-service operation offering 115 parking places (http://sbnbcc.org/safe-parking/)

Self-driving cars will certainly make compliance easier, and may lift some of the social stigma.

Mountain View's solution is requiring them to move every 72 hours, which is a longstanding policy in many places (most Boston towns will tow non-residential cars after 3 days in one place if you call and complain), but entirely based on the fading notion that "attended" == "driven periodically by a human". Self driving cars will be "self-attending" and easily meet the test for "allowed to be there" in most areas.

Three trends favor living in your self-driving car*
1) Luxury [housing/cars] eventually age into affordable [housing/cars]
2) Moore's Law turns big ticket items into "burner phones"
3) The movement away from an ownership economy

*or using it to live in a cheap place (whether parking spot or exurban home), and work in a high-wage place, by consuming more lane-mile-minutes which today we essentially give away free.
 
People should not be living in their car, that is not suitable housing. There are humane ways of sheltering people, like building public housing.
 
Your or my personal "shoulds" and "humanes" are unlikely to withstand an onslaught of falling prices and travel-while-sleeping, and push-button compliance/defiance of government attempts to define a line between bad car living and polite car living.

I reckon a lot of folks in 1900 would have deemed the whole idea of living outside the city, utterly the slave of an automobile, to be unsuitable and inhumane. It didn't stop the arbitrageurs of that era from the auto's alchemy of creating cheap housing out of near-costless gasoline (a waste product of fuel-oil refining) and abandoned farms (killed by hay's collapse* and cheap western wheat).

*Hay had been 25% of agricultural output before the auto and electric transit displaced horsepower
 
People will learn that you can always run in front of or cut off a self driving vehicle, and traffic will still suck.

Well, given that going gradual means smarter driver-assist features, there will still be someone behind the wheel. And I expect that even once cars are fully autonomous, people will sit in the driver's seat just to keep pedestrians from doing exactly that.
 
Boston driverless car company will expand testing citywide

Under city rules, a safety driver is required to sit behind the wheel of any test car and be ready to take control as necessary. That will remain the case with the expanded testing. NuTonomy has not had any safety problems since it began testing in Boston, officials said. Though a relatively small area with only a handful of main streets, the Seaport District is also among the city’s most congested neighborhoods.

...

Bryan Reimer, associate director of the New England University Transportation Center at MIT, said the city’s approach of testing driverless cars in a small area before expanding should be a model across the country. Other states have allowed more testing with less oversight.

“It really leaves Boston as a leader in this field, with a collaborative discussion and policy framework that is being organically developed,” he said. “Take baby steps, then take bigger steps, now take much larger steps.”
 
