
Tesla Thread



1 hour ago, YNM said:

I can tell you that I could more or less predict what other vehicles will do just based on how they've been going, what the vehicle is, and who is driving them.

"if

>BMW ahead

then 

>Expect no turn signals"

Or something to that effect?


9 minutes ago, Codraroll said:

"if

>BMW ahead

then 

>Expect no turn signals"

Or something to that effect?

LOL.

We have a Rover and a BMW, and I want to add something to this, because I notice it when I drive the bimmer and it drives me nuts. While it's fun to call BMW drivers names for not using the signal, there is an engineering reason why (and it's an abject design failure on their part, IMNSHO).

One, my wife and I always use turn signals. Always. I have found myself reflexively signalling the 180 into my carport, in my own driveway. My kids think I'm a monster because, after yelling at non-signalers, I have said that anyone who skips their turn signal 2-3 times in a row should safely arrive at their destination, so no innocent gets hurt, and then simply expire.

In my Rover, I tap the signal and it blinks 3 times. This is for lane changes. For real turns, you move the signal fully, and the turn resets it as you would expect.

On the BMW, tapping the signal makes it blink ONCE. To signal a lane change with more than one blink you must HOLD the signal without fully making it click. If you pull slightly too hard, it clicks, then you must unclick it (often giving an erroneous signal the other way). Holding it delicately while steering is annoying, and having it click so you have to manually un-signal is also pretty awful. So I try and remember to hold it for a few blinks, but often I hit it the one time, then switch lanes (and it clicks and sticks some of the time). It would be very easy to miss the single blink, and some people might just not bother since accidentally having it click is pretty easy.

Since signalling is a habit/muscle-memory thing, maybe they get trained not to signal at all?

So yeah, I bet they could figure out with ML that "if BMW, no turn signal on lane change expected."
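Purely as illustration of what such a learned rule might look like inside a planner (every name, brand class, and probability below is invented for the sketch; this is not anything Tesla actually does):

```python
# Toy sketch: fold a learned per-brand "uses turn signals" prior into
# the estimated probability that a neighboring car is about to change
# lanes. All numbers and names here are invented.

SIGNAL_PRIOR = {"bmw": 0.40, "generic": 0.85}  # P(signals | brand), learned fleet-wide

def p_lane_change(brand: str, signal_on: bool, drift: float) -> float:
    """Crude estimate: lateral drift (m/s toward your lane) is the main
    cue; an absent signal is only weak evidence of "no lane change" for
    brands that rarely signal anyway."""
    p_signal = SIGNAL_PRIOR.get(brand, SIGNAL_PRIOR["generic"])
    base = min(1.0, drift / 0.5)       # drift of 0.5 m/s ~ near-certain
    if signal_on:
        return max(base, 0.9)          # an explicit signal dominates
    # No signal: discount by how informative that absence actually is.
    return base * (1.0 - 0.5 * p_signal)

print(p_lane_change("bmw", signal_on=False, drift=0.3))      # ~0.48, stays wary
print(p_lane_change("generic", signal_on=False, drift=0.3))  # ~0.35, trusts the absence more
```

The point being: the system doesn't need a human-readable rule like "BMWs don't signal"; the discount just falls out of whatever priors reduce crashes in training.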

 


1 hour ago, Codraroll said:

"if

>BMW ahead

then 

>Expect no turn signals"

Or something to that effect?

Ha! Not a lot of Western cars here, and if you see one that's new and/or in good condition, you know the owner is pretty darn loaded. Most makes here are Japanese.

Normally it's: share taxis, or city minibuses (and half-buses), which will stop wherever they like; moms on motorcycles, please stay a mile away (and they aren't any better in cars); delivery guys (on pickups etc.), you'd better keep a good distance; the online ride-hailing drivers, expect a lot of looking at smartphones and barely reacting to stuff; taxis, slow when loaded and fast when empty; the empty trucks and the produce pickups that speed like heck even though they can barely brake for anything; also the night buses. It's often not obvious at first sight, but after a while (like half a minute) you can kind of tell what kind of driver is in the car, i.e. the ride-hailing cars look the same as the others, but the way they drive is different most of the time.

But yeah, there's barely any reason here to expect the signs and signals to be respected, so you just look out for yourself, really. Good thing the roads are almost always packed, so there's no way for people to get up to speed properly; most of the fatal crashes are on toll roads where people can speed up considerably (or anywhere, for daredevil motorcyclists).

39 minutes ago, tater said:

On the BMW, tapping the signal makes it blink ONCE. To signal a lane change with more than one blink you must HOLD the signal without fully making it click. If you pull slightly too hard, it clicks, then you must unclick it (often giving an erroneous signal the other way). Holding it delicately while steering is annoying, and having it click so you have to manually un-signal is also pretty awful. So I try and remember to hold it for a few blinks, but often I hit it the one time, then switch lanes (and it clicks and sticks some of the time). It would be very easy to miss the single blink, and some people might just not bother since accidentally having it click is pretty easy.

What XD

The sole car I've been using, you just turn the signal on and off as needed, though I'm not entirely used to holding it down etc. for reverse turns, as it turns off automatically. The stalk is on the right since it's right-hand drive, but idk if it's the same in the US.

Motorcycle blinkers only turn on when moved and turn off when pressed again, but I use them a lot since it's much, much easier. Perhaps having a large hand helps a bit...


13 minutes ago, tater said:

On the BMW, tapping the signal makes it blink ONCE. To signal a lane change with more than one blink you must HOLD the signal without fully making it click. If you pull slightly too hard, it clicks, then you must unclick it (often giving an erroneous signal the other way). Holding it delicately while steering is annoying, and having it click so you have to manually un-signal is also pretty awful. So I try and remember to hold it for a few blinks, but often I hit it the one time, then switch lanes (and it clicks and sticks some of the time). It would be very easy to miss the single blink, and some people might just not bother since accidentally having it click is pretty easy.

Since signalling is a habit/muscle-memory thing, maybe they get trained not to signal at all?

So yeah, I bet they could figure out with ML that "if BMW, no turn signal on lane change expected."

This sort of AI/machine learning algorithm seems like a recipe for disaster. Not sure how you'd work around it, though.

Suppose, for example, that the machine learns to recognize behavior from certain classes of vehicles (perhaps certain luxury brands) which, as a result, makes it change its behavior around those brands. This allows it to successfully reduce its overall accident rate. However, its change in behavior in turn makes accidents with OTHER vehicles more violent. What's the trade-off? Do you end up with a self-driving algorithm that effectively discriminates against poor people? 

What about prioritization? Is the AI's objective to reduce total property damage? Total injuries? Total number of people injured? Prioritize the safety of the occupant(s)?

Suppose I'm driving a large SUV and a dump truck suddenly stops directly in front of me, and I have the opportunity to swerve either to the right or to the left. If I'm paying attention and I see that there is a motorcycle ahead to my left and a sedan ahead to my right, then I'm probably going to swerve right, hoping in either case to avoid the accident entirely but knowing that the occupants of the sedan are going to be much less injured by a collision than the guy on the motorcycle. Or maybe if the sedan is much closer to me than the motorcycle then I will swerve left, knowing that the motorcyclist is more vulnerable but deciding that I'm less likely to hit him at all. Whatever decision I make, I'm going to be forgiven because it's assumed I will make the best snap decision I have with the information at my disposal.

But when you introduce an AI with access to VASTLY more information, VASTLY more processing power, and (almost) infinitely faster reaction time, things change. How many factors is the AI going to use? What if the AI determines that there are 4 occupants in the sedan and calculates that the totality of minor injuries to them would be greater than the totality of serious injuries to the motorcyclist? What if the AI decides, "This vehicle is extremely safe and so I will not swerve at all, committing to an accident with the dump truck because I know my own driver will only suffer minor injuries"? What if the AI decides, "That is a particularly expensive motorcycle and so the motorcyclist is probably wearing expensive motorcycle armor, so he's probably going to be okay"?
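To make the prioritization question concrete, here's a toy objective-function sketch (every weight and outcome number below is invented): the three maneuvers from the scenario above, where the "right" choice falls entirely out of how the cost terms are weighted.

```python
# Toy objective-function sketch. Outcomes are invented guesses for the
# SUV/dump-truck/motorcycle/sedan scenario; injury values are rough
# expected-severity scores in [0, 1], property damage in dollars.

MANEUVERS = {
    "brake_straight": {"occupant_injury": 0.30, "other_injury": 0.00, "property": 30_000},
    "swerve_left":    {"occupant_injury": 0.05, "other_injury": 0.80, "property": 15_000},  # motorcycle side
    "swerve_right":   {"occupant_injury": 0.10, "other_injury": 0.35, "property": 20_000},  # sedan side
}

def cost(outcome, w_occupant, w_other, w_property):
    # Normalize property damage so a $100k loss weighs like a 1.0 injury.
    return (w_occupant * outcome["occupant_injury"]
            + w_other * outcome["other_injury"]
            + w_property * outcome["property"] / 100_000)

def choose(**weights):
    return min(MANEUVERS, key=lambda m: cost(MANEUVERS[m], **weights))

print(choose(w_occupant=1, w_other=1, w_property=1))  # 'brake_straight' (utilitarian mix)
print(choose(w_occupant=1, w_other=0, w_property=0))  # 'swerve_left' (occupants-only: toward the motorcycle)
```

Same sensors, same estimates, opposite maneuvers, purely from the weighting.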

Any statistically collected data reflects our very racist and classist society, which means any machine learning algorithm that uses that data is going to end up being... well, racist.


11 minutes ago, sevenperforce said:

But when you introduce an AI with access to VASTLY more information, VASTLY more processing power, and (almost) infinitely faster reaction time, things change. How many factors is the AI going to use? What if the AI determines that there are 4 occupants in the sedan and calculates that the totality of minor injuries to them would be greater than the totality of serious injuries to the motorcyclist? What if the AI decides, "This vehicle is extremely safe and so I will not swerve at all, committing to an accident with the dump truck because I know my own driver will only suffer minor injuries"? What if the AI decides, "That is a particularly expensive motorcycle and so the motorcyclist is probably wearing expensive motorcycle armor, so he's probably going to be okay"?

I mean, you don't need to imagine; just try developing it in India or Indonesia. The kind of knowledge and courage (and a little stupidity) that I mentioned in my earlier post is kind of the basics of driving here.


On 1/15/2021 at 10:06 AM, sevenperforce said:

This sort of AI/machine learning algorithm seems like a recipe for disaster. Not sure how you'd work around it, though.

Suppose, for example, that the machine learns to recognize behavior from certain classes of vehicles (perhaps certain luxury brands) which, as a result, makes it change its behavior around those brands. This allows it to successfully reduce its overall accident rate. However, its change in behavior in turn makes accidents with OTHER vehicles more violent. What's the trade-off? Do you end up with a self-driving algorithm that effectively discriminates against poor people? 

It's not like colliding with other vehicles is desirable. The goal would be to not collide with anything. This seems like a pretty unlikely case to even happen.

The garbage notion of "discrimination against poor people": I'm not seeing it. Beater cars are perhaps more likely to be driven by younger people who can only afford that, who are more aggressive, have less time in type, and are not as skilled (or who rely on their better reflexes, etc.), so maybe there is a signal there. The "reward" for the ML system is always "don't crash," so it learns how to avoid crashes, even via specific "black box" rules no human ever groks that might well key on certain types of cars. Doesn't matter; fewer crashes is fewer crashes.

 

Quote

What about prioritization? Is the AI's objective to reduce total property damage? Total injuries? Total number of people injured? Prioritize the safety of the occupant(s)?

Best crash is no crash. Do that first.

The [silly] "trolley problem" notions of "run over those 4 fat people, or that little kid" are not realistic cases. Cars will always avoid crashes. Short of that, if I spend my money on the vehicle, I expect it to protect my occupants; the other cars (self-driven or human-driven) can deal with themselves. All things equal, if that became a brand issue, with Brand A saying "we protect these groups of people ahead of you, because of (pick ugly rationale here)" and Brand B saying "we protect the occupants," I buy B. If Brand A has 100X fewer crashes than B (and both are better than people), then I look at how often the trolley problem is an actual issue, and if it's meaningless virtue signalling I get the just plain safer car (because A in that case is safer, even if virtue signalling). But as I said, it seems like a problem to talk about over drinks, not a real-world event.

I'd always make "no crash" priority #1 for the car itself.

 

Quote

Suppose I'm driving a large SUV and a dump truck suddenly stops directly in front of me, and I have the opportunity to swerve either to the right or to the left. If I'm paying attention and I see that there is a motorcycle ahead to my left and a sedan ahead to my right, then I'm probably going to swerve right, hoping in either case to avoid the accident entirely but knowing that the occupants of the sedan are going to be much less injured by a collision than the guy on the motorcycle. Or maybe if the sedan is much closer to me than the motorcycle then I will swerve left, knowing that the motorcyclist is more vulnerable but deciding that I'm less likely to hit him at all. Whatever decision I make, I'm going to be forgiven because it's assumed I will make the best snap decision I have with the information at my disposal.

But when you introduce an AI with access to VASTLY more information, VASTLY more processing power, and (almost) infinitely faster reaction time, things change. How many factors is the AI going to use? What if the AI determines that there are 4 occupants in the sedan and calculates that the totality of minor injuries to them would be greater than the totality of serious injuries to the motorcyclist? What if the AI decides, "This vehicle is extremely safe and so I will not swerve at all, committing to an accident with the dump truck because I know my own driver will only suffer minor injuries"? What if the AI decides, "That is a particularly expensive motorcycle and so the motorcyclist is probably wearing expensive motorcycle armor, so he's probably going to be okay"?

I think these sorts of trolley problems are amusing, but not a thing in the real world. Honestly, I'd expect self-driving cars to thread a lot of needles. There's a vid of a Tesla auto-braking in an intersection when another car runs a light; the light-runner instead hits a car that didn't auto-brake.

Deal with threats in order. The motorcycle-vs-sedan issue above seems like a physics issue. Car vs. car, all moving the same direction, is a low-energy event. The motorcycle (or its rider) ends up coming through the windshield, and hence is a bad choice vs. a roughly flat-bumper rear-end at low closing velocity.

More likely, of course, is that the 1.25-1.5 seconds it would have taken my arms to start steering from the moment I noticed the truck stop (itself likely delayed from the instant it happened) would already have been spent slowing the self-driving car. The timeline is:

1. Truck starts decelerating from 30 m/s. Typical semis stop in ~160 m. Cars are more like 65 m.

2. That information reaches me after a vanishingly small time, and I react 0.7 to 3 seconds later (1.5 s tends to be used as an average).

In the case of the self-driving car, it starts braking some fraction of a second (milliseconds?) after the truck brakes, so that 0.7 to 3 second period is spent with the car already slowing. Cars already stop in less than half the distance of trucks, so there is no trolley problem; the car simply slows behind the truck (and is probably more likely to be hit from behind).
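A quick back-of-the-envelope check bears this out, using the figures above (30 m/s, ~160 m truck stop, ~65 m car stop; the 2-second following gap is my assumption):

```python
# Back-of-the-envelope: does the following car stop before reaching
# the decelerating truck? Distances measured from the moment the
# truck starts braking; following gap assumed to be 2 seconds.

V0 = 30.0            # initial speed of both vehicles, m/s
TRUCK_STOP = 160.0   # typical semi stopping distance, m
CAR_STOP = 65.0      # typical car stopping distance, m
GAP = 2.0 * V0       # assumed 2-second following distance, m

def distance_travelled(reaction_s):
    """Constant speed during the reaction time, then a full stop."""
    return V0 * reaction_s + CAR_STOP

for label, reaction in [("human, fast", 0.7), ("human, avg", 1.5),
                        ("human, slow", 3.0), ("self-driving", 0.05)]:
    margin = (GAP + TRUCK_STOP) - distance_travelled(reaction)
    print(f"{label:>12}: margin to truck's rear = {margin:+6.1f} m")
```

Even the slow human makes it here, because the semi needs so much road to stop; the self-driving car makes it with an enormous margin (and spends the saved time already slowed, rather than being a hazard to whoever is behind it).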

So the bike on the left speeds off at highway speed, as does the sedan. The self-driving car now has to decide if it can change lanes to avoid the car BEHIND IT hitting it. That's a more likely case. Now there's a motorcycle in one lane and a car in another; can it bang into either of those to avoid being rear-ended? I have no idea, but I would expect it would not be programmed to actively crash into another vehicle, and would expect it to eat the rear-end collision (which for Tesla is fine; their cars now hold the top spots in crash testing of any cars, ever: 1, 2, 3, and 4).

12 minutes ago, tater said:

self-driving car now has to decide if it can change lanes to avoid the car BEHIND IT hitting it.

The closest I've ever come to being squished was a time I stopped for a long yellow light, just before it turned red. A fast semi with a trailer had to swerve around me on the right, going at least 40 mph.

I think in terms of risk mitigation strategy for an AI, simpler will be better. As in: IF roadway is blocked THEN stop quickly. Even with fast reactions, fancy maneuvers are probably going to get you in a more complicated situation.
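A minimal sketch of that rule ordering (the thresholds, the assumed 8 m/s² braking figure, and the lane_clear flag are all invented for illustration):

```python
# "Simple rules first": always brake for a blocked lane; only consider
# a lane change when braking alone provably can't avoid the obstacle
# AND an adjacent lane has been verified clear.

def plan(v: float, gap: float, decel: float = 8.0,
         lane_clear: bool = False) -> str:
    """v: speed (m/s); gap: distance to the obstruction (m);
    decel: assumed maximum braking (m/s^2)."""
    braking_dist = v * v / (2 * decel)
    if braking_dist <= gap:
        return "brake"                # the simple rule covers it
    if lane_clear:
        return "brake + change lane"  # rare fallback, only if verified
    return "brake"                    # eat the (now much slower) impact

print(plan(v=30, gap=70))                   # needs 56.25 m -> "brake"
print(plan(v=30, gap=40))                   # can't stop in time, lane unknown -> still "brake"
print(plan(v=30, gap=40, lane_clear=True))  # -> "brake + change lane"
```

Note the default: when in doubt, brake anyway. Scrubbing speed improves every possible outcome, while a blind swerve only improves some of them.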


46 minutes ago, Nightside said:

I think in terms of risk mitigation strategy for an AI, simpler will be better. As in: IF roadway is blocked THEN stop quickly. Even with fast reactions, fancy maneuvers are probably going to get you in a more complicated situation.

I was going to ask about the manoeuvring capabilities of Teslas (and electric-drive cars generally) vs. combustion-engine cars when it comes to the AI interface... Electric drives are capable of much finer control than mechanical drivetrains; is that an advantage for AI driving algorithms on electric-drive cars over mechanical-drive cars?


  • 3 weeks later...
7 minutes ago, StrandedonEarth said:

As in into a tree? 

been there, done that, got the dent

Trees are particularly unforgiving, but you don't actually even have to hit anything. Just the moment when you realize you are now, as we say, "a passenger" rather than being in control is enough to turn fun into fear.


The right thing to do in a developing incident 99.9% of the time is to throw on the brakes to the limit of grip, and do so as quickly as possible.

If you still hit the vehicle in front then that's your problem for being too close. Self-driving cars don't do that.

If someone rear-ends you that's their problem for being too close. Self-driving cars don't do that.

Any case that isn't covered by the above that might involve dodging would require superhuman reflexes and perfect situational awareness. Dodging out of lane doesn't help if there are oncoming vehicles or obstacles you're not aware of. It involves a lot of hope and desperation. Humans don't actually dodge very well.

Self-driving cars might actually be capable of dodging to a higher standard, but they won't, because 99.9% of the time applying the brakes in good time is the least risky option, which they do very well, and any accident caused by dodging opens questions of liability.


I honestly wouldn't question the ability of machines to control things to a higher precision than humans can. Given existing tech like CNC machining or ATO on trains, it's already abundantly clear that we simply don't trust humans to work to such tolerances, and even when we do, we often opt to have a human oversee a machine doing it instead (like autopilot on aircraft: it doesn't remove the pilot's liability, but it greatly lessens the load).

The real question, IMO, is how exactly we "teach" the algorithm about things that even humans are often not entirely sure about. Do you get the best in the field to work on it, or outsource the menial data cleaning to the cheapest bidder?

 

Although if this is about that spinning-car thing, then that's a physical limit. I'd be willing to bet that whoever owns that vehicle didn't put on winter tires or take other measures against slipping on ice/snow, and might be risking an offense under the local traffic code.


On 2/7/2021 at 9:49 PM, YNM said:

Depending on where exactly that happened, it might be an actionable offense. (for any curious English speakers.)

On public roads or public parking spaces it is, here in Norway. You can end up losing your driving licence, but I think that's more if you do it on dry asphalt. Now, testing your car's limits on an empty parking lot is OK if you have margins and you cannot hit anything.

Now, I remember my first car back in the 1980s: rear-wheel drive, light rear, not very good tires. If the car suddenly decides to swap ends and go backwards at 80 km/h, you get a bit scared.

6 minutes ago, magnemoe said:

Now, testing your car's limits on an empty parking lot is OK if you have margins and you cannot hit anything.

Yeah, like I said, it depends on where exactly that happened. On private roads it might be OK; on public roads you might've committed an offense. (The first "might" is because some private roads are used often enough to be regarded as public roads; the second is because it depends a lot on your jurisdiction and enforcement.)


  • 3 months later...
6 hours ago, kerbiloid said:
[attached image]

So, Tesla can be charged only from solar and wind power stations?

It can distinguish and reject the electricity produced out of the fossil fuel?

Clever, clever Tesla. Such smart.

“ Nope, this electron is dirty, it came from that coal plant, it can go charge a Leaf... Oooh, this one’s breezy, and that one’s shiny.... Oh hey, a wet one!  Now what do I do with this green glowey one?”


"Proof Of Work" cryptocurrencies are an environmental disaster. I applaud Tesla for taking this step (even though of course it just means someone has to convert their bitcoins to dollars to use their bitcoin stash to buy a Tesla).

