16 hours ago, Nightside said:

I don't really follow auto racing. Are there any top level races that allow electric cars?

Formula E is a bit like F1. Acceleration is similar, but the electric cars top out at around 170 mph vs. 230 mph for F1, and each driver has two cars and swaps halfway through because the batteries can't last the full distance.

  • 3 months later...

Probably the flower bouquets.

 

Anyway, while we're on the thread of AI cars, I have to say that humans can't identify everything perfectly either. I've missed a few things on those CAPTCHA tests (things the system takes for granted, so it kept asking me for them), plus I'm sure I've miscategorized zoomed-in rumble strips (the dotted roadside lines) as pedestrian crossings without thinking many times (and I'd always realize what it was after looking at it again).

Is anyone properly checking that the training data the system is fed is entirely true? And is there any retrospective checking of the things the system identifies once it's out there?

On 9/25/2020 at 1:30 AM, Nightside said:

Are there any top level races that allow electric cars?

There are efforts to build autonomous racing cars, and events run entirely with autonomous cars, I think. They're all electric. That's separate from Formula E.

On 1/13/2021 at 10:09 AM, YNM said:

Anyway, while we're on the thread of AI cars, I have to say that humans can't identify everything perfectly either. I've missed a few things on those CAPTCHA tests (things the system takes for granted, so it kept asking me for them), plus I'm sure I've miscategorized zoomed-in rumble strips (the dotted roadside lines) as pedestrian crossings without thinking many times (and I'd always realize what it was after looking at it again).

Is anyone properly checking that the training data the system is fed is entirely true? And is there any retrospective checking of the things the system identifies once it's out there?

Obligatory:

self_driving_issues.png


@sevenperforce Given the members of the public in some places who drive like maniacs, I'm not sure that comic isn't at least partly true. Ignorance is already bad enough as it is with human drivers.

Also, given that quite a lot of people ignore signs and instructions anyway, if anything the sole reason we haven't done that to human drivers is that they're too oblivious to notice them... or they just ignore them anyway. (More examples, because laughing at them is great.)

That's why cars aren't exactly the best fit for cities, but that crosses over into a different story.

My real concern is: if AI drivers are only learning from how we drive, I question whether any of this would really change, and whether they would really be better drivers than we are.

EDIT: Although I suppose there'd be a few things they wouldn't try at all.

 

Edited by YNM
Just now, YNM said:

@sevenperforce Given the members of the public in some places who drive like maniacs, I'm not sure that comic isn't at least partly true. Ignorance is already bad enough as it is with human drivers.

Also, given that quite a lot of people ignore signs and instructions anyway, if anything the sole reason we haven't done that to human drivers is that they're too oblivious to notice them... or they just ignore them anyway.

That's why cars aren't exactly the best fit for cities, but that crosses over into a different story.

My real concern is: if AI drivers are only learning from how we drive, I question whether any of this would really change, and whether they would really be better drivers than we are.

Elon has said that the challenge is not to get AI to be as good as a human driver, but to get AI to be SO MUCH BETTER than a human driver that you KNOW it's safer to get in the car with an AI.

The tricky thing is that humans are not all the same. Human reaction time, human distraction level, human ability to solve edge cases and react to new situations, human memory -- these all lie on bell curves, and combine to form one big bell curve. We heuristically assume that people we're getting in the car with are average drivers. But to have the same level of confidence with AI, the software needs to be way, way, way out on the upper side of the bell curve. The AI needs to be better than 99.99% of human drivers.
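Just to put rough numbers on that bell-curve argument, here's a toy Python sketch (everything here is invented for illustration; real driver skill obviously isn't three unit Gaussians):

```python
import random
import statistics

random.seed(0)  # deterministic toy run

# Toy model (all numbers made up): a driver's overall skill is the sum
# of several independently bell-curved abilities.
def driver_skill():
    reaction = random.gauss(0, 1)
    attention = random.gauss(0, 1)
    edge_case_handling = random.gauss(0, 1)
    return reaction + attention + edge_case_handling

drivers = sorted(driver_skill() for _ in range(100_000))

# "Better than 99.99% of human drivers" means clearing this percentile.
threshold = drivers[int(0.9999 * len(drivers))]
print(f"mean skill: {statistics.mean(drivers):.2f}")
print(f"99.99th-percentile skill to beat: {threshold:.2f}")
```

With 100,000 simulated drivers, the 99.99th-percentile skill sits several standard deviations above the mean, which is the gap the software has to clear.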

5 minutes ago, sevenperforce said:

Elon has said that the challenge is not to get AI to be as good as a human driver, but to get AI to be SO MUCH BETTER than a human driver that you KNOW it's safer to get in the car with an AI.

And honestly I'm not sure how to do that if we're the ones teaching them. Hence my question about whether anyone is checking the training data, as well as reviewing the reactions, etc.

And AI driving in crowded or poorly built conditions, like in a lot of places in Asia or Africa, is going to be something that needs to be surmounted at some point, I suppose.

Edited by YNM
15 minutes ago, YNM said:

And honestly I'm not sure how to do that if we're the ones teaching them. Hence my question about whether anyone is checking the training data, as well as reviewing the reactions, etc.

And AI driving in crowded or poorly built conditions, like in a lot of places in Asia or Africa, is going to be something that needs to be surmounted at some point, I suppose.

Edge cases are always going to be the problem.

There will be accidents in AI cars. The important thing, however, is that the accident happens because of a set of unavoidable conditions, not because of a software error. They need to be able to analyze it afterward and say something like, "The other vehicle spun out so abruptly and so unpredictably that there was no combination of control inputs which would have prevented the collision, but the AI's reaction time was able to reduce the collision impact by 20%, well before a human driver would have been able to react."

The edge cases are the thing that will cause a problem because we don't know what we don't know. What does AI do if a crowd of protesters enters a city street? You don't want it to sort through a bunch of options and end up with something that results in a sudden burst of acceleration, for example. What does AI do if it thinks it detects a pedestrian on an interstate highway, where sudden braking could cause a multi-vehicle collision? 

What you want to avoid is a post-accident press conference where the software engineer says, "Well, rain splatter on the sensors caused the computer to determine there was a 55% likelihood that the cyclist had suddenly swerved into our lane, when in fact the cyclist had not changed her direction of travel. This is why the vehicle swerved and ran straight into a crowded dining patio at 35 mph." 

19 minutes ago, sevenperforce said:

Edge cases are always going to be the problem.

Yeah, and that's where I'm saying that even for a human it's a problem... so how are we supposed to teach them?

The only other worry I have is about infrastructure development itself. There are places around the world where cars are fully segregated from other traffic, and also places where instead you're forced to merge and give way because you're clearly in the minority of users. Those are the good examples. Problems arise where there's barely enough infrastructure: where the ideal would be segregation but there isn't any yet.

There's also the question of how it potentially affects car usage, etc. Ownership might go down, which would ease the parking problem, but in terms of capacity and density, car-based transport isn't exactly the densest option available, and density matters for city planning. I know the pandemic might prompt a shift, and in some places the shift has happened in the right direction, e.g. Paris, which now plans to fully pedestrianize large parts of the inner city, starting from pop-up pedestrian and cycling spaces. But like I said, the problem isn't necessarily the individual car itself, but rather who holds control over it: governments, planners, businesses, and the populace.

Edited by YNM

The whole training paradigm is edge cases. Staying on the road and nominal driving are a solved problem, I think. Tesla seems to take any "disengagement" data and use it to train.
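A hypothetical sketch of what "train on the disengagements" could look like at its simplest (the log format and the 5-second margin are invented here; whatever Tesla actually does is far more involved):

```python
# Invented log format: scan drive logs and keep a time window around each
# moment the human driver took over ("disengagement"). Those windows would
# become the interesting training examples.
logs = [
    {"t": 10.0, "event": "nominal"},
    {"t": 11.5, "event": "disengagement"},  # driver grabbed the wheel
    {"t": 30.2, "event": "nominal"},
    {"t": 42.7, "event": "disengagement"},
]

def training_windows(logs, margin_s=5.0):
    """Return (start, end) time windows around each disengagement."""
    return [(e["t"] - margin_s, e["t"] + margin_s)
            for e in logs if e["event"] == "disengagement"]

print(training_windows(logs))
```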

2 hours ago, sevenperforce said:

The tricky thing is that humans are not all the same. Human reaction time, human distraction level, human ability to solve edge cases and react to new situations, human memory -- these all lie on bell curves, and combine to form one big bell curve. We heuristically assume that people we're getting in the car with are average drivers. But to have the same level of confidence with AI, the software needs to be way, way, way out on the upper side of the bell curve. The AI needs to be better than 99.99% of human drivers.

On the plus side, human reaction times at best are slow compared to a computer. So the car gets some leeway there in terms of how many decision cycles it gets before it has to act.
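Back-of-envelope version, with assumed numbers (a roughly 1-second perception-reaction time is a commonly cited figure for braking, and the loop rate here is hypothetical):

```python
# Back-of-envelope (assumed numbers): how many software decision cycles
# fit inside a typical human brake-reaction time.
human_reaction_s = 1.0   # ~1 s perception-reaction time, commonly cited
loop_rate_hz = 36        # hypothetical camera/control loop rate
cycles = human_reaction_s * loop_rate_hz
print(f"{cycles:.0f} decision cycles before a human would even touch the brakes")
```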

Level 2 seems well in hand (highway driving as an automated-driving cruise control). Full autonomy? Dunno, hard to tell.

One thing I think Teslas need to add are camera washers. Not a huge issue for my friends with Teslas here in NM, but I bet in places with snow the cameras get pretty gunked up.

4 hours ago, tater said:

One thing I think Teslas need to add are camera washers. Not a huge issue for my friends with Teslas here in NM, but I bet in places with snow the cameras get pretty gunked up.

Ooof yeah. The gritty salty spray during winter driving here can cause quite the buildup. Gotta make sure the blinker washer fluid is topped up regularly during winter. 

6 hours ago, tater said:

On the plus side, human reaction times at best are slow compared to a computer. So the car gets some leeway there in terms of how many decision cycles it gets before it has to act.

Skilled drivers don't react to things. They anticipate them. This is a major difference between AI and humans.

Edited by mikegarrison
38 minutes ago, mikegarrison said:

Skilled drivers don't react to things. They anticipate them. This is a major difference between AI and humans.

Driving is amazingly complex, and I think it's surprising how well it works (for HUMANS driving) considering how dopey many humans are.

That said, planning ahead is an aspect of driving, but there are loads of drivers that don't seem to do all that much of it (all the people I see looking at phones, for example). Teslas do some planning (estimating where blind curves are going, etc.), and do it pretty well. I suppose the question is where the edge cases come in—are those more anticipation, or more reaction? Can any of the former be mitigated by not anticipating the action of another vehicle, but instead noticing it move on a timescale humans would not notice, then reacting? I have a feeling, as complex as it is, it's some of both.

Deciding that the idiot ahead of you on the left, going slow in the fast lane, is probably gonna cut into the right lane and turn at the next light (never using a turn signal, what are those?) is anticipation, certainly :D The question is: can the Tesla notice the lane change so quickly, and slow slightly, that it doesn't matter?

 

1 hour ago, tater said:

That said, planning ahead is an aspect of driving, but there are loads of drivers that don't seem to do all that much of that

That's why I said "skilled" drivers. I'm not the best out there, but I have over 100 days on racetracks. I've taught performance driving to people, and it's amazing how hard it is to hammer into people that they need to always be looking ahead and anticipating what might be going to happen. You drive a racetrack with your eyes, not your reaction time.

2 hours ago, mikegarrison said:

That's why I said "skilled" drivers. I'm not the best out there, but I have over 100 days on racetracks. I've taught performance driving to people, and it's amazing how hard it is to hammer into people that they need to always be looking ahead and anticipating what might be going to happen. You drive a racetrack with your eyes, not your reaction time.

Cool, what do you race? (wife and I always wanted to do a racing camp thing for vacation, never worked out with RL, kids, etc)

The hardest anticipation aspect for machine learning seems like it will be "reading" the motions of the other cars, or what you can see of the people inside them. You can tell when some people are looking down by what you see in their mirrors, for example. I adjust my assumptions about what they might do, knowing they are reading/texting.

The only serious accident I was ever in was the sort where reaction would have mattered. In effect my good reaction time, and decent situational awareness saved me, but I assume multi-cameras, and even faster reaction might have avoided any accident at all. I was stopped behind a car at a red light. Light turned green, car ahead of me went into intersection, then I did... I caught movement on left side, hit brakes, 1970s land yacht ran the red—he had to swerve into the left turn only lane to do so, cars were already stopped at the light.

My car was totaled, but he only took the bit off in front of the front wheels (Honda Accord). He fled the scene, too, after briefly stopping. I went into the motel at that light to call the cops, and a police car showed up before I could. I ran out to tell him what happened, and he said, "Hang on, he didn't get far. He rear ended a police cruiser at Carlisle." Cops told me later he said that his brakes were out, but he came to a complete stop, saw me walking towards his car, then took off. Police had already said they didn't believe him, but were having his car towed to their service area for inspection and a brake check. No need to document the accident well when the same dude hits a police car. They had a crime scene photographer out there, lol. Got to see him cuffed, too. Best part? He was actually insured.

Edited by tater
36 minutes ago, tater said:

Cool, what do you race?

I don't race. I do (well, did -- I haven't been on the track in a while) what are called "track days" or "high performance driving events". It's track driving without the "racing" part -- the only thing you race against is your own laptimer, trying to improve your personal best.

I have an S2000 modified for the track with much of the interior removed and replaced by a welded-in roll bar and race seats and harnesses.

Here's a lap I did at Sears Point where I got a little too excited about catching the (M3, I think?) in front of me.

https://www.youtube.com/watch?v=3ffxaK0XD7E

Edited by mikegarrison

In response to "how do humans train a better AI": there's far more time available for review than there is in the moment.

Also, most accidents are caused by distracted idiots. AI will always be focused on the road and will always have the same decision-making capacity.

7 hours ago, RCgothic said:

There's far more time available for review than there is in the moment.

Yeah, that's what I was asking about. So I presume there are indeed people who review the stuff, right? Or is that the wrong presumption to have? That was my original question.

9 hours ago, tater said:

The hardest anticipation aspect for machine learning seems like it will be "reading" the motions of the other cars, or what you can see of the people inside them.

I can tell you that I can more or less predict what other vehicles will do just based on how they've been moving, what the vehicle is, and who's driving it. I don't drive cars very often, though; I'm more often on motorcycles. But I've almost always ridden up front for a good while now, so I observe things and point things out. That being said, given the lockdown we haven't been going out very often.

I guess they'd just need a separate AI trained to do that specific thing, so one part deals with recognizing the object while another deals with prediction...
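That two-stage split could be sketched in toy Python like this (the Track type and the constant-velocity "predictor" are stand-ins I made up; real stacks use learned models for both stages):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Toy two-stage split: stage 1 (a detector/tracker, not shown) would emit
# Track objects; stage 2 predicts where each track goes next.

@dataclass
class Track:
    label: str
    positions: List[Tuple[float, float]]  # recent (x, y) observations

def predict_next(track: Track) -> Tuple[float, float]:
    """Stage 2: extrapolate the next position from the last two observations."""
    (x0, y0), (x1, y1) = track.positions[-2:]
    return (2 * x1 - x0, 2 * y1 - y0)

cyclist = Track("cyclist", [(0.0, 0.0), (1.0, 0.5)])
print(predict_next(cyclist))  # constant-velocity guess: (2.0, 1.0)
```

The point of the split is the interface: the recognizer only has to say what and where things are, and the predictor only has to say where they're going.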

9 hours ago, tater said:

The only serious accident I was ever in was the sort where reaction would have mattered. In effect my good reaction time, and decent situational awareness saved me, but I assume multi-cameras, and even faster reaction might have avoided any accident at all.

To be fair, stoplights actually make you less aware of your surroundings... Hence they're not exactly ideal in a city, but if you're pumping vehicles through, there's very little else one can do short of grade separation.

Here though regardless of the intersection control, you just have to watch out for people.

2 minutes ago, YNM said:

I can tell you that I can more or less predict what other vehicles will do just based on how they've been moving, what the vehicle is, and who's driving it. I don't drive cars very often, though; I'm more often on motorcycles. But I've almost always ridden up front for a good while now, so I observe things and point things out. That being said, given the lockdown we haven't been going out very often.

I guess they'd just need a separate AI trained to do that specific thing, so one part deals with recognizing the object while another deals with prediction...

It's entirely possible the ML ends up doing this, though in a different way than we do.

I mean, with enough camera resolution, it's not impossible that the system eventually "figures out" that when it sees the bill of a hat in a rearview mirror (because the driver is looking down, what should be edge-on in the mirror is seen "from the top"), the car tends to be erratic, under speed, whatever. The system doesn't "know" anything; it will just possibly have patterns it associates with car behaviors. That, or the micro-movements of cars themselves telegraph the things we recognize as a type of driver (doing makeup, reaching back to the kids, whatever signals to us), but which the ML will simply see as erratic vehicle movements at a micro level.

I have no idea when full self-driving becomes a thing, but it really is fascinating.

13 minutes ago, tater said:

The system doesn't "know" anything; it will just possibly have patterns it associates with car behaviors.

Yeah, good point. The system is merely seeing things as byte values... There isn't any particular reason for it to group things together the way we do; the whole network is just set up to be able to choose things.
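A trivial illustration of the "just byte values" point: to the model, a frame is nothing but numbers, and any grouping into "pedestrian" or "rumble strip" is a label we impose from outside.

```python
# A tiny made-up 2x4 "image": each value is a brightness byte, 0-255.
# Nothing in the data itself says what the bright region "is".
frame = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
flat = bytes(v for row in frame for v in row)
print(list(flat))  # the model's entire "view": [0, 0, 255, 255, 0, 0, 255, 255]
```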


When a machine learning system can explain why it does things to us in plain language... we'll have AGI, lol.
