tater

Members
  • Content Count

    18,109
  • Joined

  • Last visited

Community Reputation

25,811 Excellent

About tater

  • Rank
    Rocket Surgeon

Profile Information

  • Location
    On the side of a mountain in New Mexico

Recent Profile Visitors

16,544 profile views
  1. It's not like colliding with other vehicles is desirable. The goal would be to not collide with anything. This seems like a pretty unlikely case to even happen. The notion of "discrimination against poor people" is garbage; I'm not seeing it. Beater cars are perhaps more likely to be driven by younger people who can only afford that, who are more aggressive, have less time in type, and are not as skilled (or rely on their better reflexes, etc.). Maybe there is a signal there. The "reward" for the ML system is always "don't crash," so learning how to avoid crashes, even with specific "black box
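To make the "reward is always don't crash" point concrete, here's a toy sketch. All numbers and the function itself are invented for illustration; this is not any vendor's actual training objective. The idea is just that the hard signal is collision vs. no collision, with an optional graded penalty for shrinking gaps so the policy learns margins, not only outcomes:

```python
def step_reward(collided: bool, min_gap_m: float, safe_gap_m: float = 2.0) -> float:
    """Toy per-timestep reward for a collision-avoidance policy (illustrative only)."""
    if collided:
        return -100.0  # terminal penalty: the one hard "rule" is don't crash
    if min_gap_m < safe_gap_m:
        # graded penalty as the gap to the nearest object shrinks below the margin,
        # so near-misses are discouraged before they become collisions
        return -(safe_gap_m - min_gap_m) / safe_gap_m
    return 0.1  # small bonus for uneventful driving
```

Nothing in there mentions who is driving the other car; any correlation with beaters or driver age would only show up indirectly, through whatever patterns predict collisions.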
  2. LOL. We have a Rover, and a BMW, and I want to add something to this, because I notice it when I drive the bimmer, and it drives me nuts, and while it's fun to call BMW drivers names for not using the signal, there is an engineering reason why (and it's an abject design failure on their part, IMNSHO). One, my wife and I always use turn signals. Always. I have found myself signalling reflexively while turning the 180 into my carport—in my own driveway. My kids think I'm a monster because I have said, after yelling at non-signalers, that anyone who doesn't use their turn signal 2-3 times in a
  3. When a machine learning system can explain why it does things to us in plain language... we'll have AGI, lol.
  4. It's entirely possible the ML ends up doing this, though in a different way than we do. I mean, with enough camera resolution, it's not impossible that the system eventually "figures out" that when what we see as the bill of a hat appears in the rearview mirror (because the driver is looking down, so what should be edge-on in the mirror is seen "from the top"), the car is erratic, underspeed, whatever. The system doesn't "know" anything; it will just possibly have patterns it associates with car behaviors. That, or the micro-movements of cars themselves telegraph things we notice as the type of driver (doing makeu
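The "doesn't know anything, just associates patterns" idea can be sketched in a few lines. The cue names and observation data here are completely made up for illustration; a real system would be learning from millions of frames, not counting labeled tuples, but the principle is the same: co-occurrence, not understanding.

```python
from collections import Counter

# Invented data: (visual cue seen in the other car, did erratic driving follow?)
observations = [
    ("hat_brim_top_down", True), ("hat_brim_top_down", True),
    ("hat_brim_top_down", False), ("normal_mirror_view", False),
    ("normal_mirror_view", False), ("normal_mirror_view", True),
]

seen = Counter(cue for cue, _ in observations)
erratic = Counter(cue for cue, was_erratic in observations if was_erratic)

# Conditional rate of erratic driving given each cue. The system never "knows"
# the cue is a hat brim or that the driver is looking down; it only has counts.
rates = {cue: erratic[cue] / seen[cue] for cue in seen}
```

On this toy data the "hat brim seen from the top" cue comes out at a 2/3 erratic rate versus 1/3 for a normal mirror view, which is all "association" means here.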
  5. "Needs to be a few hours at most." is "aspirational" I assume.
  6. Yeah, NS seems more forgiving. The fatal accident of Spaceship 1 might be impacting my POV a little, but I see NS as reliant on technology, and I see Virgin's Spaceship as reliant on both technology, AND pilot ability. Since most all aircraft crashes are pilot error... that seems like a weak link for what is an experimental aircraft.
  7. Given it's a few miles south of the NM state line, I think it looks like the Earth. Course I like "the bones of the Earth." Yeah, I'd do the NS flight as well (if it didn't involve spending ridiculous amounts of money). I'd even pay up to some non-ridiculous amount*. I was hiking with a friend yesterday and we talked about NS vs the Virgin version of suborbital tourism. I told him if I won a free flight on Virgin I'd sell it, but I'd go on the BO flight if I had a free opportunity. *I'm sure my definition of not ridiculous is an order of magnitude or more lower
  8. Underwhelming. I mean, it would be cool to do if a few hundred grand was pocket change, but their launches are not super interesting, and you'd think they'd hire some random YouTuber to do their streaming; BO only has 172k subs, lol. Seriously, they could hire any number of randos with more subs who could give them advice on better video presentations.
  9. Cool, what do you race? (wife and I always wanted to do a racing camp thing for vacation, never worked out with RL, kids, etc) The hardest anticipation aspect for machine learning seems like it will be "reading" the motions of the other cars, or what you can see of the people inside them. You can tell when some people are looking down by what you see in their mirrors, for example. I adjust my assumptions about what they might do, knowing they are reading/texting. The only serious accident I was ever in was the sort where reaction would have mattered. In effect my good reaction time,
  10. Driving is amazingly complex, and I think it's surprising how well it works (for HUMANS driving) considering how dopey many humans are. That said, planning ahead is an aspect of driving, but there are loads of drivers that don't seem to do all that much of that (all the people I see looking at phones, for example). Teslas do some planning (estimating where blind curves are going, etc), and do it pretty well. I suppose the question is where the edge cases come in—are those more anticipation, or more reaction? Can any of the former be mitigated by not anticipating the action of another vehi
  11. They haven't really iterated the booster as far as I know, but certainly the capsule was not entirely crew-ready, I think. One of their stated objectives with NS is learning operations; I've just been surprised how slowly they've worked that issue. Seems like they should be flying as often as possible, assuming they have employees at Van Horn all the time anyway.
  12. The whole training paradigm is edge cases. Staying on the road and nominal driving is a solved problem, I think. Tesla seems to take any "disengagement" data and use that to train. On the plus side, human reaction times at best are slow compared to a computer. So the car gets some leeway there in terms of how many decision cycles it gets before it has to act. Level 2 seems well in hand (highway driving as an automated-driving cruise control). Full autonomy? Dunno, hard to tell. One thing I think Teslas need to add are camera washers. Not a huge issue for my friends with T
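The "leeway in decision cycles" point is easy to put numbers on. These are back-of-envelope assumptions, not measured specs: a commonly cited human perception-plus-reaction figure of about 1.5 s, highway speed of 30 m/s, and an assumed 20 Hz planning loop for the car.

```python
SPEED_MPS = 30.0        # ~108 km/h, roughly highway speed (assumed)
HUMAN_REACTION_S = 1.5  # commonly cited perception + reaction time (assumed)
CYCLE_S = 0.05          # assume a 20 Hz planning loop for the computer

# Distance the car covers before a human has even started to react
human_travel_m = SPEED_MPS * HUMAN_REACTION_S

# Planning cycles the computer gets inside that same window
# (round, not int, to avoid floating-point truncation of 1.5/0.05)
cycles = round(HUMAN_REACTION_S / CYCLE_S)

print(f"{human_travel_m:.0f} m traveled, {cycles} planning cycles")
# prints: 45 m traveled, 30 planning cycles
```

So under those assumptions the computer gets on the order of 30 decision cycles, and 45 m of warning, inside the window where a human is still just perceiving the problem. That's the leeway.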