Everything posted by K^2

  1. And if the ATC gave them clean separation, they could have gone to 500' without endangering anyone. In fact, if the helo crew had gone to 500', everyone would be safe, including the helo crew, whereas if they had stuck to the assigned altitude, they might still have died. There was an error that resulted in more fatalities than there could have been in this particular incident and might not have mattered at all in another, namely exceeding the altitude, and there was an error that should never be allowed in any airspace ever and would always result in a dangerous situation, namely the overlapping of the paths. I understand that in this case, by pure chance, the helo exceeding its altitude happened to be the deadlier of the two, and I understand the natural instinct to assign blame based on that. But the reality is that allowing the aircraft that close together is the more egregious error. That one could have happened to two heavies and we'd be talking about hundreds dead. That is the more serious error. And we do not know yet whether the helo crew, ATC, or airport administration is responsible for it.

That part's completely silly. Any limit must have safety margins already built in. If they "should have been closer to 150'," then the limit should have been 150'. It wasn't. It was 200', because 200' was expected to be safe - and that includes margins of typically about 100' on each side, meaning safe separation distance plus 200' extra for safety, which simply did not exist at this particular intersection. The 200' ceiling was not there to provide separation from traffic landing on 33. It was never meant for that.
  2. ATC told them to maintain visual separation and to remain east of the CRJ. Given how many planes were in the air (you can see this in footage from a number of angles), I'm not sure it was reasonable to expect the helo to have a fix on the correct plane. And if they believed they had a fix on a plane ahead of AA5342, then they were flying exactly as vectored. In other words, they proceeded on the course they had in their plans, which would bring them directly under the landing airliner, well within the wash.

Again, the helo's only deviation was the altitude, and while maintaining 200' would have saved the airliner, it would not have prevented an incident with the helo, potentially with the loss of its crew. The vectors they got from ATC did not prevent an approach far below the minimum allowed separation. The incident turned into an accident turned into a catastrophe due to the helo exceeding its altitude, but the situation shouldn't have existed in the first place. The vectors the helo got from ATC were not adequate for the situation. That is not even a question. The question is whether this was because ATC didn't follow procedure, the helo crew didn't request better vectors, or both did what they were supposed to and the procedure was just bad. That is really the extent of the unknowns at this point.
  3. Yeah, I must have missed a word and thought you were saying disorientation instead of a failure, rather than in combination.
  4. The estimates I've seen put it under 10MT, because the relative velocity isn't all that high (see the back-of-the-envelope sketch below). It's not great, and the latest track places rather populated parts of India as a potential (though less likely) point of impact, so this could still turn out to be the worst disaster in recorded history. But the far more likely outcome is an impact in an ocean or in less populated parts of Africa, which would result in comparatively minor destruction. Still, if we can redirect it, that's better than letting it hit the Earth, so it's worth investing resources into. My understanding is that the 2028 fly-by is when we'll get a much better idea of whether it's a risk for 2032, and it should give us a bit of time to prepare a response. Oh, and even if we chose the point of impact, a 10MT explosion is still going to have climate impacts, and our precision on a redirect is not going to be great. It's far, far safer to just cause it to miss. Yes, that leaves a small chance of it becoming a problem again in the future, but at that point, it's just as likely a threat as any random rock.
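
Here's that back-of-the-envelope estimate as a sketch. The diameter, density, and velocity below are assumed round numbers for illustration, not measured parameters of the asteroid:

```python
# Rough kinetic-energy estimate for an impactor, illustrating why the
# yield lands in the single-digit-megaton range. All inputs are assumed
# placeholder values.
import math

MT_TNT_J = 4.184e15        # joules per megaton of TNT

diameter_m = 55.0          # assumed diameter, m
density = 2600.0           # assumed stony density, kg/m^3
velocity = 17_000.0        # assumed impact velocity, m/s

radius = diameter_m / 2
volume = (4 / 3) * math.pi * radius**3   # sphere approximation
mass = density * volume                  # kg

energy_j = 0.5 * mass * velocity**2
print(f"~{energy_j / MT_TNT_J:.1f} MT TNT equivalent")   # ~7.8 MT
```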
  5. I mean, crap happens, but this was during takeoff while turning to a heading given by ATC. You usually watch your instruments while doing this, and it's pretty hard to confuse left bank and right bank on the ball. I'm having a hard time picturing this being caused by disorientation on an otherwise normal takeoff. Where disorientation can play a huge role is when something else in the cockpit goes wrong, and pilots trying to deal with a minor emergency cause a much more serious one by drifting off course. That I can easily believe on an IMC takeoff. But it would still mean that something happened that the crew thought was more important than watching the instruments. Of course, if it was something comparatively minor, it might be very hard for the investigators to find out.
  6. Every indication - visual recordings, recovered altimeter data, and radar data - suggests the helo went over 200'. It's unclear at this point whether they had a reason to or whether it was a mistake, but it looks to be one of the factors in the accident. Looking at the approach, if the liner had been below 200' at that point, it would not have been on a slope to land at the airport (rough numbers below). Since it was established, I assume the plane was on the ILS slope. There is not really a whole lot the crew of the airliner could have done differently while following their ATC clearance.

All of that said, if the plane had passed 100' over the helo, the helo would still have been caught in the wash, with potentially tragic consequences for its crew. So while the altitude might have contributed to the direct collision, causing the loss of the airliner, the situation where the two flight paths were allowed to cross with less than 500' of vertical separation (which was simply unavailable) should never have been allowed in the first place.

As far as I can tell, the crew of the airliner made no mistakes. Their warning systems would have been useless in congested airspace, and they would have been landing with a nose-up attitude and a slight left bank, meaning they couldn't possibly see the helicopter, and would be relying on ATC to make sure their landing path was clear. The helo was given a directive to maintain visual separation, which they failed to do. What I have no idea about, and hope the investigating team sorts out, is whether that was a reasonable request in the first place, whether the helo crew would have been expected to follow such a request or should have protested, and why visual separation is even part of the procedure in such congested airspace. This could place responsibility on any combination of the helo crew, ATC, and the airport admin. It is entirely possible that everyone did what they were supposed to, and the procedure was just bad. But usually, accidents like this happen when multiple things go wrong, and that's probably the case here.
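
A minimal sketch of where a standard slope sits relative to 200'. The 3° angle and 50 ft threshold crossing height are generic assumptions, not taken from the actual DCA approach plates:

```python
# Height above the threshold elevation for a standard 3-degree final.
import math

GLIDESLOPE_DEG = 3.0
TCH_FT = 50.0                 # assumed threshold crossing height, ft

def glideslope_height_ft(dist_from_threshold_nm: float) -> float:
    dist_ft = dist_from_threshold_nm * 6076.1   # nm -> ft
    return TCH_FT + dist_ft * math.tan(math.radians(GLIDESLOPE_DEG))

for d in (0.25, 0.5, 1.0):
    print(f"{d:4.2f} nm out: ~{glideslope_height_ft(d):.0f} ft")
# With these assumptions the slope only drops below ~200 ft inside
# roughly the last half mile before the threshold.
```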
  7. Yeah, I've seen some experienced pilots mention a stall, and I don't know what kind of planes they fly, but a Lear isn't going to stall at 200kts+ while speeding up (rough numbers below). Whatever caused that aircraft to lose control was not aerodynamic. After seeing more precise numbers, I'm also doubting an engine failure - at least a total engine failure, as the plane was still climbing and speeding up at rates that shouldn't be possible with an engine out after it had already begun banking to the left. Keeping in mind that the clearance was for a right turn, I'm pretty sure that by the moment we see the deviation to the left, the plane is already in an emergency state. Likewise, because the clearance was for a right turn, and it looks like the plane was starting it, it doesn't sound like the controls simply locked up, or we'd expect it to keep banking to the right.

The sudden left bank that developed into a dive the pilots could not recover from suggests a very sudden loss of lift on the left wing. I've been trying to find the hydraulics schematic for the 55, because the timing would match the pilots going gear and/or flaps up around the time of the emergency. I wonder if a hydraulics failure could have caused the flaps to retract on the left wing but remain in the takeoff position on the right. What this would do is reduce the effective AoA on the left wing, reducing its lift and causing a roll to the left, while also increasing drag on the right side, causing a yaw to the right. That could have been disorienting enough for the pilot flying to react to the yaw first, kicking the rudder to the left, which would have made the roll even worse and caused it to develop into a dive. From there, there literally might not have been enough time to correct before ground impact.

Normally, this would be fairly straightforward for the investigators to establish, but the impact left a sizable crater. I have a feeling piecing that plane together is going to be a particularly awful sort of puzzle, and the investigation might take a while. A data recorder would provide the best clues, but I don't know if it would survive this.
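
As a sanity check on the stall claim, here's the standard level-flight stall-speed formula with rough, assumed Learjet-55-class numbers (not type-certificate data):

```python
# Stall speed from the lift equation: V_s = sqrt(2W / (rho * S * CLmax)).
import math

W_N   = 9500 * 9.81   # assumed takeoff weight: ~9,500 kg
S_M2  = 24.6          # assumed wing area, m^2
CLMAX = 1.6           # assumed max lift coefficient, takeoff flaps
RHO   = 1.225         # sea-level air density, kg/m^3

v_stall = math.sqrt(2 * W_N / (RHO * S_M2 * CLMAX))
print(f"~{v_stall:.0f} m/s = ~{v_stall * 1.944:.0f} kt")
# ~62 m/s, i.e. roughly 120 kt - nowhere near 200+ kt.
```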
  8. The reason Orion's involved is a) budgeting works by committee and there are a lot of parties interested in Orion/SLS, and b) redundancy. Now, the second point was mostly used to excuse the former, but it might end up being the more important of the two depending on how various projects unfold. As for the Gateway, its purpose is to serve as a gateway. There are a lot of reasons why control of L1 is of strategic importance, or at least believed to be such, and the Gateway station is meant to function not just as part of the Lunar mission, but also as an L1 outpost. Yes, if all we wanted was a quick return to the Moon - plant the flag, stomp some boots - there would be easier ways to do it. We could use Falcon Heavy for the mission and modify an existing upper stage into a lander. Even if we were interested in repeated trips, as you suggest, there are much better options. But that's just not what Artemis is about. That might be the public excuse for it, but it's always been more about corporate interests and a claim over L1. Whether or not the latter is actually a factor in anything is also kind of irrelevant, because again, corporate interests.
  9. Bait used to be believable. Anyways, the FAA asking for a mishap investigation indicates that, in their opinion, a mishap resulting in increased risk to the public has occurred, and they want to know why. https://www.faa.gov/newsroom/statements/general-statements Notably, from the FAA statement: Emphasis for anyone having trouble parsing the statement as a whole. So the claim that "the hazard zone was indicated in advance, and that's where the debris fell" does not hold water. The debris fell outside the hazard area, creating danger to airline traffic.
  10. Bezos is better at keeping a low profile, but I agree that it's no reason to let him off the hook. People should scrutinize BO and its gov't contracts more - both the past ones under the current admin and whatever they'll get under the incoming one. Relevant read: Whataboutism. I'm pointing out specific patterns in company statements and decisions, in the context of the technology used, the stated and expected development timelines, and the past behavior of companies under the leadership of the same individuals. The political stance of said individuals was at no point brought up, and you trying to make it sound like this is about politics rather than company policy is exactly what you're accusing the rest of us of doing right now. Company policy is relevant to a discussion of technology and its safety, and setting that aside because you want to make it sound like decisions made by company leadership are automatically politics, because it hurts your own narrative, is not helpful to anyone involved in the discussion.
  11. Because there is no reason to expect that the CEO of Sierra is going to get any special considerations from the admin. He is not politically involved and doesn't own media companies. I can't say the same about SpaceX or Blue Origin. It's not a reason to automatically dismiss either of these two companies, but it is a reason for significantly more scrutiny. Pretending not to understand that is very bad faith. Outside of that, the scrutiny should be on the grounds of the company's own policy, and that's all we've been talking about, so no need to pretend it's anything else.
  12. Generally, I'm on board with letting people put themselves at a risk they fully understand on a voluntary basis. When Mad Mike Hughes was building his steam rockets, I had zero objections - he clearly knew what he was doing and how it could end. I start liking it less when large corporations are paying a crew to take on a risk, because now we're getting into significant power imbalances. It's worse if the crew is under military service or another obligation. Even so, this is a limited concern. If SpaceX says, "Hey, we have conducted enough tests and can honestly conclude that there is a 10% chance it will blow up, but we want to try anyway," my ethical concerns stop at making sure the crew understand the gamble and are prepared to play Russian roulette for whatever prize money is on the table.

The thing I'm afraid of is SpaceX instead declaring that Starship is safe when their tests do not fully support that conclusion. Consent to risk cannot be built on a lie. And yes, this hasn't happened yet. My fear is based entirely on how Tesla has done this exact thing with their "autopilot," the lack of 3rd-party testing on the Cybertruck, and so on, while still standing on PR that their vehicles are safe. SpaceX started as a very different company, but they have lost a lot of the people who got SpaceX to where it is and built that safety and transparency into the Falcon program. Current leadership seems to be going for the Boeing and Tesla approach instead. And yes, this is all hypothetical, and if none of it happens, great - I was worried over nothing, and I'll take that gladly. But if in a couple of years we see SpaceX leadership stand there and declare that "Starship is safe" after ~5 complete test flights, do think back to this discussion.
  13. I agree with the broader sentiment, and the situation is very different between the tech and knowledge we have now and what was available when the N-1 was developed. However, there are problems we can't address with simulation, because they are due to interactions with other systems of nearly infinite complexity. Flight 7 is a good example: simulating one engine and proving it perfectly stable wouldn't have prepared you for how it failed in concert with the rest of the rocket. And when you add more engines, you increase the number of potential points of failure that you did not foresee in a simulation. So instead of the N-1's problem, where you multiply the failure rates of individual engines, you are multiplying factors that have nothing to do with an individual engine. The outcome is still exponential growth - one you have better control over, but not one you can eliminate entirely (see the toy numbers below).

Naturally, building a small number of giant engines also has issues, especially in the context of reusability. And the pros and cons of the Saturn V vs N-1 approach sit on a completely different set of scales now because of history, technology, manufacturing processes, the economies involved, and so on. But neither is it fair to say that the inherent problem of the N-1 - that more engines mean more things that can go wrong - has disappeared. It's not something you get to ignore because the simulation tells you the engine is 100% reliable and all tests show that, in conditions identical to the test, that really is so. The engine will never run in conditions identical to the simulation on a real mission. Nothing we've invented lets you take that risk factor to zero, which is the only case in which you could claim that comparisons to the N-1 are bogus.

That said, without taking this risk, I'm not sure we'd be getting a super heavy at all. So either they get it to work and fly safely, or they don't, and it's not so much about the road not taken - just so long as nobody pushes for a crewed launch before the system has been properly tested. The biggest risk of Starship, due to the above-mentioned problems with the booster and the shifts in company culture, is that when Starship is "done" and has undergone a few successful flights, we won't have nearly the certainty that the rest of them will fly safely until we launch enough for a statistically significant sample. And if the corporate types at SpaceX try to convince the regulators that the Falcon and Soyuz safety records are an indicator that Starship is ready, I only hope there are people in whatever admin's in charge at the time who will call them out on this BS. We need to see dozens of flights in a row without failure to declare Starship safe enough to even start running crewed tests.
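
To put rough numbers on both halves of that - how engine count multiplies failure odds, and what "dozens of flights in a row" buys you statistically - here's a toy sketch. The specific probabilities are assumptions for illustration, not Raptor data:

```python
# Two toy calculations; only the formulas are the point.
import math

# 1) More engines multiply the chances that *something* goes wrong,
#    even if each individual engine is very reliable.
p_engine_issue = 0.002   # assumed per-engine, per-flight failure odds
for n in (1, 9, 33):
    p_any = 1 - (1 - p_engine_issue) ** n
    print(f"{n:2d} engines: P(at least one failure) = {p_any:.2%}")

# 2) Classic success-run bound: to claim reliability R with confidence
#    1 - alpha, you need n consecutive clean flights with R^n <= alpha.
R, alpha = 0.95, 0.05
n_flights = math.ceil(math.log(alpha) / math.log(R))
print(f"~{n_flights} consecutive successes needed for {R:.0%} "
      f"reliability at {1 - alpha:.0%} confidence")   # ~59 flights
```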
  14. I think that's the crux. It's a rocket test. Everyone involved should at least have been prepared for it to go spectacularly wrong. And the investigation is primarily for the benefit of the team building the thing, so they might as well be the ones to do it. If safety concerns are raised, these should be investigated separately, but even then, access to the actual debris and establishing the cause of the accident might be entirely unnecessary. Again, outside of SpaceX and the curious, nobody really cares why it blew up, and it wasn't a surprise to anyone that it could. So safety precautions would have to be built on that assumption, and if they were insufficient, the investigation would be into why proper safety procedures weren't followed, or whether there is a reason they were not enough. These kinds of investigations happen in aviation all the time, and that's how it gets to be a safe industry.

There is a question of costs due to disruption of normal traffic and environmental impact, and of how the company conducting the test should recompense everyone impacted. But again, in this particular case, it's probably on the airline companies to sue SpaceX if they believe their timetables were sufficiently delayed. Given that the impact on traffic was that of a moderate thunderstorm, I doubt anyone will want to waste the effort. If such disruptions happen often and without proper warning, I can see it being escalated, though.

I'd also complain more and louder if this was a hydrazine rocket. MethaLOx gets a 'meh' from me. It's not that dumping rocket debris of any kind into the ocean is great for the environment, but as far as I'm aware, it's all pretty benign. A shipping vessel crossing the ocean will dump more hazardous materials (through leaks, corrosion, and accidents) during its voyage than what ended up in the ocean from Flight 7, if I had to hazard a guess. (I'm happy to stand corrected.)

I have general concerns about the program, as mentioned earlier. And everything I was able to find on procedures and hazard zones before the explosion makes me uncomfortable, as the traffic exclusion zones got activated only after the breakup. I haven't flown in nearly a decade, though, and even if there were deficiencies, it's on the FAA to figure them out and amend how they work with SpaceX going forward. Either way, it's not really a complaint about Flight 7.
  15. Engine. Unity is just not a good fit in 2025. Lacking the budget for something dedicated, Unreal is a much better foundation. Rigid physics for craft. Look, there's flex in real structures, but it's hard to simulate in a way that works. You're better off simulating the stress across a rigid structure. You'll still have failures of overstressed parts, but no flappy rockets. The assembly building and part manager are a mess. This one is less concrete - it just needs work, from setting up unit tests on saving/loading the craft so that the system isn't broken half the time (something like the sketch below), to the UI/UX side of things.
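
For what I mean by a save/load unit test, here's a minimal pytest-style round-trip sketch. The Craft class and its to_json/from_json are hypothetical stand-ins for the game's real save system:

```python
# Round-trip test: serialize a craft, load it back, require equality.
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Craft:
    name: str
    parts: tuple   # (part_id, attach_node) pairs

    def to_json(self) -> str:
        return json.dumps({"name": self.name, "parts": list(self.parts)})

    @staticmethod
    def from_json(blob: str) -> "Craft":
        data = json.loads(blob)
        return Craft(data["name"], tuple(tuple(p) for p in data["parts"]))

def test_save_load_roundtrip():
    craft = Craft("Kerbal X", (("pod", "top"), ("tank", "bottom")))
    assert Craft.from_json(craft.to_json()) == craft
```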
  16. My concern is that, based purely on the preliminary findings, the part that failed is not related to the systems that were changed for this flight. The failure happened due to insufficient safety margin on systems that had already passed tests. And as a consequence, some of the new changes still need their first test on the next flight, on top of the fixes needed for the parts that failed. Technical debt has gone up, not down, as an outcome of this flight.

There is a part of the design process where you move fast and break things, but you have to ramp it down gradually. You're not going to arrive at a working model by changing everything every time; you have to converge to smaller and smaller changes as time goes on. Sometimes that means flying with something imperfect, knowing it will work well enough to get the mission to results. This is something all good engineers understand, and something we've seen in Falcon's design process - they flew older engine configs when they were itching to test new ones, because there were other flight systems that had to be checked first. I'm not seeing that with Starship.

Now, maybe it's still very far from completion. Maybe nobody expects it to be even remotely reliable until flight 100-something, I'm expecting too much consistency from flight 7, and SpaceX is happy to burn $100M per flight to keep running tests where they overhaul multiple major systems each time. But that's not congruent with the promises SpaceX is making on the schedule. And it sounds like the team's under pressure to test more things on every flight precisely because management is trying to cut costs by flying fewer tests. Which is, predictably, backfiring.

Even that isn't a huge deal if all that happens is SpaceX burning more money. But we've seen how self-driving on Tesla was handled: progress was slow, and instead of that resulting in more tests, the company pushed a raw product around the safety checks that should have prevented it. Hopefully I'm wrong here, but it sounds like SpaceX is making all the same mistakes Boeing has been making, from how they cut corners to how they handle tests and safety, except that SpaceX now has the ability to bully its project through using gov't connections, regardless of safety, in ways that even Boeing couldn't. If that is the case, this largest of rockets might also turn into the largest of disasters, with loss of human life.

Maybe I'm being too pessimistic here. I am confident that the Starship project is not in as good a shape as the PR tries to show, but some delays and overruns aren't the worst thing in the world - so long as appropriate steps are taken when it starts to become unsafe. Given how close the debris track came to threatening a few airliners this time, hopefully we'll see the FAA take steps to increase safety in the future by requiring that the airspace be properly cleared. That will put additional costs and constraints on SpaceX. If SpaceX plays along - good. They're taking responsibility and eating the costs. If not, and they bully the FAA into allowing future launches to continue putting passenger flights at risk instead, then we should all start being way, way more concerned about how SpaceX is handling the Starship project. The FAA's investigation shouldn't be into why the rocket failed - rockets do that sometimes. It should be into why airliners managed to get so close to the debris track.
The warning to traffic was only given for the area immediately around the launch site, presumably because the rocket was above the relevant flight levels. That's fine if you have a well-established rocket with known failure characteristics. It's irresponsible if you're flying an experimental rocket that may fail in novel ways, like what just happened. Clearly, the simplified procedures were some sort of agreement with the FAA under which SpaceX takes responsibility for keeping the launches safe for air traffic. That didn't happen. The FAA must clamp down on that; it is part of the agency's direct responsibility. And if that doesn't happen, we should be worried about the influence of corporate interests over a regulatory agency, because that puts all of us at risk - especially in light of Boeing's recent failures on that front.
  17. That was very pretty. And I don't want to make a big deal out of any particular failure - this is purely the superstitious side of me speaking - but wow, we're off to some bad omens this year.
  18. "Contractor working at <place>" is the standard phrasing. We have a lot of "Contractor v. FTE" (Full Time Employee) discussions around here in the Valley. If your resume lists FAANG (MAANA????) it makes a huge difference if it's a "worked at" vs "worked for". Personally, I think it's ridiculous, because these people are doing literally the same job most of the time, but I have a lot of empirical evidence for it making a huge impact on your ability to find jobs in the future. Being an FTE in one of these automatically passes you on a pre-screen and sometimes the first round of interviews.
  19. I'll let you figure out what the problem with referring to that article is, based on this screenshot from it.
  20. Thanks, I almost drowned. It'd be embarrassing. "How'd Kat die?" - "Oh, she was drinking water while reading Joe's post."
  21. Ok, so first of all, you're 2 for 2 on YouTube channels I would strongly recommend avoiding. Astrum has a lot of similar problems, where it takes a sensationalist approach to an observation and runs with a hypothesis that is, to put it lightly, not seen as likely, without any deeper understanding of the cosmology involved.

Secondly, you need to understand that any acceleration we observe has to be explainable by matter within the observable universe, because we'd be observing it. Another way to look at it is to note that the path from gravity's source to us is at most as long as the sum of the paths from the source to the observed mass flow and from there to us (in the worst case, that sum is the shortest path; see the inequality below). Meaning, if the gravity's source were outside of our horizon, it couldn't produce an acceleration detectable by us.

However, it is very important to keep in mind that the earliest universe is opaque. What we call the observable universe is everything down to the age when it cleared, around 380ky after the big bang. Anything older than that is effectively obscured by the cosmic background. Because the universe is expanding at an accelerating rate, some galaxies that started out in that shell have since moved beyond it, making them invisible to us now. Meaning, there are masses we don't account for, and any random anisotropy in them would be exacerbated by the higher density of the early universe. So if you naively compare the observable universe to the movement of distant galaxies, there is an anomaly - one that's trivially explained by clusters and superstructure just beyond the horizon, still falling within the normal distribution of densities consistent with the part of the universe we can observe. In simple, simple terms: if these galaxies were accelerating, it'd be weird. Movement is easily explained.

In fact, if you simply started with the Wikipedia entry for Dark Flow, you would have discovered that a) the evidence for it is inconclusive and somewhat controversial, and b) even if we were to assume it exists, the amount of flow consistent with measurements could be explained by density fluctuations in the early, opaque universe, well within the fluctuations allowed by the cosmic background radiation that we do observe. One paper puts the detection significance at 0.7 sigma. By the way, note the date: it's from 2009, as is the original claim the Astrum video talks about. There hasn't been anything new discovered recently. This is something astronomers looked into more than a decade and a half ago, and there's been little change since, yet it's presented across several sensationalist pseudo-scientific channels as a sensational new discovery.

In other words, please, stop watching junk. If you want a recommendation for a good astronomy channel that covers interesting discoveries and is run by somebody who is actually an active researcher, you can't go wrong with Dr. Becky. I would much rather you be legitimately confused by the Hubble Tension than fall for that pseudo-scientific garbage.
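
That "sum of paths" argument is just the triangle inequality on distances; a minimal sketch of it, setting aside the expansion subtleties the rest of the post gets into:

```latex
% With S the gravity source, G the distant galaxies whose motion we
% observe, and O us, the distances obey:
d(S, O) \le d(S, G) + d(G, O)
% We see G, so light has had time to cross d(G, O); for G to feel the
% pull, gravity (also propagating at c) has crossed d(S, G). So a
% source capable of visibly accelerating G cannot lie beyond our
% horizon.
```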
  22. His videos are full of misinformation. Anton's background in astronomy is rather rudimentary, and in cosmology, non-existent. He often brings up speculation about models that aren't just unconfirmed but retracted, and does so without understanding any of the math that went into them in the first place. This particular video is a good example.
  23. I don't know if you've seen any footage from the ongoing war, but the cities don't get occupied; they get leveled. Yes, once you firmly establish territorial control, you need to bring in supplies, establish new infrastructure, and possibly rebuild and repopulate in time. But at the point of actual contact? It's a meat grinder, and there is zero reason to send in humans when unmanned means of destruction are clearly doing a better job at everything from scouting to laying mines to clearing trenches and buildings. I'm not saying equipment like APCs is becoming obsolete in this kind of warfare. You still need evacuation transport for when the battle lines shift quickly, and general transport along threatened routes. But the current use as an infantry delivery vehicle? That's just a waste of resources now - both human and materiel.
  24. I don't know about water bottles, but the use of literal sticks tied to drones is well documented. Einstein was off by one in the most horrifying way imaginable. Any military still trying to figure out the most efficient way to deliver troops to the contact line is investing resources into falling further behind. The future is clearly unmanned. Delivering the equipment, though - that's still going to be relevant.
  25. The math says nothing of the sort. The news article you linked has this quote from the scientists especially for you: The fact that phase velocity can exceed group velocity when traveling through certain media is well known, and there have been similar experiments with excitation being detectable on the far side of the medium before the near side. Both, however, are delayed by more than the time required for light to travel from the original activation of the lasing medium. In effect, it's the side closer to the laser that's getting a delayed response due to the weird way the wave propagates through the medium, allowing intensity to build up to a detectable level on the far side before the near side. (You can't have a perfectly instantaneous laser pulse, because physics.) This is sometimes waved away as quantum weirdness, but really, the effect has been known, at least on paper, in classical electrodynamics for a suitable choice of μ and ε of the medium and a given source spectrum. I don't know if we've had experimental verification until now, and if not, kudos to this particular team. But again, this doesn't involve FTL, which the scientists themselves are quick to point out, precisely because they don't want somebody running away with it like you just did. It's just waves being waves.

Your second link is to an article about quasi-particles. Again, it's about an excitation in a medium - in this case, not even a real particle - and yes, you can make waves in matter do weird stuff. None of it allows a wave to arrive at a destination faster than a beam of light in a vacuum would.

I've spent what, 5 or 6 years in grad school basically just doing particle/wave propagation and interaction. I might be rusty, because it's been over a decade, and I might need to look up a reference or a derivation here and there, but that's one topic I can talk about confidently. There are topics in cosmology and gravity where my knowledge is very rudimentary. If you ask me how the universe's expansion is accelerating (which is required for a strict horizon) or about the observations of background gravitational waves, I only know the barest of basics - yes, with a little more math than most people, but still nowhere near the level of anyone actually studying these things. But given an expanding universe, if you ask me how waves propagate through it, be they light, particle, or gravitational waves, that's my domain.

In short, locality is built into space-time itself, and it comes down to the fact that no matter how weird the curvature gets, if you zoom in far enough, you'll find a patch of space-time that's basically flat, with a metric where distance is x^2 + y^2 + z^2 - t^2 (see below). It's that final -t^2 that guarantees that no matter what else is going on, an excitation in a vacuum, be it a force field, a particle field, or the curvature of space-time itself, cannot propagate faster than c. And in a universe that expands at an accelerating rate, that means you can pick two points far enough apart that a signal from one can never reach the other. If I had to design a prison universe, I cannot think of anything more secure than what we have going on in this one.
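
For reference, the two standard bits of math the argument leans on:

```latex
% Phase vs. group velocity: the phase of a wave in a medium can outrun
% light, but the signal front cannot.
v_{\text{phase}} = \frac{\omega}{k}, \qquad
v_{\text{group}} = \frac{d\omega}{dk}

% Local flatness: in a small enough patch, the interval takes the
% Minkowski form (units with c = 1); the minus sign on dt^2 is what
% enforces locality no matter how curved the larger space-time is.
ds^2 = dx^2 + dy^2 + dz^2 - dt^2
```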