Everything posted by rdfox

  1. Also, unlike Corona, they ejected the entire camera, not just the film canister. (If you watch the *full* version of the famous S-IC/S-II separation sequence, or the S-I/S-IV sequence from one Saturn I mission, or the S-IB/S-IVB sequence from an unmanned Saturn IB mission, you'll see that the film actually kept running right up to the moment the camera left the booster and the power connections separated; you can see it suddenly rush forward out of the booster structure right before cutting off.)

I would be willing to bet that NASA will find some future use for all the OMS engines, plus new-production ones, likely in some form of "space tug" where their durability and reliability would be immensely valuable. (There's a REASON that none of the Orbiters have real OMS pods on them at their display sites now; they have high-fidelity mockups NASA fabricated for the display duty, just like the SSMEs.)

...sort of. I don't think it was quite as bad as you suggest; the data I've seen indicated that, other than swapping out ignitors, the F-1 engines were designed to be fired full-duration up to four times without a teardown. That stemmed from the requirement that, since each engine was handmade and no two were exactly identical, each engine be run through a full-duration hot-fire test on the test stand *after* final assembly and *before* being mounted on the S-IC and flown. (The four-firing requirement was meant to give them two chances to fix any problems found in the testing before they'd have to do a teardown and rebuild.) This is actually a stiffer requirement than the SSME had; the SSMEs could only be hot-fired twice before needing a rebuild, regardless of firing duration. With the static fire test requirement still in place, this is why a pad abort of a Shuttle mission after SSME ignition required the vehicle to be rolled back to the VAB; each engine had already been static-fired once, and the ignition on the pad meant it now needed a rebuild, so all three had to be pulled and replaced. (This was also probably one of the biggest employment programs for the Stennis Space Center, since it meant pretty much constant static firings up until the end of the Shuttle program... firings that have resumed as part of development work for the SLS.)

Personally, I always thought it was a crime not to use J-2s as the mains on the Shuttle, or not to use a "flyback" S-IC as the first stage instead of the SRBs, but whatcha gonna do?
  2. NASA bought exactly as many Saturn boosters as they expected to need, actually; the original plan was to fly several manned orbital test missions with the Saturn IB instead of just one, then fly a fairly long and conservative series of unmanned and manned missions with the Saturn V to get to the Moon on Apollo 17 or so, leaving them with three spare Saturn Vs for either missions that failed to complete all objectives and needed to be repeated, or for additional landing missions if everything went right--which, given the state of manned spaceflight at the time, was honestly cutting things a bit close; the odds of needing *only* three spares seemed... very, very low.

As it turned out, delays in the program resulted in George Mueller's suggestion of "all-up" testing and a general telescoping of the program being adopted as the only way to make the landing before the Kennedy deadline, cutting the Saturn IB program down to three unmanned and one manned CSM tests and one unmanned LM test, cutting the Saturn V program down to only two unmanned tests, and (later) eliminating a manned test flight from the Saturn V program (by eliminating the C and E missions from the schedule in favor of the new C-Prime mission that became Apollo 8 to cover most of the C and E mission objectives), for a net reduction of *six* missions to the first landing attempt (which would give five more launch windows for meeting the Kennedy deadline if the first attempt failed). This did allow a major expansion of the number of planned landings (to nine), albeit with the knowledge that budget cuts would doubtless curtail the landing program.

The "Apollo Applications" space station science program that eventually became Skylab was an entirely separate project that was intended to take over from the lunar program in the 70s to pave the way for a Mars landing in the 80s, and was to buy its own Saturn boosters (mainly IBs, though there was a proposal for a lunar-orbital station that would have required a Saturn V for the station and V derivatives like the Saturn INT-20 for the crews) rather than share them with the lunar landing program during the period when the two interlaced. (The original plan was to have either a Saturn V or Saturn IB launch for either Apollo or Apollo Applications *every month*, alternating between the two. Meaning a Saturn V launch *at least* every other month. This is why Complex 39 was planned for five pads, and built with provision for completing a third pad...)

As it turned out, Apollo Applications never got much of a budget due to concentration on the lunar program, and while plenty of planning and design work could be done--for both "wet workshop" and "dry workshop" configurations of the converted S-IVB Orbital Workshops, for ways to "shut down" an Apollo CSM during the stay at the station, and for McDonnell's "Big G" proposal as a more affordable and capable crew/logistics vehicle for Apollo Applications than the actual Apollo, hauling more people and more stuff on the same booster for less weight--there wasn't actually any money to procure any EQUIPMENT for it. (The "dry workshop" was the version actually flown; the "wet workshop" would have been launched on a Saturn IB to Earth orbit, or a Saturn V to lunar orbit, would still have had its engine, and would have been launched with propellants on board--i.e., "wet"--and then been cleaned out and had all equipment and stores installed in space by the first few crews. The only legacy of this was the "grating" bulkheads used to separate compartments in Skylab, which would have been installed in a wet workshop before launch.) Thus, when the Congressional budget cuts started on the lunar program, it was actually a boon to Apollo Applications.

Indeed, the very first lunar program cut, cancelling Apollo 20, was specifically made to free up its Saturn V for the Apollo Applications/Skylab program, to be converted into a Dry Workshop and a Saturn INT-21 booster to launch it. Even so, the lunar program was still expected to continue through Apollo 19 and use up all remaining Saturn Vs and all but three of the remaining Apollo CSMs (after four were earmarked for Skylab: three for regular missions and one in a "rescue" configuration that could be used to retrieve a stranded crew--something that came up most heavily in NASA thinking after the movie Marooned was released), so it wasn't really expected to be particularly wasteful. Even with the conversion of a second workshop to the Skylab configuration (a wet workshop... ironically built around the Apollo 20 Saturn V's S-IVB, while the dry workshop that actually launched used the converted Saturn IB's S-IVB!), there would have been enough Saturn IBs remaining to send three crews to this second Skylab station.

Then the budget axe fell again. Apollo 18 and 19 were both cancelled. This meant we now had two fully flight-rated Saturn Vs that would never be used. What's more, despite having a fully functional backup Skylab, your choice of two Saturn Vs to launch it, and three fully functional Apollo CSMs and Saturn IBs to fly crews to it--meaning that the cost of flying a second Skylab would have been minimal, merely the personnel and consumables costs that were a tiny fraction of the cost of each flight--the Apollo Applications program was cut back to a single station, to transfer money to the STS program... and even that station only flew because it had reached the point in the contracts where cancelling it would have cost more than flying it. One more Apollo-Saturn IB mission did get added to the program, when a politically-motivated joint mission with the Soviets (Apollo-Soyuz) was scheduled to fly in 1975, but that was it.

NASA was left with two surplus Saturn Vs, four surplus Saturn IBs, one surplus Skylab, four surplus CSMs (including the Skylab Rescue one), and either four or five (I can't remember which) surplus LMs. In the end, one Saturn V went to Houston, while the other went on display at the Cape. One Saturn IB went on display at the Cape, one at a rest area/welcome center in Alabama(!), and I can't recall what happened to the other two. The CSMs went on display with the Saturn Vs and IBs, and the surplus Skylab is on display at the National Air and Space Museum. (For the record, the "third" Saturn V, the one on display in Huntsville, was never actually a flight-rated booster; it was the "battleship" test article, built extra-heavy for use in repeated static test firings, and was expected to be surplus at the end of the program because it was too heavy to actually fly.
There was also the Saturn 500F, a full-scale test article with dummy engines and spacecraft, used for facilities verification at the Cape; IIRC, parts of it were used in both the Huntsville and Cape display articles--I can't recall why they used parts at the Cape, but in Huntsville, they used its second stage because both the "battleship" second stage and its replacement from the dynamic load test article blew up on the test stand. The remainder of the dynamic load test article was, as expected, stressed to the point of damage in the load testing, and was scrapped after the end of the program.) Additionally, there was a certain amount of Block I CSM hardware that was surplus; the CMs tended to go to museums, but not all of them wanted the SMs, and as a result, several Block I SMs were sold to a Huntsville junkyard in the 70s (without engines), lingering in the back of the yard until they were finally cut up in the early-to-mid 80s. (Additionally SM-017, a Block I model, was destroyed when a propellant tank exploded in a ground test in 1967, and the wreckage was scrapped. The remainder of the Block I CSMs were at least partially converted to Block II configuration and used for ground trainers and unmanned testing; an example would be the CM at the Detroit Science Center, a Block I vehicle with interior converted to a Block II mockup and the main hatch converted to Block II configuration, used as an emergency egress trainer. IIRC, when it was donated to the DSC, they did cut the tip of the nose off and installed a mockup docking probe, plus added the mylar thermal insulation, so that it would more closely resemble a Block II spacecraft, though the RCS thrusters remain in the Block I configuration...)
  3. Actually, to be accurate to scale while still being in the stockalike size progression series, IIRC, the S-IVB and S-IB should be 5m parts, and the S-IC and S-II should be 6.25m parts (the Saturn V had a base diameter of 10m IRL!), with the Apollo CSM being 3.25m parts (this also matches with the intention of having "Big G" be a direct drop-in replacement for the Apollo CSM on the Saturn boosters). However, that's Frizzank's decision, not mine.

Also, Frizz, you might want to check the collider mesh and node placement on the new S-IB thrust structure, the one with separate fins; the old one was steady as a rock, but the new one jumps around like a Rockomax-8 with a Mainsail under it and a Jumbo-64 on top. I'm having to "staple" it down with struts to keep the booster from slowly and inexorably tipping over during ascent due to the wobble.
  4. He might have been exaggerating the angle. I did manage to confirm that the wind *speed* was 30G50, but not the wind direction relative to the runway heading. It wasn't a fun landing, that's for darn sure.
  5. The worst airline landing I ever experienced was actually also the most recent one I experienced. (Haven't flown since 9/11, not out of fear, but because I haven't had a need to fly anywhere.) Lining up for final, I was in the back of the 727 with an airline employee sitting behind me. The guy muttered, "Oh, this is gonna be a fun one," about as sarcastically as possible; I'd already snugged down my seat belt, but I pulled it as tight as it'd get after hearing that. We never seemed stabilized on approach--for the first time in my life, I mentally reviewed the safety card on short final--and looking out my window, I saw nothing but ground as we crossed the fence in a thirty-degree right bank, still wobbling in roll the whole time. Right over the numbers(!), we levelled out and dropped what felt like about ten feet straight down onto the runway, hard enough to drop some of the oxygen masks in the cabin (first time I ever saw that happen, though I'd heard of it before). Getting off the plane afterwards, I asked the captain, "Bit of a nasty crosswind today, huh?" His reply sent chills down my spine: "Thirty-G-fifty perpendicular to the runway." Yikes. (Look up how aviation winds are reported to see why that scared me.)
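For anyone who doesn't feel like looking it up: "30G50" means a sustained wind of 30 knots gusting to 50, and a wind perpendicular to the runway means every bit of it is crosswind. A quick Python sketch of the crosswind arithmetic (the 90-degree angle and the function below are just for illustration):

```python
import math

def crosswind_component(wind_speed_kt, angle_off_runway_deg):
    """Crosswind component of a wind blowing at the given angle off the runway heading."""
    return wind_speed_kt * math.sin(math.radians(angle_off_runway_deg))

# "30G50" = sustained 30 knots, gusting to 50 knots.
for speed_kt in (30, 50):
    x_wind = crosswind_component(speed_kt, 90)  # perpendicular to the runway
    print(f"{speed_kt} kt at 90 degrees off the runway -> {x_wind:.0f} kt of crosswind")
```

Demonstrated crosswind figures published for airliners are typically in the 30-40 knot range, which is why a 50-knot gust straight across the runway is an alarming number to hear from the captain.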
  6. The version I've always heard was that a great one is one where you can use the airplane again...
  7. You look at a range of collision delta-Vs (not speeds, but the net change in velocity, covering both speed *and* angle of impact), and you design the car to protect against what's considered the Worst Credible Case. It's a basic principle of engineering that you do NOT design to withstand the absolute worst case possible; you design to withstand the worst case *likely*. It's a subtle but key difference, and explains why, in ALL designs, there are situations where the answer to the question, "What if THIS goes wrong?" ends up being, "Well, it's pretty much gonna be a bad day." The reason for this is that there's ALWAYS a way that you could be putting it through worse conditions; the old phrase "cost-benefit analysis" comes into play. It's certainly *possible* to design a car that would protect its occupants against collisions from any angle at any closing speed that would be theoretically possible. The problem is that it would be far too large to meet legal limitations (wouldn't fit in a lane, too big to drive on a standard driver's license, etc.), too heavy, completely impractical due to the sort of structure that you'd have to navigate to enter or exit the vehicle, and, most of all, so expensive that nobody could afford it, not even Bill Gates. Indeed, the primary standard that cars are designed to is, depending on where it's meant to be sold, either the NHTSA or Euro NCAP crashworthiness standards, because those are the ones that you *must* pass for it to be legal to sell. (I can't remember all of them, but the frontal-impact standard for NHTSA NCAP is no fatal injuries to unrestrained occupants in a 30 mph crash into a full-width fixed barrier perpendicular to the velocity vector--yes, that particular standard was set in the late 60s, why do you ask?) Cars aren't designed to withstand a head-on collision with both cars going at freeway speeds for the simple reason that such crashes aren't considered credible cases, for various reasons.

Very wrong. Have you ever ridden on an airliner in a position where you could see the back half of the wing out your window? Immediately after touching down, the aircraft's spoilers (which can be deployed to a small extent in flight for aerodynamic braking effect) are deployed fully, to approximately an 85 degree angle. This disrupts the airflow over the wing, both shedding lift (to place the aircraft firmly on the runway to make the wheel brakes effective) and directly generating a great deal of drag to help slow the airplane down; it also completely destroys the lift being generated by the airplane's flaps, causing them to provide additional drag to slow the airplane down. (Technically, airliners also reduce lift significantly as they climb and accelerate out from the airport, too, by gradually retracting their flaps; once airspeed is high enough, you don't need the additional lift that extended flaps and slats generate, and retracting them reduces drag.) Basically, you want all the lift you can get right up to the moment that you've planted the mains on the runway, and then you want *no* lift, to help make sure you can stop before going off the end of the runway.

Well, let's see. An oncoming car, not often, though I've seen plenty of cases in all sorts of racing series where someone plowed head-on into a *stopped* car at full racing speed, but a fixed barrier?
Right off the top of my head, I can name Michael Waltrip in the NASCAR Busch Grand National Series at Bristol International Raceway, Tennessee, in 1991, when he blew a tire, shot straight into the wall, hit a crossover gate, which failed, and plowed head-on into the fixed concrete barrier beyond the now-open gate. Mike Harmon, same series, same place on the same track (now Bristol Motor Speedway) in 2002, almost identical crash, except that the wreckage of his car was then hit by ANOTHER car after coming to rest. Mark Martin, NASCAR Sprint Cup Series, Michigan International Speedway, 2012, when he spun down pit road, managing to get to the inside pit wall just at an opening where the cars can enter the garage area, and hit the concrete wall SIDEWAYS at over 100 mph, just aft of the driver's seat. J.D. McDuffie, NASCAR Winston Cup Series, Watkins Glen International, 1991, when he suffered complete brake failure, shot off the track at full speed, crossed the grass runoff area, and plowed head-on into the wall, killing him. Tony Roper, NASCAR Craftsman Truck Series, Texas Motor Speedway, 2000, when a freak accident caused his truck to turn (not slide) hard right and hit the wall head-on at nearly full speed, killing him. Adam Petty and Kenny Irwin, Jr., both killed testing for the Grand National series at New Hampshire International Speedway in 2000, in two separate but nearly identical crashes where their throttles stuck wide open and the configuration of the corner caused them to hit the wall with a velocity vector perpendicular to it. I'm sure that if I put my mind to it, I could come up with dozens more, in dozens of different series--those are just the ones that popped into my head within a minute of reading that question.

Incorrect. The vast majority of road accidents are, technically, glancing blows where the two vehicles do not hit each other square, and end up spinning away from the point of impact. (This is why Euro NCAP and the IIHS use offset deformable-barrier tests rather than the NHTSA fixed-barrier test; this represents a more realistic collision where the cars aren't lined up perfectly square.) The difference is that it's more dramatically visible in racing accidents due to the higher speeds involved, making the cars slide further and thus be able to spin more before stopping.

The entire job of the design engineers is to guarantee that the acceleration of the safety cell proper is within the survivable envelope, using crumple zones and other energy-management features, in any crash up to and including the Worst Credible Case. If the car goes through a crash within those design constraints, then the safety cell *will* remain within a survivable acceleration envelope.

Also, have you ever been in a car crash? I have. I've been in a side-impact crash well beyond the car's design criteria (closing speed of 60 mph, directly into the passenger door), with my younger brother sitting in the passenger seat. He underwent far worse acceleration loadings than in any credible head-on collision, but because he was wearing his seat belt properly, he suffered relatively mild injuries--none of which came from the seat belt itself. (He broke three ribs and his collarbone due to the other car intruding into the safety cell, and one rib punctured his lung when he was writhing in pain as the paramedics examined him before they extracted him from the car; even so, he came home from the hospital four days later.)
Seat belts, be they three-point belts in production cars or five/six-point belts in race cars, inherently have some "stretch" to them that allows a total of about a quarter-inch of additional passenger forward movement in a crash (which is why it's important to replace your seat belts if your car is fixable after a major crash--the stretch is permanent, and the belts are thus compromised and no longer safe after the crash). If your seat belts can decelerate you at a rate that would be fatal, and the safety cell is decelerating at that rate, then you WILL decelerate at that rate, regardless of whether you have airbags or not. Airbags provide *supplemental* deceleration to try to prevent you from impacting the car's structure; if anything, the acceleration loading when you hit the airbag is *higher* than when you're just being restrained by the belt. However, the force needed to generate this greater acceleration is spread over a much larger contact area, spreading the load out over a larger area of the body and thereby reducing the force on each individual part of the body.

No, no, no, completely wrong. The seat belt is not reduced in holding power in any way; this would be insanely dangerous in the approximately 10% of crashes where one or more of the airbags fail, because then the occupants would collide with the car's structure as though they weren't effectively restrained, and we're down to eating steering wheel again.

I've been paying attention to crash test results since the late Reagan administration, before airbags were mandatory. I also recall the Big Three spending huge amounts of money on airbag research in the late 60s in hopes of avoiding having to pay license fees to Volvo for their patented three-point seat belt design, figuring that it could replace the shoulder belt, or possibly even the entire seat belt. I grant you, based on further research quoted later, that it has now been shown to be of minimal benefit to the unrestrained driver, but the original design was based entirely on Head Injury Criterion (HIC) and total acceleration numbers; it was intended to keep the unrestrained occupant from striking the vehicle structure hard enough to be instantly fatal. The whiplashing effect causing basal skull fracture and internal decapitation was unforeseen, because it wasn't something the engineers were told to consider.

Now, I will acknowledge, my opinions on this may be colored somewhat by my only personal experience with airbags being a crash my grandparents were in back in the early 90s, where they rear-ended another car at about 20 mph, and the front-seat occupants (with three-point belts and frontal air bags) suffered severe chest and facial bruising and broken noses, while the rear-seat occupant (with only a three-point belt, and the same motion clearance before they would strike anything) was left uninjured. Airbag technology has advanced greatly since then, but I still contend that *frontal* airbags are of limited value for a properly-restrained occupant; I'd much prefer a good seat belt system combined with a well-designed bucket seat (deep side and anti-submarine bolsters) and a proper seating position for frontal crashes.
  8. Gonna have to cobble this together, since the replies are on different pages...

It's not. A properly restrained passenger never even reaches the airbag; an occupant in an improper seating position or not wearing the belt properly (or in a crash beyond the design constraints of the car's safety cell, which is what IIHS has taken to doing to try and get more cars rated poor so that the insurance companies can charge higher premiums) will strike the airbag and suffer blunt-force injury to the face and chest. The only instance where a properly-seated, properly-restrained occupant will benefit from the airbag itself is in a crash so severe that the safety cell is compromised to the point that the steering wheel or dashboard is displaced far enough that the occupant would strike it even with the seat belt in place. In *those* cases, the airbag reduces the acceleration experienced by the occupant by lengthening the time that the impact takes, but that's a situation where the safety cell itself has failed, and severe injuries are to be expected regardless of what protection systems the car has, due to cockpit intrusion. In cases where deformation of the safety cell is less severe, the airbag can cause injuries that would not happen if the occupant wasn't going to strike the car's interior anyway.

I'm pretty sure I've seen more racing crashes than you have. Cars flying through the air are a very small percentage of racing crashes, as sanctioning bodies have put various aerodynamic measures into effect specifically to keep the cars on the ground and scrubbing off speed; the most obvious of these being the passive lift-dumper "roof flaps" that NASCAR now mandates on all its cars, which use the same technique as the spoilers on an airliner to shed lift. The most typical racing crash sees a car either have a tire fail or lose control, slide off the racetrack, and hit a barrier at high speed, usually in a single-car incident. In that way, they tend to be almost identical to highway accidents. (Indeed, the component of velocity perpendicular to the barrier is often surprisingly low--the crash that killed Dale Earnhardt at the 2001 Daytona 500, where his car was traveling at approximately 185 mph when it struck the wall, had a velocity component perpendicular to the wall of 41 mph.) And even in the rare incidents where a car DOES lift off and start pirouetting, it sheds almost no velocity while airborne. Virtually all of the delta-V comes from the multiple impacts with the *ground* as it tumbles along, and from friction as it slides to a stop.

The difference is essentially irrelevant, anyway. The actual behavior of the occupant is the same--the car decelerates abruptly, the occupant continues along under inertia, and the car's structure has enough time to get a significant change in its velocity before the occupant strikes it and is forced to make the same change in an immensely short time. In all situations, the ideal way to protect the occupant is A) to essentially "weld" them to the car's structure, so that they're a part of it and change velocity at the same rate, and B) to prevent intrusion that would cause them to contact anything that's moving relative to the car's safety cell.
(This is why street cars now use inertia-reel seat belts with pyrotechnic pretensioners, and race cars use belts fitted to the driver and then snugged down to the point of nearly cutting off circulation--the old theory that a little bit of slack in the belt would gradually slow the occupant down was shown to be faulty, instead causing blunt-force trauma impacts of the type mentioned above.) This is done by restraining the occupants in a way that doesn't allow them to move at all relative to the safety cage, and then by making the safety cage itself strong enough to remain intact in even the worst impact situations considered plausible.

Air bags are an inferior solution, to be honest; a form of cushioning the impact akin to using a padded steering wheel or dashboard. If you *are* going to hit part of the vehicle's structure, then an airbag will help, but it's vastly preferable to *prevent* contact in the first place.

It has everything to do with convenience--the standard three-point belt is much easier to use than a five- or six-point harness, particularly if you're wearing a skirt (just try wearing the crotch strap then!), and it's more comfortable, too, since it has the slack of the inertia reel; plus, it's less expensive for the automaker to install a single strap with a single buckle instead of five or six separate straps anchored to the car's frame. Ease of exit isn't really related to the seat belt (racing harnesses use a single-pull quick-release buckle similar to that used on airliners), but rather to the presence of the full roll cage that blocks a significant percentage of the door openings. As for low-speed collisions, there's no real difference between a three-point belt and a five/six-point harness there.

Actually, this is the place where airbags provide the most benefit. Because the airbag is not a rigid structure, but instead is an air spring that is also designed to DEflate as quickly as it inflates, the unbelted occupant strikes a (relatively) soft cushion that extends the duration of the acceleration compared to striking the steering wheel, the dashboard, or the windshield. Even if the bag is still inflating when they strike it--which is unlikely; it's designed to deploy completely before any occupant could reach it in the worst credible crash--it will still deform much more than the car's structure, allowing a more gradual acceleration. A good comparison to the airbag in terms of "softness" would be the SAFER Barrier developed to protect oval-track racing drivers by putting an energy-absorbing barrier between them and the concrete retaining wall on the edge of the track, one that will deform as much as 18 inches (about half a meter) during a heavy impact by a heavy vehicle. When it was first being introduced, people referred to the barrier as "soft walls," because the concept was that it was softer than the concrete retaining walls. That said, it's not actually SOFT; the front surface that the cars hit is 1/8" steel plate, with the energy absorbance coming from the styrofoam backing behind it. It still smashes hell out of the car, and if you kick it, you'll still break your toe, but it's a lot softer than bare concrete. It's a similar thing with airbags--they're not big soft pillowy things; they can easily cause severe bruises and even broken noses and orbital bones, but they're a hell of a lot softer than the car's structure.
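To put rough numbers on the two effects doing the work here--stretching out the stopping time lowers the average deceleration, and spreading the load over a bigger contact area lowers the pressure on any one body part--here's a back-of-the-envelope Python sketch. Every figure in it is an assumed round number for illustration, not crash-test data:

```python
# Same velocity change, different stopping times: a longer stop means lower average g.
delta_v = 13.4   # m/s, roughly a 30 mph change in velocity (assumed)
mass = 75.0      # kg, assumed occupant mass
G = 9.81         # m/s^2

for label, stop_time_s in (("riding the belt/crumple zone down", 0.100),
                           ("slamming into rigid structure", 0.010)):
    accel = delta_v / stop_time_s          # average deceleration, m/s^2
    force = mass * accel                   # average force on the occupant, N
    print(f"{label}: ~{accel / G:.0f} g average, ~{force / 1000:.0f} kN")

# Same total force, different contact areas: a bigger area means lower pressure.
force_n = 10_000.0                          # N, assumed
for label, area_m2 in (("belt webbing", 0.02), ("airbag contact patch", 0.20)):
    print(f"{label}: ~{force_n / area_m2 / 1000:.0f} kPa")
```

The delta-V is the same either way; the whole game is how long the change takes and how widely the load is spread.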
  9. For the record, frontal-impact airbags actually INCREASE the potential danger to properly restrained (i.e., seat belted) drivers; the only reason they're mandatory in so many countries is that nobody can be bothered to enforce seat belt laws and get people to wear their seat belts properly. If they were a safety benefit, they'd be mandatory in professional racing series instead of six-point safety harnesses; instead, virtually every racing series that uses production-based vehicles (either "showroom stock," sports car racing, or otherwise) mandates that any air bags be deactivated and/or removed, partly because of their potential danger to the driver, partly because they pose a genuine risk to rescue workers until they can kill the car's electrical system. I'd much prefer to NOT have an airbag in my car, because I've always worn my seat belt, and it's a personal rule that I've had since childhood that the car does not move when I'm not buckled up. (Note that none of this applies to side-curtain air bags; those are beneficial to safety even with properly-restrained occupants, due to the fact that there's essentially no crumple zone possible in side impacts, and it helps protect from intrusion into the safety cell. The only thing that does apply is that they are a risk to rescue workers until the electrical system is deactivated by disconnecting the battery, which is why race cars prefer massive side bolsters on the seat to provide the same effect.)
  10. There is no difference. Deflection and reflection are the same thing; it's just that deflect is normally used for physical objects, and reflect for wave phenomena. "The angle of incidence is equal to the angle of reflection" is equally true in both cases.
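If you want to see the rule in action, the standard mirror/deflection formula is r = d - 2(d·n)n, where d is the incoming direction and n is the surface normal. A minimal Python/NumPy sketch (the vectors are arbitrary examples):

```python
import numpy as np

def reflect(d, n):
    """Reflect an incoming direction d off a surface with normal n: r = d - 2*(d.n)*n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def angle_to_normal_deg(v, n):
    """Angle between a direction and the surface normal, in degrees."""
    return np.degrees(np.arccos(abs(np.dot(v, n)) / (np.linalg.norm(v) * np.linalg.norm(n))))

d = np.array([1.0, -1.0, 0.0])   # incoming at 45 degrees to the normal (example)
n = np.array([0.0, 1.0, 0.0])    # surface normal

r = reflect(d, n)
print(r)                                                      # [1. 1. 0.]
print(angle_to_normal_deg(d, n), angle_to_normal_deg(r, n))   # ~45.0 and ~45.0 -- equal angles
```

Whether you call the bounce a deflection or a reflection, the incoming and outgoing angles measured from the normal come out identical.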
  11. The S-I/S-IB is a beast by Kerbal standards, that's for sure. I cobbled together a "prototype" version earlier, and combined with a big ol' transfer stage, a couple of upper stages, and some Titan IIIC seven-segment SRBs, it let me fly a Big G all the way to Duna and back to low Duna orbit in a single throw. (After that, I used a stand-in Centaur with an Agena probe core and docking module as a return stage, which I was able to launch on a Titan IIIC.) For those wanting to experience a rough prototype of the S-IB stage, it can be cobbled together the way NASA really did it--use a Redstone tank core, and attach additional Redstone tanks on its sides in 8x symmetry. Use fuel lines to feed from the center tank to all the outer tanks, and put an H-1 on each of the outer ring of tanks. With appropriate strutting, this should pretty much approximate the stage's performance until Frizzank releases the actual S-IB stage. (I then used a Transtage with an additional tank as a Centaur second stage, and a Titan II second stage as a third stage, then a HUGE nuclear transfer stage on top of it... it's ugly, but dear LORD does it have delta-V!)
  12. Maybe use the technique they used on the real Gemini and explosively jettison the doors for the ejection? That said, as an interim fix, maybe the texture for the windows could be modified to have the crew visible "through the glass" with a fixed texture?
  13. The way I'd do it is counter it with a laser, using the same destructive interference technique as noise-cancelling headphones use. Or, alternatively, you could counter it with a bigger laser. Or a really, REALLY fast missile. Or anything else that lets you destroy the other guy before the heat buildup from his laser destroys you.
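If anyone wants to see the noise-cancelling trick in miniature: add a copy of the signal shifted by half a wavelength (180 degrees out of phase) and the two cancel. A tiny Python sketch (the frequency and sample count are arbitrary):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
wave = np.sin(2 * np.pi * 5 * t)               # original 5 Hz signal (arbitrary)
anti_wave = np.sin(2 * np.pi * 5 * t + np.pi)  # same signal, 180 degrees out of phase

print(np.max(np.abs(wave + anti_wave)))        # ~1e-15, i.e. zero to within float error
```

Doing that to an actual laser beam is, of course, vastly harder than doing it to cabin noise, which is why the bigger-laser option keeps looking attractive.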
  14. For the TRUE 50s/60s feel, you'd need to eliminate all graphics and just give the player telemetry data, updated once or twice a second at most, with realistic communications lag, and with imagery available only 3-4 days after the launch for the launch/ascent phase (to give time for the films and photos of the launch and ascent to be developed), or a similar amount of time after recovery of the onboard cameras for stuff beyond the limits of ground-based cameras. Also, no external views unless there was someone in a position to take them (meaning EVA photography only if you want to see your vehicle from the outside). You might, by the time you're unlocking lander parts, get live TV footage of the launch and ascent out to a certain slant range from KSC (like you got in real-life space launches), but after it's far enough away to be effectively out of visual range, you're back to telemetry and, if you're willing to spend money, weight, electricity, and data bandwidth on it, maybe grainy, low-res SSTV footage from onboard cameras (think the look of the few bits of the Apollo 11 EVA where they were able to find some visual records of the original broadcast quality, as opposed to the n'th-generation duplicates we've all seen a million times). But any high-quality footage would have to come from film that gets developed after the return of the vehicle, and any "impossible" camera angle footage would have to be done as "pre-prepared animations" (go to YouTube and check out any of the live coverage of Apollo 11's flight; you'll see plenty of this on the network news coverage so that it's not just talking heads for most of the flight) that represent what it was expected things would look like, as opposed to what they *actually* look like...

User Unrelated: Why couldn't you have a roleplay-heavy future mech campaign in a KSP tabletop? I mean, we've seen people build mechs in the game already--just use a KSP tabletop system as a core ruleset for a campaign involving, say, Jeb and Rockomax finally having had it with each other, so they start building mechs to fight it out and see which one ends up in control... XD
  15. That *is* an "explosive jettison charge" warning logo. The main hatch always had an explosive charge (ever since the Apollo 1 fire, NASA has wanted a way for the white room crew and the vehicle crew to get the hatch open NOW in the event of a fire on the pad), complete with a recessed, covered T-handle on the outside that would allow it to be blown by the white room crew (hence the "RESCUE" arrow on the side--at the point of it is the cover for the T-handle; you open the cover, grab the T-handle, and run ten feet away from the vehicle while pulling it with you, and the hatch blows, just like the canopy on any airplane with ejection seats would when you pull the corresponding external handle). However, NASA also considered the possibility that the Shuttle would end up landing off-runway someplace, leaving the main hatch blocked. For that contingency, they made the port docking window (the overhead ones seen in the OP photo) able to be removed in an emergency, with a thermal blanket and escape rope that would allow the crew to crawl/slide down the side of the vehicle to evacuate in that situation. I don't recall it having a jettison charge then, but apparently one was added at some point. Of course, all of this blithely ignores the fact that there was no way in hell that an off-runway landing of the Shuttle would be in any way survivable, but hey, it weighed next to nothing, and had to be good for morale, right? (I think the thermal blanket and escape rope that were just inside the main hatch were much more likely to be useful in an emergency--say, the vehicle caught fire after a contingency landing and they couldn't wait for anyone to find the keys to the airport's stair truck.) Officially, the docking window exit was also supposed to be used in the event of a ditching (water landing), and there are plenty of photos of the water evacuation trainer at the Cape with astronauts clambering out that window during training, but again, there's NO WAY IN HELL that a ditching would have ever been survivable in the Shuttle (hence the escape pole added after Challenger)--I've seen films of some of the model tests at the Navy's David Taylor Model Basin, and regardless of the angle of descent, deck angle, and speed at impact, something about the design of the vehicle's belly causes it to slap the nose down, dig it in, and either start tumbling end-over-end, or yaw sharply to one side and then violently roll when the front wing digs in. In some cases, it was enough to cause the models (which are capable of handling far greater structural loads than the actual vehicle) to at least partially break apart... anyone on board would have been unconscious at best, and chunky salsa at worst, and the cabin would have sunk before anyone could have evacuated.
  16. Ironically, just Friday, in my physics class, we did the early version of the double-slit experiment, the one that revealed the wave nature of light. Used it to measure the width of a human hair through the interference pattern it generated with a laser, even. Fun stuff! As for the whole debate here, I just quote a physics professor--I think it was Stephen Hawking, but don't quote me on that--who once stated, "If you think that you understand quantum mechanics, then you can't possibly have it right." QM is some weird, weird ****, with very, VERY weird potential repercussions (ever heard of the "Many Worlds" hypothesis?), and even the physicists who've made it their life's work to study it don't FULLY understand it yet. Personally, I find it brain-breaking that things can be particles and waves at the same time (I have a 155-165 IQ, depending on the day and who's doing the testing, but I also have an engineer's mindset), but it brings absolutely magical possibilities with it. (How about the implication that there's an alternate universe out there where I'm Superman, that there's one where the Star Trek continuity is real, and that there's one where the Kerbin system exists EXACTLY as depicted in KSP, complete with constantly-respawning Kerbals riding boomcans?)
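For the curious, the hair measurement works because of Babinet's principle: a thin hair diffracts laser light essentially the same way a slit of the same width would, so the fringe spacing on the screen hands you the width directly. A rough Python sketch with made-up (but classroom-plausible) numbers:

```python
# Fringe spacing for a thin obstacle of width w at distance L from the screen:
#   delta_y = wavelength * L / w   =>   w = wavelength * L / delta_y
wavelength = 650e-9   # m, red laser pointer (assumed)
L = 2.0               # m, hair-to-screen distance (assumed)
delta_y = 0.016       # m, measured spacing between adjacent dark fringes (assumed)

width = wavelength * L / delta_y
print(f"Estimated hair width: {width * 1e6:.0f} micrometres")   # ~81 um, a plausible human hair
```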
  17. The mod doesn't make any changes to any parachutes unless you install the included ModuleManager patch files. So the stock parachutes would retain their values AND BEHAVIOR unless you copied the "Stock" folder, and the ModuleManager DLL, from the "ModManager Files" directory provided into the game's GameData folder. If you do that, then the stock parachutes would behave like RealChute parachutes.
  18. In the real world, you usually put the ullage motors on... whatever the lowest point of the rocket will be at the planned ignition. So one on the fairing and two on the main craft would be probably the most realistic, if you're trying to squeeze out that last 1-2 m/s of delta-V, assuming you have ignition before fairing jettison. However, that said... it might be, in the real world, *heavier* to put in the extra support structure for ullage motors on the fairing, rather than just mount them directly on the upper stage's thrust structure, so you have to consider that--IIRC, the Saturn V used ullage motors mounted directly on the S-II instead of on the S-IC's forward skirt/interstage structure, despite not jettisoning the forward skirt until after S-II ignition. (The dual-plane separation was used there to ensure that the skirt would separate cleanly from the S-II and not strike the S-II's engines, for the record.) So basically, the most "realistic" would be to put them wherever feels right for you, because, as model railroaders say, There Is A Prototype For Everything. (Example: Some boosters with retrograde separation motors on their lower stages place them at the top of the stage. Others place them in the middle, and still others put them at the bottom. It all depends on the contractor's preference, and what provides the best CG balance for that particular rocket...) Also note that you don't necessarily need ullage motors for restart capability--the Saturn V's S-IVB stage only used ullage motors for its initial start after S-II jettison. Ullage for the TLI restart was provided by propulsive venting of LH2 out the engine bell, at a tiny trickle from the initial cutoff to TLI that provided just enough thrust to keep the LOX and LH2 mostly settled at the bottom of the tanks. (This was the same method used to guide the spent S-IVB onto a "safe" trajectory after CSM separation and LM extraction, guaranteeing that there wouldn't be a collision between the spent stage and the spacecraft--they used propulsive venting of the residual propellants to guide it onto a trajectory that would either put it into solar orbit (Apollos 8, 10, 11, and 12) or on course to crash into the Moon (13-17).) There's also the option, on spacecraft with a full RCS system, of simply having the RCS thrust forward just before the scheduled ignition, providing ullage acceleration, with the RCS thrust ending just after ignition. Again, it depends on what technique the contractor and the customer consider to be the most efficient for the mission profile.
  19. Hey, frizzank, a thought on the Saturn series. I know you don't plan on building a full Saturn I version, with the unique S-IV stage, but a thought occurred to me. The RL-10 engines used in the S-IV ended up being some of the most commonly used American engines ever built (because they were WIDELY used in upper stages, including Centaur). If you were to include the RL-10 and a baseplate for the S-IVB that would give us six radial engine positions and a lowered central bottom node for attaching the interstage, we could not only do a reasonable facsimile of the full Saturn I stack, but we could also use the RL-10s for their many other upper-stage applications. (The RL-10 is still in production today as an Atlas V upper stage engine, after all.) Also, this might be something you could toss into the flavor text for the H1 engines: I did the math, and for those who want to do an accurate reproduction of the original S-I stage from the Saturn I (as opposed to the Saturn IB's S-IB first stage), all that needs to be done, after checking astronautix.com, is to set the thrust limiter on the engine to 91.7%. (The Saturn IB technically used "H1b" engines that had been uprated in thrust for the same specific impulse by simply increasing fuel flow; my calculator shows the H1's sea level thrust to be 91.59865% of the H1b that you've simulated.) This would allow us to simulate the Saturn I first stage without any actual coding changes on your part...
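Since I said "I did the math": the math is nothing more than a ratio of sea-level thrusts. A minimal Python sketch (the thrust figures below are rounded placeholders, not the exact astronautix.com numbers--look those up yourself before trusting the output):

```python
# Thrust-limiter setting to turn the mod's uprated Saturn IB H-1 into an
# early Saturn I H-1. Both thrust values are assumed/rounded for illustration.
h1_early_sl_thrust = 836.0     # kN at sea level, assumed early H-1 figure
h1b_uprated_sl_thrust = 912.6  # kN at sea level, assumed Saturn IB H-1 figure

limiter_percent = 100.0 * h1_early_sl_thrust / h1b_uprated_sl_thrust
print(f"Set the thrust limiter to about {limiter_percent:.1f}%")   # lands around 91.6%
```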
  20. If you want to be accurate, the time to jettison the Atlas boosters is at T+2:05 (T+125 seconds) for the standard Atlas, or T+2:33 (T+153 seconds) for a stretched Atlas representing one of the later models, regardless of altitude and speed. (Of course, you also shouldn't throttle it at all, since the Atlas had non-throttled engines.)
  21. I, and thousands of other people, found it insanely risky to have a crew on board for STS-1 and STS-2, given that NO American manned spacecraft had ever flown without at least one unmanned test flight. But that worked out. Lots of people thought it was insanely risky to plan to fly humans on only the third launch of a Saturn V, but that worked out, too. Spaceflight is inherently risky. Putting an EVA on the first manned flight isn't any more risky than putting a crew on board in the first place, since every single item that has to work for an EVA would have to work for any manned flight. Personally, I'd feel there's a bigger risk in going to the Moon on the first manned flight than there is in having an EVA during it... but nobody seems to be complaining about THAT.
  22. More accurately... the Saturn IB variant of the S-IVB stage (the S-IVB-200 series) had the hinged fairing panels, while the Saturn V variant (the S-IVB-500 series) used ones that were jettisoned instead. I've never seen any explanation for the difference between the two, since the two stages were in production pretty much concurrently. (Other differences, like the number of ullage motors on each, can be explained by the different lower stages used to launch them, and the different applications.)
  23. Just go to www.astronautix.com and look up any ICBM of your choice. It'll have full stats, including thrust of each stage, TWR on each stage, and total delta-V, just like any other launch vehicle.
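And if you'd rather work the numbers out yourself from the raw stage data (thrust, Isp, and masses) instead of reading them off the page, it's just the rocket equation plus a thrust-to-weight ratio. A minimal Python sketch with made-up example values:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def stage_stats(thrust_kn, isp_s, full_mass_t, empty_mass_t):
    """Ideal (Tsiolkovsky) delta-v and initial TWR for a single stage.
    The arguments are the kind of figures an astronautix entry lists;
    the example values below are invented for illustration."""
    dv = isp_s * G0 * math.log(full_mass_t / empty_mass_t)       # m/s
    twr = (thrust_kn * 1000.0) / (full_mass_t * 1000.0 * G0)     # dimensionless
    return dv, twr

dv, twr = stage_stats(thrust_kn=1000.0, isp_s=300.0, full_mass_t=80.0, empty_mass_t=30.0)
print(f"delta-v ~ {dv:.0f} m/s, initial TWR ~ {twr:.2f}")        # ~2885 m/s, ~1.27
```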
  24. Personally, I disagree with that. The whole point of the science points for "vehicle that returned from *insert situation here*" is the engineers getting to examine the vehicle after it's recovered, to see how it performed on the trip. If you leave it in Munar orbit, the engineers can't examine the vehicle itself, just the telemetry data. What might be cool is the ability to remove a relatively small part to carry home for analysis, like what Apollo 12 did with the camera from Surveyor 3. That could allow you to bring at least some of the Science points back, but those points should be for returning the vehicle for analysis.
  25. Moho. Moho Moho. Moho Moho Moho, Moho Moho, Moho Moho Moho Moho Moho, Moho, MOHO. Mo-**********-ho! I've tried to send probes there dozens of times. Of them, only ONE has ever even had an encounter, a lander that was built with far, FAR more delta-V than any calculations said it needed. That one managed to run out of fuel at exactly the right moment to plow straight into the planet at over 4km/s. :rage: :ragegasm: :throw computer out window: