Everything posted by rdfox

  1. If you ever do manage to learn how to do launch clamps, custom launch clamps for the Atlas that match the *real* Atlas holddown clamps would be friggin AWESOME...
  2. For the record, the retroactive explanation for the "atmospheric" maneuvering depicted by the starfighters in Star Wars: the earliest starfighters were outgrowths of atmospheric fighter craft and actually flew on aerodynamic lift, rather than repulsorlift technology, while in atmosphere. Since they were expected to fight effectively in an atmosphere, where they used aerodynamic lift, their pilots had to learn to fight while flying them like the jet fighters we see in our universe.

Now, air combat is a VERY stressful thing. You're in a tight cockpit, pulling many Gs, under information overload as you try to keep track of where you are, where the enemy is, where the ground is, and how each is moving relative to the others, plus watch for unexpected threats (say, a surface-to-air missile being fired at you), all while your heart is racing, you've got almost more adrenaline than blood in your veins, and the other guy is trying VERY hard to kill you. In that sort of situation, the way you control your fighter has to be completely instinctive. You can't be thinking about *how* to perform the maneuver; you need to just think the maneuver and have your hands and feet automatically make the moves to carry it out. (You get something similar driving a car in poor conditions... or even walking. If you try to consciously command every single movement, you'll fail completely, which is why it takes babies and spinal injury victims months to learn to walk.) What's more, with the far higher relative velocities possible in space combat, everything happens even *faster*, and you have even less time to think about how to maneuver.

So instead of having two separate sets of controls and/or control logic, the designers gave these early exoatmospheric fighters (primarily atmospheric fighters with a basic space combat capability) a single set of controls and a control logic that made the vehicle fly exactly the same way in space as it did in the atmosphere, purely for human factors engineering. There is a manual override that allows the pilot to make (at least) uninhibited three-axis rotational maneuvers (possibly translation, too, though that's never been clarified), but by default the ship stays in "aerodynamic flight emulation" mode simply so the pilot doesn't have to think about how he's going to maneuver. As later starfighters came along, including repulsorlift craft that didn't use aerodynamic lift at all, the deliberate decision was made to retain that same control logic to avoid completely retraining every existing starfighter pilot on a different set of control laws. (This is the same reason that, for example, helicopters are generally flown from the RIGHT-hand seat--the earliest helicopter pilots learned to fly from the left seat, but when they started training new pilots, they stayed in the left seat to keep the familiar viewpoint. As a result, the pilots after them all learned to fly from the right-hand seat, and a standard was set that sticks to this day.)

So yeah, the starfighters *can* maneuver like real spacecraft, but the designers set them up not to do so by default--initially to simplify the pilot's combat workload, and later due to the same sheer inertia that keeps MIL-STDs alive forever. (Cue the likely-apocryphal story about the standard railroad gauge being derived from a Roman military standard for chariot axles...)
  3. I use a liquid-fuel core and then engineer from there. If I need a significant amount of additional delta-V, but have plenty of TWR to burn, I'll use a liquid-fuel upper stage. If my TWR is such that I don't want to increase deadweight, but I need significant delta-V--or my delta-V is good but I need a lot more thrust at launch and for a significant portion of ascent--I strap on onion-staged LRBs to ignite in parallel. (If I'll need the extra thrust all the way up, I'll change it from onion staging to a fixed outer booster ring on the core stage.) If my delta-V is good and I'll have a good TWR after 30 seconds or so, but I need a bit of extra kick off the pad--or if I just need less than 400 m/s or so of additional delta-V--I'll strap on a ring of onion-staged SRBs. And each time I add something, I re-evaluate the TWR and delta-V figures to find out what further changes I need (roughly the arithmetic sketched below).

The only time I use solid motors in my core is if I'm building a ballistic missile or a booster intended for suborbital missions, be they testing or sounding-rocket-type Science. Once you put a payload and a liquid-fuel orbital insertion engine on, their total delta-V is too low to be useful in the core of an orbital mission. (The only reason real satellites often use solid-fuel "kick" motors for final orbital insertion is that the liquid-fuel stages below deliver a very precise apogee and apogee-velocity combination, matched to a carefully calibrated delta-V from the kick motor to complete the insertion. If you can't fire a solid at a precise attitude, location, velocity, and total mass, you need a thrust termination system of some sort to get any kind of precise maneuver out of it. And since KSP doesn't have a TTS available, it's rarely a good idea to use solids as anything but "extra initial loft" motors.)
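A minimal sketch of the re-evaluation loop I mean, in Python--the part stats and masses here are made up for illustration (in practice the numbers come from the VAB or a mod like Kerbal Engineer):

```python
import math

G0 = 9.81  # m/s^2, standard gravity used in the Isp convention

def delta_v(wet_mass, dry_mass, isp):
    """Tsiolkovsky rocket equation: dV = Isp * g0 * ln(m_wet / m_dry)."""
    return isp * G0 * math.log(wet_mass / dry_mass)

def twr(thrust_n, mass_kg, surface_g=9.81):
    """Thrust-to-weight ratio at a given surface gravity (Kerbin ~9.81 m/s^2)."""
    return thrust_n / (mass_kg * surface_g)

# Hypothetical core stage: 20 t wet, 6 t dry, 300 s Isp, 400 kN thrust
print(f"dV:  {delta_v(20_000, 6_000, 300):.0f} m/s")
print(f"TWR: {twr(400_000, 20_000):.2f}")
# After strapping on boosters or adding an upper stage, recompute both
# numbers and decide whether the next change should buy thrust or delta-V.
```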
  4. Rule of thumb: 1 m/s is about 2 mph (2.24 mph, if you want to be precise).
  5. If there's a "diminishing returns" mechanic in exchanging currencies, it should also have a "recovery" mechanic--i.e., if you trade a whack of Reputation for Cash, then you shouldn't be able to get the same amount of Cash for the same amount of Reputation afterwards... for a while. Once enough in-game time has passed (or maybe once you've built up enough Rep again?), the penalty on the exchange rate should fade gradually until you're back at the full rate. Think of it this way: if NASA goes begging to Congress for a funding boost to pay for a given mission, trading on their reputation for results, it may be pretty harmful to their case if they did the same thing last year and the year before; but if it's been twenty years of successful missions since the last time they begged for a big lump sum to cover starting something up, Congress isn't too likely to think of it as a bad investment. (Much like banks and credit, actually... yeah, you'll pay higher interest rates every time you take out a loan... but once a loan is paid off, it won't harm your interest rate for the NEXT one.)
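One way to model that recovery mechanic--purely a sketch with made-up constants and function names, not anything from KSP's actual economy code:

```python
import math

def exchange_rate(base_rate, years_since_last_trade, half_life=4.0, penalty=0.5):
    """Rep->Cash rate, knocked down right after a trade and recovering
    exponentially with in-game time (half_life in game-years).

    penalty: fraction removed from the rate immediately after a trade
             (0.5 = you get half the usual Cash per Rep).
    """
    recovery = 1.0 - penalty * math.exp(-math.log(2) * years_since_last_trade / half_life)
    return base_rate * recovery

# Trading again immediately pays half rate; after ~20 game-years of
# successful missions you're effectively back at full rate.
for years in (0, 1, 4, 20):
    print(years, round(exchange_rate(1000.0, years), 1))
```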
  6. I don't know anything about the unintentional NERVA detonation, but the Kiwi-TNT test--officially the "Transient Nuclear Test," a name where you can tell that SOMEONE really wanted the initials to spell out "TNT" regardless of whether it meant anything or not--deliberately destroyed the engine in a rapid power excursion for, as stated before, radiological hazard research. This was conducted in an area of the Nevada Test Site that was well within the exclusion zone for testing, but had not been significantly contaminated by aboveground tests, allowing them to gather baseline data. (It was actually not far at all from the area where we had conducted "zero-yield" tests of live nukes being set off in one-point-safe mode, to gather data on the radiological hazards of a nuclear weapon cooking off in the post-crash fire of a heavy bomber.)

Technically, EVERY lunar flight of the Apollo program could be said to have wasted a great deal of fuel, in a series of different ways. For all of these, remember: every gram you shave off the payload of a given rocket stage saves, very VERY roughly, five to ten grams of propellant and propellant tank mass in the stage below it. Multiply that through what is essentially a four-stage rocket like the Apollo-Saturn V vehicle (S-IC, S-II, S-IVB, and SM), and shaving one gram of deadweight from the CSM cuts total vehicle mass by somewhere between 781 and 11,111 grams (see the quick sum sketched after this post). So while the items below may seem small, they carried significant mass penalties in the lower stages.

First off, the Service Module's Service Propulsion System engine was far bigger than it actually needed to be. It was a legacy of the days when Apollo was conceived of as a Direct Ascent lunar landing, and was sized to lift the entire Command and Service Modules off the lunar surface for the return home. While the SPS's net delta-V was reduced a great deal after the LOR decision (by shrinking the propellant tanks to make room for the Scientific Instrument Module bay of the SM), the SPS engine itself was about twice the size, power, and mass actually required for its maneuvers; this meant a significant amount of extra payload weight for the Saturn V to put onto a lunar trajectory. When the final missions were so close to their weight limits that NASA even considered cutting the number of bandages in the first aid kit (from twelve to six) to save weight, that was a couple hundred pounds they desperately would have loved to eliminate. (And the defense here: using the already-designed SPS engine saved a lot of money and time compared to designing a new engine for the job. The Lunar Module Ascent Engine would have been sufficient in size, but was designed to fire only once or twice before certain critical components failed and it wouldn't ignite again, and it had no gimbal capability, so it would have required re-engineering to serve as the SPS engine. The LM Descent Engine didn't have multistart durability problems and could be gimballed, but it was a complex, throttleable engine that wasn't held to the same "100% reliability" standard required of the SPS and LMAE, since a landing could always be aborted on the Ascent Engine if it failed. And simply scaling down the SPS would have essentially meant designing a new engine from scratch, so it was felt they had a better chance of meeting budget limits and the Kennedy deadline by sticking with the oversized, overpowered legacy SPS engine.)

Speaking of the SIM: it, too, resulted in wasted fuel on all the flights before 15. Since the Service Module was designed to have its center of mass on the longitudinal axis, providing for the SIM on the J missions (15 through 17, plus possibly the Skylab and Apollo-Soyuz missions, though don't quote me on those having the SIM) meant that its bay not only had to be kept as void space on the earlier missions, but that ballast (IIRC, in the form of a big steel plug) had to be carried in that space to simulate the mass of the SIM to come. That's several hundred pounds of unnecessary mass on the earlier flights. (Defense: pretty much the same as the above. Without it, you'd need two completely different SM designs, one most likely shorter than the other; even if the J-mission SM were essentially a standard SM with a SIM mounted on top, you'd still need to move the SM's RCS quads up the sides to balance them, and then you'd have issues with the CG shifting as propellant is consumed. Either way, with two different SM configurations in flight, you'd have to aerodynamically certify the Apollo-Saturn V vehicle with both SM lengths, meaning a lot more engineering work and at least one more unmanned Saturn V test flight to verify the wind tunnel numbers.)

Finally, and probably the biggest issue: Apollo did not fly a Hohmann transfer trajectory. Instead, it flew a highly accelerated transfer to the Moon (a free-return trajectory on 8, 10, 11, and 12, and an almost-free-return on the later lunar missions that required non-equatorial lunar orbits), taking just three days to make the trip between Earth and Moon instead of the 14 that a Hohmann orbit would have required. This guzzles delta-V in two ways. First, a Hohmann transfer is the minimum-energy transfer to the target distance from the Earth--the S-IVB could have been designed to just barely push the CSM/LM combination across the equigravisphere, letting it accelerate toward the Moon on gravity alone, making it much smaller and thereby shrinking the rest of the Saturn V stack. (I'm going to ignore Apollo 8 using a mass simulator in place of the LM; that mission was laid on incredibly quickly, at least partly for political reasons, and wasn't part of the original plan, but planning it that way from the start would have saved a vast amount of mass and fuel.) Secondly, a Hohmann transfer has lower delta-V requirements for capture and orbital insertion AT the destination. Apollo actually had to make three burns to enter lunar orbit (one to change from the free-return trajectory to the insertion trajectory, one essentially to "capture" into a 160x60 nautical mile orbit, and finally one to circularize at 60x60 nautical miles), each of which required more delta-V than a Hohmann transfer would have; an ideal Hohmann transfer would have needed just one burn at perilune to insert *and* circularize in one step.

(Defense: NASA had two VERY good reasons for the accelerated transfer. First, they weren't entirely sure whether the radiation environment of cislunar space would be healthy for the crew, and it was felt better to get them through it and the Van Allen belts as quickly as possible to minimize the radiation dose--this later proved to be a good call, as the incidence of cancer and cataracts among the Apollo lunar mission crews has been significantly higher than among the general population of astronauts, let alone humanity as a whole. Secondly, and probably more importantly, NASA ran careful weight calculations and determined that the three-day transfer was probably the best balance between the mass of consumables and redundant equipment needed for a safe, successful crewed mission and the mass of propellant needed to deliver the required delta-V with a given payload. The extra food, water, oxygen, hydrogen (for the fuel cells), lithium hydroxide, hydrazine (for the RCS), and additional Environmental Control System redundancy needed to extend the missions by 22 days for a Hohmann transfer would simply have increased payload mass to the point where flying it would have taken *more* propellant than the accelerated transfer did.)

So yeah, in all cases, the Apollo flights wasted a significant amount of fuel that could have been saved. However, in each case, there were very, very good reasons for the "wasteful" choices--and that's before considering that the cost of re-engineering the vehicle to eliminate these wastes would have been far greater than the pennies per gallon of RP-1 kerosene, LOX, LH2, nitrogen tetroxide, and hydrazine, and pennies per pound of aluminum (vehicle structure), that NASA would have saved even at the maximum weight savings. There's an old engineering mantra: "Fast, good, cheap--pick any two." Apollo had to be good, because human lives were riding on it. The Kennedy deadline mandated that it be fast. So it wasn't going to be cheap, either in terms of cubic dollars spent on the program or in terms of operational costs to fly the missions. Was it the most efficient way of getting to the Moon? No. Was it the most efficient way to get there with a manned spacecraft before 11:59 PM Central Standard Time on 31 December 1969, given the program's start date? Almost certainly...
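Here's the quick sum behind the 781-11,111 figure above. The model in the post is that shaving 1 gram of payload saves roughly r grams in the stage below, r^2 grams in the stage below that, and so on through four stages, which is just a geometric series:

```python
def cascade_mass(ratio, stages=4):
    """Total vehicle grams saved per gram of payload shaved, when each
    successive stage's saving scales by `ratio`:
    1 + r + r^2 + ... + r^stages (the gram itself plus four stages)."""
    return sum(ratio**k for k in range(stages + 1))

print(cascade_mass(5))   # 781   -> at ~5 g saved per g, per stage
print(cascade_mass(10))  # 11111 -> at ~10 g saved per g, per stage
```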
  7. I'm probably going to come off as pathetically underachieving here, but I recently built a vehicle to use the FASA Atlas-Agena booster to put four separate soft-landing probes on either the Mun or Minmus, for Science purposes, in a single throw. It's an impressive mess under that payload fairing...
  8. "Dammit, Mechjeb, when I clicked 'get closer' on the Rendezvous Planner, I meant like 20 meters on THIS side of the target, not the OTHER side of the target... or 20 millimeters at the braking burn... or..."
  9. Decades ago, long before the S-V replica and the revamped visitor's center eliminated about half of the rocket garden's exhibits...
  10. Speaking of the Gemini Docking Cone, does it have the same problem in MechJeb as the Big G docking module? Because I was trying to do a MechJeb automated docking with a Gemini and an ATV, and it ended up just bouncing them off each other until all my monopropellant was used up...
  11. 0.22: built my first-ever Eve lander, an unmanned rover. Ended up with unexpectedly high fuel consumption that made the planned aerocapture-to-circular-orbit-then-deorbit profile infeasible, so I went instead with a simple multiple-pass aerobraking entry to landing (something I'd planned for as an option from the start). After eight passes through the upper atmosphere, I made my terminal descent. Popped the chutes--which had me worried I'd tear the thing apart--and everything went right. "Oh, good, now I can explore the surface of Eve some," I think. Right up to the point where it touches down. In a lake. And capsizes. I never even got any Science back from that mission...
  12. Hey, something I just noticed in the landing autopilot. It seems to be reproducible with multiple different ships and in different situations. If I have all the onboard reaction wheels turned off when I activate the landing autopilot in "land at target" mode (haven't tried it in "land somewhere" mode), and then realize, "oh yeah, it's not turning because there's nothing to turn WITH," and turn on my RCS, it doesn't recognize the change and just sits there petulantly. I have to shut the landing autopilot off and re-engage it with RCS already on for it to work. (I'm doing this with FASA-based ships, where you want to turn off the reaction wheels to preserve capsule monopropellant for re-entry, but it was also true when I did it with the default pod before I had solar arrays and so needed to preserve my batteries.)
  13. Given my early efforts, I still consider "Still Alive" to be the appropriate anthem for my KSP games...
  14. For the record, I could *probably* build a gun-type device that'd fizzle with appreciable nuclear yield, given enough fissile material, but apparently the design of the pit for a gun-type weapon is a lot more difficult than most people believe.

In any event, there are two methods of mechanically "safing" a nuke via the Permissive Action Link system. The first is to have the PAL send out a pulse of electricity that burns out the firing circuitry, disabling the weapon until the firing circuits are disassembled and replaced. (That said, just about any competent electrical engineer could build a replacement for the firing circuits with parts only slightly harder to get than a trip to Radio Shack.)

The second is actually the older method, and probably the more effective at making sure the weapon can't be used in any form without permission, but it has been abandoned due to certain... issues that come with it. That's to set off the high explosive trigger in "one-point safe" mode: instead of all the explosive lenses firing at once to create an implosion, one of them fires and sets off the others in a cascade that generates unbalanced forces on the core as it propagates, pulverizing the nuclear material instead of imploding it, and spreading it quite nicely (500 pounds of C4 will spread dust a *long* way). You really can't collect the resultant plutonium dust to reassemble into a physics package, but there's the little issue of contaminating the area rather thoroughly. (This is why standard USAF policy for bombers with live nukes on board was, in an emergency, to let the bomber crash with the weapons still aboard--it was felt that if the shock or post-crash fire set off the triggers, it would be in one-point-safe mode, and in that case the aircraft structure would absorb a significant percentage of the blast energy and keep the core from being spread as widely.)

That said... it was generally agreed by 1960 that if Orion ever flew, it would be as a launched-from-orbit spacecraft rather than launching from the surface, specifically to make sure that A) fallout on Earth would be all but nonexistent, and B) if it failed, the ship wouldn't crash someplace where someone might salvage nukes from it.
  15. Actually, the red ones you've seen are beacon lights, not anticollision lights. There's usually one on the tip of the vertical tail or the top of the fuselage (next to the top anticollision strobe) and another on the center of the belly. They predate the adoption of strobes, and they emulate the old "rotating beacon" technology you might be old enough to remember on emergency vehicles, with a continuously-lit lamp and a rotating reflector that sweeps the reflected beam of light around a circle. (The old ones really DID use that technology; the new ones are flashers, but emulate the slow flash rate they had.) They're not really considered that important in flight these days, though they're generally left on; however, they're extremely useful in ground operations at night, since, unlike the strobes, they *aren't* blindingly bright. (Strobes, like the landing lights, are not turned on until you're on the runway, ready to take off, to avoid blinding ground crew and the pilots of other airplanes on the ground.)
  16. For the record, many companies are entirely capable of designing and/or building things not at all related to their core business. For example, in World War Two, M1 carbines were built by contractors ranging from traditional gun manufacturers like Winchester to odd choices like IBM and Rock-Ola Jukeboxes, while Singer Sewing Machine Company built M1911 pistols. At the same time, the M3 "Grease Gun" submachine gun and the FP-45 "Liberator" pistol were both designed *and* built by General Motors' Guide Lamp Division, which was GM's specialist division for making *headlights*. In a non-wartime example, the first stages of the Saturn I and Saturn IB rockets were all designed and built by Chrysler.

I'd like to see someone do a study on the number of people who die from cancers and diseases caused by the rather toxic exhaust compounds emitted by solid rockets and hypergolic-fuel rockets. I doubt it's in the "one per launch" category, but I wouldn't be at all shocked if we found out that the increase in the concentration of those compounds in the air ended up causing a similar number of deaths per ton to the same orbit as we would have gotten from Orion. Not to mention that we're a hell of a lot better at treating cancer now than we were in the late 50s...
  17. Not so much a failure as a learning experience. It was our *third* thermonuclear fusion test ever. We really didn't know what to expect of lithium-7's effect in the fusion fuel; we thought it'd be inert. We learned instead that, under fast-neutron bombardment, it breeds extra tritium (plus a bonus neutron), vastly boosting the fusion process and making bigger yields possible with cheaper fuel.
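For reference, the reaction chain in question (standard open-literature nuclear physics, not anything from the original post):

```latex
% A fast neutron splits lithium-7 into helium-4, tritium, and a fresh neutron:
{}^{7}\mathrm{Li} + n_{\mathrm{fast}} \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n' \quad (\text{endothermic, about } {-2.5}\ \mathrm{MeV})
% The bred tritium then fuses with deuterium, releasing energy and yet another neutron:
{}^{3}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n \quad (+17.6\ \mathrm{MeV})
```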
  18. The OMS and RCS could crossfeed, too, and even if both OMS engines failed, they could deorbit using the RCS instead. The big reason they used OMS Assist on ISS launches is that the rated maximum payload you see quoted for the Shuttle is the maximum payload *to a 28.5-degree inclination orbit*... in other words, launching due east from the Cape. Due to the location and alignment of the Russian launch complexes and missile ranges, the ISS is in a 51.6-degree inclination orbit. This requires launching on a heading well NORTH of east from the Cape, costing quite a bit of extra delta-V to reach the same orbital altitude due to the loss of assist from the Earth's rotation. (Think about the difference in d-V required to launch on a heading of 45 degrees versus 90 degrees in KSP...)
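A rough feel for that penalty, in Python. This toy model only counts the lost rotational assist (it ignores gravity and steering losses), and the spherical-trig azimuth relation cos(i) = cos(lat) * sin(az) assumes a non-rotating-Earth launch azimuth:

```python
import math

EQUATOR_V = 465.1  # m/s, Earth's rotation speed at the equator
CAPE_LAT = 28.5    # degrees, Kennedy Space Center latitude

def rotation_assist(azimuth_deg, lat_deg=CAPE_LAT):
    """Component of the launch site's eastward rotational velocity
    that lies along the chosen launch azimuth."""
    site_v = EQUATOR_V * math.cos(math.radians(lat_deg))
    return site_v * math.sin(math.radians(azimuth_deg))

# Due east (90 deg azimuth, ~28.5 deg inclination) vs. the azimuth needed
# for a 51.6 deg inclination orbit: cos(i) = cos(lat) * sin(az)
az_iss = math.degrees(math.asin(math.cos(math.radians(51.6)) /
                                math.cos(math.radians(CAPE_LAT))))
east, iss = rotation_assist(90.0), rotation_assist(az_iss)
print(f"due east: {east:.0f} m/s assist; ISS azimuth ({az_iss:.0f} deg): {iss:.0f} m/s")
print(f"~{east - iss:.0f} m/s of free velocity given up, before steering losses")
```

Amusingly, the ISS azimuth works out to about 45 degrees, which is exactly the KSP comparison in the post.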
  19. Depends on if I plan to use them again after entry. If it's a terminal descent for the module they're on, they stay deployed--no point in complicating things with retraction motors if the whole thing's gonna burn up/crash, right? Likewise, if they're on a descent stage for a two-stage lander, and the ascent stage has its own solar arrays, they stay deployed "to simplicate and add lightness," as Colin Chapman put it. However, any solar arrays that actually need to be used again *after* entry are closed before entry and stay closed until touchdown (or at least full parachute deployment, on one-way atmospheric probes to the inner planets). It's an engineer thing--keep it as simple as possible to meet the specs. IRL, you might even choose to include a second set of post-landing solar arrays, letting the cruise-phase arrays break up during entry and deploying the post-landing arrays... well, post-landing, IF that's seen as less of a complexity/weight penalty than outfitting the cruise-phase arrays to retract and then deploy again. (And don't laugh--getting solar arrays to fold back up to restow themselves is a pretty difficult task. Ever try folding a road map back up one-handed? That's basically what you're doing. Deployment is relatively easy. Restowing is hard enough that when they replaced the Hubble's solar arrays on the first servicing mission, they didn't even bother trying to restow them--even though it WAS equipped to do so--they just rotated them to not interfere with the Shuttle when they captured it, then disconnected them while still deployed.)
  20. You CAN, however, do what Sakharov did with the Tsar Bomba and add additional secondary stages (which effectively become tertiary, quaternary, etc., stages) to boost the yield, using the same radiation implosion technique so that the *secondary* compresses the fusion tertiary, the tertiary compresses the quaternary, and so on. Apparently this is actually *easier* than using the fission primary to compress the secondary, but anything more detailed than that isn't mentioned in any open-source material I know of (and no, I have no access to anything that's not publicly available).

Actually, it WAS weaponized--Khrushchev's requirements to Sakharov specifically stated that the weapon had to be deliverable by a bomber rather than being an Ivy Mike-style immobile physics experiment. It just was never deployed operationally, and there was a simple reason for this: there was never any application for it. Tsar Bomba was too big, both physically and in terms of yield, to have any practical use. Even at the reduced-yield "clean" design, there were only three targets in the US large and sprawling enough to warrant it--New York City, Chicago, and Los Angeles. It might also have been useful for digging out the Cheyenne Mountain Complex and Mount Weather, but by the time those were operational and known to Soviet targeteers, ICBM accuracy had improved to the point where they could be collapsed with a 20 MT "city-destroyer" warhead like those carried on some mods of the SS-18. Nothing else was big enough or hardened enough to warrant the Tsar Bomba's yield, either "clean" or "dirty".

On top of that, the weapon--which, remember, was deliberately designed to be as physically small as possible, due to the weaponization requirement--was still so large that even the Soviets' biggest bomber, the Tu-95, required modification to carry it: the bomb bay doors had to be removed and part of the fuselage cut away to clear the bomb, and even then it could only be carried semi-submerged, with part of its underside sticking out of the aircraft's belly. The added drag and weight meant that a mission carrying a Tsar Bomba to the US would have been a one-way trip ending in a bailout, a ditching, or MAYBE a landing in Cuba; they also rendered the aircraft exceedingly vulnerable to interception by USAF and RCAF interceptors, since they cut both its speed and its maneuverability. (Not to mention that a "dirty" version would literally have been a suicide mission for the bomber crew, since they'd have been caught in the fireball.)

Tsar Bomba *could* have been deployed, if there had been any mission for it. However, like most of the Russian "tsar" projects, it was a purely politically motivated thing with no actual use beyond overcompensating for perceptions of having a small *****. It still worked, though, and had an amazing fusion fraction. BTW, the biggest reason the US never tested a weapon with a comparably high fusion fraction is that we were *also* dick-waving at the time, and big yield was considered more impressive than a high fusion fraction; so when we did fusion weapon tests, we always used the "dirty" version with the bigger yield.

Ironically, most of our tests related to fusion weapons were done after Ivy Mike and Castle Bravo proved the basic design; the ones that weren't just saber-rattling were mostly tests of *only* the boosted-fission primaries for new fusion weapons, since we knew the secondary would operate correctly if the primary had sufficient yield to get the radiation implosion going. (There was also the issue that the primary of a fusion weapon is a rather effective weapon in and of itself, for smaller targets and/or with more accurate placement...)

This is true, but scale is something relatively unremarkable to engineers--once you get it working small-scale, you can always scale it up and work out the bugs. It'd take a lot of work, but it's relatively simple work compared to, say, figuring out how to get the F-1 rocket engine to not explode seconds after ignition. (And even THAT was relatively simple compared to figuring out how to feed the damned thing...) Yes, there was a lot of precursor work done, and there was still a huge amount of technology and material involved that was pure unobtanium in 1962. However, the massive infusion of cubic dollars the program put into R&D meant that all of it was (relatively) common within five years--at least unremarkable enough that nobody had a problem specifying it, even when it was still so expensive it was sold by the carat...
  21. [citation needed], to quote Wikipedia. As for SLAM, there's no mystery as to why it was cancelled--it was a whole combination of things.

First off, our allies--over whom the missile would have to fly on the way to its targets--were less than amused by the thought of a Mach 3 missile roaring past at 500 feet, flattening buildings with its shockwave and literally cooking chickens in the barnyard with the radiation from its unshielded reactor. (Ironically, this last issue was proposed as an asset in wartime, with the missile programmed to fly back and forth over the Soviet Union more or less perpetually after deploying its last warhead, essentially to render the place uninhabitable.)

Secondly, there was the little issue of where you test-fly an unshielded nuclear reactor--and even if you find a safe place to do so, how do you guarantee the guidance system won't go nuts and make a low-level run through Las Vegas, or even Los Angeles? There was a proposal to fly tests at the Nevada Test Site's nuclear testing ranges on an extremely long tether, so the missile couldn't run away (and THAT would have been a hell of a tether!), but a more realistic plan had the tests flying off Johnston Island, with each missile ending its flight in a deliberate dive to an ocean burial in the Marianas Trench. As Air & Space/Smithsonian commented, "even in an era when the Atomic Energy Commission was trying to get people to think of radiation in terms of 'sunshine units,' the thought of dumping dozens of unshielded nuclear reactors into the ocean was enough to give people pause."

However, the biggest reason for cancellation was much simpler than any technical or political issue--those could have been resolved with enough work. By 1960, the Atlas ICBM was starting to show signs of becoming a reliable weapon, at a time when SLAM had yet to fly, or even to test a flight-rated engine. Whereas SLAM would have taken roughly four hours to reach its first target in Soviet territory, with genuine risk of being intercepted and shot down by fighters or ground-based air defenses, the Atlas could put a warhead on the same target a mere thirty minutes after launch, was invulnerable to all then-existing defensive measures--AND looked like it would have a significantly lower unit cost, on top of that. So now you have a project whose final weapon would have a number of fundamental problems, and an alternative program at a much more advanced state of development--despite being started later--that avoids most of those problems, does the job just as well, and costs less. There wasn't any real question which one would be cancelled and which would be developed into an operational weapons system, even BEFORE the Atlas people started claiming that "SLAM" actually stood for "slow, low, and messy"...
  22. Only way the ending could have been more Kerbal is if we'd heard the recovery worker say, "Hey, guys, looks like the camera's still workin'!"
  23. For the record, a fully fueled Saturn V exploding on the launch pad would have had a yield of almost exactly two kilotons. This is why pads 39A and 39B sit well over a mile apart, and each more than three miles from the VAB--it's to make sure that if a Saturn V blew up on the pad, they'd still have at least one pad and the VAB operational. There were also provisions made for 39C, and plans for 39D and 39E, since Complex 39 was planned in an era when they thought they'd literally be launching a Saturn V every month(!) by the early 70s.

(Technically, 39A *is* 39C. The decision was made to build only the two pads closest to the VAB and redesignate the closest one, originally marked 39C, as 39A; the third pad, originally to be 39A, would be surveyed and plans kept ready, but it wouldn't be built unless a pad accident, pad damage from a launch, or the sheer flight rate required it. 39D and 39E were longer-range plans, intended either for Nova boosters or for Saturn Vs with nuclear third stages, and would have been served by a completely separate crawlerway running through the Nuclear Assembly Building, where the reactor cores would be installed in those nuclear upper stages. The "elbow" in the crawlerway to 39B shows where the original 39A/"backup" 39C would have been; that pad would have sat at the end of an extension of the "straight" portion past the turn to 39B.)

As it turned out, 39B was only used for one launch during the Saturn V era, Apollo 10, and was essentially an unnecessary backup. At the time, though, nobody knew how much damage a Saturn V launch would do to the pad, so having two pads was seen as essential. Given that 39B was also used for the Skylab and Apollo-Soyuz Saturn IB launches, allowed quicker turnaround between launches in the Shuttle era, and was the *only* reason the final Hubble servicing mission could be flown, I don't consider it wasteful. (As a side note, NASA doesn't plan to upgrade 39A from the Shuttle configuration until/unless they find a need for more than one operational pad at Complex 39 in the post-Shuttle era...)
  24. 85% is pretty typical of most vehicles designed for low Earth orbit; it varies some depending on the propellant mixture, the Isp of your engines, and the vehicle TWR, but it's generally in that neighborhood. You just need that much reaction mass to get up to orbital velocity and altitude (there's a quick rocket-equation sketch after this post)...

As for 104% throttle, that's a longstanding, common question. Short version: when the SSMEs were being developed in the 70s, it was discovered that they could run reliably above the originally planned thrust rating. Since a lot of paperwork and testing had been completed with the originally planned rating listed as 100% thrust, rerating the engines would have meant either going back and changing all the paperwork to the new 100% rating, or living with endless confusion over which "100%" was meant in any given situation. NASA's solution was exactly the sort of "avoid the problem altogether" approach you'd expect of engineers: they simply redefined the available range of values so that 100% was no longer the absolute maximum power possible. Instead, it was just a reference point, and engine thrust was stated as a percentage of that nominal value.

Hence why, very early in the program, they switched to running the engines at 104% power for most of the ascent: they'd found they could run at 104% with no loss of reliability, and up to 109% without failure, albeit at the cost of increased wear. Thus 104% became normal "full thrust," and in an abort scenario they would throttle up to 109% for that little extra shove that might be the difference between life and death. In the 90s, improved metallurgy and a larger nozzle throat allowed SOME of the engines to be refurbished to run at 109% under normal conditions and up to 114% in an emergency, and this became standard on the flights to Mir and the ISS, to increase payload capacity on those flights. (The engine controllers knew which model each engine was and operated their throttles independently, allowing the older engines to stay in the flight rotation right up to the end of the program; they could be mixed and matched with no issues, though NASA was careful to verify sufficient TWR for the required payload on every flight and, on a particularly heavy one, might group three wide-throat engines together.) This is also why NASA started using the OMS engines during the SSME burn, after SRB burnout, as a sort of "afterburner" to buy a little more payload capacity on those high-inclination flights...
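Where that ~85% ballpark comes from, as a quick Python sketch--the 9.4 km/s effective delta-V and the Isp values here are illustrative assumptions, not any particular vehicle's numbers:

```python
import math

G0 = 9.81  # m/s^2, standard gravity in the Isp convention

def propellant_fraction(delta_v, isp):
    """Propellant fraction required by the rocket equation,
    m0/mf = exp(dV / (Isp * g0)), for a single stage (no staging)."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

# ~9.4 km/s effective dV to LEO (orbital speed plus gravity and drag losses);
# Isp values roughly bracket kerolox and hydrolox averages:
for isp in (350, 450):
    print(f"Isp {isp} s -> {propellant_fraction(9400, isp):.0%} propellant")
# Staging--dropping empty tanks and engines on the way up--is what lets
# real stacks come in nearer the ~85% figure.
```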
  25. A couple of side notes... First off, the unit patch of EVERY "Wild Weasel" (air defense suppression) squadron in the US Air Force includes the letters "Y.G.B.S.M." somewhere. That's because of the legendary response of the first crew ever briefed for a Wild Weasel mission. See, air defense suppression means you go out there and actively try to taunt the enemy's ground-based air defenses into engaging you, so you can locate and destroy them. When the first crews were briefed for the first-ever Wild Weasel missions back in Vietnam, one of them just stared at his commanding officer and said, "You've GOTTA be s**tting me," at the thought of deliberately getting North Vietnamese anti-aircraft guns and surface-to-air missiles to attack him. That became the semi-official motto of the Wild Weasels, enshrined in abbreviated form on their unit patches.

Secondly, every USAF undergraduate pilot training class gets to design its own class patch, which is worn on the flight suit until the course is completed and then kept as a souvenir. Earlier this year, one class elected (and I hesitate to mention this lest flames result) to design a patch I won't link to directly, but themed around the motto, "My Little Pilot: Flying is Magic." Yeah. You can guess what it looked like...