Everything posted by wumpus

  1. I understand Bigelow's inflatable storage room is being used (presumably for expendable equipment) and doing well. I wouldn't be too surprised if they wind up sending up some real storage space based on the existing one. The Falcon 9 (in recovery mode) can handle the mass of most of the Salyut space stations, although I don't know if they would fit in the fairing. Finding a justification to put a new Salyut in space after seeing the ISS in operation for decades is another story.
  2. A lot would come down to how much control NASA had over the design of Shuttle-C vs. the Senate. While SLS may have been designed by the Senate, the Shuttle was more or less a tight fit to a plethora of government requirements. I doubt NASA would be allowed to build the Shuttle-C without it morphing into the SLS.
  3. You left out the JWST. Presumably that will advance science, but it has been devouring NASA's budget for years. I can't see Shuttle-C having lower development costs than the SLS. On the other hand, if the Shuttle were an ongoing program, you wouldn't have to throw *so* much pork at the Shuttle suppliers.
  4. No finds? Archeologists have determined the layout of Viking burial ships by locating rust deposits and determining that there had to be nails (and thus planks) in those positions. I guess nobody bothered to sink a trireme anywhere one could be dug up. Then again, I'm not sure triremes used nails...
  5. Except that is more or less how modern manufacturing "build to print" really works. The details you won't find anywhere in a single engine (you might get an approximation using all 5) are the tolerances, and certainly excess slop isn't something you can tolerate in a Saturn V. The other catch is that most Saturn V parts were effectively custom-made, so a single tolerance factor wouldn't be enough. The components were adjusted by hand to build each engine, so you would have to know *why* each part has certain dimensions (and the notes by the guys who did this are lost). I'd expect you could copy a Falcon 9 by straightforward copying, use all 9 (plus vacuum) engines to determine tolerances, then make good guesses on the rest. PS: Until some time in the 1990s NASA had 3 copies of all their records from roughly the Apollo era (which may have included Gemini). They decided that such redundancy was unnecessary and gave two of them away. The University of Maryland got one of the copies, so if you want to see what's there (and what's missing) you could try College Park, Maryland (presumably NASA's copy is in the US National Archives, also in College Park).
  6. NASA did exactly that to test it as an option for SLS (or one of its earlier names). https://arstechnica.com/science/2013/04/how-nasa-brought-the-monstrous-f-1-moon-rocket-back-to-life/ Lots of 3D printing replacing lost welding techniques, and I'm sure the controls were modernized as well. I think by the 1980s NASA could no longer launch their remaining Saturn V rockets (I think there is one outside at KSC) due to fine details of the countdown that were lost as people retired or died, or that nobody left could do fast enough (and they might only get one try). There's a certain mentality that seeps through ISO-9000 procedures and ignores the fact that a lot of technology lives in the people who do the actual manufacture of the device. Ignore this at your peril. Also, from a historical perspective, technology *IS* infrastructure. If you can't buy 1960s MIL-SPEC parts, you can't build a Saturn V to the original NASA bill of materials. And you are probably better off replacing many of the materials with modern ones (asbestos heat shields might work better, but they also could kill your workers).
  7. Not very long (only really need Kerbal Engineer, if the others break with an update I don't rush to replace them). I remember >20 minute load times using my old (1981) Atari 400 with [audio] cassette loader/recorder (this was for 24k-32k games, small things could be done in a tolerable 1-5 minutes). I don't want to go back to that.
  8. My understanding was that the known size of the "black zones" only increased, as simulation after simulation of those aborts continued to fail.
  9. hello world.c compiles to 6,704 bytes on my Linux installation. That's a lot better than what I remember, but still a lot compared to what it took in DOS assembler (of course, I think the smallest non-zero file has to take up at least 4096 bytes on disk, thanks to filesystem blocks; filesystems may require even larger minimums). Of course, a decade or two ago people might still have considered assembler for large projects. C seems to be heading the same direction, and C++ might as well, though probably more for security reasons than for "no apparent improvement for all the extra difficulty".
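The block-rounding arithmetic behind the post above can be sketched in a few lines of Python. This is a hedged illustration: it assumes a 4 KiB filesystem block (typical for ext4); actual disk sectors are often 512 bytes, and `on_disk_size` is a made-up helper name, not a real API.

```python
import math

def on_disk_size(file_bytes: int, block_bytes: int = 4096) -> int:
    """Space actually allocated for a file, rounded up to whole blocks."""
    if file_bytes == 0:
        return 0  # many filesystems store empty files in the inode alone
    return math.ceil(file_bytes / block_bytes) * block_bytes

print(on_disk_size(6704))  # the 6,704-byte hello-world binary occupies 8192 bytes
print(on_disk_size(1))     # even a 1-byte file takes a full 4096-byte block
```

So the 6,704-byte binary really costs two blocks on such a filesystem, nearly 20% of it padding.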
  10. Many high-level programming languages are interpreted, making it quite a stretch to call them macroassemblers. C looks a lot more like a macroassembler, especially if you are familiar with PDP-11 addressing modes (the computer K&R were using when they designed it). "The assembler is just a mnemonic notation of machine codes." Ok, that is correct. "Machine codes represent the CPU registers literally." Generally true for all computers programmed in assembler (currently and historically), but don't expect it of modern computers (especially Intel and AMD beasts, though out-of-order ARM chips can get pretty weird themselves). "CPU registers (logically) are the sets of output pins of corresponding transistors." Latches, not transistors (a latch needs at least two gates, and gates need a few transistors). And again, in anything out-of-order (i.e. remotely high performance), expect all registers to be renamed from some pool of available registers. What you get in a multimillion-transistor core looks nothing like a textbook CPU. If you want to know how a computer *can* work, look at AVR microcontrollers (textbooks may use ancient systems). To learn how computers work *now*, you need the basics plus a long trip down the rabbit hole of Moore's law and finding new uses for all those transistors. Also don't expect that all the good tricks are published. https://www.amazon.com/Computer-Architecture-Quantitative-John-Hennessy/dp/012383872X This was the gold standard a decade or two ago, and likely still holds up. Of course it is for senior-level engineering undergrads and/or grad students (and priced accordingly, although earlier editions are unlikely to be all that obsolete).
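To make the "latches, not transistors" point concrete, here is a toy cross-coupled NAND (SR) latch simulated in Python. This is purely illustrative, not any real hardware description: two NAND gates feeding each other are enough to store one bit, which is why a register bit costs several transistors, not one.

```python
def nand(a: int, b: int) -> int:
    """A NAND gate: in CMOS this alone takes four transistors."""
    return 0 if (a and b) else 1

def sr_latch(s_n: int, r_n: int, q: int, q_n: int):
    """Settle a cross-coupled NAND latch (active-low set/reset inputs).
    A real latch settles in a couple of gate delays; we just iterate."""
    for _ in range(4):
        q, q_n = nand(s_n, q_n), nand(r_n, q)
    return q, q_n

q, q_n = sr_latch(0, 1, 0, 1)    # pull set low: q goes to 1
print(q, q_n)
q, q_n = sr_latch(1, 1, q, q_n)  # release both inputs: the latch *holds* q = 1
print(q, q_n)
q, q_n = sr_latch(1, 0, q, q_n)  # pull reset low: q goes back to 0
print(q, q_n)
```

The middle call is the whole point: with both inputs released, the output depends on the stored state, which is exactly what a pile of plain combinational gates cannot do.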
  11. I believe Claude Shannon wrote his master's thesis on binary being the ideal base for building a computer. While I highly recommend a deeper dive into Claude Shannon's work (especially if you really want to know what a bit really means), note that in much of his work "coding" meant "a means of transmitting data in such a way that the receiver can recover it with minimal or zero error or loss". Since he worked at Bell Labs, this makes sense: he basically created, in the 1940s, the whole theoretical basis for moving communications from analog to digital (he had to start early; Bell Labs gear was expected to last 20 years, so the engineers could start building things a few decades later, all to have a digital transition in the 1970s-1980s). PDP-7 instruction set? Ok, it had a memory and not a tape, but it looked like that. I heard Woz mention that DEC was selling those computers in the early 1970s for "the lowest price ever for a computer". Unfortunately for the young Woz (probably not in college yet), it still cost more than his father's house. Once you understand how digital logic works and what an instruction set (assembler code) is, make sure you take some time to learn how microcode worked. Microcode has been more or less obsolete since about 1990 (not sure they still teach it in electrical engineering school; at least one professor insisted that one computer designer swore "he'd never make a computer without microcode again" after learning it, but before I graduated it was essentially obsolete), but it is an amazing concept. Basically it is a bunch of software that turns a pile of digital logic (and not much logic at that) into a computer. If $7.00 for a computer game significantly more niche than KSP isn't a problem for you, I suspect that TIS-100 is easily the most fun "CPU emulator" to play around in. You might not want to play all that long, but it is meant to be an engaging CPU and set of problems.
If not, I'm sure there are ARM, 6502, and various types of emulators out there. I'd probably recommend starting with the AVR 8-bit "RISC" (microcontroller chip) if you want to play with "the real thing", but I don't know of any emulators (especially free ones) off the top of my head (I wrote mostly 8086 and 8085 assembler, with a smattering of 6502. I'd avoid x86 assembly like the plague).
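The microcode idea mentioned above can be sketched in a few lines. This toy machine is an assumption-laden illustration (no real architecture, invented names like `ToyCPU` and `MICROCODE`): the "hardware" knows only three tiny micro-operations, and each user-visible instruction is just a stored sequence of them.

```python
class ToyCPU:
    """A minimal datapath: an accumulator and a tiny data memory."""
    def __init__(self):
        self.acc = 0
        self.mem = [0] * 16

    # micro-operations: the only things the hardware can actually do
    def load_acc(self, addr):  self.acc = self.mem[addr]
    def store_acc(self, addr): self.mem[addr] = self.acc
    def add_mem(self, addr):   self.acc = (self.acc + self.mem[addr]) & 0xFF

# the "microcode ROM": instruction name -> list of (micro-op, operand index)
MICROCODE = {
    "ADD": [("load_acc", 0), ("add_mem", 1), ("store_acc", 2)],  # mem[c] = mem[a] + mem[b]
    "MOV": [("load_acc", 0), ("store_acc", 1)],                  # mem[b] = mem[a]
}

def execute(cpu, instr, operands):
    """The micro-sequencer: step through the microcode for one instruction."""
    for uop, idx in MICROCODE[instr]:
        getattr(cpu, uop)(operands[idx])

cpu = ToyCPU()
cpu.mem[0], cpu.mem[1] = 40, 2
execute(cpu, "ADD", [0, 1, 2])  # mem[2] = mem[0] + mem[1]
print(cpu.mem[2])               # -> 42
```

The instruction set is just a table: swapping out `MICROCODE` gives the same "logic" a different instruction set, which is exactly the trick that made microcoded designs so flexible.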
  12. I remember particle beams (particularly neutron beams: strip the protons off deuterium or similar means) being an important part of the SDI (Reagan's "Star Wars") program. Presumably because of accuracy, but I don't think the lasers back then were anything like 21st-century spec. I'm a bit more curious whether these would make good interstellar (ultra-high Isp) engines, or at least be somewhat more efficient than using a bunch of LEDs (essentially infinite Isp, but the worst energy efficiency out there) as thrusters.
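The LED-thruster aside is easy to put numbers on: any photon drive produces thrust F = P/c, and its effective exhaust velocity is c itself. A quick sanity check (my arithmetic, not from the post; a 1 kW source is an arbitrary assumption):

```python
C = 299_792_458.0   # speed of light, m/s
G0 = 9.80665        # standard gravity, m/s^2

power_w = 1_000.0            # assume a 1 kW light source
thrust_n = power_w / C       # ideal photon thrust: F = P/c
isp_s = C / G0               # "Isp" of a photon exhaust

print(f"thrust: {thrust_n * 1e6:.2f} micronewtons")  # ~3.34 uN per kW
print(f"Isp: {isp_s:.0f} s")                         # ~30.6 million seconds
```

Microwatts of thrust per kilowatt is why "essentially infinite Isp, but the worst energy efficiency out there" is a fair summary.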
  13. It would require catastrophic failure at both SpaceX and Blue Origin (complete inability to launch both Starship/BFR and New Glenn (and presumably New Armstrong as well)) to avoid obviating SLS entirely. SpaceX makes visible progress, Blue Origin is presumably doing something with those billions (at least test fires seem to be happening), but SLS is simply happily consuming the pork. SLS isn't going anywhere (except into the right pockets) in terms of progress. Demonstration/test/make-work launches might happen, but the real question is whether or not it will be obsolete before it launches (it might take a few years after launch). NASA may yet insist on going to the Moon with SLS; it will just be even more obvious that it is a make-work project (and presumably the Lunar tollbooth as well).
  14. I'd be curious whether Rocket Lab is interested in modifying the Rutherford engine for restart and [deep] throttle operation. NASA appears to be assuming an RL-10 CECE (a prototype throttleable RL-10), but I suspect that even without throttle modifications, the Rutherford is closer to operation.
  15. No bucks, no Buck Rogers. And with the exception of Blue Origin and the self-funding parts of SpaceX (not necessarily the Mars dream) the bucks require politics.
  16. It never occurred to me before, but didn't Apollo 10 (and, I believe, 9) have a LEM on board, rather than just a dummy weight? Had something like this happened (not the exact issue, as it likely broke when fitted for 10), could they have used the Apollo 10 LEM as a lifeboat? PS - I've expected SLS to launch block 1 without astronauts. Launching with astronauts a one-off that won't be launched again seems contrary to all NASA policy, and it would give them *years* to keep the gravy train going for blocks 2-3 or whatever, regardless of whether the thing turns into a giant firework or not. Blowing up astronauts is about the only thing that can derail SLS, and NASA certainly doesn't want to do it that way.
  17. Doesn't the NHL play hockey in such stadia? Of course, they hardly want a deep pond, and take the extra step of freezing it. I suspect that plenty of people who live there expect senators to answer to them.
  18. Akin's Law of Spacecraft Design #39: Any exploration program which "just happens" to include a new launch vehicle is, de facto, a launch vehicle program. https://spacecraft.ssl.umd.edu/akins_laws.html
  19. Note the "Falcon 9": it didn't need any new engines (although it more than doubled the power of the existing Merlin engines), while BFR and its successors require a whole new Raptor engine. Falcon 9 (1.0) was thrown together on the cheap (they had to meet the NASA CRS contract), although if you include all the development since then it would add up to much more. I still don't believe any BFR successor could be made cheaper than the Merlin-powered Falcon 9 by any realistic accounting. Who knows, though; not using all that carbon might go a long way.
  20. Antimatter fits [all?] the equations as if it were running backwards in time, but I believe enough antimatter has been observed to show that it increases in entropy (the one bit of physics that definitely shows the direction of time). There's little reason to believe that dark matter would be anything like that.
  21. Or LOS to landscape, not the Sun or cosmic radiation. Or you might build a large dome atrium and have all the "windows" facing inward. While Martian colonists are likely to be a pretty weird bunch, you can probably expect that they will want to live in buildings with windows. Looks like they have a bunch of windows in Antarctica, although I can't say what rooms can see out of them: https://www.coolantarctica.com/Bases/modern_antarctic_bases3.php
  22. The "large complex building" would largely be limited by how many windows you want, and whether you want occupied areas to contain windows. Presumably you would have large multistoried buildings with storage/utility areas in the middle and living quarters/work spaces around the perimeter (yes, this isn't good for radiation, but anything that gets the surface/volume ratio under control will have a large advantage for radiation). Stairs without rails might happen, but I'd suggest living in 0.38g before removing them. The "tower" on the cover of the linked video might be a start, but I'd expect a much more squat building to be the final iteration.
  23. If you have 200 or so of those mission-critical parts, you have roughly the safety record of the Shuttle (2 losses of crew in ~100 missions). Small per-part failure rates really add up (there's a reason that engineers love the chance to use redundant parts).
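The series-reliability arithmetic behind that claim can be sketched directly. Assumed numbers, for illustration only: 200 single-point parts, and a 2% mission loss rate as the "Shuttle-like" target.

```python
# With n critical parts in series, mission success = r ** n
# for a per-part reliability r.
n_parts = 200
target_mission_success = 0.98            # roughly the Shuttle's 2-in-100 record

# per-part reliability needed so that r**n hits the target
r_needed = target_mission_success ** (1 / n_parts)
print(f"each part must succeed {r_needed:.5%} of the time")  # about 99.99%

# the payoff of redundancy: a duplicated part fails only if BOTH copies fail
r_redundant = 1 - (1 - r_needed) ** 2
print(f"redundant pair reliability: {r_redundant:.9f}")
```

So every single-point part must be about 99.99% reliable just to match the Shuttle's record, while one duplicated pair pushes its effective reliability to roughly eight nines, which is why redundancy is so attractive.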
  24. From what I've heard, computer players have a huge built-in advantage in StarCraft 2: they can issue actions far faster than even pro players can with mouse and keyboard. Are they just considering AIs that have been limited to pro (or even amateur) action rates? This sounds like a real challenge, as a computer has a sufficient advantage to be able to execute a Zerg rush (and force a human or other AI to deal with it) far more effectively than a human, with or without an advanced AI.
  25. I don't think I need to remind you on a rocket board that oxidizers are heavy. Afterburners consume something like ten times the fuel for twice the thrust, and I think cars have an optimal air/fuel ratio of about 16:1*. Nitrous oxide should have a similar value (it carries roughly 2 atoms of nitrogen for every atom of oxygen), and I'd assume the weight issue would be worse than an afterburner's. This would be something only used while dogfighting or on a final bombing run (or possibly taking off from a short/jury-rigged runway), and when it runs out you will be in trouble. * I think cars actually use a value closer to 12:1, because the ideal mixture (for the primary reaction) is bad for secondary NOx emissions. The biggest advantage I can think of for two valves per cylinder is in an OHV (pushrod, as opposed to overhead-cam) engine. This allows a relatively smaller engine, especially relative to the displacement, and especially for "V-n" engines. Having only two valves won't give any advantages at low rpm, but having more displacement certainly will. Such engines generally have issues spinning at high rpm (although GM used things like sodium-filled and titanium valves to get a 7.0L OHV V-8 up to a 7000rpm redline [LS7]), so they typically won't be tuned for that. It is also more difficult to use variable valve timing on them (although the current Corvette and some Dodge cars seem to manage it), which makes tuning for high rpm harder. But if you encounter such an engine in a car "in the wild", don't be surprised if it makes a "chug, chug, chug" sound at a red light and takes off when the light turns green: obviously tuned for high rpm (because the owner swapped the cam for a hot one (yes, one cam for an entire V8)). OHV engines have a distinct advantage in places that don't regulate engine displacement.
On the other hand, even places without such regulations tend to use the smaller mass-produced low-displacement SOHC/DOHC engines when you don't need the power of a big V8 (and Ford V8s have 32 valves regardless of the market). Kerbanauts from outside America and/or under sixty may have a hard time understanding why these engines, at least those between 5 and ~7 liters, are called "small blocks". Perhaps comparing the external size to a 5.0L (32-valve) Ford Coyote engine (typically found in a Ford Mustang) would explain it, but I think they just needed to contrast the engines with the 7-8 liter monsters also in production (mostly obsoleted by modern "small blocks", but often still loved by old-school drag racers). Just remember, when you see the term, that a "small block" engine is one of the biggest you'll find not powering a truck or industrial equipment. https://www.youtube.com/channel/UClqhvGmHcvWL9w3R48t9QXQ Link goes to Engineering Explained. I would go so far as to say that Jason Fenske is the Scott Manley of automotive engineering.
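The air/fuel ratio figures in the post above can be checked from first principles. A hedged worked example, with assumptions made explicit: gasoline approximated as pure octane (C8H18), air taken as 23.2% O2 by mass; real gasoline blends come out closer to the commonly quoted 14.7:1.

```python
# Stoichiometric combustion of octane: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
M_C, M_H, M_O, M_N = 12.011, 1.008, 15.999, 14.007  # atomic masses, g/mol

fuel_mass = 8 * M_C + 18 * M_H      # one mole of octane
o2_mass = 12.5 * (2 * M_O)          # oxygen required to burn it
air_mass = o2_mass / 0.232          # air is ~23.2% O2 by mass

afr = air_mass / fuel_mass
print(f"stoichiometric AFR ~ {afr:.1f}:1")   # ~15:1

# the rocket-relevant point: an oxidizer you carry is far "denser" in oxygen
n2o_o_frac = M_O / (2 * M_N + M_O)  # oxygen mass fraction of N2O
print(f"N2O oxygen mass fraction: {n2o_o_frac:.0%}, vs ~23% for free air")
```

That ~15:1 is in the same ballpark as the 16:1 quoted above, and the N2O comparison shows why carrying your own oxidizer (or any oxidizer-rich propellant load) dominates the mass budget.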