Everything posted by wumpus

  1. That type of thing makes asteroid mining seem like a great idea. On the other hand, mining old landfills typically makes even more sense (unless you want iridium or platinum by the ton). Some of the crazier means of disposing of radioactive waste fall into this trap: we are far more likely to want the stuff back in a few decades than our descendants are to be in serious danger from it in a few million years.
  2. You only get the full benefit of "the wings are holding up everything attached to the wings" when the landing gear completely supports the wings. And that only happens in extreme cases like the U2 and B-52 where they have those funky little wheels at the end of the wings. Certainly most planes get most of the benefits of that, but sometimes things get a little too hairy... But think about how much structure was saved thanks to those little wheels on the end of the wing.
  3. It absorbs radiation, just not well. No reason to use it in place of liquid hydrogen, anyway. Of course the only reason you would use metallic hydrogen is if it is metastable (or possibly just stable). Unfortunately it isn't, so nobody really cares. Typically the only thing that matters about shielding is the total mass (probably not true for all forms of radiation, but if you don't have enough mass you will get fried. If you do you won't). Hydrogen is lousy for this (unless you have rather full fuel tanks. In that case have them between your radiation source and all crewed and critical parts of the spaceship).
  4. Primitive ways won't work at all without external reaction (people reacting to "telegraphs" sending lights). One thing you seem to be missing is that computers simply need an amplifier *somewhere* in the chain or the signal will be lost. Nerve cells either fire or don't depending on input, but they aren't driven directly by that input: they have to be able to create (or at least amplify from the input) the output signal.
Ultra-high-tech ways don't appear too promising either (although I've heard of plenty of work trying to get post-CMOS chips working). One of the tricky bits is that I'm fairly sure you need branching instructions to be Turing complete. Quantum "computers" might avoid this by essentially executing massive loops as single programs (and having some standard computer feed them each "loop"), which might be a way to have a "standard" computer do most of the work and let your "optical computer" do the "loops" all at once (of course this would require the same tech that builds silicon chips). This "do one thing very wide" idea would at least avoid the need for an amplifier, or really just put the "amplifier" in the [non-optical] computer that is running the optical computer and creating all the light at once.
You're also missing a lot with the whole idea of a TV screen having a lot of data. The HDMI cable that runs to the TV carries all the data the TV can show and isn't exactly cutting edge on the electrical scene. While light does have the capacity to carry a fantastic amount of data (see Claude Shannon's work on channel capacity [1940s]), trying to modulate light at that level seems next to impossible (then again, wifi has been running at microwave frequencies for a decade; who knows what the real limit is. Just expect to spend decades in infrared before reaching "optical").
Losing binary is a bigger problem than you might think. Binary itself is only somewhat important (SSDs ditched it decades ago), but the Boolean algebra underneath (thanks again, Shannon) allows much of what we think of as computer operations to happen. You would have to come up with some sort of similar operations that light can perform (perhaps some sort of associative memory (hash array) function, possibly with minor (presumably non-Boolean) operations to form some sort of map-reduce function; map-reduce basically powered Google 20 years ago, but they've since moved on). Throwing away 80 years of work on computing is a bit much. I'd suggest learning the very foundations of computing (Shannon and Turing) to understand why things are made using CMOS electronics and exactly what problems you need to solve with light.
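A minimal Python sketch of that Shannon channel-capacity point, with made-up bandwidth and SNR numbers purely to show the scale gap between microwave and optical carriers:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate over a noisy channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Hypothetical numbers, just for scale: a wifi-style 160 MHz channel vs. a
# (wildly optimistic) 100 THz optical window, both at a modest 20 dB SNR.
snr = 10 ** (20 / 10)   # 20 dB -> linear ratio of 100
for name, bw in [("160 MHz microwave channel", 160e6),
                 ("100 THz optical window", 100e12)]:
    print(f"{name}: ~{shannon_capacity(bw, snr) / 1e9:,.0f} Gbit/s")
```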
  5. In the dawn of computers, one common form of memory was the "delay line". Some slowly propagating signal was chosen to store the data (such as acoustic waves in mercury) and then sent out and back. This had two huge drawbacks: you could only access the data at specific times (and then typically had to resend it), and you could only store tiny amounts of data this way. I'm fairly convinced that magnetic core memory was probably the biggest single thing that let computers progress to where they are now. And while it has nothing to do with "pure optical", Intel made noises a few years ago about sending the most important/highest rate signals inside a computer (presumably meaning on the motherboard) through optical lines. I don't think they are even close to commercializing it, and I'm not sure it is a high priority anymore.
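To make the delay-line limitations above concrete, here's a toy Python sketch (entirely illustrative, no real hardware parameters): bits are only visible when they come back around, and they have to be rewritten each pass.

```python
from collections import deque

class DelayLine:
    """Toy recirculating delay-line memory: bits march down a fixed-length line
    and can only be read or replaced at the instant they emerge from the end."""
    def __init__(self, length_bits: int):
        self.line = deque([0] * length_bits)

    def tick(self, write_bit=None) -> int:
        bit = self.line.popleft()                                   # only this bit is visible now
        self.line.append(bit if write_bit is None else write_bit)   # recirculate (or overwrite)
        return bit

mem = DelayLine(8)
for b in (1, 0, 1, 1):          # write a 4-bit pattern into the front of the line
    mem.tick(write_bit=b)
for _ in range(4):              # wait for the rest of the line to pass by
    mem.tick()
print([mem.tick() for _ in range(4)])   # -> [1, 0, 1, 1], one full circulation later
```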
  6. My guess is that anything going to Mars (or similar) in the next few decades will look a lot like the ISS. Building along a spine and attaching long tubes that fit inside rocket fairings is simply the best way to build things in space with the current technology. For technology that includes constant 1g thrust? You might as well use Jules Verne as a guide: we are at least as far from that as Jules was from Apollo. Extrapolating from current tech would probably look like building construction: they exist to support structures against a 1g "acceleration". Just be sure to update your building construction to match the engine tech.
  7. I remember hearing that the USAF was dropping bombs with the entire warhead replaced with concrete: the idea was that the guided bombs were so accurate, they didn't need the explosion (which would just hurt people who weren't the target) when dropping a heavy bomb from 10km+. This would be ideal for such situations. I doubt you need a railgun. I think a Scott Manley video claimed over 100kph could de-orbit something in more or less one orbit. Actually, it might have been a bit higher than that, as you couldn't throw something that fast (a pro baseball pitcher throwing that slowly will quickly lose his job) but you could presumably de-orbit a golf ball with a golf club (Alan Shepard, eat your heart out), or maybe a Jai alai glove. I'd expect better accuracy with more delta-v (combined with the inevitable RCS thruster for final guidance). The lateral velocity would pretty much be part of the "kill area" and make it much more of a line than a "spot". It would certainly make any damage mitigation tricky, and probably make such a weapon even more useless.
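For a feel of the numbers, here's a back-of-the-envelope vis-viva sketch in Python (circular starting orbit, drag ignored until perigee dips to an assumed 60 km; the altitudes are made up). The result lands in golf-drive territory rather than fastball territory, consistent with the hedging above:

```python
import math

MU = 3.986e14           # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0   # mean Earth radius, m

def deorbit_dv(alt_km: float, perigee_km: float = 60.0) -> float:
    """Retrograde delta-v (m/s) to drop perigee from a circular orbit at alt_km
    down to perigee_km, where drag can finish the de-orbit within an orbit or so."""
    r = R_EARTH + alt_km * 1000.0
    rp = R_EARTH + perigee_km * 1000.0
    v_circ = math.sqrt(MU / r)                      # current circular speed
    a = (r + rp) / 2.0                              # semi-major axis of the transfer ellipse
    v_apo = math.sqrt(MU * (2.0 / r - 1.0 / a))     # speed at apogee of that ellipse
    return v_circ - v_apo

for alt in (300, 400, 800):
    dv = deorbit_dv(alt)
    print(f"from {alt} km: ~{dv:.0f} m/s ({dv * 3.6:.0f} km/h) retrograde")
```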
  8. Except that in rocketry pretty much every crazy idea is pitched and plenty of those have been tried, including pulse detonation rockets. The Air Force even powered a Rutan Long-EZ via pulse detonation in 2008 (presumably with way less thrust than a rocket would want, otherwise the airframe couldn't handle the acceleration or drag). I think the idea for rockets was that you get more momentum through detonation (energy more or less has to be a wash), so you get better Isp. Also they were typically trying to use air-breathers, so anything that gives you enough thrust without carrying oxidizer is a win. So I'm pretty sure there was plenty of research on doing just this (mostly before 2008, from memory), and it really didn't go anywhere. And no matter how they do it, any pulse detonation engine will have a noise issue that can't be silenced. Probably worse than supersonic propellers, and probably only really useful for uncrewed flight (assuming TWR >> 1).
  9. How do you prove that? Was there just one (or a limited subset) of possible states of metallic hydrogen that were potentially metastable and were recently shown not to be metastable? Obviously the onus was to prove the feasibility of metallic hydrogen, and that required assumptions that have presumably been proven untrue. - I have a lousy background in chemistry - I have been pushing our "science fiction" forum members to use this, with warnings that it would date any work and was likely to be seen as "no longer viable" at a moment's notice. Presumably now is too late...
  10. Forget about propellant. Either it isn't going anywhere, or neither energy nor propellant are an issue (you'll want to replace the propellant with either hydrogen or argon if you want some anyway). It makes sense for a space station (with a possible ability to move, possibly to Earth or Mars orbit, but expect to take the *long* way and require gravitational boosts for capture, or to access any other planet [and don't be surprised if you have to slingshot at Mars to get to Earth. If you can slingshot off Jupiter you can really get going...]). They may well be the primary form of "spaceship" in the next century or so, but not for long journeys. Expect to live in one while approaching (*slowly*) the next asteroid to mine...
  11. Direct thermal transfer without touching? Is there an intermediary that is cooled by the hydrogen and heated by the reactor but (mostly) non-radioactive? I think once I left the magnetosphere (probably just the atmosphere) I'd be more than willing to have radioactive exhaust (and jettison any intermediary).
  12. I know the Apollo spacecraft used ablation and I think Mercury (and Vostok) did as well. Oddly enough, white oak makes a pretty good ablator on its own. I'm curious about the earliest use of such ablation (older than Elon Musk's grandad?). Perhaps the lubrication of steam engines? Lubrication of cannon or rifles, especially high-volume things like Gatling guns and Maxim guns?
  13. But is using the Moller Skycar any better?
  14. And why those few minutes have to be continuous and you can't just throw it in the hold of the vomit comet (perhaps in a cabinet or something).
  15. Except that while in space you can have an arbitrarily hot "hot side" (I used 4000K as that seemed to be the limit before everything turns into plasma), you can't actually use the 3K of space in vacuum as your "cold side" (unless you accept infinitely large radiators). Using closed-loop cooling (open loop is asking to run out of coolant) and minimizing surface area led me to heatsinks running at 2600K (they need to be hot to emit a lot of heat via blackbody radiation). I was surprised how large such radiators simply *had* to be to sink a lot of power. Perhaps such vessels will either have the surface area (perhaps with fractal surfaces that emit over 180 degrees, or in a single direction and contribute toward thrust), or perhaps they will simply not need such power and avoid the issue altogether. But I should point out that this was so odd because it was the "exception that proves the rule". Normally there are enough variables that any one parameter can be reduced to what you need; in this case the minimum size for a heat sink (in vacuum) was simply shocking.
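The arithmetic behind that surprise is just the Stefan-Boltzmann T^4 law. Here's a minimal Python sketch assuming an ideal (emissivity 1) double-sided flat panel and a made-up 100 MW heat load, to show why the radiator has to run white-hot before the area stops being absurd:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def panel_area(power_w: float, temp_k: float, emissivity: float = 1.0,
               sides: int = 2) -> float:
    """Flat radiator area (m^2) needed to reject power_w by thermal radiation alone."""
    return power_w / (sides * emissivity * SIGMA * temp_k ** 4)

heat_load = 100e6                  # 100 MW of waste heat, purely illustrative
for temp in (300, 1000, 2600):     # room temperature, dull red, and the 2600K above
    print(f"{heat_load/1e6:.0f} MW at {temp:4d} K needs ~{panel_area(heat_load, temp):,.0f} m^2 of panel")
```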
  16. I've heard a claim that the SSME was temperature limited*, although perhaps merely because they stopped the R&D when they could survive the needed temperature. And of course the hottest part of the SSME will be the combustion gases themselves: in NERVA the reactor has to be hotter than the hydrogen and has to warm it up (presumably from cryogenic temperatures, unless active cooling is needed elsewhere). But now that you mention it, of course the nozzle will be cooler than the reactor (the throat might not be, thanks to adiabatic heating, and might be tricky to cool actively). *An astronaut giving a speech on YouTube (possibly a TED program). No idea how much it was dumbed down, but it was aimed at people who didn't know the rocket equation, so take it with a grain of salt. It still was pretty good, but you had to notice plenty of "lies to children" going on.
  17. There are a few. You'd be surprised how close to the limit (Carnot efficiency) modern power plants really are; similarly for data transfer: if you can afford the latency (I'm sure anything bouncing back and forth to geosync does this) you can get as close to the Shannon limit as you want. The one that surprised me was where I worked out that cooling panels for space warfare had some very hard limits (the fins on the ISS might be 1% of this limit, but the ISS hardly has the power needed to blast another space station) and they would make ideal targets for overheating (you can angle them to avoid two enemies, but not three coming from different directions). This was almost entirely due to Carnot limits and the melting points of potential black bodies/thermal transfer lines. Typically all that means is that you've reached the top of this "tech tree"; go climb another if you want to get any higher. And come up with some clever solutions to get around the hard limits (it might be possible to make a methane-based fuel cell that works more efficiently than burning methane and running it through a Carnot engine to power a generator. Or better yet, make sufficiently cheap solar cells and possibly a battery system).
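A quick Python sketch of the Carnot ceiling in question (the temperature pairs are illustrative guesses, not measurements):

```python
def carnot_limit(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on the fraction of heat any engine can convert to work."""
    return 1.0 - t_cold_k / t_hot_k

# Made-up but plausible pairs: a combined-cycle turbine inlet vs. ambient,
# and the 4000K source / 2600K radiator case from the earlier post.
cases = {
    "combined-cycle plant (1700 K hot, 300 K cold)": (1700.0, 300.0),
    "space radiator loop (4000 K hot, 2600 K cold)": (4000.0, 2600.0),
}
for name, (th, tc) in cases.items():
    print(f"{name}: Carnot limit ~{carnot_limit(th, tc):.0%}")
```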
  18. Has a magnetic (or otherwise virtual) throat and nozzle been shown to be feasible? I think the melting point of the materials that make these parts limits the Isp of NERVA, not the temperature of the reactor. I don't know enough about the workings of pebble beds to comment on those.
  19. While this is true, it wouldn't help at all to use diamond in a steel alloy. Although if you absolutely want to try such a thing, look up cermet (ceramic with molten metal jammed inside).
  20. There is more to Unity than the graphics engine. KSP used the Unity physics engine as well, and this ultimately led to the Kraken and the "floating origin" fix (something HarvesteR liked so much he named his new studio after it). Unless Unity can be convinced/compiled to run the physics engine in double precision (and eat the roughly 50% speed hit), the developers will be forced to make similar adjustments over and over (not to mention that they need a completely different way to construct large craft and constructable buildings [which may already exist]).
For those wondering, 16 bit floating point should be enough for most graphics shading. I think AMD's new RDNA does some of this. Doing the same for geometry work *should* work as well, but probably requires too much optimization work for too little gain (and those last few bugs look horrible and would be hard to fix). 32 bit floating point is overkill for just about any game (KSP being an odd exception). Calculating with 32 bits (24 bits of effective mantissa) is overkill when you only use at most 10 bits (maybe 12 bits in horizontal resolution). 64 bits are needed when you need to use previously computed values for the next calculation, and so on and so on. Anyone who learned to calculate "significant figures" via pencil and paper or calculator will get a sharp surprise when they learn how well numerical methods work when discarding "non-significant" precision in the midst of calculation (as far as I know, physically measuring to 32 bit precision is impossible in all but the most contrived experiments). I once tried a 32k point FFT in single precision and the audio came out worse than 8 bit resolution: accumulating the errors of 32k points (yes, each point only did about 15 multiply-adds, but they spread the errors around, similar to the formal definition of the Fourier transform) swamped the signal in rounding error.
Scott Manley's take (I didn't watch it again; I think it is almost entirely about address space, although he probably mentions that "all scientific calculation *must* be done in 64 bit [or better]"): https://www.youtube.com/watch?v=TSaNhyzfNFk
This makes a lot more sense for RTX than typical "AAA" engines. Just look at what it did to Quake 2, and expect garage-level developers (like Squad, or even less funded) to churn out nearly perfect lighting like the latest engines without the huge teams needed to make a "2019 engine" work.
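A small Python sketch of that error-accumulation point (assuming NumPy is available; the data is random and purely illustrative): a long naive sum drifts visibly in float32 while float64 barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.random(1_000_000).astype(np.float32)   # a million smallish numbers

def naive_sum(xs, dtype):
    """Accumulate left-to-right in the given precision, rounding after every add."""
    acc = dtype(0)
    for x in xs:
        acc = dtype(acc) + dtype(x)
    return float(acc)

reference = float(np.sum(values, dtype=np.float64))   # pairwise float64 sum as reference
for dtype in (np.float32, np.float64):
    err = abs(naive_sum(values, dtype) - reference)
    print(f"{np.dtype(dtype).name}: absolute error ~{err:.6g}")
```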
  21. It mostly did. But I wasn't kidding about the commercial sector having to be almost completely isolated from US defense contracts. And that by the end of the cold war, the "cold war" equipment couldn't compete with the "not hobbled by MIL-SPEC" gear made by commercial companies. I'm also not sure of the real effects of potentially splitting an entire generation of engineering talent between "military" and "commercial" domains. Things like the Blackbird were spectacular. But don't forget the thing was made almost entirely out of unobtainium, and priced to match. And there was always some cross-fertilization between military and commercial: for one thing, computers pretty much followed similar trajectories: DoD (and intelligence) funded supercomputers led the way, then mainframes/minicomputers followed a similar trajectory, followed by the microcomputer (although I'm not sure IBM ever noticed what was done outside of IBM). MIL-SPEC was critically important in the creation of rugged gear. But it also tore a nearly impassable chasm between the commercial and military worlds.
  22. Is there any indication that the horse harness was developed during the Bronze Age (and forgotten)? People had thought that the friezes depicting Roman chariot races were a bit over-dramatic with the horses gasping for breath, but it turned out that they were using ox harnesses that literally strangled the horses as they pulled the chariots. It wasn't until the late Middle Ages that the horse harness was used in Europe, allowing horses to plough instead of oxen. I wouldn't be at all surprised if this big detail was lost: they knew chariots existed and roughly how they worked, but the finer details about horse anatomy compatibility wouldn't show up in Homer. I suspect that somebody might have tried to fix things like that in the Constantinople races (a big thing), and been banned "because cheating".
  23. [Warning: this is detailed and probably too long. I was into this type of thing back in the day] Further "prescript": if you want real performance increases, look at GPU architectures. Unfortunately, they are pretty hostile to programmers and a poor fit for most algorithms and programming methods, but anything that can be made to fit their model can get extreme performance (even bitcoin mining wasn't a great fit, but it makes a good example of CPU vs. GPU power).
CISC vs. RISC really belongs in the 1990s. I think there is a quote (from early editions) in Hennessy and Patterson (once "The Book" on computer architecture, especially when this debate was still going on) that any architecture made after 1984 or so was called "RISC". A quick post defining "RISC" (or at least where to place real processors on a RISC-CISC continuum), by a then-leading name in the field: https://www.yarchive.net/comp/risc_definition.html [tl;dr: Indirect addressing was the big problem with CISC. Any complexity in computation is a non-issue to RISC; they just want to avoid any addressing complexity.]
As magnemoe mentioned, early CPUs had limited transistor budgets (modern cores have power budgets - most of the chip isn't the CPU "core"). CISC really wasn't a "thing", just the old way of doing things that RISC revolted against. Even so, I think the defining thing about CISC was the use of microcode. Microcode is basically a bit of software that turns a bundle of transistors and gates into a computer, and you pretty much have to learn how to write it to understand what it is. It also made designing computers, and especially much more complex computers, wildly easier, so it was pretty much universally adopted for CPU design. Once CPU designers accepted microcode, they really weren't limited in the complexity of their instructions: the instructions were now coded as software instead of separate circuits. This also led to a movement trying to "close the semantic gap" by making a CPU's internal instructions (i.e. assembly language) effectively a high level language that would be easy to program. The Intel 432 might be seen as the high point of this idea of CISC design, while the VAX minicomputer and the 68k (especially after the 68020 "improvements") are examples of success with extreme CISCyness.
The initial inspiration for RISC was the five-stage pipeline. Instructions wouldn't be completed in a single clock, but spread over a "fetch/decode/execute/memory access/write back" pipeline, with each instruction being processed on essentially an assembly line. So not only could they execute an instruction per cycle, the clock speed could be (theoretically) five times faster. Not only did RISC have the room for such things (missing all the microcode ROM and whatnot), it was also often difficult to pipeline CISC instructions. Another idea was to have at most one memory operation per instruction; any more made single cycle execution impossible (this also ruled out memory-indirect access, which in turn made out-of-order execution much more viable later). [Note that modern x86 CPUs have 20-30 cycle pipelines and break instructions down to "load/store" levels, so there isn't much difference here between CISC and RISC anymore.]
"Also, One of the features of many RISC processors is that all the instructions could be executed in a single clock cycle. No CISC CPU can do this." This is quite wrong.
First, RISC architectures did this by simply throwing out all the instructions they could that would have these issues, and thus had to use multiple instructions to do the same thing (and don't underestimate just how valuable the storage space for all those instructions was when RISC (and especially CISC) were defined). Early RISCs couldn't execute jump or branch instructions in a single cycle either; look up "branch delay slots" for their kludge around this. Finally, I really think you want to include things like a "divide" instruction: divide really doesn't pipeline well, but you don't want to stop and emulate it with separate instructions (especially with an early, tiny instruction cache).
Once pipelining was effectively utilized, RISC designers moved on to superscalar processors (executing two instructions at once) and out-of-order CPUs. These were hard to do with the simple RISC instruction sets and absolutely brutal for CISC. DEC made two pipelined VAX machines: one tried to pipeline instructions, the other pipelined the microcode. The "pipelined microcode" machine was successful but still ran at 1/5 the speed of DEC's new Alpha RISC CPU. Motorola managed pipelining with the 68040 and superscalar execution with the 68060. That ended the Motorola line. *NOTE* anybody who had to program x86 assembler always wished that IBM had chosen Motorola instead of Intel. The kludginess of early x86 is hard to believe in retrospect. Intel managed pipelining with the i486, superscalar execution with the Pentium, and out-of-order (plus 3-way superscalar) with the Pentium Pro (at 200MHz, no less). It was clear that at least one CISC could run with the RISCs in performance while taking advantage of the massive infrastructure it had built over the years. Once Intel broke the out-of-order barrier with the Pentium Pro, not to mention the AMD Athlon hot on its heels, the RISC chips had a hard time competing on performance against chips nearly as powerful, plenty cheaper, and with infinitely more software available.
From a design standpoint, the two biggest differences between RISC and x86 were that decoding x86 was a real pain (lots of tricks have been used; currently a micro-op cache is used by both Intel and AMD) and that x86 didn't have enough integer registers (floating point was worse). This was fixed with the AMD64 instruction set, which now has 16 integer registers (same as ARM). The CISC-RISC division was dead, and the RISCs could only retreat back to proprietary lock-in and other means to keep customers.
While that all sounds straightforward, pretty much every CISC chip made after the 386/68020 era was called a "RISC core executing CISC instructions". Curiously enough, the chips for which this was really true tended to fail the hardest. AMD's K5 chip was basically a 29k (a real RISC) based chip that translated x86 in microcode: it was a disaster (and led to AMD buying NexGen, who made the K6). IBM's 615 (a PowerPC that could run x86 or PowerPC code) never made it out of the lab, although that might be because IBM had already been burned by the "OS/2 problem" (emulating your competition only helps increase their market share). There really isn't a way you'd want to combine "RISC and CISC" anymore; RISC chips are perfectly happy including things like vector floating point multiply and crypto instructions. The only CISCy thing they don't want is anything like indirect addressing (something not hard to code with separate instructions that can then be run out-of-order and tracked).
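To make that indirect-addressing point concrete, here's a toy Python sketch (register and memory contents invented) contrasting a single CISC-style memory-indirect add with the equivalent RISC-style load/load/add sequence that an out-of-order core can schedule independently:

```python
# Toy machine state: a handful of registers and a flat word-addressed memory.
regs = {"r1": 5, "r2": 100}     # r2 holds a pointer to a pointer
mem = {100: 200, 200: 7}        # mem[100] -> 200, mem[200] -> 7

def cisc_add_mem_indirect(dst, ptr):
    """CISC-style  ADD dst, [[ptr]]  -- two dereferences and an add in one instruction."""
    regs[dst] = regs[dst] + mem[mem[regs[ptr]]]

def risc_sequence():
    """The same work as three load/store-style instructions:
       LOAD r3, [r2]   ;   LOAD r4, [r3]   ;   ADD r1, r1, r4"""
    regs["r3"] = mem[regs["r2"]]
    regs["r4"] = mem[regs["r3"]]
    regs["r1"] = regs["r1"] + regs["r4"]

cisc_add_mem_indirect("r1", "r2")
print(regs["r1"])     # -> 12
regs["r1"] = 5        # reset and do it the RISC way
risc_sequence()
print(regs["r1"])     # -> 12 again, just tracked as three simple operations
```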
Here's an example of how to build "CISCy" instructions out of RISC ones and wildly increase the power/complexity ratio. To me it is more a matter of making an out-of-order machine much more in-order, but you might see it in RISC/CISC terms: https://repositories.lib.utexas.edu/bitstream/handle/2152/3710/tsengf71786.pdf
I'd also like to point out that if you really wanted to make a fast chip between a 6502 and an ARM1, a stack-based architecture might have been strongly tempting. The ARM1 spent half its transistor/space budget on 16 32-bit registers alone, and I'd think that doing that with DRAM might have worked at the time (later, DRAM and logic processes wouldn't be compatible, but I don't think this was true back then). One catch with using a DRAM array for registers is that you could only access one operand at a time, which would work fine for a stack. Instructions typically take two operands and write to a third. The oldest architectures were accumulators (the 6502 was also an accumulator architecture): an instruction would take a single operand and either combine it with the accumulator (a single register), with the result replacing the accumulator, or write the accumulator out to memory. A stack[ish] machine would be an improvement on that, with the accumulator replaced by the "top of stack". CISC machines would allow both inputs to come from registers (or memory) and write to a register (or memory) [with the exception that the output would be the same as one of the inputs]. One of the defining characteristics of RISC was that they were load/store: instructions either worked on two registers and output to another register (without the CISC requirement that one be the same), or loaded/stored between memory and a register. The point of all this single-operand-instruction business is that it would be compatible with the DRAM array (which could then fit whatever registers you needed into an early CPU). The downside is that it would barely tolerate pipelining, and completely fail to go either superscalar or out-of-order (dying with the CISCs). But for a brief window it should really fly (much like the 6502).
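And a toy Python sketch of that single-operand stack idea (the opcodes and memory layout are invented): every instruction names at most one explicit operand, with the top of stack standing in for the accumulator:

```python
def run(program, memory):
    """Minimal stack machine: each instruction carries at most one operand,
    so only one explicit memory/register-file address is named per step."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":                # read one memory cell onto the stack
            stack.append(memory[arg[0]])
        elif op == "POP":               # write the top of stack back to one cell
            memory[arg[0]] = stack.pop()
        elif op == "ADD":               # operate on the top two stack entries
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return memory

# Compute memory[2] = (memory[0] + memory[1]) * memory[1]
data = [3, 4, 0]
program = [("PUSH", 0), ("PUSH", 1), ("ADD",), ("PUSH", 1), ("MUL",), ("POP", 2)]
print(run(program, data))    # -> [3, 4, 28]
```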
  24. "Greatest swordsman in all of Spain" likely meant "best at unarmored rapier/smallsword fighting", which is a bit different from armored fighting, let along sword and shield. I'd expect a surprising number of warriors to favor the spear in such situations, especially if it is easier to get the blade out of an enemy after running them through (which isn't something you typically need to do when defending your title of "greatest swordsman in Spain). I also wouldn't call such combat "pike vs. pike" unless the Aztecs either had steel or some other spear tip capable of piercing at least some type of armor (Cortez couldn't get through a breastplate either, but almost certainly a gambeson). And the sword would only make sense if he could have a shield made (not sure he'd fight better sword and shield vs. spear, but perhaps it would raise morale to have such a swordsman leading your army). Matt Easton posted a video today that is amazingly on topic (but I doubt he is a space nerd as well): https://www.youtube.com/watch?v=1XcuZbMi0mM