wumpus

Everything posted by wumpus

  1. While this is true, it wouldn't help at all to use diamond in a steel alloy. If you absolutely want to try such a thing, look up cermets (ceramics with molten metal jammed inside).
  2. There is more to Unity than the graphics engine. KSP used the Unity physics engine as well, and this ultimately led to the Kraken and the "floating origin" fix (something HarvestR liked so much he named his new studio after it). Unless Unity can be convinced/compiled to run the physics engine in double precision (and eat the roughly 50% speed hit), the developers will be forced to make similar adjustments over and over (not to mention that they need a completely different way to construct large craft and constructable buildings [which may already exist]). For those wondering, 16 bit floating point should be enough for most graphics shading; I think AMD's new RDNA does some of this. Doing the same for geometry work *should* work as well, but probably requires too much optimization work for too little gain (and those last few bugs look horrible and would be hard to fix). 32 bit floating point is overkill for just about any game (KSP being an odd exception). Calculating with 32 bits (24 bits of effective mantissa) is overkill when you only display at most 10 bits (maybe 12 bits in horizontal resolution). 64 bits are needed when you have to feed previously computed values into the next calculation, and so on and so on. Anyone who learned to calculate "significant figures" via pencil and paper or calculator will get a sharp surprise when they learn how badly numerical methods behave when you discard "non-significant" precision in the middle of a calculation (as far as I know, physically measuring to 32 bit precision is impossible in all but the most contrived experiments). I once tried to do a 32k point FFT in single precision and the audio came out worse than 8 bit resolution: accumulating the errors of 32k points (yes, each point only goes through about 15 multiply-adds, but they spread the errors around, much like the formal definition of the Fourier transform) swamped the signal in rounding error (see the sketch below). Scott Manley's take (I didn't watch it again; I think it is almost entirely about address space, although he probably mentions that "all scientific calculation *must* be done in 64 bit [or better]"): https://www.youtube.com/watch?v=TSaNhyzfNFk This makes a lot more sense for RTX than typical "AAA" engines. Just look at what it did to Quake 2, and expect garage-level developers (like Squad, or even less funded) to churn out nearly perfect lighting like the latest engines without the huge teams needed to make a "2019 engine" work.
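A minimal sketch of that error accumulation (assuming numpy; a plain running sum rather than an FFT, but the same feed-forward of rounded results):

```python
import math
import numpy as np

n = 32768                          # same order as the 32k-point FFT above
rng = np.random.default_rng(0)
signal = rng.standard_normal(n)

# Feed each previous result into the next calculation, in both widths.
acc32, acc64 = np.float32(0.0), np.float64(0.0)
for x in signal:
    acc32 += np.float32(x)         # rounds to a 24-bit mantissa every step
    acc64 += x                     # rounds to a 53-bit mantissa every step

exact = math.fsum(signal)          # exactly-rounded reference sum
print(f"float32 drift: {abs(float(acc32) - exact):.2e}")
print(f"float64 drift: {abs(float(acc64) - exact):.2e}")
```

The single-precision drift typically lands around 1e-3 here while double precision stays near 1e-12; an FFT's chained multiply-adds compound the same effect much faster.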
  3. It mostly did. But I wasn't kidding about the commercial sector having to be almost completely isolated from US defense contracts. And by the end of the cold war, the "cold war" equipment couldn't compete with the "not hobbled by MIL-SPEC" gear made by commercial companies. I'm also not sure of the real effects of potentially splitting an entire generation of engineering talent between "military" and "commercial" domains. Things like the Blackbird were spectacular. But don't forget the thing was made almost entirely out of unobtanium, and priced to match. And there was always some cross-fertilization between military and commercial: for one thing, computers pretty much followed similar trajectories. DoD (and intelligence) funded supercomputers led the way, then mainframes/minicomputers followed a similar trajectory, followed by the microcomputer (although I'm not sure IBM ever noticed what was done outside of IBM). MIL-SPEC was critically important in the creation of rugged gear. But it also tore a nearly impassable chasm between the commercial and military worlds.
  4. Is there any indication that the horse harness was developed during the bronze age (and forgotten)? People had thought that the friezes depicting Roman chariot races were a bit over-dramatic with the horses gasping for breath, but it turned out that they were using ox-harnesses that literally strangled the horses as they pulled the chariots. It wasn't until the late Middle Ages that the horse harness was used in Europe, allowing horses to plough instead of oxen. I wouldn't be at all surprised if this big detail was lost: they knew chariots existed and roughly how they worked, but the finer details about horse anatomy compatibility wouldn't show up in Homer. I suspect that somebody might have tried to fix things like that in the Constantinople races (a big thing), and been banned "because cheating".
  5. [Warning: this is detailed and probably too long. I was into this type of thing back in the day] Further "prescript": if you want real performance increases, look at GPU architectures. Unfortunately, they are pretty hostile to programmers and not very compatible with nearly all algorithms and programming methods, but anything that can be made to fit their model can get extreme performance (even bitcoin mining wasn't a great fit, but it makes a good example of CPU vs. GPU power). CISC vs. RISC really belongs in the 1990s. I think there is a quote (from early editions) in Hennessy and Patterson (once "The Book" on computer architecture, especially when this debate was still going on) that any architecture made after 1984 or so was called "RISC". A quick post defining "RISC" (or at least where to place real processors on a RISC-CISC continuum), by a then-leading name in the field: https://www.yarchive.net/comp/risc_definition.html [tl;dr: Indirect addressing was the big problem with CISC. Any complexity in computation is a non-issue to RISC designers; they just want to avoid any addressing complexity.] As magnemoe mentioned, early CPUs had limited transistor budgets (modern cores have power budgets - most of the chip isn't the CPU "core"). CISC really wasn't a "thing", just the old way of doing things that RISC revolted against. Even so, I think the defining thing about CISC was the use of microcode. Microcode is basically a bit of software that turns a bundle of transistors and gates into a computer, and you pretty much have to learn how to write it to understand what it is. It also made designing computers, and especially much more complex computers, wildly easier, so it was pretty much universally adopted for CPU design. Once CPU designers accepted microcode, they really weren't limited in the complexity of their instructions: the instructions were now coded as software instead of separate circuits. This also led to a movement trying to "close the semantic gap" by making a CPU's internal instructions (i.e. assembly language) effectively a high level language that would be easy to program. The Intel 432 might be seen as the high point of this idea of CISC design, while the VAX minicomputer and the 68k (especially after the 68020 "improvements") are examples of success with extreme CISCyness. The initial inspiration for RISC was the five step pipeline. Instructions wouldn't be completed in a single clock, but spread over a "fetch/decode/execute/memory access/write back" pipeline, with each instruction being processed on essentially an assembly line. So not only could they execute an instruction per cycle, the clock speed could be (theoretically) five times faster. Not only did RISC have the room for such things (missing all the microcode ROM and whatnot), it was also often difficult to pipeline CISC instructions. Another idea was to have at most one memory operation per instruction; any more made single cycle execution impossible (this ruled out memory-indirect addressing, which is also what later made out-of-order execution much more viable). [Note that modern x86 CPUs have 20-30 cycle pipelines and break instructions down to "load/store" levels; there isn't much difference here between CISC/RISC.] "Also, One of the features of many RISC processors is that all the instructions could be executed in a single clock cycle. No CISC CPU can do this." This is quite wrong.
First, RISC architectures did this by simply throwing out all the instructions they could that would have these issues, and thus had to use multiple instructions to do the same thing (and don't underestimate just how valuable the storage space for all those instructions was when RISC (and especially CISC) were defined). Early RISCs couldn't execute jump or branch instructions in a single cycle either; look up "branch delay slots" for their kludge around this. Finally, I really think you want to include things like a "divide" instruction: divide really doesn't pipeline well, but you don't want to stop and emulate it with instructions (especially with an early tiny instruction cache). Once pipelining was effectively utilized, RISC designers built superscalar processors (executing two instructions at once) and out-of-order CPUs. These were hard to do with the simple RISC instruction sets and absolutely brutal for CISC. VAX made two pipelined machines: one tried to pipeline instructions, the other pipelined microcode. The "pipelined microcode" machine was successful but still ran at 1/5 the speed of DEC's new Alpha RISC CPU. Motorola managed pipelining with the 68040 and superscalar execution with the 68060. That ended the Motorola line. *NOTE* anybody who had to program x86 assembler always wished that IBM had chosen Motorola instead of Intel. The kludginess of early x86 is hard to believe in retrospect. Intel managed pipelining with the i486, superscalar execution with the Pentium, and out-of-order (plus 3-way superscalar) execution with the Pentium Pro (at 200MHz, no less). It was clear that at least one CISC could run with the RISCs in performance while taking advantage of the massive infrastructure it had built over the years. Once Intel broke the out-of-order barrier with the Pentium Pro (not to mention the AMD Athlon hot on its heels), the RISC chips had a hard time competing on performance against chips nearly as powerful, plenty cheaper, and with infinitely more software available. From a design standpoint, the two biggest differences between RISC and x86 were that decoding x86 was a real pain (lots of tricks have been used; currently a decoded micro-op cache is used by both Intel and AMD) and that x86 didn't have enough integer registers (floating point was worse). This was fixed with the AMD64 instruction set, which now has 16 integer registers (same as ARM). The CISC-RISC division was dead, and the RISCs could only retreat back to proprietary lock-in and other means to keep customers. While that all sounds straightforward, pretty much every CISC chip made after the 386/68020 era was called a "RISC core executing CISC instructions". Curiously enough, the chips for which this was really true tended to fail the hardest. AMD's K5 chip was basically a 29k (a real RISC) based chip that translated x86 in microcode: it was a disaster (and led to AMD buying NexGen, who made the K6). IBM's 615 (a PowerPC that could run x86 or PowerPC code) never made it out of the lab, although this might be from IBM already being burned by the "OS/2 problem" (emulating your competition only helps increase their market share). There really isn't a way you'd want to combine "RISC and CISC" anymore; RISC chips are perfectly happy including things like vector floating point multiply and crypto instructions. The only CISCy thing they don't want is anything like indirect addressing (something not hard to code with separate instructions that can then be run out-of-order and tracked).
Here's an example of how to build "CISCy" instructions out of RISC ones and wildly increase the power/complexity ratio. To me it is more a matter of making an out-of-order machine much more in-order, but you might see it in RISC/CISC terms: https://repositories.lib.utexas.edu/bitstream/handle/2152/3710/tsengf71786.pdf I'd also like to point out that if you really wanted to make a fast chip between a 6502 and an ARM1, a stack-based architecture might have been strongly tempting. The ARM1 spent half its transistor/space budget on its 16 32-bit registers alone, and I'd think that doing that with DRAM might have worked at the time (later DRAM and logic processes wouldn't be compatible, but I don't think this was true then). One catch with using a DRAM array for registers is that you could only access one operand at a time, which would work fine for a stack. Instructions typically take two operands and write to a third. The oldest architectures were accumulator machines (the 6502 was also an accumulator architecture): an instruction would have a single operand that was either combined with the accumulator (a single register), with the output replacing the accumulator, or the accumulator would be written to memory. A stack[ish] machine would be an improvement on that, with the accumulator replaced by a "top of stack". CISC machines would allow both inputs to come from registers (or memory) and write to a register (or memory) [with the exception that the output would be the same as one of the inputs]. One of the defining characteristics of RISC was that they were load/store: instructions either worked on 2 registers and output to another register (without the CISC requirement that one be the same), or loaded/stored between memory and a register (see the sketch below). The point of all this "single operand per instruction" business is that it would be compatible with the DRAM array (which then could fit whatever registers you needed into an early CPU). The downside would be that it would barely tolerate pipelining, and completely fail to go either superscalar or out-of-order (dying with the CISCs). But for a brief window it should really fly (much like the 6502).
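A toy illustration of that load/store split (hypothetical Python stand-ins, not any real ISA): the "CISC" memory-to-memory add makes three memory accesses in one instruction, while the RISC version does the same work as four instructions that each touch memory at most once, exactly the property that lets them fit one fetch/decode/execute/memory/writeback pipeline slot apiece:

```python
mem = {"a": 7, "b": 35}   # pretend main memory
regs = [0] * 16           # pretend register file

def cisc_add(dst, src):
    """One 'CISC' instruction: two memory reads plus a memory write."""
    mem[dst] = mem[dst] + mem[src]

def risc_load(r, addr):
    regs[r] = mem[addr]               # exactly one memory access

def risc_add(rd, ra, rb):
    regs[rd] = regs[ra] + regs[rb]    # registers only, no memory

def risc_store(addr, r):
    mem[addr] = regs[r]               # exactly one memory access

cisc_add("a", "b")
print(mem["a"])           # 42, in one hard-to-pipeline instruction

mem["a"] = 7              # reset, then do it RISC-style
risc_load(1, "a")
risc_load(2, "b")
risc_add(3, 1, 2)
risc_store("a", 3)
print(mem["a"])           # 42 again, in four pipeline-friendly instructions
```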
  6. "Greatest swordsman in all of Spain" likely meant "best at unarmored rapier/smallsword fighting", which is a bit different from armored fighting, let along sword and shield. I'd expect a surprising number of warriors to favor the spear in such situations, especially if it is easier to get the blade out of an enemy after running them through (which isn't something you typically need to do when defending your title of "greatest swordsman in Spain). I also wouldn't call such combat "pike vs. pike" unless the Aztecs either had steel or some other spear tip capable of piercing at least some type of armor (Cortez couldn't get through a breastplate either, but almost certainly a gambeson). And the sword would only make sense if he could have a shield made (not sure he'd fight better sword and shield vs. spear, but perhaps it would raise morale to have such a swordsman leading your army). Matt Easton posted a video today that is amazingly on topic (but I doubt he is a space nerd as well): https://www.youtube.com/watch?v=1XcuZbMi0mM
  7. I remember a description claiming that blasting away bits of a piece of PVC was a good way for cubesats to move around. I think the idea was to vaporize the PVC (not quite as advanced as turning it into plasma) and use that as rocket exhaust, presumably using an electric arc to vaporize a bit of PVC at a time. (For a sense of scale, see the sketch below.)
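A back-of-envelope sketch of what such a thruster might buy a cubesat (every number below is an assumption for illustration; real ablative/pulsed thrusters vary widely):

```python
import math

g0 = 9.81          # m/s^2
isp = 600.0        # s: assumed effective Isp for arc-vaporized PVC exhaust
m_wet = 1.3        # kg: roughly a 1U cubesat carrying its propellant
m_prop = 0.05      # kg of PVC on board
m_dry = m_wet - m_prop

# Tsiolkovsky rocket equation
dv = isp * g0 * math.log(m_wet / m_dry)
print(f"delta-v ~ {dv:.0f} m/s")   # ~230 m/s: plenty for drag makeup and
                                   # station-keeping, nothing like big orbit changes
```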
  8. It is entirely possible that the Great War (WWI) solidified poor "best practice" in aircraft design. Aircraft went from wildly experimental to: engine in front, one set of wings (as the biplanes slowly died) somewhat behind the engine, and a tail at the end of the fuselage. A lot of great ideas were put aside to standardize on this form. I'd expect that the modern explosion of computing and communications (especially the internet) produced a technological increase that put WWII to shame. Moore's law holding for nearly 50 years of exponential improvements is not something that even a prolonged war can match. While some may point to the Cold War as a great example of military spending pushing tech, it might help to look at exactly how (at least in the US) defense systems were constructed, especially compared to the commercial world (which was busy obeying Moore's Law). The defense world was controlled by MIL-SPEC, and had to use MIL-SPEC parts. These parts required a full bureaucratic system to allow anyone to use such a part, and they couldn't be improved without a similar bureaucratic approval. Once a manufacturer was certified to manufacture such a part, there was little reason to drop the price. By the 1990s, the US had completely dropped this system for COTS, which meant building defense equipment from off-the-shelf parts and then using whatever engineering was needed to meet the MIL-STD requirements (basically hit the equipment hard enough to know if it would survive a wartime attack). The timing of this shows that by the end of the cold war, instead of the massive DoD budget improving the commercial world, the "tail was wagging the dog", with the DoD having to buy far more advanced commercial products if it wanted state-of-the-art equipment (and then try to get them to pass MIL-STD). - PS: This was my first taste of design engineering, and I did this between 1997-2001, so I have some idea how it went... The US Civil War seems to show little advancement on the battlefield, but I suspect that the logistics experience (especially in the North) provided plenty of ready-made executives for gilded (and pre-gilded) age corporations. While there *were* repeat-fire rifles used in the war, they were used pretty sparingly and only for the biggest attacks (you *really* didn't want to be issued one). It seems the Indian Wars had much quicker advancement in rifle design, quickly working toward a repeating (and rarely-jamming) Winchester rifle. I'm guessing that the US Army bought rifles in smaller batches more often during the Indian Wars, and wasn't as picky about specific requirements (Civil War rifles were likely stuck with the Minie ball). -- Note: most of my Civil War knowledge comes from tramping around battlefields somewhat near Washington DC. I'm sure other members of this board have far more detailed knowledge.
  9. As far as I know, the most widely used allegedly "bug free" program out there is TeX. TeX is a relatively small program (at least compared to modern games), written by a single computing giant (Donald Knuth), and basically not updated with feature after feature after being finished. You might look into exactly what it took to make TeX bug free, and how impossible it would be to map that onto KSP (although the lack of further feature updates at least makes it theoretically possible; it would just involve a near-infinite series of further bugfixing updates). There might also be small bug-free programs in the industrial controls world and similar places. That said, the code in the Boeing 737-MAX is likely "correct" in that it meets the spec; it is the spec that is broken. So many "bug free" programs out there might well be "broken at spec" while faithfully executing the spec. Bug fixing is a great goal, just don't count on getting rid of a significant number of them (you'd have to fix Unity as well).
  10. Asteroid steel seems the most obvious, although if you have enough time and solar power, lunar aluminum would make sense. I'd assume that for a long time, off-world hull construction would be for spaceships where shielding is far more important than mass, so hollowing out an asteroid would make a lot of sense. "Those with 2g or higher." Sorry, chemistry doesn't care a hoot about gravity. Forging might be easier under some considerations (although probably not others), but you really don't want to build your hull in Earth's gravity well, let alone somewhere with a 2g acceleration. Space construction will be done in zero-g until energy (and lifting into orbit) is essentially free.
  11. I'm pretty sure that the higher the satellite's orbit (and therefore the more volume it has to itself), the less likely it is to hit anything. I suspect requiring insurance to cover anything you hit (the FAA required SpaceX to have plenty of insurance just for the "hopper" test) would have the actuaries get the real requirements much closer to reality. Of course, this is more how the USA works; I'm not sure other countries would be interested.
  12. Not a siege tower. There's no reason to make a space elevator roll, and typically any motion not due East or West will have poor consequences. More like a defensive tower than an offensive one.
  13. One thing you seem to ignore is the sheer size of an Orion. You could make an unmanned one small, but in general both the pusher plate and the rest of the vessel require a lot of inertial mass. Think roughly the size of a WWII battleship (or a small modern carrier) and you might grasp the size of these things. Putting it on magnetic rails and zipping it off to space is clearly off the table. Who knows, maybe someone may try a fuel-air pusher plate at small scale. According to the last thread that eventually became a pusher-plate thread, that may be possible (at least for the first stage; you'll need a more conventional vacuum stage after that).
  14. As far as I know, SpaceX exists entirely as a service company. Blue Origin may sell rocket engines, but as far as I know, SpaceX only provides launch services. I only hope that doesn't bleed over to Tesla; he has enough communications issues there without "ship" meaning anything other than delivering cars (and later trucks).
  15. All I've heard is that it is smoke and mirrors plus a few protocol changes. The "new frequencies" simply don't have the range to be used as normal cell tower frequencies; they are more like wifi ranges (mostly thanks to also being absorbed by water molecules). See the sketch below for the path-loss side of that.
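A minimal sketch of the range problem (Friis free-space path loss only, ignoring beamforming gain and the absorption effects mentioned above):

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss between isotropic antennas, in dB."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

for f in (700e6, 2.4e9, 28e9):   # classic cell band, wifi, mmWave "5G"
    print(f"{f / 1e9:5.1f} GHz: {fspl_db(f, 1000):5.1f} dB over 1 km")
# ~89 dB at 0.7 GHz, ~100 dB at 2.4 GHz, ~121 dB at 28 GHz: before any
# absorption, the mmWave band already starts roughly 30 dB in the hole.
```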
  16. Sorry, they left out the actions of the magnetosphere when making those calculations. All the residue from all the nuclear explosions within roughly geosync orbit will come back to Earth (of course that means safer isotopes than when the bomb went off, but they are still coming back and wreaking havoc on Earth). You can mostly avoid that with polar launches (i.e. launching out of Antarctica), but launching out of the Mojave is a no-go. And you aren't landing an Orion, unless that "bomb ignited with a laser" works a lot better than the NIF can manage. In general, the energy of nuclear weapons doesn't scale much with cost, so you might as well use the biggest bombs you have. And that means a big Orion. And that means landing is right out (never mind that it means all the residue fires straight at the Earth instead of the other way around).
  17. Oddly enough, turbo-compound engines were also in considerable use in WWII. The idea was that you used the exhaust from your piston engine to drive a turbine, which helped drive the propellers. This increased fuel efficiency ("brake specific fuel consumption") and thus range, at a cost of extreme complexity and maintenance costs. Simplifying the whole structure into a turboprop was a big driver in that direction. My understanding is that the real driver for buying a turboprop over a piston engine is the maintenance costs (A&P mechanics are expensive). Increased fuel efficiency is great, but you save more on longer time between overhauls. I would assume so, at least until material A was heated to a uniform temperature. Until then, the rest of the bar on material A's side will act as a heatsink, drawing heat away from the connection point. As material A's temperature increases, this effect will get less and less (to the point where the effect may switch, as material B's side will remain cold but is a lousy heatsink); there's a rough sketch of this below. Warning: I have no formal education in thermodynamics beyond learning enough in physics class to understand the Carnot cycle. On the other hand, 3Blue1Brown (a great youtube channel devoted to explaining math) is doing a multipart series on the equation that governs the topic you are asking about.
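A rough 1-D sketch of that heatsink effect (explicit finite differences; the diffusivities are made-up illustration values, with material A conducting 10x better than B and heat deposited at the junction):

```python
import numpy as np

n = 100
alpha = np.where(np.arange(n) < 50, 1.0e-4, 1.0e-5)  # A = cells 0..49, B = rest
dx, dt = 1e-2, 0.1        # alpha*dt/dx**2 = 0.1, comfortably stable
T = np.zeros(n)           # both far ends held cold (never updated below)
j = 50                    # the A/B junction, where the heat goes in

for step in range(5000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # discrete second derivative
    T[1:-1] += alpha[1:-1] * dt * lap
    T[j] += 0.01          # constant heat deposited at the junction

print(f"integrated temperature, A side: {T[:j].sum():.1f}")
print(f"integrated temperature, B side: {T[j+1:].sum():.1f}")
# The better-conducting A side soaks up roughly 3x more of the deposited
# heat (about the square root of the diffusivity ratio) early on: that is
# the heatsink effect, and it fades as the A side warms through.
```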
  18. Nuclear thermal rockets were giving an Isp (via heating up hydrogen) of 800s in the early 1970s. I've heard that Isp ~1200s should be viable now for NTRs. Metallic hydrogen would have the same issues, and could be cut with LOH to get the most thrust for your given metallic hydrogen (at the cost of Isp). I'd hope that's enough to get you an SSTO (see the mass-ratio sketch below). I wouldn't ignore fuel-air explosions. It would be a quick and dirty means of getting an air-breather up. The real issue is that an air-breathing Orion SSTO isn't just going to LEO; it has to go to basically TLI (not really, but the difference in delta-v between where the magnetosphere ends and escape velocity is pretty small) before firing up the nukes (assuming you can convince people to let you put them on board). Does LOH even work as a fuel-air bomb? I'm pretty sure you could get kerosene to go boom, but I've never heard of LOH being used (probably because chemical Orions would be the only reason you would try it). You should also be able to spray the LOH in such a way as to get a highly effective shaped charge toward the pusher plate. The two real contenders for "first SSTO" would have to be a space elevator and an Orion. Once space elevators are built, I can't imagine much R&D wasted on SSTOs (although you might get enough Isp that it would be a non-issue someday). Of course, space elevators easily require as much unobtanium as any other SSTO, but simply have more return on investment than any competitor. The Orion could be built by anyone who really wanted to explore space directly and had the political backing to do it. Metallic hydrogen might squeak in, but I'm guessing early reports were overly optimistic.
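A rough mass-ratio sketch of why those Isp numbers decide SSTO viability (assuming the commonly quoted ~9.4 km/s delta-v to LEO with gravity and drag losses folded in):

```python
import math

dv = 9400.0    # m/s, assumed total delta-v to reach LEO
g0 = 9.81

for isp in (450, 800, 1200):   # hydrolox, early-70s NTR, hoped-for modern NTR
    ratio = math.exp(dv / (isp * g0))   # Tsiolkovsky: wet mass / dry mass
    print(f"Isp {isp:4d} s: mass ratio {ratio:5.2f}")
# ~8.4 at 450 s (tanks, engines, and payload squeezed into ~12% of liftoff
# mass), ~3.3 at 800 s, ~2.2 at 1200 s: the difference between "barely
# possible" and "comfortable" for a single stage.
```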
  19. Belly lander? Any particular reason? Or just to make a Starship-like reentry a wee bit easier (except you'll need much more powerful landing rockets for your Earth landing)? Non-nuclear bombs? You get a lower Isp/less energy than current rocket fuels, I'm afraid; the chemistry simply works that way* (see the numbers below). The exception would be using fuel-air bombs behind a pusher plate. They would have "less Isp/energy" than current air-breather fuels, but that can still be stupendously high (it had better be, you'll still need tons of dry mass on any pusher plate). Nuclear power without a pusher plate? Not enough thrust to lift off. Except they both look wildly different than 20th century "AC" and "DC" motors. I suppose that old-fashioned electric motors would still make some sense (for AC motors under 20hp, and for DC motors where you don't care about longevity or efficiency, at even lower power maximums; as in, you won't pay more than a few cents for a modern DC motor controller). Induction motors work far better if you create the AC on the fly (in a way that looks just like making a DC power supply, except you can also produce current going the other way), and "DC" motors typically have the "DC" provided in exactly the same way (except it goes directly to the magnets instead of creating the magnetic field via induction). Supposedly you should be able to create a pulse jet with no moving parts (I'd assume it would only work in a narrow airspeed range), and that could be used in the modern world (and would require some extreme supercomputing to develop). Otherwise I don't foresee pulse jets being practical (even then it probably wouldn't be all that practical: probably too noisy for general aviation, too inefficient for the big boys, and only useful to model aircraft makers who really want a jet engine (which is where I'd expect to see classical pulse jets anyway)). * The difference is power vs. energy. Bombs deliver a great deal of power** all at once, while conventional fuels take a while to burn (until you add LOX to the mix; that's the reason rockets can explode). Also, bombs have to contain their own oxidizer, which is typically lower in energy density than LOX. ** I doubt my physics teachers would approve of defining power in a way that ignores work, but assume a pusher plate if that makes you happy.
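A quick numbers check on that footnote (rough textbook-level figures, treated here as assumptions; the point is that explosives carry less energy per kilogram than fuel-plus-oxidizer, they just release it far faster):

```python
# Approximate specific energy of the complete mixture, oxidizer included.
specific_energy_mj_per_kg = {
    "TNT":                       4.6,   # detonation energy, self-oxidized
    "kerosene + LOX (stoich.)":  9.8,   # ~43 MJ/kg fuel, diluted by LOX mass
    "hydrogen + LOX (stoich.)": 13.4,   # ~120 MJ/kg fuel, diluted by LOX mass
}
for name, e in sorted(specific_energy_mj_per_kg.items(), key=lambda kv: kv[1]):
    print(f"{name:28s} {e:5.1f} MJ/kg")
```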
  20. KSP 2.0 is said to still be using Unity, which implies lower hardware requirements than your typical AAA game. No idea if they plan on replacing the physics engine (almost certainly not if remaining on Unity), but that would at least give them the option of using double precision and more threads. I'd assume that you want a few cores (although an i3 with 2 cores and HT might be enough) and a fairly high clockspeed. If they want to include the colonies as shown, there will have to be some drastic changes to the physics model. No computer ever designed could run that many kerbal objects through the Unity physics engine without grinding to a complete halt (I'd assume that anything planted in the ground will be "physicsless" until something bumps into it: presumably any crashes would use a model similar to crashing into KSC).
  21. Eh? I've often heard this argument (although it isn't quite as bad as movies "ruining" the book). The old game will still be there. Of course, as a Steam game I'm only hopeful it will still be for sale (and considering the $60 price, KSP 2.0 may well not want the competition). The best we can hope for is a last stable version with a more or less full suite of mods (that last bit is critical: if the modders all jump on the hype train while Squad slowly updates KSP 1.x, we could have a highly fractured community all over the map, depending on which mods you won't upgrade without). This isn't an online game. You can still go to Duna even if everybody else is on KSP 2.0. You can still go to Eve (but you still can't lift back off) even if Squad/Steam turns the servers off (thanks Squad and Harvester for that traditional lack of DRM). The retro-gaming movement has shown that good games still have life in them no matter how far the industry has moved on. There are three basic possibilities for KSP 1.0 after release of KSP 2.0:
1. KSP 2.0 completely fails, and the community keeps KSP 1.0 alive (highly unlikely, but it has happened. I think Asheron's Call 2 died a few months after launch. There must be other examples...)
2. KSP 2.0 is merely adequate, forking the community to various degrees. KSP is a great game: trying to capture lightning in a bottle twice (with completely different developers) is going to be tough. While KSP has many places that need serious improvement, I'd be shocked if KSP 2.0 does the sandbox quite as well, and that is what makes and breaks KSP.
3. KSP 2.0 is clearly superior, and the community moves en masse to KSP 2.0. Actually, I assume that all new players will start on KSP 2.0, so Steam statistics will probably show this to be the case regardless, while any grognards clinging to KSP 1.0 will claim that case #2 is "obviously" happening.
In any event, KSP 2.0 will likely both need new mods and have more obvious places that need expansion, so it will probably get the modding community regardless.
  22. That the US patent office advertises using "shield patents" for reason #3 tells you quite a bit about how broken the US patent system (and any similar system) is. Also, it's been like this at least through a couple of Bush administrations and two Democratic administrations as well, so much of the political machinery holding it together is bipartisan (I'm guessing anyone who can afford effective lobbyists can afford enough patent lawyers to usually get an edge over the competition).
  23. I'd assume a "real" definition includes lift > weight, but that might include the Apollo capsule (the drag *should* disqualify it, but I doubt that is formally defined). I'm pretty sure that it was designed to skip through the atmosphere to a higher orbit and then come down again (all to reduce maximum heating). I don't think that was ever done in a real mission (they angled it, but not enough to leave the atmosphere). Of course, Apollo had a much higher reentry velocity than any other returning spacecraft, so it had an amazing lift advantage just by moving the center of mass away from the center of pressure.
  24. 2020: SLS test launch
2022: SLS crewed? SLS Block 1B test?
2024: SLS Block 1B crewed + unknown support craft
Delta IV Heavy and Falcon 9 are presumably available for shipping whatever is needed to TLI, but would need a contract *right* *now* to build/integrate the parts for a 2024 launch. Of all the innovations that have come out of SpaceX, why did NASA/Boeing have to pick up "Elon time"?
  25. I doubt they have bet the company on Starlink (doesn't Google have money in it as well?), but they have more or less bet the company on Starship, and Starlink is its only current job (although presumably Starship could launch multiple birds into reasonably close orbits at Falcon 9 prices).