wumpus
Posts: 3,585
Everything posted by wumpus
-
How dangerous is a NERVA during its lifetime?
wumpus replied to Elthy's topic in Science & Spaceflight
and it leaks. A quick googling of "IR telescope hydrogen cooling" and "IR telescope helium cooling" seems to imply they typically use helium (and your above comments apply better to helium). The Herschel Space Observatory used helium for exactly this purpose, but it ran out of He after 4 years. The James Webb project looks like it will use hydrogen cooling, something rarely done before (it appears to have moved to H2 due to the scarcity of the rare He isotopes needed). Reading a paper on its cooling makes keeping an H2 tank cool in Martian orbit seem considerably easier: three parasols will keep the tank at 40K (at 1AU; hopefully a bit less out at Mars), and presumably you would only need a small cryocooler stage to get down to somewhere between 22-33K (the James Webb telescope needs sub-Kelvin temperatures; NERVA hardly does). Nevertheless, it involves a lot of dry mass to haul to Mars. And it will still leak. http://cmbpol.uchicago.edu/depot/pdf/white-paper_w-holmes.pdf [paper on James Webb cooling] http://www.nasa.gov/pdf/373665main_NASA-SP-2009-566.pdf [NASA 2009 paper that details plans for a NERVA trip to Mars. Admits that no zero-boiloff H2 containers existed in 2009.] -
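As a sanity check on "hopefully a bit less out at Mars", here is a back-of-envelope equilibrium-temperature sketch (assuming a fast-rotating blackbody with no sunshield; the solar constant and Stefan-Boltzmann constant are standard values, everything else is illustrative):

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_CONST = 1361.0  # solar flux at 1 AU, W/m^2

def equilibrium_temp_k(distance_au, albedo=0.0):
    """Equilibrium temperature of a fast-rotating blackbody at a given solar distance."""
    flux = SOLAR_CONST / distance_au ** 2        # inverse-square falloff
    # absorbed over a disk, radiated over a sphere: S(1-A)/4 = sigma*T^4
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_earth = equilibrium_temp_k(1.0)    # ~278 K at Earth's distance
t_mars  = equilibrium_temp_k(1.52)   # ~226 K at Mars's distance
```

A bare blackbody sits near 278 K at 1 AU and around 226 K at Mars, so the tank does get easier to keep cold farther out. Real parasols do far better than this naive estimate because they reflect most of the flux away rather than absorbing it, which is how a shaded tank can approach 40 K.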
Nuclear Energy. History, Ecology, Economy.
wumpus replied to Alias72's topic in Science & Spaceflight
The catch is that there isn't a good argument for building for a "base load" in the US right now (I'm less sure about other areas). You have solar power for the highest-load times. You have wind when it comes. You have natural gas that can be turned on and off quickly when those aren't sufficient. The huge coal/nuclear plants of yore just aren't necessarily needed in this situation. And if you are paying for 40 years' worth of electricity up front, that power had better be *necessary*. It's the uncertainty that kills it. Most of it comes from superstitious dread of evil spirits released inside of atoms (which I remain convinced was partly fanned by power execs full of FUD over "too cheap to meter"), but now you will have the money guys uncertain about a profit. No idea how you build something as expensive as a nuke plant when the money guys are uncertain about a profit. -
How dangerous is a NERVA during its lifetime?
wumpus replied to Elthy's topic in Science & Spaceflight
One dangerous thing about NERVA is that while it can take you to distant planets, nobody really knows how to get it to bring you home. NERVA uses hydrogen. It has to; otherwise you don't get enough exhaust velocity to bring your Isp up to levels that justify using NERVA at all (presumably helium *should* work, although much less efficiently: but I think you would need to crank the temperature up to chemical-rocket levels. My guess is that things [i.e. the nuke parts] would get *really* dangerous at that point). Hydrogen leaks. Hydrogen also boils off: the B-52 that carried the X-15 hauled two and a half tanks' worth of propellant for the X-15's one tank. One tank to fly with, and the other tank and a half to top off the tank as it boiled off. Current NASA plans for Mars include a "zero boiloff" H2 tank, but so far no such thing exists (presumably you "just" need to keep the tank under 33K, H2's critical point). Even if you do that, expect to lose at least 1% of your H2 each month. That Isp of 800 looks a lot less good when you suddenly have a lot less fuel. -
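To put a number on the boiloff complaint, here is a sketch using the Tsiolkovsky rocket equation (the stage masses and the 8-month transit are made-up illustrative numbers; the Isp of 800 s is from the post):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, dry_t, fuel_t):
    """Tsiolkovsky rocket equation; masses in tonnes, result in m/s."""
    return isp_s * G0 * math.log((dry_t + fuel_t) / dry_t)

ISP = 800.0              # NERVA-class specific impulse, s
DRY, FUEL = 10.0, 30.0   # hypothetical stage, tonnes

dv_full = delta_v(ISP, DRY, FUEL)

# 1 %/month boiloff compounded over an 8-month transit
fuel_left = FUEL * 0.99 ** 8
dv_after = delta_v(ISP, DRY, fuel_left)
```

Losing 1% a month compounds to about 7.7% of the hydrogen over 8 months, which with these toy numbers costs roughly 470 m/s of delta-v before the engine ever lights.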
Nuclear Energy. History, Ecology, Economy.
wumpus replied to Alias72's topic in Science & Spaceflight
One huge issue with nuclear power is that you pay nearly all the money for your electricity up front, and then get the electricity back over the course of 40 years or so. A bean counter might describe it as a redonkulous-sized energy option. The price of solar-panel-generated power is steadily dropping (and since it is driven by semiconductor manufacturing, this drop can look inevitable). The price of wind-generated electricity has also been dropping for a while (I was quite impressed when the difference between coal-generated and wind-generated power was negligible compared to the assorted overheads that appeared on my electricity bill. This must have been 10 years ago at least). While these don't directly compete with nuclear (nuclear is considered a "base" load), it would be hard to argue for building a huge and expensive nuclear plant to generate power more expensively than it can otherwise be had. I don't expect anyone to commit to funding a nuclear plant unless they are certain they won't be stuck with a white elephant that can't sell power at prices high enough to cover its own investment. This won't happen until the price of solar panels bottoms out (with sufficiently obvious reasons that it won't start dropping again). You might call this a case where the best is the enemy of the good. A better description might be an inherently slow-moving tech (which can only evolve as fast as nuke plants are built) being surpassed by a fast-moving tech that can evolve as fast as solar panels are built. Nukes might still "win", but I wouldn't go near them until the solar panels have "failed". -
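The "pay up front, collect for 40 years" problem is really a discounting problem. A toy net-present-value sketch (all dollar figures are hypothetical, picked only to show the shape of the bet):

```python
def present_value(annual_cashflow, years, discount_rate):
    """Discount a flat annual cash flow back to today."""
    return sum(annual_cashflow / (1 + discount_rate) ** t
               for t in range(1, years + 1))

CAPEX = 8e9           # hypothetical plant cost in dollars, paid up front
ANNUAL_REVENUE = 6e8  # hypothetical net revenue per year, for 40 years

pv_low  = present_value(ANNUAL_REVENUE, 40, 0.03)  # cheap money
pv_high = present_value(ANNUAL_REVENUE, 40, 0.08)  # expensive money
```

With these made-up numbers the same plant is comfortably profitable when money is cheap (present value around $13.9B against $8B of capex) and a loss when money is expensive (around $7.2B). The uncertainty described above lives almost entirely in that discount rate.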
High End Cpu and Low End Cpu Difference on KSP?
wumpus replied to ThePULSAR's topic in KSP1 Discussion
Pre-1.1, absolutely* no difference between i3 and i7. Post-1.1, unknown difference between i3 and i7, but likely only showing up for space stations and rockets of hundreds or thousands** of parts. The clock speed (GHz) should matter a whole lot more than the number after the "i". All that number tells you is the count of cores and threads (and those differ between mobile and desktop, so good luck comparing the two). Note that I am using an AMD 8-core chip notorious for poor single-core performance. KSP, a notoriously single-threaded game, does pretty well on it. If you plan on building Whackjob-sized creations (a player on this forum who posts youtube videos of his oversized rockets), you might want to think about how CPU speed affects KSP; otherwise it doesn't matter. Don't forget GPU speed. I suspect that a modern i3 (or i7) chip comes with enough built-in GPU to run KSP; an older laptop might have issues. I know my father's old AMD (dual-core Athlon-based) laptop is completely incapable of running KSP, and I blame the GPU (which was a complete non-issue when buying the thing, so no surprise there). KSP doesn't stress GPUs much at all, but you will need *something* to run the graphics on. * I'm ignoring the fact that an i7 should have significantly more on-chip cache than an i3. An i7 might hit lag at 106 parts instead of 104. ** thousands is very, very optimistic. -
How long would it take to build today's technology?
wumpus replied to Endersmens's topic in Science & Spaceflight
Follow some of the above links to see why rebuilding today's tech might take hundreds (or thousands) of years. You need a blast furnace: easy? You get to make the whole thing out of bricks (you can't reinforce concrete; you can't really find anything better than brick; you get to hand-mold and fire the bricks nearly individually). You dig the foundation with shovels. Any roads built to bring coal and/or ore are also dug with shovels (any guesses why Pittsburgh is a mountainous horror for modern drivers yet contains 3 rivers (and very expensive bridges)?). Once that is built you can *start* by building low-quality machine tools (to build medium-quality machine tools that might somehow cut the improved steel you need for 19th-century machine tools...). The sheer amount of infrastructure needed to build each level boggles the mind. The biggest difference is that during actual history 90% of the population was out in the fields producing food; presumably this thread's scenario requires wildly less population than the original method.

Amdahl's law (which states that computers [really algorithms] are limited by their serial portions, so adding more parallel hardware stops increasing speed far earlier than you thought) applies here as well. You aren't progressing up the tech tree until you have finished *all* of each level of said tree. James Burke [see Connections] has built an entire career out of pointing this out (well worth seeing if this isn't obvious to you; if you know the material better you will find some really iffy connections). Want to build a printing press? You need paper (it doesn't work all that well on sheepskin). You need fine metal casting (Gutenberg was a goldsmith...). You need a new type of ink (lampblack, cottonseed oil, and gum arabic in roughly equal quantities will do). You need a [grape] press (probably the easiest part, often seen as the "invention" by those who never looked closely).

Keep looking deeper and you only run into more issues that have to be engineered out, and expect to run into issues that absolutely require an intact infrastructure to build (the Wright brothers knew they would not be flying until they had at least an 8 hp engine under 200 lbs; theirs actually made 12 hp). Get a 25 hp engine at that weight and you can start to make a brick fly... Personally, I'd expect to have vacuum tubes before transistors, but the existence of a 1920s transistor patent plus a textbook explaining how they work might make it possible. Presumably you could make vacuum tubes with 18th-century tech, but slowly building up the tech to generate electricity would be a killer. My guess is that you absolutely *need* that electricity for your chemical industry: that would be the reason it gets built. When a computer gets built depends on the tech level at which power becomes available (also assuming a bias towards computers that our hypothetical population probably wouldn't have). -
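Amdahl's law, as invoked above, is a one-liner; a quick sketch (the 95% parallel figure is just an example):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup given the fraction of work that parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 95% of the work parallelizable, speedup saturates fast:
s4    = amdahl_speedup(0.95, 4)      # ~3.5x with 4 workers
s1000 = amdahl_speedup(0.95, 1000)   # ~19.6x with 1000, never above 20x
```

The 5% serial remainder caps the speedup at 20x no matter how many workers you throw at it, which is the tech-tree point: the serial prerequisites gate everything.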
Anything can be rocket fuel, if you try hard enough?
wumpus replied to Dman979's topic in Science & Spaceflight
Oddly enough, I don't think anyone has managed to use kerbals as rocket fuel. Judging from youtube, it is not for lack of trying. -
Rules of Thumb for Building Cheap and Cheerful Rockets
wumpus replied to Norcalplanner's topic in KSP1 Tutorials
Is Kerbal Engineer buggy, or is it just my installation (which should be current, but I had problems with it)? My copy shows different values for TWR in the VAB and on the pad, with the pad values looking more accurate. I was trying to get data justifying higher TWRs when I realized that what I thought my TWR was (as set in the VAB) was likely way higher anyway.

"RoT 3.6 - For serial staged LFO rockets, the upper stage should have thrust between 1/3 and 1/6 of the lower stage."

Usually (in non-Kerbal guides) I see this written as "the mass of the upper stage should be around 1/4 of the lower stage". Going by thrust gives a much heavier upper stage, but it should make up for that with much higher Isp. A better rule of thumb (not seen enough in the Kerbal community) is to keep the delta-v between stages roughly equal (which obviously doesn't come into play in C&C designs until you are using enough Kickback SRBs to have much control over your stage 1 delta-v). Note that equal delta-v stages is only a rule of thumb and not always optimal. Recovery also throws a monkey wrench into everything (assuming "cheap" equals "less cost" during career). One way to work it is to use SRBs for the first stage (landing a jettisoned first stage before it hits 3000m seems a fool's errand, although I'm curious if it works), followed by a recoverable second stage: stage 2 needs airbrakes and enough fuel/delta-v to get itself into orbit (once the payload has separated) and back to KSC. The key here is that once the payload is launched, the remaining booster is light enough that circularizing takes very little fuel, leaving plenty to get back to KSC. On the other hand, this gets sufficiently tiresome to burn me out of KSP for a while (MechJeb is both too inefficient to land the way I want it to land (low fuel usage) and not accurate enough to always hit the landing pad for 100% recovery when it does work). -
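To see why "roughly equal delta-v per stage" is a guideline rather than a law, here is a toy two-stage comparison (all numbers hypothetical: 1 t payload, 20 t of propellant to split, dry mass 1/8 of each stage's propellant, one Isp for both stages):

```python
import math

G0, ISP = 9.80665, 320.0   # a typical vacuum Isp for an LFO engine, s

def total_dv(p1, p2, payload=1.0, tank_frac=0.125):
    """Total delta-v of two serial LFO stages; each stage's dry mass is
    tank_frac of its propellant. Masses in tonnes, result in m/s."""
    d1, d2 = tank_frac * p1, tank_frac * p2
    upper = payload + p2 + d2            # what the first stage has to push
    m0 = upper + p1 + d1
    dv1 = ISP * G0 * math.log(m0 / (m0 - p1))
    dv2 = ISP * G0 * math.log(upper / (upper - p2))
    return dv1 + dv2

# try a few ways of splitting 20 t of propellant between the stages
splits = [(p1, 20.0 - p1) for p1 in [4, 8, 12, 14, 16, 18]]
best = max(splits, key=lambda s: total_dv(*s))
```

With these particular numbers the best split tried (16 t down, 4 t up) gives the upper stage noticeably more delta-v than the lower one, which is the point above: an equal split is a starting guess, not an optimum.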
How long would it take to build today's technology?
wumpus replied to Endersmens's topic in Science & Spaceflight
Are transistors (which have some unbelievable purity requirements) really that easy to make? I guess it depends on how far along your chemical industry is. I think there is a patent on the transistor from the 1920s. The kicker is that the patent describes an incorrect process: all working transistors from that era included contaminants (presumably supplying the doping "real" transistors need). As for ~Z80-level integration, I'd prefer a 6502, but you really want to stick a stack in there. Not necessarily a full-blown Forth chip like I alluded to, but a stack would probably crank up C as well (at least the C that fits in a few kbytes). The follow-up chip would be strongly stack-oriented (mostly so you could have all sorts of on-chip DRAM for whatever registers you need; stacks mean everything but one register needs only one R/W port). Unfortunately, this is a dead end as well, and it is off to ARM/Alpha land a few more shrinks down the road. I still think the key is to skip as many disasters as possible: non-transistor alternatives to core memory, moving to DRAM as soon as possible, understanding most of the issues behind floating point, understanding the break points between structured and OO code (don't ever use spaghetti code, and switch to OO when your data >> code). Don't use anything like x86 (mainly the segments, but really everything). Don't use anything like DOS. Don't trust anything from the 'net (i.e. don't use anything like Windows* [or at least the "run anything it sees" model splattered across all Windows editions]).

* While I'm writing this from Linux, I almost always play KSP under Windows. Not sure which version it was, but I suspect my nvidia drivers had more bugs than 64-bit Windows KSP (yes, I ran it for a while).

PS. Note that most of these "don't do that"s are all about using the benefits of hindsight (there was no excuse for releasing ActiveX in 1996).

As much as the 8088 segmenting model was hated, if it had just been stretched a little further (shifted over 8 bits instead of 4) it would roughly have lasted until replaced by 386 flat addressing. In 1977, 64k was a ton of memory (and using memory in 256-byte chunks would seem wasteful), so they didn't (I suspect the real reason was the extra 4 pins or so; Motorola made a similar "mistake" in ignoring the upper 8 bits of address words: quite useful on the 68000, incompatible on the 68020). While I recommend jumping to stack/Forth chips as soon as possible, that badly limits pipelining and executing multiple instructions at once (possible with a completely new architecture, but not really worth it). Who knows, maybe it would get the idea of multi-threading started early enough in the industry (these small fast stack chips should be much smaller than deeply pipelined multiple-execution-unit designs). Note that this should also skip the "close the semantic gap" issue: when assembler was still fashionable, it made sense to make architectures that could be easily programmed in assembler. This led to lots of ways to address memory. It turns out that the real benefit of RISC (nominally "reduced instruction set computing") should really be read as "reduced instruction set complexity", and that the complexity of the addressing modes (so loved by assembler writers) was the biggest (and really the only insurmountable) one. Basically you would go from stack chips to modern out-of-order chips programmed in something C-ish (obviously the string library would be sanely written, malloc/free would presumably be more careful (I don't know the whole story on this), and other pitfalls more carefully avoided (I'm guessing that this C would be designed to compile under Objective-C, and C++ would never get used)). -
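The segment arithmetic above is easy to write down. In real mode the 8086 forms a physical address as segment << 4 plus a 16-bit offset (20 bits, 1 MiB); the "stretched" variant mused about here would shift by 8 instead (24 bits, 16 MiB). A sketch:

```python
def phys_addr(segment, offset, shift=4):
    """Real-mode style address: segment shifted left, plus a 16-bit offset,
    wrapped to the resulting address-bus width."""
    return ((segment << shift) + offset) & ((1 << (16 + shift)) - 1)

# Two different segment:offset pairs can name the same physical byte:
a = phys_addr(0xF000, 0xFFF0)   # F000:FFF0
b = phys_addr(0xFFFF, 0x0000)   # FFFF:0000 -> same byte

reach_8086      = 1 << 20   # 1 MiB with the actual 4-bit shift
reach_stretched = 1 << 24   # 16 MiB with the hypothetical 8-bit shift
```

The aliasing (many segment:offset pairs per byte) is part of why the model was hated; the 8-bit shift would have bought 16x the address space at the cost of those extra pins.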
Anything can be rocket fuel, if you try hard enough?
wumpus replied to Dman979's topic in Science & Spaceflight
Remember, threats are verboten. Is the OP a test engineer? If so, he is hypergolic with chlorine trifluoride. ClF3 is famously described by John D. Clark as: PS. According to wiki, John D. Clark was not only sufficiently badass to be familiar enough with ClF3 for the above quote, he also influenced the writing careers of L. Sprague de Camp (highly recommended) and Fletcher Pratt (who I only really remember from writing The Compleat Enchanter with LSdC). Amazing. -
How long would it take to build today's technology?
wumpus replied to Endersmens's topic in Science & Spaceflight
[How big is the population? My guess is the question implies some sort of spaceship lands with zero tools (God/Vulcan made the first blacksmith's gloves...) but with a full library, living quarters, and a food supply (quick, trick it into supplying vegetable oil for a diesel engine).]

Except that to build an ENIAC you need the ability to make reliable* vacuum tubes and a lot of wire. To *run* the ENIAC you need ridiculous amounts of electricity (said to be enough to dim the lights of Philadelphia). You wouldn't build an ENIAC: you would make sure you had magnetic core memory first, and use that instead of the SRAM-like vacuum-tube disaster that ENIAC used (you also get to put your software into the core memory instead of using wire jumpers like ENIAC did). You then make something like a PDP-7 tuned to feed some sort of vector floating-point processor (assuming you need math tables for whatever reason; if you are heading straight to recreating the internet, maybe some sort of SMT design for multiple users). This pretty much means that you don't care about existing schematics/PCB/IC layouts. While schematics might be on computers, they typically assumed a full computer-controlled PCB manufacturing process plus pick and place (I'm furiously ignoring a bit in the 90s where that wasn't quite true in a lot of places, including where I worked). You don't want an 8088 in much the same way you don't want an ENIAC, because the 8088 was a gawdawful kludge. You would go from something like a Chuck Moore (the Forth guy, not the Intel co-founder) Forth chip (for absolutely minimal transistor availability) straight to something like ARM or Alpha. Reproducing Moore's law would largely depend on population. Obviously, the whole question should be man-years, not just years. Easily the biggest reason Moore's law has worked for the past 40+ years is that we have been buying every transistor that Silicon Valley (and other places) can make. If you can't spend billions on a fab, you don't get a fab (and haven't gotten one for years). Of course, you could easily declare ".28um is good enough"; in the 20th century a smaller process was faster, cheaper, and used less power. Nowadays you pick just one (usually low power).

* A reliable vacuum tube is pretty much a contradiction, but at some point the thing has to run long enough to make progress. Without using vacuum tubes for memory, you should need far fewer tubes. -
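For scale on "buying every transistor": Moore's-law compounding is just an exponential (the starting count and two-year doubling period below are the usual rough figures, used here purely for illustration):

```python
def transistors(start_count, years, doubling_period_years=2.0):
    """Moore's-law style compounding of transistor counts."""
    return start_count * 2 ** (years / doubling_period_years)

# From the ~2,300-transistor Intel 4004 (1971), doubling every two years:
est_2001 = transistors(2300, 30)   # ~75 million, roughly Pentium 4 territory
```

Fifteen doublings in thirty years is a factor of ~32,000, which is why the question above has to be counted in man-years of an entire industry, not calendar years.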
Software engineers and the rest of the world.
wumpus replied to PB666's topic in Science & Spaceflight
Also note that for those who run mainframes for legacy software rather than reliability, it is largely an internal IBM business decision to sell [actually I'm pretty sure it's still only lease] a mainframe rather than supply the software stack to run under emulation. They were selling "AS/400" hardware [not exactly a mainframe, but with a similar market] that was really a Power chip emulating the AS/400 back when PowerPC was new; I'm pretty sure every AS/400 sold since has been emulated. But that niche makes IBM a nice chunk of change. -
Note that multiple cores should have some effect with 1.1. Just don't expect miracles (like my AMD8320 catching up to an i3).
-
How long would it take to build today's technology?
wumpus replied to Endersmens's topic in Science & Spaceflight
Quite possibly infinity. You might have "all of today's knowledge" today; you won't have it all tomorrow. You can't even write it all down (on what? Can you, personally, make paper? How about with only what you can find while camping?). Every time somebody dies, that much tech is lost (nobody is learning even the old tech). Toss the green revolution. Toss fertilizers. Toss tractors. Toss the trucks needed to move the food to the people. Expect 90% of the population to die in the first season, followed by another 90% of the remainder within the next few years. Don't expect the remaining 1% to be doing anything but staying alive and heroically trying to re-create an 18th-century technology base. So now you have a population that can likely whip up any mid-20th-century design they need, say a Kalashnikov or a '50s Chevy, with hand tools (only slightly exaggerating). The real kicker is going to be the resources. Back in 1859 when oil was discovered in Pennsylvania, it took little more than drilling a water well to get oil. Since then, oil that can be recovered for less than $50/barrel has already been drilled and mostly pumped dry (survivors in oil country will have it a little easier, for a while). Iron ore won't be an issue: simply "mine" a landfill and harvest the metals for recycling/reforging, but there is no way to "recycle" oil. Unless this happens after a worldwide solar PV grid is built (and one largely locally based, not requiring too complex an infrastructure to run), there won't be a second industrial revolution. I'm still willing to believe that a population that can survive a collapse could get back to 20th-century tech (presumably with power it would take longer to get back to the present due to the lower population (and thus fewer engineers), but it would pretty much follow the curve). The effects of the collapse of the state likely have to be seen to be believed: consider the fall of Rome, or the havoc wrought by smallpox on the American Indians (oddly enough, much "higher" civilizations in North America had already collapsed on their own, but what was lost was shocking). I wouldn't be surprised if we don't get back to the 20th century within a generation (due to fighting/lack of property laws/whatever between those with food/fuel/metals). There's also the issue that whoever hoarded meth precursors will likely be king. -
You've seen the xkcd cartoon about knowledge of orbital mechanics? If you have to ask the question, your graph is flattening out. Fix any errors that may have crept in via Kerbin's unobtainium lithosphere with Real Solar System. Expect to require a ton more realism mods (you need a ton more delta-v, and the stock fuel tanks are unrealistically heavy).
-
Probably a reasonably large crystal. Hard to get attached to something at typical molecular sizes. Of course, after "blowing up" (read: a two-foot flame) cyclohexane in my first chemistry lab, I'll always remember that one.
-
Feynman's complete lecture series are now free.
wumpus replied to PB666's topic in Science & Spaceflight
Some notes: The lectures are gratis, not libre. You are stuck reading the webpage instead of a handy ebook. The webpage is rather cumbersome on a Nook (I have yet to try it with a more advanced tablet). It is still likely the best physics resource you will find anywhere (with the obvious exception of a paid paper/e-book copy of The Feynman Lectures). -
Would SpaceX ever sell it's Rocket engines for other customers?
wumpus replied to fredinno's topic in Science & Spaceflight
There was some talk among a number of projects competing for a lunar X Prize: one Falcon 9 was way out of the budget for such things, but by pooling their money they could buy one together. Presumably one team could do well enough with a Falcon 1 rocket, but they don't sell those anymore. I strongly suspect that part of the reason is that much of the tooling no longer exists (modified for Merlin 1.2 or so). Building a Merlin 0.9 (or whatever powered the original Falcon 1) may be essentially impossible [read: more expensive than a full Falcon 9], and there is likely no guarantee that a current Merlin could launch a Falcon 1 (consider that the difference between the last failure and the first success was a change in the time between separation and ignition; the details would likely be disastrous). Then there is that whole issue of asking a company that exists to go forward to Mars to take a number of steps backward for a very small budget. -
Software engineers and the rest of the world.
wumpus replied to PB666's topic in Science & Spaceflight
Eh? The old division bug happened because the original hardware design and the verification proof both had mistakes. From memory, the verification engineer manually proved half the lookup table correct (they must have been doing an inversion before multiplying), then proved the rest by comparison to the previously proven numbers. Unfortunately, the second part wasn't quite correct. Considering that they were designing the most complex processors in the world (they started the Pentium [the one with the division bug] and the Pentium Pro [which could keep up with the most outrageously expensive RISC chips] at the same time), I'd say they would have already started building a team of the best and brightest in automatic software verification (verification takes *way* more engineer-hours than design) with or without said bug.

- - - Updated - - -

Why are you still whining about "elitist bias"? If you are so much better a coder that you can produce your great work while ignoring the last 30-40 years of software design, just read the 3603 pages and write the code. Or you could actually listen to those who tell you that managing megabytes in assembler was an absolute disaster and that doing the same for gigabytes might not be the best way to proceed. Why not write it in Z80 code while you are at it, if you prefer that scheme? People don't buy AMD64 machines to run programs written in assembler, so why in the world would Intel care about assembly (and bother to make CPUs that are easy to write assembler for)? They've shipped a few billion CPUs since the last major assembler work was written (no clue what it was, probably an MS-BASIC-derived thing kept alive since it contained the last code Bill Gates wrote); assembly is strictly an afterthought (small exception: the Linux 1.x kernel was "written in C that might as well have been assembly..."). And just out of curiosity, how in the world do you expect assembler to help your memory locality problems? Because that [along with hard drive locality problems...] is more or less everything that slows down a modern computer. Cycles (and cutting down a square root) are free. You can take 100 times longer (for "small programs", expanding toward infinity for larger ones) to write your great work in assembler than in Python, but if the program has to keep accessing memory instead of cache, the thing will grind to a stop. -
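The cache-versus-memory point is easy to demonstrate without timing anything, using a toy direct-mapped cache model (the cache geometry and array size below are arbitrary; real hardware differs, but the ratio is the story):

```python
def cache_misses(addresses, line_size=64, n_lines=512):
    """Count misses in a toy direct-mapped cache (byte addresses in)."""
    tags = [None] * n_lines
    misses = 0
    for addr in addresses:
        line = addr // line_size       # which cache line this byte lives in
        slot = line % n_lines          # direct-mapped: one slot per line
        if tags[slot] != line:
            tags[slot] = line          # evict whatever was there
            misses += 1
    return misses

N = 1024  # a 1024x1024 array of 8-byte doubles, laid out row-major

row_major = (8 * (i * N + j) for i in range(N) for j in range(N))
col_major = (8 * (i * N + j) for j in range(N) for i in range(N))

seq_misses     = cache_misses(row_major)   # ~one miss per 64-byte line
strided_misses = cache_misses(col_major)   # ~one miss per access
```

Sequential traversal misses once per 8 elements (one per cache line), while the strided column walk misses on essentially every access: an 8x difference in memory traffic for identical arithmetic. That gap, not instruction count, is what hand-written assembler can't fix.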
Ever wonder what would have happened to Japan if they hadn't used so much nuclear power? All the flooding would have been contaminated by the coal ash left over from burnt coal. You would have a measurable increase in the cancer rate, as opposed to the scare stories repeated ad nauseam. The catch is that when building nuclear power plants, you are betting enormous amounts of money that solar power [presumably plus batteries, although most of the draw is during the day] will *stay* more expensive for decades, while it is presently falling fast and could pass all of those in the next few years.
-
What is your favorite stock engine?
wumpus replied to Mad Rocket Scientist's topic in KSP1 Discussion
Have to love the kicker (SRB). Started with way too much reliance on RT-10 ("can of boom") but economics, nerfs, and the general usefulness of the kicker pushed it ahead. It probably remains even after the post 1.0.x nerfs to SRBs. Yes, it counts. Just look under the engine tab and you will find all the SRBs. -
It might not be a threat to thorium (since thorium is still a ways from production), but the biggest threat to building a nuclear plant has to be the price of solar. Building a nuclear plant means paying a huge amount of money over way too long a time to provide power for the next n decades. The price of solar PV has been dropping drastically (nearly to the point of matching coal). It would be extremely silly (but pretty much standard for the industry*...) to be halfway through building a nuclear power plant only to learn that the electricity won't be worth the cost of finishing the rest of the plant (never mind the sunk costs). Yes, I know that nuclear is a base power generator and solar is not, but the fact remains that nuclear power simply requires that you sink the cost of decades of power all at once. Should the price change, you now have a low-value generator and some monstrous debts. Gods help you if you already started generating power (in which case you go bankrupt early, produce power, and go bankrupt again due to decommissioning costs (which I strongly suspect are due to an unholy union of superstitious anti-nukes with power company brass terrified of "too cheap to meter"**)). * Largely assumed due to the number of plants canceled. ** When the nuclear regulations were written, the power industry was basically a set of regulated monopolies with a cost-plus structure. Raising costs [over the long term] meant raising profits. Whether this is still true today or not, the regulations remain.
-
Macross Missile Spam -> The only way to go
wumpus replied to SomeGuy12's topic in Science & Spaceflight
I've always assumed that any stealth battlespaceship would be unmanned (making cyberwarfare that much more dangerous), and would have little more than a radio receiver active while dormant (thus the need to rotate and acquire the target once activated). Hiding them during/after launch seems next to impossible, and hiding crew changes just makes it worse. Radio receivers barely take any power: anything capable of radiating away ship-killing laser blasts can dissipate that IR trace down to next to nothing. Making the laser, the power supply for the laser, attitude control, radio, and any navigation abilities (not too likely) all work at 3K is probably a much harder problem (although by this point space engineering is almost certainly common enough to solve this with easily available off-the-shelf parts). I'd assume that the reason you don't have a total-kill first strike is that you can't find all the ships sneaking around (especially outside the ecliptic). I'd expect the number detected to be pretty small. My guess is that you have to detect them as they launch. You can sneak all you like as long as you stay in one orbit. -
Macross Missile Spam -> The only way to go
wumpus replied to SomeGuy12's topic in Science & Spaceflight
Is it? This seems to assume that "battlespaceships" (spacecraft with weapons) will have to do more than orient their weapons on a target. If the assumption is that moving gives you away and allows the enemy to destroy you with a beam weapon, then DON'T MOVE. Just fire your beam weapon from wherever you are. You might have a ton of decoy "ships" pretending to acquire targets, but at interplanetary ranges the ship will have fired before you see it move (due to speed-of-light issues). I'm assuming not only that there is no way to tell an asteroid from a ship (pretty trivial to disguise one), but also that spacefaring societies can usually (just not always) launch such ships without having them detected. Presumably they are manufactured inside an asteroid or something (and presumably anything moving from an Oort-based orbit to the asteroid belt will get carefully checked with active RADAR to determine just what it is supposed to be, if not simply blown up by any side aware it isn't theirs). Once one side decides to start a war*, all known targets are attacked with all weapons believed to be compromised, plus enough additional sacrificial weapons to destroy all known targets. Once this attack is detected, surviving weapons plus enough sacrificial weapons are used to destroy the now-known weapons (note: if the weapons are dispersed much beyond the orbit of a planet (or moon), they won't be able to coordinate strikes, but will more or less attack by predetermined zones of responsibility). This cycle continues until one or both sides die or surrender (asking for a draw within minutes of a holocaust seems outside of human psychology). Expect losers to send a Götterdämmerung message to remaining undetected (possibly occluded from the battle) weapon ships to hold fire for maximum damage (the net effect would be something like dealing with land mines long after a war). The more obvious Götterdämmerung weapon would be waiting in the Oort cloud, ready to divert comets (preferably iron stuff, but a snowball will do if it has the right orbit) at targets. * If things favor the first striker as much as it looks like here, expect a very unstable situation, with a war happening well before all ~10^30 km^3 of interplanetary space can be sensed at sub-km scale. Then again, this means most "spacebattleships" won't be detected. -
Macross Missile Spam -> The only way to go
wumpus replied to SomeGuy12's topic in Science & Spaceflight
"Just burn a few sensors." To be able to consistently detect [small] IR signatures, you are going to need *lots* of sensors *everywhere*. Burning the ones near you just gives away your position (though it may allow you to take an unknown course; doubtful, now that other sensors are pointing their long-range instruments in your direction). Burning the ones near a target just means they dump more out to detect you (the ones out deep are irreplaceable, but vastly harder to get them all than mere stealth spacecraft). Remember: space is big. You don't get sub-km coverage over billions of km with just "a few sensors".