Everything posted by LaytheAerospace
-
I am saddened by this but, I'm getting bored.
LaytheAerospace replied to LostElement's topic in KSP1 Discussion
I'm a couple thousand hours in. At this point I hop on for patches, but I'm being very careful not to get burned out prior to 1.0. Part of my strategy is playing exclusively sandbox mode. Played career for the first couple patches it was available, but now I'm treating it like an entirely new game launch so I have something significant to anticipate, and not just waiting for patch notes that tickle my fancy. So far it's working. -
I think my first successful Laythe rescue picked 12 Kerbals up in a 60t lander, which barely made it back to orbit. These days I fly planes literally everywhere. Typical weight for a landing configuration (pair of rapiers) would be around 12-15t. Interplanetary configuration (pair of LV-Ns) is more like 20t.
-
My 100km equatorial orbit is usually absurdly crowded. It's so thick you can actually see debris from just about any point. Given that I just reset my save after each patch and rebuild my ships, it's never been much of a problem. Anything I'm sending to another planet parks at 605km, which I also don't bother to clean up, but that orbit is big enough that nothing ever enters physics range.
-
Given my name, you'll probably be unsurprised to know that I'm FAR + Planes all the way. My goal is to run my entire space agency on spaceplanes alone. So far my efforts to build a true heavy lift plane have all failed. We need functional cargo bays!
-
I am a leaf on the wind. Watch how I...
-
What is the worst situation you've ever recovered from to complete the mission? "Complete" could mean the original mission parameters, or simply getting Jeb home alive.

My best was the first docking test of my "Space Truck Mk II", a 15t spaceplane with a robotic arm on it that can aim and fire a magnet with a 50m reach. Everything went great, until the docking. I managed to hit the shift key and throttle up the engines, slamming into my refueling station. Both the plane and fuel depot were destroyed, but Jeb's cockpit remained intact.

Bill to the rescue! I fueled up a second Space Truck Mk II and sent Bill to intercept Jeb's wreckage. The debris field had largely dispersed by the time he arrived, making it easy to approach Jeb's capsule. The robotic arm, for its part, worked fantastically. Bill closed to about 10m from Jeb, lined up the arm and snagged him on the first try. With Jeb's capsule pulled in tight to the cargo area of the ship, Bill flew to my space station proper, where I keep a single plane fueled up for occasions just such as this. A short EVA later, and Jeb was safe, ready to fly home in style. Bill then proceeded to complete the original mission goals, successfully docking with the station. Both Bill and Jeb landed safely at KSC, textbook runway landings.
-
Meh, it's only fanboyism when you refuse to change your opinions based upon the evidence. I'm just an ....... that's obsessed with CPU power. For the record, I was a huge AMD fan right up to the Phenom generation. I even cheered when they beat Intel to 1.0 GHz, and was horribly disappointed when Intel's first 1.0 GHz chip trashed theirs in benchmarks. As far as a GPU upgrade goes, I recommend anyone who's lost start here: http://www.tomshardware.com/reviews/gaming-graphics-card-review,3107.html
-
The Star Trek star field effect has always irritated me: not just the speed, but the distance. Many of the stars are close enough to appear as well-defined spheres! The Enterprise must be traveling through the galactic core to see so many stars that closely.
-
The original topic was about improving performance in KSP. The OP made the common mistake of assuming he needed a new GPU to do that (makes sense, it's how you improve performance in 99.9% of games). The GPU issue having been put to rest, the discussion moved on to the component that actually matters for KSP: the CPU.
-
Binning is one thing. Re-releasing a year-old $150 chip as your new top-of-the-line chip (that you hope to charge unsuspecting customers $800 for) is something entirely different. Intel's top-of-the-line chips are all very much different silicon, and have been for as long as I can remember. And AMD has always been the worst about this, selling low-binned quad core chips as tri core with the offending core disabled, throughout the Phenom era. So don't tell me everyone does this. AMD is alone in what they did with the FX 9590, and have a history of binning more aggressively than their competitors.

Not at all applicable in this instance. The chips were already on the market; you can't go back in time and cut them down to differentiate them from your re-release. AMD gets no credit for this in the case of the 9590.

And sometimes they try to re-release their $150 chip at $800 to trick people into thinking it's better than it really is, so customers end up paying over five times as much for the same performance. I'm not trying to be snarky, I'm just trying to point out that AMD, like every other company on the planet, is motivated by profits. They aren't binning chips so you have a chance of getting a better chip at a discount, they're doing what they think will maximize their return on investment. Attributing it to kindness, or "so people might get lucky", is patently ridiculous. They want you to buy their more expensive, higher profit chips just as much as Intel does.

Saying AMD's best-ever desktop CPU is only on par with a chip Intel released 4 years ago, using less than half the power at 2/3 the clock speed with half the cores, is faint praise indeed. And the differences may have been a mere 15-20% against a 2500k, but a modern Intel chip is much faster.
http://www.anandtech.com/bench/product/836?vs=1289
http://www.anandtech.com/bench/product/836?vs=288 (in case you meant to claim that the 4770k was only 15-20% faster than a 2500k, which is also not true)
Single-threaded CPU benchmarks have the 4770k as much as 100% faster than the 9590, and we're comparing them at stock settings. When the 4770k's much greater overclocking headroom comes into play, the difference is bigger still. And it's not like AMD is winning the multithreaded benchmarks, despite its 33% clock speed advantage and 100% core count advantage. It should be winning by a minimum of 100% in multithreaded integer workloads, but it's not, because its performance is garbage.

So you're going to tell me that AMD is doing a better job of improving performance by re-releasing their $150 chip as a new high-end part a year later? Or that Intel has its enormous performance advantage over AMD because they've advanced performance so much less than AMD did over the same period of time? If not, then this criticism of Intel in defense of AMD doesn't make a lot of sense. You should be criticizing AMD for failing to improve performance at all, not Intel for steady 20% gains one generation after the next.

I'll leave you with a reminder that we've already benchmarked KSP. There is no debate to be had over which chip performs better in KSP. Intel wins by a large margin, even at lower clock speeds.
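To spell the arithmetic out, here's a back-of-envelope sketch using list prices and stock clocks only (a naive model, not a benchmark):

```python
# Back-of-envelope arithmetic only; rough list prices and stock clocks, not benchmark data.
price_8350, price_9590 = 150, 800
print(f"price ratio: {price_9590 / price_8350:.1f}x")   # ~5.3x for what started as the same silicon

# Naive throughput model: cores x clock. This assumes perfect scaling and equal
# per-core efficiency, which is exactly the assumption that fails for the FX line.
fx_9590 = 8 * 4.7     # 8 cores at 4.7 GHz
i7_4770k = 4 * 3.5    # 4 cores at 3.5 GHz base
print(f"naive multithreaded edge: {fx_9590 / i7_4770k:.2f}x")   # ~2.7x "on paper"
# In real multithreaded benchmarks that edge evaporates, because per-core
# throughput (IPC) on the FX architecture is so much lower.
```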
-
I tend to go a little overboard on the AMD bashing, to preemptively stop any pointless debates about whether or not AMD chips are really that much worse than Intel. It's a habit from the Tom's Hardware forums, where there's a nonstop stream of fanboys on either side of any debate.

Anyway, I did a little research on the OP's chip. It seems the 8320, 8350 and 9590 are all the same silicon (fun fact, AMD initially planned to sell the FX 9590 for $800, despite it being the exact same chip they'd been selling for $150 for a year). The only difference is that the 8350 and 9590 have been "binned" (AMD pulled the best chips aside to guarantee performance) and run at higher stock settings. The differences in benchmarks are likely due to differences in popularity of the chips. More 8350s means the best 8350 benchmarks are better than the best benchmarks for the other two chips. The best 8320 is, theoretically, identical to the best 8350 and best 9590, due to random chance. However, you'll see a higher average on the 9590 because of the binning.

So, you don't have a ton of overclocking headroom, because you're effectively already overclocked from 4.0 to 4.7 GHz, and AMD chips are already running ludicrously hot (220W TDP, compared to 84W for a 4770k). However, you do have a binned chip, so you should be able to take it over 5.0 GHz without too much trouble. The average overclock on an FX 9590 with air cooling, according to HWBot, is 5.135 GHz, or about 10% over stock. It's not much, but if you can manage it without upgrading your cooling system, then it's literally free performance. I think buying a new CPU cooler for this is probably a waste, though. You just don't have enough headroom to justify investing any money into overclocking.

Your next opportunity for significantly better CPU performance is likely going to be the next CPU launch by AMD. Unfortunately, AMD recently announced that they aren't going to have anything for the enthusiast CPU space, and the FX series of processors is at an end. Their focus is increasingly on power efficiency for low-cost APUs and mobile. The power desktop space, I'm afraid, has been completely abandoned by AMD. Really, though, they haven't had anything that could legitimately be called a "top of the line" CPU in a very long time. They're still trying to catch up to the 2500k, three generations later. Those benchmarks are at stock settings, by the way, with the brand new 4.7 GHz 9590 struggling to keep up with the four-year-old, 3.3 GHz 2500k.
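Spelled out with the HWBot figure above (the roughly linear clock-to-performance scaling is my assumption, though it tends to hold for CPU-bound games like KSP):

```python
# Rough headroom math using the numbers quoted above (HWBot average on air cooling).
stock_ghz = 4.7
avg_air_oc_ghz = 5.135
headroom = avg_air_oc_ghz / stock_ghz - 1
print(f"typical air-cooled overclock headroom: {headroom:.1%}")   # ~9%

# If performance scales roughly linearly with clock speed (my assumption),
# that headroom is also the upper bound on the "free" performance you can
# expect without spending anything on new cooling.
```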
-
Didn't know that; I haven't been following developments in AMD chips since they got utterly demolished by Sandy Bridge, even though they came to market almost a year later with a product that claimed higher clock speeds and more cores.

For reference, here's a database of single-threaded CPU benchmarks. Notice that there are a total of zero AMD chips in the list of previous world record holders. That's right, AMD has NEVER held the top spot in this CPU benchmark, ever. Here's the multithreaded list. AMD has some spots here, but only their latest and greatest Opteron, and in every instance an Intel chip regained the throne mere days later. All told, AMD chips have held the crown a combined total of less than a year over the last ten years. I'd also like to point out that it only took a pair of Xeons to beat four Opterons.

For comparison, this is the FX 9590 and here's the 8350. My favorite part is how the 8350 does significantly better than the 9590 on a bunch of benchmarks. AMD can't even compete with themselves, it seems. AMD chips are cheaper for a reason. You get what you pay for.

[Edit] Forgot about this, the KSP CPU performance database.
-
You are wrong. It's a CPU issue. Physics is done on the CPU exclusively. Physics load is a nonlinear function (polynomial, my guess) of the number of parts loaded. Approaching stations causes more parts to load. Few computers can handle more than 1000 parts gracefully. I'm running a 3770k at 4.9 GHz (probably top 0.1% of desktop CPU power) and 1000-part ships are SLOW.

KSP is an extremely CPU-heavy game. What's more, it's a class of problem that doesn't multithread. There's nothing that can be done to change this; it's a well-known property of systems in which the future state depends on the present state (you must calculate all previous states before trying to calculate a future state).

Which brings me to... This is a problem. AMD chips are much slower than Intel chips at the same clock rate, and the architecture shares floating point units between pairs of cores, so if both cores need to do floating point math at once, they can't, and one has to stop dead while it waits for the other, unrelated workload to finish. Games, particularly games like KSP, use large amounts of floating point math. AMD has run a very successful marketing campaign where they've convinced large numbers of people to buy their inferior products because they have more cores. The problem is, most applications can't do anything with more cores. Applications which scale well with additional cores are fairly rare, and generally Intel chips outperform the AMD chips anyway because of the massive difference in efficiency and the fact that your chip effectively loses half its cores when presented with floating point workloads. It should really tell you something when AMD's top chip claims 8 cores at ~5 GHz and loses benchmark after benchmark to Intel's 4-core 4 GHz chip.

So what can you do? Your best solution is probably to overclock. AMD chips overclock quite well, and gains are fairly linear. Increase your clock speed 25% while holding everything else constant, and you should see about 25% better performance.
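If it helps, here's a toy sketch of why that is. It's not KSP's actual physics code, just an illustration of the two properties above: per-step cost grows much faster than linearly with part count, and each step needs the result of the previous one, so the time loop can't be spread across cores.

```python
# Toy physics sketch (NOT KSP's engine). Unit masses, 1-D positions, made-up spring model.

def spring_force(a, b, k=1.0):
    """Stand-in for the joint/force model between two connected parts."""
    return k * (b - a)

def step(positions, velocities, dt):
    """Advance every part by one timestep. O(n^2) in part count."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):                      # pairwise interactions: n*(n-1)/2 terms,
        for j in range(i + 1, n):           # which is why part count hurts so badly
            f = spring_force(positions[i], positions[j])
            forces[i] += f
            forces[j] -= f
    new_velocities = [v + f * dt for v, f in zip(velocities, forces)]
    new_positions = [p + v * dt for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

# The outer loop is inherently serial: step t+1 cannot start until step t has
# finished, no matter how many cores you throw at it.
positions, velocities = [float(i) for i in range(100)], [0.0] * 100
for t in range(200):
    positions, velocities = step(positions, velocities, dt=0.02)
```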
-
So, so hard to choose. I have fond memories of both.

When science first got added to the game, my first real mission (aside from the obligatory Mun return) was a Duna return. This was, of course, back when you could spam experiments to gain science. I hauled every single science part to Duna, hitting Ike on the way, keeping the massive array of antennae busy for hours while they transmitted the experiments back so I could reset and do more science (man, I'm glad they changed that). The landing on Duna didn't go as planned (I never get aerobraking right on Duna); I was trying to land at the base of the big mountain, but ended up a quarter of the way around the planet instead. Jeb had been looking forward to using his jetpack for some altitude-assisted low-gravity exploration, but ended up just grabbing a surface sample, planting a flag and returning.

Eve, on the other hand, is a harsh mistress. I was severely misinformed the first time I planned a mission: I landed at sea level a short walk from the shore (always my preference because the terrain is nice and flat) and planned on returning in a simple unmanned SSTO lander running on a single 48-7S. The thrust succeeded in making the lander very slightly taller as the loading on the legs decreased, but otherwise the lander didn't budge. I christened the spot a permanent base, and sent Jeb to plant a flag in a much, much bigger return vehicle. The ladder didn't reach the ground and you can't jump/jetpack to save your life on Eve, so I had to send a second ship with a modified ladder to bring him home. To date, that cluster.... of a mission is my only successful Eve return.
-
Steam kept advertising it to me. I cynically thought it couldn't possibly deliver what it claimed, so I ignored it for a couple months. Snagged it on a 40% sale without having watched any gameplay videos or done any research. An hour later I bought another copy for a friend, and have bought two more since then.
-
Agree. Also, I'd actually advise avoiding Crossfire entirely. I'm using a pair of 7970s right now. Most games work fine. Some go bat.... crazy, especially on a 120 Hz monitor. You end up with crazy things like V-Sync thinking it needs to maintain 240 FPS instead of 120 (Skyrim, Dark Souls II), games that load only a black screen (Skyrim), games that crash over and over again (Final Fantasy XIII-2, Universe Sandbox), games that are unplayable due to visual artifacts (To the Moon) and countless other irritations. If you want to use multiple GPUs, I strongly recommend going with Nvidia. Their drivers are far more stable, and SLI tends to outperform Crossfire, especially in the area of microstuttering (inconsistent frame delays that make the game feel choppy regardless of framerate).
-
NEAR spaceplane - not working?
LaytheAerospace replied to KITTYONFYRE's topic in KSP1 Gameplay Questions and Tutorials
It's a much bigger issue with FAR and NEAR (I presume; I've never used NEAR). With stock aerodynamics you could bolt wings on the SPH and fly it to orbit with enough thrust and air. Looks like it's behind it to me, but not by much, and it won't stay that way very long. Also, the OP should probably scale back his ambitions a bit, and try a smaller plane to get the hang of how things work first. A quad-engine plane hauling a pair of LV-Ns to orbit isn't the hardest thing in the world to build, but it's not all that easy, either. A 10t plane with a pair of rapiers mounted behind tandem FL-T800s is dead simple to build and fly, and it's easy to balance the CoM so it doesn't shift as the tanks drain. -
NEAR spaceplane - not working?
LaytheAerospace replied to KITTYONFYRE's topic in KSP1 Gameplay Questions and Tutorials
Tail fin is way too small, and you need a rudder. It also looks like you're going to have CoM issues as your tanks drain. Keep in mind that they drain front to back, so your CoM is going to move steadily back. Make sure you've got a healthy distance between the CoM and CoL at all fuel levels, or you're going to have a bad time.
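If it helps to see why, here's a crude sketch of a dry airframe with two tandem tanks, using made-up masses and positions (not real part stats). Following the front-to-back drain order, the CoM walks aft while the forward tank empties, then drifts back toward the dry CoM as the rear tank drains, which is exactly why you check the CoM/CoL spacing at every fuel level.

```python
# Crude CoM sketch: dry airframe + two tandem tanks draining front-to-back.
# All masses and positions are made-up illustration numbers, not real part stats.

def center_of_mass(parts):
    """parts: list of (mass_tonnes, x_position_metres); returns CoM x (positive = aft)."""
    total = sum(m for m, _ in parts)
    return sum(m * x for m, x in parts) / total

dry_airframe = (6.0, 0.0)                    # airframe mass centred at x = 0
front_tank_pos, rear_tank_pos = -2.0, +2.0   # metres forward/aft of the airframe CoM
fuel_per_tank = 4.0                          # tonnes when full

for frac_remaining in (1.0, 0.75, 0.5, 0.25, 0.0):
    # Forward tank empties first, then the rear one.
    fuel_left = 2 * fuel_per_tank * frac_remaining
    front_fuel = max(0.0, fuel_left - fuel_per_tank)
    rear_fuel = min(fuel_per_tank, fuel_left)
    parts = [dry_airframe, (front_fuel, front_tank_pos), (rear_fuel, rear_tank_pos)]
    print(f"{frac_remaining:>4.0%} fuel -> CoM at x = {center_of_mass(parts):+.2f} m")
```
-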
For KSP your video card doesn't matter much. It's a CPU problem, not a GPU one. That said, Crossfire carries a lot of issues with it. But dropping $100 on a video card when you already have an HD 78xx isn't going to be an upgrade, and won't make much of a difference, if any. I'd recommend you save your money.
-
Difficulty/realism mods + challenges. Your first challenge is to install FAR and land a rover from LKO without parachutes, engines, aerodynamic parts or anything else to slow down your descent. The rover must be able to drive away after landing, but it doesn't need to be intact. With practice, I managed to "land" at just shy of 100 m/s, driving away about 10% of the time.
-
A question about building my own PC
LaytheAerospace replied to gutza1's topic in Science & Spaceflight
Depends. Do you need to finish processing all the data before the application continues? Say, reading in large resource files included in games? Loading an executable? Then, yes, you must wait for the entire read to finish. If you're pulling in a database, or something like that, and only need to display a page of data to get going again, then no, you don't need to pull the whole file into memory. It depends on your workload and software.

Of course the size of the cache is a factor, but this is irrelevant to the point I was making. A perfectly sized cache without the data you need is useless. Desktop workloads have very high cache miss rates. This is my whole point. I don't care how great your cache is at resizing itself; if it doesn't have the data I need, it's not helping. That useful threshold gets larger the more diverse the access pattern is, and the bigger the files are. Desktop workloads (in particular gaming computers) present a worst case scenario, thus file caches are of questionable value. And SSD access times are about three orders of magnitude faster than the human threshold of perception. That's a lot of cache hits before you make a noticeable difference. Keep in mind that I'm not talking about a workstation workload, I'm talking about a gaming computer and all that entails.

It's not as nefarious as you think. Making life easier for programmers means they can spend more time doing the things that matter, rather than spending scads of time and money on micro-optimizations. Software is bigger and slower than ever, but it also does more than ever. It's not just inefficiency.
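To put some numbers on the hit-rate point (illustrative figures only, not measurements):

```python
# Average read latency as a function of cache hit rate. Illustrative numbers only.
ram_hit_us = 1        # rough order of magnitude for a hit served from memory
ssd_miss_us = 100     # ~0.1 ms for an SSD random read

def avg_latency_us(hit_rate):
    return hit_rate * ram_hit_us + (1 - hit_rate) * ssd_miss_us

for hit_rate in (0.9, 0.5, 0.1):
    print(f"hit rate {hit_rate:.0%}: avg read latency ~{avg_latency_us(hit_rate):.0f} us")

# Even the worst case here is roughly three orders of magnitude below the
# ~100 ms threshold of human perception for a single read, which is why a
# desktop file cache with a mediocre hit rate is hard to notice on top of an SSD.
```
-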
A question about building my own PC
LaytheAerospace replied to gutza1's topic in Science & Spaceflight
I'm not your pal, friend! -
A question about building my own PC
LaytheAerospace replied to gutza1's topic in Science & Spaceflight
The read has to finish before you can use the data you're reading. Transfer rate matters.

Also, I've been harping on cache hit rate for a long time now. Why don't you explain to me how you're managing a decent cache hit rate on a gaming workload, now that you finally acknowledge that it's a factor? Sure, you need to size the cache correctly. But if it doesn't put the thing you need in the cache, this is totally irrelevant. A cache that's sized perfectly and contains none of the data you need is useless. Deciding what goes in the cache is a much harder problem (and is much more critical to performance) than deciding how big the cache should be. I can't believe I even have to say this, it's so obvious.

I never said prediction was impossible, I said that on the desktop the access patterns are too diverse to utilize a cache effectively. If you access 30GB of different files on any given day in an effectively random order, and your cache is 4GB, it's just not going to cope. Caches have to make best guesses about access patterns that in all likelihood won't hold up, because there's an unpredictable human calling the shots, not thousands of people hitting a server where individual choices don't have any real impact. And access times are already very, very good with an SSD, on the order of a tenth of a millisecond. You have very little improvement to be made, with a high cost in memory and an actual loss of performance in the form of paging due to running as close to the cap as possible. This is what makes caches not useful on desktops, while on servers, where a small number of resources are vended repeatedly, they work great.

And in every message since, you've been arguing over whether or not 8GB of memory is enough for a gaming computer. You even said you were going to build a new gaming PC with "just 16GB" of memory, because "games don't need much". In every post of mine I've made it abundantly clear that I'm speaking about a gaming computer. It seems you've found your way into a totally different conversation that only you're aware of, and are arguing with points nobody's making. I mean, you even called up the idea of research grants in a thread talking about building a gaming computer. You're pretty far off in left field, buddy.
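A crude way to quantify that, using the numbers above and assuming an effectively uniform random access pattern:

```python
# Crude upper bound on hit rate for a uniformly random access pattern:
# the chance any given read is already cached is roughly cache size / working set size.
cache_gb = 4
working_set_gb = 30     # the "30GB of different files per day" figure above

best_case_hit_rate = min(1.0, cache_gb / working_set_gb)
print(f"best-case hit rate: ~{best_case_hit_rate:.0%}")   # ~13%

# Real caches beat uniform-random when accesses repeat (LRU and friends),
# but a desktop user clicking around is a lot closer to this worst case than
# a server vending the same handful of hot files all day.
```
-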
A question about building my own PC
LaytheAerospace replied to gutza1's topic in Science & Spaceflight
If you'd stop trying to teach me what terms mean, and listen to what I say, this would go much more smoothly. I'm well aware of what disk caches do. I'm also well aware of how well they work for desktop computers, which is not very well at all. You seem to assume that everything you do is accelerated by the disk cache. It's not. Your cache hits are rare, not the common case. And if the cache is keeping huge amounts of data around that you haven't touched for days, then it's a really, really awful cache and it's not doing anything useful for you.

And "huge win" is more than a bit of a stretch. There's very little to be gained on a cache hit on a desktop. Modern SSDs easily manage transfer rates in the hundreds of megabytes per second, with seek times of 0.1ms or better, limiting the potential gain to the ratio of the size of the read to the speed of your SSD. A 1GB cache hit, which is impossibly large, would save you ~2 seconds on a good SSD. Cache hits for anything under 50MB are below the threshold of human perception. Especially on a desktop, reads are very, very small, which means you're just not going to notice the disk cache except in rare cases, or if you have a tendency to launch the same large application a few dozen times a day, instead of just leaving it open like a normal person.

What? It's not about predicting peak memory usage, it's about predicting exactly which files will be used and when. You can have a billion gigabyte cache, but if the thing you want to read isn't cached, it does nothing. On desktops, it's much, much harder to predict access patterns and get a useful benefit because reads are small and diverse, and access times are dominated by seeking, not transfer rates. On gaming computers, disk caches are near useless, because load times are dominated by the CPU, the files wouldn't have fit in the cache anyway, and reads are very rare to begin with. Windows has had the feature for literally 15 years. It's just not turned on except for server operating systems, because it doesn't provide a useful benefit to the end user, while inflating the hardware costs. This isn't some revolutionary Apple thing, they're a decade and a half late to the party and are doing it wrong.

You really think a disk cache makes you 5% more productive? Prove it. That's a huge difference, and if you're going to throw numbers around like that, you need to back it up. Also, I could have sworn we were talking about building a gaming computer, not a professional workstation built with money from a research grant. Why don't we stick to relevant topics of discussion? Or, if you insist on talking about a completely different class of computer, designed for different workloads, I'll do the same and start talking about how no cell phone needs 16GB of memory, because you're getting it with federal assistance and they won't pay for that much. Sound fair to you?
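The arithmetic behind those figures is just read size divided by SSD throughput; 500 MB/s here is an assumed mid-range drive, not a measurement:

```python
# The most a cache hit can save is the time the SSD would have needed for the read:
# read size / transfer rate. 500 MB/s is an assumed mid-range SSD figure.
ssd_mb_per_s = 500
perception_threshold_s = 0.1     # ~100 ms

for read_mb in (1, 10, 50, 1000):
    saved_s = read_mb / ssd_mb_per_s
    print(f"{read_mb:>5} MB read: at most ~{saved_s:.3f} s saved")

# 1000 MB -> ~2 s (noticeable, but reads that big are vanishingly rare on a desktop);
# 50 MB -> ~0.1 s, right at the threshold of perception; the small reads that
# dominate desktop workloads are far below it.
```
-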
A question about building my own PC
LaytheAerospace replied to gutza1's topic in Science & Spaceflight
You're overestimating the impact of your disk cache, especially on desktop workloads. Cache hits are fairly rare, rarer still for games, which just load their assets into memory and avoid the disk like the plague. Disk writes aren't sped up at all by a cache, because it's write-through, not write-back (god I hope it's not write-back). Disk caches work great for things like databases, where you can keep frequently accessed data in memory for rapid access. But on a desktop, a few gigabytes of file cache isn't going to do much, if anything, because access patterns are much more diverse. And at any rate, an SSD has a typical seek time of much less than 1ms, which puts any potential difference far below the level of human perception (100ms). So now we're down to whether or not it will speed up applications which frequently read from the disk. And it will, with varying success depending on how consistent your access patterns are, and how much of the performance is dominated by disk reads. Generally, disk reads aren't a bottleneck, because programmers are well aware that disk reads are expensive and avoid them at all costs. There's a lot less to be gained than you think.

*facepalm* Windows is not optimized for low memory, it's just not wasting it. I don't need a page file, because nothing is ever paged to disk. I don't have delays switching applications, because nothing is ever paged to disk. The operating system doesn't have to optimize my memory usage, because nothing is ever paged to disk. My OS X laptop, which has that disk cache you love so much, pages things CONSTANTLY in my experience. If I wanted to cache my disk in RAM (yep, Windows has had the feature for 15 years), I could. I just choose not to. You don't have that choice, which results in needing much higher amounts of memory for the same level of performance. Your operating system is paging things to disk precisely because it IS dealing with low memory situations. It's running out, so it has to go use the disk. My computer never runs out, so it never does that. Every allocation succeeds, every read is for a block already mapped in memory. The memory manager has the easiest job ever, and making it more complicated would have no benefit. You're trying to spin a glaring weakness as a strength, and Windows' strength as a weakness. Windows doesn't page anything out because its memory manager is inferior, while OS X chugs on the same amount of memory because it's so much better? This is absurd. And my computer has never resorted to swapping, ever. I know because it can't. At best, your computers could equal mine, while using twice as much memory. Also, how were you able to determine that it had never used the pagefile? Does it show total page faults since boot? I don't spend a ton of time in the Activity Monitor, never looked for such a statistic.

*double facepalm* Err, no. At that level, you're getting the worst value for your money, and every additional dollar spent is worse value still. Beyond about $1000, diminishing returns start to kick in pretty hard. You need to step back and look at what you're paying for, and what you're getting for your money. Most people don't live in ultradense metro areas, and don't care about what fraction of their floor space the desk uses up. They do care about the real cost of the computer. Talk about amortizing the cost over its lifetime all you want, it doesn't make the sticker price any smaller.
You're also ignoring that cheaper computers also amortize their cost, so the value proposition doesn't change when you consider amortization. It's also worth noting that people don't actually get more expensive housing to accommodate their computers. That's a fixed cost, and using it to hide the cost of the computer behind a large recurring cost is just obfuscation. You could use the same argument to say that the cost of a car is unimportant, because you'll spend more on gas and maintenance in its lifetime. But clearly, the cost is important, or everyone would be driving fancy cars. I don't want you to take this the wrong way, but it seems to me that Apple has done a very good job on you, convincing you that the cost of the computer is unimportant, that the most expensive computers are the most economical, that needing twice as much memory for the same tasks is a good thing.