Everything posted by sagittary

  1. Personally, I'd like to see probes (and stations) serve a different purpose than manned science missions. Probes would be more for communication and data integrity - they enhance and provide info for manned missions, and they allow manned missions to relay science and information back to Kerbin. Any science they perform would not be the same as manned science - it would be either a limited version of such a mission or something that ONLY probes could do (it would probably have been impractical and costly to make Hubble a manned station, for instance, or to make the Mars Orbiter manned). That said, I don't think cost or materials should really be a major balancing point - rovers aren't really any less costly, per se... it's more the cost of life and training. You can lose a probe or have it fail and that's okay. You really can't afford to lose an astronaut.

     Stations, similarly, should be designed so they aren't simply 'bigger buckets' of science. Rather, they expand the options available for any resident Kerbals, and the science they gain and gather would be unique to them. E.g. a Kerbal could take science to a station and process it; the science would remain the same, but there might be a chance to find station-based science. Stations might also allow for non-science benefits similar to probes - a station that's well run, well funded, and constantly supplied with science might open up or boost PR and what have you (reducing cost, raising program awareness, etc.).
  2. To clarify on physics without going into too much game-engine detail (or I'll try not to)... There are two ways physics can be handled: a fixed time step or a variable one. In the latter, the time between frames is used as the 'step' for each physics calculation (how far did I move in the time it took to render the last frame?). In the former, that step is constant (how far did I move in 0.01 seconds?); this also means that between any two given frames, zero, one, two, or more physics time steps can occur. Regardless of method, any time a physics update occurs, the game needs to know how much time must be 'caught up' so that physics match what is being rendered (time accumulates between physics steps and is then spent until less than one time step's worth remains). The catch is that if a long time passes between steps, the work to be performed may be very large - which in turn may mean even more time is spent updating, delaying the next step further. What happens at this point, and what Max Time Step means, depends on which method is used.

     In a variable time step, if we don't do something, delta-time (like delta-v, but with time) may be very big. Dramatic changes in the step, or large chunks of time, can screw with physics: you move a lot more in 1 second than in 1/100th of a second. This in turn can mess up physics calculations, since you're losing precision; if your acceleration changes over time, your speed after simulating a full 1 second will be very different than after 0.01 seconds, since acceleration can only be updated at each step. To prevent this, we can set a Max Time Step such that no matter how long it's been since the last update, we never simulate more than that step.

     In a fixed time step, there will come a point where so much work/so many time steps have to occur to catch up that it actually takes longer to do that work than we have available in the frame. For instance, if we run at 30 frames per second, any and everything we want to do for a frame needs to take, in total, less than about 0.033 seconds, or the next frame will be delayed. So say our fixed time step is 0.01 seconds. Delta-t for frame 1 is 0.03 seconds, so we run 3 time steps. But say we have a lot of calculations, or there is slowdown elsewhere in the system, and our delta-t is 0.1 seconds. To catch up, we need 10 steps. The problem is, doing those 10 steps might very well take more than 0.033 seconds... so we slow down even more. To combat this, we can limit ourselves to simulating a certain number of steps (a certain amount of time) per frame.

     Bringing it back to the topic: changing the max physics time step won't actually produce any difference unless and until there's a large amount of slowdown to begin with. Also, given the physics-dependent nature of the game, it's highly unlikely that the game drops physics calculations - it more than likely caps calculations and drops frames instead. From when I goofed around with it a while ago: the larger the max time step, the bigger the chunks of time the game will attempt to simulate when slowdown begins, resulting in crazy physics due to integration error. The smaller the max time step, the more accurate physics will remain, but at the cost of frames.
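     To make the fixed-step catch-up concrete, here's a minimal sketch of the accumulator pattern described above. The names and constants are illustrative, not Unity's or KSP's actual internals:

```python
# Minimal sketch of a fixed-time-step loop with a catch-up cap.
# Names and constants are illustrative - not Unity's or KSP's internals.

FIXED_STEP = 0.01   # seconds simulated per physics step
MAX_STEPS = 5       # cap on catch-up work per frame (acts like a max time step)

accumulator = 0.0

def physics_step(state, dt):
    # Placeholder integration: velocity/position update over one fixed step.
    state["vel"] += state["acc"] * dt
    state["pos"] += state["vel"] * dt

def run_frame(frame_delta, state):
    """Advance physics by at most MAX_STEPS * FIXED_STEP, no matter how
    long the last frame took. Leftover time stays in the accumulator."""
    global accumulator
    accumulator += frame_delta

    steps = 0
    while accumulator >= FIXED_STEP and steps < MAX_STEPS:
        physics_step(state, FIXED_STEP)   # always the same, precise step
        accumulator -= FIXED_STEP
        steps += 1

    # If we hit the cap, we deliberately drop the remaining time:
    # the simulation falls behind real time, but stays stable.
    if steps == MAX_STEPS:
        accumulator = 0.0
```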
     As an example of where physics becomes crazy (simplified numbers, but the point is illustrated): imagine you're starting from a standstill and slowly increasing your throttle so that your velocity ramps from 0 to 10 meters per second over 1 second. The true distance covered in that first second is 5 meters (an average velocity of 5 m/s held for 1 second). If your physics step is 1 second, your position at 1 second is 0 meters (at the start of the step your velocity was 0, so you didn't move), while at 2 seconds you've moved 10 meters. However, if we simulate smaller chunks of 0.5 seconds: at 0.5 seconds we've still moved 0 meters (velocity entering the step was 0), but at 1 second, since we had velocity going into that step (5 m/s), we've moved 2.5 meters - still off, but much closer to the true 5 meters. You can see how, as we decrease the step, we get a better representation of actual physics.
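     A quick way to check those numbers is to integrate the ramp with explicit Euler at several step sizes. This is just an illustrative script (the 10*t velocity ramp matches the example above), not anything from the game:

```python
# Explicit-Euler check of the example above: velocity ramps 0 -> 10 m/s
# over 1 second, so the true distance covered at t = 1 s is 5 m.

def distance_at_one_second(step):
    """Integrate position with a fixed step, sampling velocity at the
    start of each step (which is exactly what causes the error)."""
    n = round(1.0 / step)        # number of whole steps in 1 second
    pos = 0.0
    for i in range(n):
        vel = 10.0 * (i * step)  # velocity at the start of this step
        pos += vel * step
    return pos

for step in (1.0, 0.5, 0.1, 0.01):
    print(f"step = {step:>5} s -> position = {distance_at_one_second(step):.3f} m")

# step =   1.0 s -> position = 0.000 m
# step =   0.5 s -> position = 2.500 m
# step =   0.1 s -> position = 4.500 m
# step =  0.01 s -> position = 4.950 m
```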
  3. This is also engine-side. Simply because an engine has (or lacks) a capability doesn't necessarily translate into whether that capability is effective for a specific game - different games have specific requirements that may make some capabilities more or less useful. This is why different games run at different frame rates; if it were purely a matter of hardware/engine capability, every Unreal Engine game (e.g. 80% of modern games) would run functionally identically. This isn't to say that PhysX wouldn't help, but it's also unwise to think it'd be a magic cure-all. At some level, any solution isn't a universal solution so much as a way of pushing the part limit higher before it hits a bottleneck. The ideal situation would of course be a part limit beyond what any computer available over the next 6 years could reach, but that's an impractical goal to strive for.
  4. I'll help by way of analogy. Let's say you have a 2D platformer where you kill things by jumping on top of their heads (Mario). This is a fairly simple conceit. To actually do it, though, requires multiple states for Mario and the Goomba, collision detection, behavior & movement, health, environmental collision, determining the direction of collision, etc. The point being that while multi-core physics (or any other wish-list feature of KSP) may be conceptually easy - it may even be a known and solved problem - the actual implementation and design is often far more complex than it seems. Not to mention that KSP is a game, meaning it has to do anything and everything complex in real time (with the unique situation that everything is also always moving). Let's say we're shooting for 25 FPS. This means that to avoid slowdown, EVERYTHING (well, not really) has to be done in 0.04 of a second - 40 milliseconds. Any longer than that and frames are dropped. With higher FPS comes fewer milliseconds per frame in which to do things before some sort of slowdown occurs. Work can be spread out over time for some things (AI, for instance), but some things cannot - physics being one of them.

     Consider a 3-part ship. How do you know what part C is doing? To determine that, you also need to know what parts A and B are doing. And so if you want C to update fast, you have to update A and B fast as well. What about the oft-proposed solution of clumping things together and treating them as a single object (physics or otherwise)? Well, yes, while it might reduce the physics part of the situation, it doesn't actually eliminate the need to update A, B, and C. Say you have Object X that represents the group A/B/C and serves as the origin point within that group (so A is at 1,1 relative to X, for instance). You spin X, and A should spin automatically, right? Yes, but... to actually place it and move it in the world, you still need to turn what 1,1 means into what it means in world space. So you're still transforming from local space to world space; you're not eliminating as much work as it may seem (see the sketch below).

     This idea of highly dependent information is also why physics (and other systems with lots of conceptually moving parts) can be difficult to multi-thread. It's really easy to get into situations where calculations are waiting for something else to be calculated before they can finish, at which point things just slow down. Say you had 30 guys packing boxes and loading them onto a truck - if you only have 1 truck, it doesn't matter how much work the 30 guys can do up front, there's still a bottleneck.
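     Here's a small sketch of that grouping point: even when parts A, B, and C are parented to a single object X, each part's world position still has to be computed every frame. The 2D setup and names are purely illustrative, not how any particular engine does it:

```python
# Parts A, B, C live at fixed local offsets from a parent object X.
# Rotating X rotates them "for free" conceptually, but placing each part
# in the world still costs a per-part transform every frame.

import math

def world_position(parent_pos, parent_angle, local_offset):
    """Transform a part's local offset into world space (2D rotation + translation)."""
    x, y = local_offset
    c, s = math.cos(parent_angle), math.sin(parent_angle)
    return (parent_pos[0] + c * x - s * y,
            parent_pos[1] + s * x + c * y)

# Object X and three parts at local offsets:
x_pos, x_angle = (100.0, 50.0), math.radians(90)
parts = {"A": (1.0, 1.0), "B": (0.0, 2.0), "C": (-1.0, 0.5)}

# Even though we only "moved" X, every part still needs this work each frame:
for name, offset in parts.items():
    print(name, world_position(x_pos, x_angle, offset))
```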
  5. This question was asked a long while back. The math worked out that around 130 km, give or take, was a good all-around compromise between delta-v and getting somewhere else. That said, specific missions may benefit more or less from different heights.
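     For a rough feel of the numbers, here's a hedged sketch using the vis-viva equation with the commonly quoted Kerbin constants (gravitational parameter ~3.5316e12 m^3/s^2, radius 600 km, sphere of influence ~84,159 km). This isn't the original math from that thread, and it only shows the ejection side of the trade-off - climbing to a higher parking orbit costs delta-v too, which is why it's a compromise:

```python
# Vis-viva comparison: burn needed to raise apoapsis from a circular
# parking orbit to the edge of Kerbin's sphere of influence.
# Constants are the commonly quoted Kerbin values; output is illustrative.

import math

MU = 3.5316e12          # Kerbin gravitational parameter, m^3/s^2
RADIUS = 600_000.0      # Kerbin radius, m
SOI = 84_159_286.0      # Kerbin sphere-of-influence radius, m

def ejection_burn(altitude_m):
    r = RADIUS + altitude_m
    v_circ = math.sqrt(MU / r)                          # circular orbit speed
    v_peri = math.sqrt(2 * MU * SOI / (r * (r + SOI)))  # transfer periapsis speed
    return v_peri - v_circ

for alt_km in (70, 100, 130, 200, 500):
    print(f"{alt_km:>4} km parking orbit -> {ejection_burn(alt_km * 1000):.0f} m/s to reach SOI edge")
```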
  6. Were the solar panels opened? If not, they're not going to generate power.
  7. It's not a game you play for fun in the traditional sense. Every day, you do the same thing with more or fewer restrictions, and it really just amounts to paperwork. But that's kind of the core conceit: the burden of paperwork combined with the desire to sustain your family, and the morality of choices - do you let through someone who just wants to see their kid even though they may not have the proper paperwork, or do you deny them entry and get paid?
  8. General response: The issue is not methodology - as evidenced by this thread alone, everyone's got a solution to the problem. The issue is manpower, resources, and computing power. Squad is a small team with a limited budget - any solution they could realistically use has to be done within that budget, without completely stalling or taking budget and time away from other equally if not more important features. On top of that, due to the physics-heavy nature of the game, any solution also needs to be computable in real time with minimal delay. Considering most people can detect 100 ms of delay in games with relatively light data packets, this is a tall order. There is a reason games with MP components devote entire teams to that part of the game, or even outsource it. It's not the solution that's hard, it's the implementation that takes a long, long time to get right.