Everything posted by K^2

  1. Do you understand the difference between sufficient and necessary? But also, maybe, yeah: if you don't understand statistics, do stupid things with numbers to get nonsensical results, get called on it, and respond by doubling down and then demanding that if you can't be educated on the subject on the spot it doesn't count, then maybe you get what you get? I don't know, that seems fair to me. It's ok to participate in a discussion that involves topics over your head. That's why a lot of us get involved in communities like this. But expecting that you understand the discussion, and if not, getting angry at people who do, and pretending it's them who are making stuff up? That's just a tantrum, and it isn't anyone's job here to take care of them like they're little children. Oh, and yeah, congealing around the positions of other people who are as confused as you are, because that's the one thing that unifies you? Not as good a strategy as it might sound. Herds are strong against predators. Nobody's out here to eat you. You either learn to recognize people who understand the subjects you're not strong in, learn to take their advice, and contribute your own expertise in turn. Or you get left in the cold, because people who know what they're doing don't have time for that nonsense. Just general advice to anyone trying to both-sides this.
  2. Or at least, it will! You just need to put a little bit more into it, and then it'll definitely pay off! But seriously, it's a complex topic. Someone's always getting the blame for the losses at a corp, so while the company overall is going to steer clear of doubling down on losses, whoever thinks they'll get the blame might very much be willing to bet everything on it working out eventually. That can lead to some bad cycles. I don't think any of that applies to KSP2, though. T2 is both happy to cut any losses there, because it's a fairly small percentage of total operating expenses and the leadership will look at the net performance of the game as something to adjust for, and to re-invest into restarting the project later, because it's still an untapped revenue source.
  3. A formula that is too advanced for a particle physicist, but NH4Cl managed to unlock its secrets after thinking about it for a while. Not at first glance, of course, it's way too advanced for that, but on the second try. Forget integrals of motion. Comparing two ratios, that's the advanced mathematics that needs more than one look to wrap your head around. They'll crack quantum gravity any day now. I honestly expected nothing else. Thanks for making my point. I've nothing further to add.
  4. I'm pretty sure I aged from reading this.
  5. You probably think that every document request under the Freedom of Information Act is a demand to speak with the president, right? Insider trading laws exist to avoid, among other things, situations where the board makes decisions to benefit their own portfolio rather than the investors in general. This is why they require the board to disclose any information relevant to persons who are invested or considering investing in the company, insofar as it can influence the worth of the company. The board, generally, doesn't want to share the information and will often try to dodge the responsibility. You have recourse against a company you believe not to be sharing relevant information even without owning any stock. The SEC investor complaint system is simply a streamlined option that lets you obtain the information that the board wanted to be dodgy about. And if it goes through, no, you're not going to get to speak with the CEO, or even have the CEO publicly share the information. You'll get some sort of a statement from the PR branch, which they were totally going to make anyways, regardless of any SEC complaints, they swear, that just so happens to contain information relevant to the complaint. It's a standard way of applying pressure in a bureaucracy. It's exactly the same process as writing to your senator if your paperwork is stuck too long in some federal agency. Your senator's going to do jack, but some staffer somewhere will send a standard form e-mail that reads, "There was a complaint about the case #. What's the status?" To which they'll get a standard e-mail saying, "We just processed it this morning, so there isn't a problem, here's the new status," and a day later you get an update that your paperwork's moving again. 
It's a crap system, because it means nobody does what they're supposed to until there's a complaint, but at least you have a way to get things moving again, and this is the basis for everything in the US, from your local city council, to the federal agencies, to corporations that have their HQ here. If you live in the US or plan to and you've never had to deal with this, take notes. Nobody in the gov't/corp is going to tell you that you have these options.
  6. Honestly? Somebody could probably apply pressure via an SEC complaint, but I'm not sure their investors care about that. Some of the "We will not elaborate further," replies during the Q&A session of the earnings call can probably be argued to be in violation of the insider trading laws. The argument to be made is that it is information that would impact stock prices, and T2 leadership being stock-owners themselves have an unfair advantage in regards to trading compared to the general public, since they do know the true situation of these studios and the projects they were previously in charge of. If anyone has T2 stock (possibly as part of your 401k), you can try filing an investor complaint with the SEC, but the details are entirely over my head, so maybe run it by somebody who understands the regulations a little better first.
  7. Plenty of games available on PC and other platforms have sold more across platforms, but I haven't been able to confirm anything over PUBG's 42M on PC (75M total), which is a somewhat outdated stat, but the game's also free-to-play, so I'm not even sure if that should count. Minecraft and The Sims games definitely have fewer than 40M on PC, but again, easily surpass the mark across all platforms.
  8. You say that, but the best-selling 8-bit game of all time came out in 1996 with 46M copies sold, and no PC game has yet reached those numbers. 8-bit games continued to be made and sold very successfully well into the 16- and even 32-bit eras.
  9. Oh, gods, that's a blast from the past. Although, I think a more exciting way that code came from magazines was on flexi-discs, typically known as vinyl data. Speaking of which, while in the US people usually think of the C64 when they think about BASIC listings and games from tape/vinyl, in Japan this was yet another Famicom expansion. Family Basic came with a cartridge containing a BASIC interpreter and sprite sheets, plus a keyboard expansion, and the keyboard had audio jacks to connect to a tape recorder. And yeah, they had magazines with game listings which you could type out and play on your Famicom system. It's kind of upsetting that neither the FDS nor Family Basic ever came to the NES. But again, Nintendo were terrified of another games market crash, and avoided like fire anything that could let US studios make more games.
  10. Is it strange that of all the ET-related flashbacks, I had this one? Yeah, I remember when G4 was actually airing this on TV. Is this really what early 2000s TV looked like? I'm always surprised when I look up older footage I remember seeing live. It's part true! They unearthed the landfill in 2014, but some more digging (both literal and metaphorical) revealed that it wasn't Atari trying to buy back and literally bury E.T., but rather they just dumped the unsold stock from a Texas plant that had to be closed, which included a mix of games, but E.T. was a fairly significant chunk of that. About 10%, reportedly, and given how many titles Atari was pumping out at the time, the fact that 10% was just E.T. is frankly an impressive statement. The investigation was part of the Atari: Game Over documentary. The Atari 2600, from where it came, to its hardware, to all the absolutely wild games, from timeless classics, to garbage, to games that one does not mention in polite company for a list of awful reasons, is such an amazing arc in the history of video games that I think any end less dramatic than the 1983 crash would have felt anti-climactic. And the sheer wide-eyed panic of Nintendo you can still see in the elements of the NES design, both external and on the board, is absolutely awe-inspiring. To fail so hard as a gaming business as to bring down the entire industry across two continents, and to force Nintendo, who were releasing games on floppies in Japan, literally letting people "rent" games at a kiosk by writing a new game to a floppy, to invent the CIC... All with a piece of hardware that released the same year as Star Wars. Some failures are so epic as to become their own legends.
  11. To be fair (and pedantic, and as a joke, and just so that we still have a topic,) this doesn't answer the question of where Nate is. Personally, I'd go on a vacation. Layoffs suck. If you can afford to take a break after one, you should. And I know a lot of the former Intercept employees had to start applying for other jobs right away, but I hope they'll at least have some time between the interviews to decompress a bit. It's most definitely not the shiny part of a game dev career, but being able to use the quiet times between interviews to relax, despite knowing there's a limited runway of funds before you need to have a job or face serious problems, is kind of a skill we end up having to acquire. So if you're wondering why nobody's talking: yeah, NDAs, but also, they were just laid off. The last thing they want to think about is KSP2 or Intercept.
  12. To which you replied: So you have accepted that quote as the definition, and were simply confused as to why that is the definition. Going back to what you're asking of me: There is the definition, and that is the definition, as accepted by you already in this thread. Since you are now telling me that "why" isn't the issue, the question is answered. So now we're just waiting for this part, right? Right here, as previously quoted. This appears right below the numbers providing the range. Make sure to read the full post for the context. I stand by my summary.
  13. I think there's more than one thing that's going on. Modern approach is to use velocity constraints with Baumgarte stabilization for drifts. Baumgarte does work a bit like a spring coefficient, but the velocity constraint already has damping built in. So with a sensible choice of coefficients, any perturbations should naturally decay, rather than lead to more oscillation for any single joint. Unfortunately, systems of joints can still misbehave. The worst case is usually a light object sandwiched between two heavy objects. Unfortunately, I just described every single stage separator and decoupler, which immediately becomes a problem in KSP if you don't add additional joints to stabilize it (such as multi-joint connectors, autostruts, etc.) A very good read on the topic is Erin Catto's GDC presentation from 2009, Modeling and Solving Constraints. Every modern physics engine I've seen goes back to this talk. At a minimum, Havok and Chaos do, as well as Crystal's and Blizzard's internal engine implementations because Erin Catto worked there and was instrumental in making sure these engines worked. Now, not every engine uses impulse exchange as their iteration method, but all iterative solvers are going to behave similarly. I think Erin really likes impulse exchange mostly due to its logical simplicity compared to pure linear algebra methods. Crucially, the PhysX version that ships with Unity predates the industry's switch to this as the main method. So their constraints handling might be a little different, and I haven't looked at the code for that specifically. Though, it might be interesting to try and dig up the source for a relevant PhysX engine and to make sure. In general, even the older physics engines had to solve the same fundamental problem of enforcing constraints. 
And even if you start with a position constraint (rather than a velocity constraint) and work your way from there, you are still building a damped harmonic oscillator, but your coefficients might be less "magical" and require more careful tuning. So we're still dealing with what's ultimately a damped harmonic oscillator for each individual joint, but one which can still fall into some sort of a bad feedback loop between multiple joints. And we know that mass ratios are a problem for the PhysX joints as well, so whatever the solver is iterating over, it's not that far different. So that's one part of it. Even with a good solver, there are bad configurations that you need to learn to avoid, and for something like KSP that means either merging some rigid bodies together (e.g., making the decoupler and whatever it's permanently attached to into a single rigid body) or doing what KSP1 does and adding additional joints in a way that avoids the unstable configurations. The second part is, I think, what muddled the situation for KSP2. And here we're back to logical connection vs actual joints. I have seen a number of times when a ship spawns in (either at the launch site or from a save) with some parts detached. And it's one thing to just watch some part of your ship drift away, and another if it's some internal part with collisions that ends up doing a ragdoll-spaz inside the ship. I don't know how many of the KSP2 physics explosions are due to bad joint configurations and how many are due to a part getting loose and spazzing out. I've definitely seen both, but I wouldn't be able to identify each particular case. And I think that might have been why Intercept kept having these issues creep back up: because there are several different bugs they were trying to fix that can all be reported as one bug, rapid disassembly without any obvious cause. All of this is kind of self-inflicted, but I do sympathize. 
Again, it's a hard problem, and unless you happened to have worked on these exact problems before, it takes time to catch up on all the terminology and required reading.
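To make the "damping is built in" point concrete, here is a minimal sketch of one impulse iteration for a single 1D distance constraint with a Baumgarte bias. This is the textbook scheme from Catto's talk, not actual PhysX/Havok code; the function name and the `beta = 0.2` coefficient are illustrative choices, not values from any shipping engine.

```python
def solve_distance_constraint_1d(x1, v1, m1, x2, v2, m2, rest, dt, beta=0.2):
    """One impulse-solver iteration for a 1D distance constraint.

    Velocity constraint: Cdot = v2 - v1 should be driven to 0.
    The Baumgarte term feeds a fraction (beta) of the position error C
    back into the velocity target, so positional drift decays over
    frames instead of accumulating.
    """
    C = (x2 - x1) - rest             # position error (drift)
    Cdot = v2 - v1                   # velocity error
    inv_mass = 1.0 / m1 + 1.0 / m2   # effective inverse mass of the pair
    # Impulse that cancels the velocity error plus a bias correcting drift.
    lam = -(Cdot + (beta / dt) * C) / inv_mass
    # Apply equal and opposite impulses along the constraint axis.
    return v1 - lam / m1, v2 + lam / m2
```

Note how `inv_mass` illustrates the mass-ratio problem mentioned above: with a light body between two heavy ones, nearly the entire corrective impulse lands on the light body each iteration, which is what makes those configurations prone to oscillation.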
  14. The solver approximates tension in joints iteratively. To avoid iterating many times per frame, the result from the last frame is used as a starting point, and only one or two iterations are done to correct for changes. If you reload a scene, these tensions are set to zero, and if that's far from what it takes to keep the rocket together, for whatever reason, the first few frames will over-correct, resulting in excessive forces. Have you noticed how big rockets do a bouncy-bounce when they spawn in on the pad? And how, if they survive that, it's probably alright? Same principle, but for a complex enough build, you don't even need gravity. Something somewhere is probably under a bit of tension just due to how the distances work out, and that will ripple out. With an unfortunate enough configuration of the joints and joint strengths, insta-kraken. Still happens even in KSP1, albeit rarely.
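A toy illustration of that warm-starting behavior, with made-up numbers (this is a stand-in relaxation, not engine code): the solver only gets a couple of iterations per frame, so the quality of the starting guess dominates the result.

```python
def iterate_tension(target, guess, iters):
    """Toy stand-in for the iterative solver: relax toward the true
    tension. Each 'frame' only runs a couple of iterations, so the
    starting guess (last frame's answer) does most of the work."""
    lam = guess
    for _ in range(iters):
        lam += 0.5 * (target - lam)  # one relaxation sweep
    return lam

# Cold start (fresh scene load): the cache is zeroed, so two
# iterations still leave a large error -> over-correction spikes.
cold = iterate_tension(1000.0, 0.0, 2)    # -> 750.0, 25% off
# Warm start (tension carried over from last frame, target barely moved):
warm = iterate_tension(1000.0, 990.0, 2)  # -> 997.5, 0.25% off
```

The cold-start error is what the bouncy-bounce on the pad looks like numerically: the first frames apply forces that are badly wrong, and the structure has to survive them.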
  15. Except that we have a concrete case with concrete numbers. If the number you claim is "close" doesn't fall within the confidence interval even when you expand it to 99.75%, trying to substitute another meaning for "close" is the real sleight of hand here, making this entire response the very definition of demagoguery.
  16. "I make arithmetical mistakes on purpose, you'll never understand that." Yeah, no, you're right. That's the "The enemy will never expect us to run through a minefield" kind of logic that I'll never understand. And given that you're now claiming to make an intentional algebraic error while building a strawman of an argument that was a strawman to begin with, which you've made out of a statistical interpretation that was stretched to statistical insignificance... You do remember that this started with a claim that 240k-570k is "1%" of 3.6M-6.2M? With your argument coming down to "500 is 0.1% of 10M," which you know is wrong, and that very fact somehow proves your point. Because entropy is logarithmic. Does that help you? No? You don't understand why entropy maximization gives you the highest confidence when dealing with error bars? Would you like me to explain all of information theory to you in a forum post now? You asked for the definition. I've given it to you. Now you're moving the goalposts to "But you didn't explain!" Enrollment at your local community college is open if you actually want it explained to you. And do you go telling professional athletes that they are awful people because they insult your athleticism by being faster and/or stronger than you, or is it just the intelligence that you're so insecure about? I spent ten years studying theoretical physics; my research was in particle physics, structure functions of mesons specifically. Then I went on to work in game development, and have done work in simulation and animation, resulting in two patents to my name. Working with numbers and models is what I do. This is what I have been doing professionally for over two decades, and I have been training for longer. I understand statistics at a level it would take you many years to catch up to at best. If you find that insulting to your intelligence, that's strictly a you problem. 
Thinking that you can be as intelligent without putting in an effort as somebody who is using their intellect professionally is just entitlement. I can't make it simpler for you. If you really don't understand that you don't get to magically be as smart as people who worked on being smart, and you keep demanding to be treated as equally smart anyways, you'll have to get used to disappointment. I'll give you a definition. I'll give you a simple formula to use, because I like being helpful. But if you are then demanding that I must explain it to you until you understand why these things work, or otherwise I'm just insulting your intelligence, I have no polite response to it. So just no.
  17. There are two parts to it. One is just making sure the two models match, and the other is stability. The first part's solvable. You just have to be thorough. It's very clear that there are situations where you would load a game in KSP2 and some connectors are marked as decoupled in logic, but the physics joint is still there. And that's just a programming error. The fact that there are lots of such errors stems from some bad architectural choices, but even so, it would be resolved iteratively after some number of patches. The second part is stability. A solver has to have an internal state, because otherwise convergence will take too many iterations for every frame. And you can in theory dump that internal state and restore it from a save. It's not even hard if you have your own custom engine. But neither Unity nor Unreal gives you easy access to it. So instead, when you start the simulation, the internal cache is zeroed out. If you had a very wobbly rocket and you saved the game and loaded, you might come back to a rocket that's rapidly disassembling itself, due to the forces exceeding joint limits before the simulation had time to settle. There are ways to improve on that too. You can run the simulation at a much shorter time step until it settles, for example. It would cause the game to be in slow-motion for a few seconds on a state load, but I think that'd be very much acceptable. So it's a problem, but hardly game-breaking.
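A sketch of that settle-on-load idea. The hooks `step_physics` and `kinetic_energy` are hypothetical stand-ins for whatever the engine exposes; the thresholds are arbitrary illustrative values.

```python
def settle_after_load(step_physics, kinetic_energy,
                      dt=1.0 / 60.0, substeps=8, max_frames=300, tol=1e-3):
    """Hypothetical post-load settling pass: run the solver at a much
    shorter time step until kinetic energy stops changing, then hand
    control back to the normal update loop."""
    prev = kinetic_energy()
    for _ in range(max_frames):
        for _ in range(substeps):
            step_physics(dt / substeps)  # fine steps tame the impulse spikes
        cur = kinetic_energy()
        if abs(cur - prev) < tol:
            return True   # settled; resume real-time simulation
        prev = cur
    return False          # give up and resume anyway, clamped forces or not
```

From the player's point of view, this is just a second or two of slow motion right after loading, which trades a brief visual hitch for not having the rocket shed parts.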
  18. 0.1% of 10 million is 10,000, not 500. You are trying to pull a strawman already, by taking absurd numbers, and you're not even doing the algebra on that right. That approach is not going to work for you if you can't do the math that goes with your logic. [snip] I have already done so at least twice in different ways. When dealing with fractions, you take the ratio and you compare these. If 0 < x < y < z, compare y/x to z/y. That's all I did to place 3.8% closer to 10% than to 1%.
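The ratio test above is equivalent to comparing distances on a logarithmic scale, which a few lines make explicit (the function name is mine, the numbers are the ones from the thread):

```python
import math

def closer_on_log_scale(x, y, z):
    """For 0 < x < y < z: is y closer to z than to x, multiplicatively?
    Comparing the ratios z/y vs y/x is the same test as comparing the
    distances |log y - log x| vs |log z - log y|."""
    return (z / y) < (y / x)

# 3.8% sits closer to 10% than to 1% on a log scale:
print(closer_on_log_scale(1.0, 3.8, 10.0))  # -> True
```

Here `10 / 3.8 ≈ 2.6` while `3.8 / 1 = 3.8`, so the smaller multiplicative step is toward 10%, matching the log-distance comparison exactly.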
  19. The math does get a bit hairy. Not to a point where it should scare anyone with a physics degree, but it can be rough for a lot of game devs in general. And even if you have a Ph.D., this is niche. Meaning, you should be able to follow along, but building it from scratch without the experience? That is iffy. The constants of motion for an inverse central potential are E, L, ν0, ω. (Energy, angular momentum, anomaly at epoch, and argument of the periapsis.) Note that for a given gravitational parameter μ, the combination of E and the vector L gives us the standard orbital elements a, e, i, Ω and vice versa. (Semi-major axis, eccentricity, inclination, and argument of the ascending node.) So the constants-of-motion representation and the orbital elements representation are essentially equivalent, and you'll probably swap between the two depending on whether you're doing physics or updating positions or any displayed values. So far pretty standard. The fun part starts when you apply thrust F. The E and L updates are trivial and depend only on the position relative to the center of attraction r and the velocity vector v: dE/dt = v·F and dL/dt = r⨯F. But the updates for ν0 and ω actually get quite messy. (Left as an exercise to the reader, because I don't want to.) The advantage, though, is that if you're doing a one-body SoI, you're done. This form automatically conserves energy and momentum - the only change to these two comes from the thrust equations, so long as we keep deriving the local cartesian coordinates from the orbital elements. So for the entirety of the Kerbol system, you have a perfect sim. If you do have a restricted three-body, such as the Risk twins, and you want to use the above, you'll probably still want to pick your primary SoI in whatever reduced system makes sense, then apply additional forces as a correction. So it'd be as if there was a source of external force that isn't the ship's engines. Here we're back to having concerns about the conservation laws. 
In practice, the orbits get chaotic enough that you probably don't care, and you can just use an RK4 and save yourself sleep. But if you really want to, the symplectic integrator approach from the previous article is done in generalized coordinates, so you can apply it to the constants of motion of the unperturbed Hamiltonian to derive something truly precise. This is the point where we should chat about the warp, though. Maximum warp in KSP is 10^5. Alpha Centauri is 4.3 ly from Earth. The interstellar distances are reportedly "realistic," which in the most generous interpretation means 0.4 Kerbin light-years at a realistic c of 3x10^8 m/s. So most generously, we're looking at something like 10^14 m. A 1g torch ship would traverse this in ~73 real-world days. At 10^5 warp that's about 1 minute. Which is fine, but only if you really can torch the whole way, your frame rate holds, and there's nothing at significantly higher distances. In other words, 10^5 must be supported in KSP2 as an absolute minimum, and needs to be rock solid. Ideally, you want to be able to push it to 10^6 while maintaining a 1:1 simulation. So, say you're running a busy 10^3 objects (ships, satellites, debris) and you want 10^6 warp. Maintaining fidelity equivalent to a 30 Hz simulation would leave you with about... 0.1 instructions per object? Ok, so 30 Hz is clearly a no-go, but do we need it? Obviously, no. For any object without external forces (so not in the Risk system and with engines idle) it's sufficient to check for the next SoI intersection and set a reminder for then. This part is actually an interesting code challenge in itself. Clearly, a lot of objects are just never going to go anywhere. A satellite near Kerbin with PE above atmo and AP below Mun is going to stay in that orbit forever. But something interplanetary might clip into the SoI of another planet at some point in the future. Should you always predict this? 
Personally, I'd ignore it for debris above a certain warp and just say that if you have no crew/cores and you're confined to the current SoI, ignore inner SoI intersections. That leaves us only with solving for the motion of things that are either under power or might go under power (due to crew, probes). And this is where the method you use for integration matters. A full symplectic integrator in constants-of-motion coordinates might get into thousands of cycles per step. This will likely be your limiting factor. So the question becomes, can we cut this further? And yeah... Remember how I said that E and L updates are trivial? They still are, and they're still what we most care about. For objects under thrust (or other external force) you can simulate in E, L, r, v coordinates. Note the redundancy. You want to update r and v as a Verlet, and renormalize to E and L, while the latter are updated from force integration only. This will have a precession drift, but that should be tiny compared to true precession due to the applied force, and it won't yeet your objects out of the system like fleas, as integrating only r and v would. As a final piece, apply variable time steps to this approach. What that last bit gives you is that anything far away from a gravity source will travel in mostly a straight line, where you can do coarse steps. And anything going around a heavy object won't stay there for long, because if it's applying enough thrust to make you go to fine steps, it'll either escape the system or crash into the massive object. Either way, you won't have to care about it for too long, and your overall simulation rate gets to stay high. With this approach you really can have tens of ships under power with thousands of pieces of debris running 10^6 warp and not cause half of your satellites to disappear because they crashed into something they shouldn't have. 
It does still mean that the physics will have to be frozen on the ships themselves, just like in non-physics warp in KSP1, and you'll have to get clever with resource use and how that's updated. (And not whatever unholy nightmare was plaguing both KSP1 and KSP2...) But all of these problems are now independent of the coordinate systems and how we do the integration. They're just something the KSP2 team would have had to deal with to support ships that continue using up fuel as they're traveling under warp. So hopefully the above gives a bit of an outline of both what the KSP2 team should have been doing with orbital physics, and why they probably didn't think of it. The physics experts the team had knew basically nothing about game development. And the game engineers on the project were Unity devs, for whom it's very natural to think of every rocket as a collection of scene nodes with mono behaviors. And yeah, you can't do that in the game they were trying to make. I hope they would have figured it out eventually, but I can see why they didn't from the start.
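The "trivial" E and L updates from the post can be written down directly. This is a planar sketch in my own notation, using Kerbin's published gravitational parameter; the renormalization of r and v back onto the (E, L) shell is the part left out here, as it's the messy piece.

```python
import math

MU = 3.5316e12  # Kerbin's gravitational parameter, m^3/s^2 (per the KSP wiki)

def energy(r, v):
    """Specific orbital energy: v^2/2 - mu/|r| (planar, per unit mass)."""
    return 0.5 * (v[0]**2 + v[1]**2) - MU / math.hypot(r[0], r[1])

def angular_momentum(r, v):
    """z-component of specific angular momentum r x v (planar case)."""
    return r[0] * v[1] - r[1] * v[0]

def thrust_update(E, L, r, v, F, dt):
    """Gravity never changes E or L, so only the external force F
    (per unit mass) updates them: dE/dt = v.F, dL/dt = r x F."""
    E += (v[0] * F[0] + v[1] * F[1]) * dt
    L += (r[0] * F[1] - r[1] * F[0]) * dt
    return E, L
```

With F = 0 both quantities are exactly constant, which is the "perfect sim on rails" property: no integration error can leak into the orbit shape, because the orbit shape is derived from E and L rather than accumulated.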
  20. Yes. Because "500 people got the game" is closer in every practical way to "Almost everyone got the game" than to "Nobody got the game," which is where your percentages land. A logarithmic scale is the only correct way to compare fractions, which is what we're dealing with. That's how you get statistically significant data out of what otherwise would be just noise. So your "not well defined" applies to armchair philosophers only, while for anyone who has actually had to work with data, especially vague data with large uncertainties, the concept of "closer" is very much strictly defined here. The fact that it leaves some lay-persons confused is a problem for the lay-persons. Being uninformed about something doesn't make you less wrong when you say something incorrect. In fact, that's usually the cause.
  21. It looks like that's what they were trying for. The most critical failure was overengineering the ship/world data without adequate testing along the way. They ended up with some configurations being unsaveable, others resulting in a conflict between the logical configuration and the physics configuration, and others yet breaking the editor. Between symmetry modes, constraints, and the difference between save-state data and live data, it turned into an unworkable mess. I don't know who screwed it up, but somebody should have escalated problems with it a lot earlier. The other problems were milder. Relocation problems and the KSC "teleporting" were probably still related to the same world-state issues. And the only other major problem was that whoever implemented flight physics didn't understand how integration errors in orbital mechanics work. Which, to be fair, is a niche topic. I know a lot of competent physics engineers who think that they can always throw an RK4 at the problem and get consistent integration. Whereas in space engineering it's a known problem that none of these standard methods conserve energy or momentum when integrating gravity. There are symplectic methods for gravity, but they come from papers like this one, and are generally tricky to implement. The two physics engineers they had did not come from a field of physics where they'd likely be aware of these kinds of problems with the methods they were deploying. As a result, you had various decaying orbits persisting throughout the development, with many attempted fixes. Everything else seemed to have been on track to resolution. I know the rendering was still way over budget, but that's kind of typical for a project at a stage where you're still tuning a bunch of things to see what even works for the overall look of the game. On the net, everything was salvageable. I'd probably restart the world/ship state from scratch and ensure it works correctly at every step. 
I'd implement an LoD system with intermediate rigid bodies for the ships, to greatly reduce the joint count and to make sure you don't have large mass differences between the two sides a joint holds together (it's a known limitation of all modern physics engines). And finally, I'd replace cartesian integration of orbits with one in the Hamilton-Jacobi formulation, which lets you keep things on rails unless an external force acts on them. This works pretty well even for a restricted 3-body problem. Coupled with a symplectic integrator, the solution can be almost perfect for anything you can encounter in KSP. It's a few months of work for a good team, but you really can't throw rookies at it. You need someone who has done physics in games to TD and architect it. That's a small-ish pool of people, so it won't be cheap, but better to drop 7 figures into that than to just keep burning it with a team that has to stumble into the correct solutions over a much longer period of time.
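On the "standard methods don't conserve energy" point, a self-contained comparison in normalized units (mu = 1, circular orbit of radius 1; illustrative toy code, not anything from KSP): explicit Euler versus the simplest symplectic integrator, semi-implicit Euler.

```python
import math

MU = 1.0  # normalized gravitational parameter

def accel(r):
    d = math.hypot(r[0], r[1])
    return (-MU * r[0] / d**3, -MU * r[1] / d**3)

def energy(r, v):
    return 0.5 * (v[0]**2 + v[1]**2) - MU / math.hypot(r[0], r[1])

def step_explicit(r, v, dt):
    # Explicit Euler: energy error accumulates, the orbit spirals out.
    a = accel(r)
    return (r[0] + v[0]*dt, r[1] + v[1]*dt), (v[0] + a[0]*dt, v[1] + a[1]*dt)

def step_symplectic(r, v, dt):
    # Semi-implicit (symplectic) Euler: kick, then drift with the NEW
    # velocity. The energy error stays bounded instead of accumulating.
    a = accel(r)
    v = (v[0] + a[0]*dt, v[1] + a[1]*dt)
    return (r[0] + v[0]*dt, r[1] + v[1]*dt), v

def max_energy_drift(step, n=10000, dt=1e-2):
    r, v = (1.0, 0.0), (0.0, 1.0)  # circular orbit, E = -0.5
    e0 = energy(r, v)
    worst = 0.0
    for _ in range(n):
        r, v = step(r, v, dt)
        worst = max(worst, abs(energy(r, v) - e0))
    return worst
```

Running both over ~16 orbits shows the explicit integrator's drift growing without bound while the symplectic one's error stays a small bounded oscillation; RK4 drifts far more slowly than explicit Euler but still monotonically, which is exactly the decaying-orbit symptom described in the post.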