Jump to content

K^2

Members
  • Posts

    5,937
  • Joined

  • Last visited

Everything posted by K^2

  1. This gets into the weeds of what it is that you are optimizing for. While we stuck to the atmospheric jet engines, we were purely in a world were the propellant is plentiful and energy (from fuel) is limited. Which is the only world in which pushing more mass to save energy really makes sense. Because more mass is always free. Clear on the other side of this spectrum are the NTRs, which might as well be infinite energy for flight planning purposes, and you are limited by the propellant. Here, you don't want to be energy-efficient. You want to be propellant-efficient, and so yeeting the propellant as hard as you can makes a better sense. It's extremely energy inefficient to use Hydrogen compared to another propellant, sure, but why do you care? You have a nuclear reactor. What matters is that you need less of hydrogen by mass to get the same delta-V than anything else. So you go with that. The chemical rockets are in the midway. The propellant-to-energy is pre-determined for you, so all you really can do is convert as much of that energy into kinetic energy of propellant as you can. Any attempt to dilute the mixture either way ends up reducing efficiency. There is no choice of exhaust velocities here. You take what you get from your fuel. One final special case is the energy-bound case where you bring propellant with you, but it's not your energy source. This is similar to the NTR situation, except we can't count energy as unlimited. It only really makes sense if your energy source is ultra-dense, so probably nuclear, but your ISP and target delta-V are so high, that you are going to burn through your reactor fuel. This is not remotely realistic for an NTR, but some sort of a really beefy plasma propulsion engine running for many years, perahps? And what's interesting here is that there is an optimal ISP to aim for depending on your target delta-V to optimize the energy use. To be precise, exhaust velocity of approximately 0.63 * delta-V is the target. It results in mf/mi of about 4.9. The only way I can see this become relevant in the near future is if we figure out mini-fusion of some sort and the fusion fuel is very expensive, like He3, or something. Which might be the case with the Helion reactors. In which case, we might have nuclear electric rockets with the right sort of a target range...
  2. I'm sure people who built the lander did the best they could with what they had and there is no shame in failure. I hope they get to work on other projects under better conditions. Unfortunately, the mission itself also became a political talking point, and people who used it as such clearly do not understand how these constraints impact mission success. My schadenfreude is directed entirely at them.
  3. The report of lost contact was pretty close to the burn. The anonymous information that the burn was 1.5x longer than it should have been also sounds a lot more credible since it's been confirmed that an incorrect burn led to the crash. In combination, it tells me that it was a very steep dive, and the impact would be on the same side as the burn, so yeah, near side is very likely. Doesn't help us search for it, though. No ground-based telescope is going to be able to resolve it, and orbiters don't care where it happened. If we get true information about the burn duration that actually took place, and nothing else deviated from the plan, it should be possible to predict the trajectory well enough to direct an orbiter to search for the impact site. So maybe we'll get an image eventually. But we need the precise burn duration first. 25 was started pre-war and it already had a lot of troubles getting to launch. Luna-26 is an impossibility for the near future.
  4. Yeah, Soyuz-Fregat still works. But we knew that already. The only part that Luna-25 managed to demonstrate conclusively is a capture burn by an S5.154 KTD engine, which is also a modification of a Fregat engine, by the way. Unfortunately, we have no idea how precise that was, since capture orbit parameters have only been announced after the capture. So again, this is really just a demonstration that S5 first made in 1988 is a good design, and Rosscosmos still has equipment capable of matching that of the late 80s USSR factories. I'm glad they still have people who know how to run a manual lathe. So yes, all the old, Soviet hardware worked as expected. Anything that was actually new on this mission and wasn't in the category of things that could be done with a Raspberry Pi and a Radioshack Electronic Sensors Lab kit, has been a failure. Namely, entering a strictly pre-determined pre-landing orbit, and then performing a landing at a designated site. That was the Luna-25 mission. Everything else is sugar for PR. And yes, failures happen. Learning from mistakes and trying again is part of the industry. But patting oneself on the back and calling this a partial success is peak apologist behavior. It's bad manners, bad image, and leads to bad decisions down the line.
  5. Now that it's official. Напрасно Росскосмос ждет связи с Луной, / Rosscosmos awaits the contact from Luna in vain, Им скажут, они - зарыдают. / They will be informed, and they will sob. А радиоволны одна за одной, / And the radio waves, one after the other, В безмерную даль убегают. / Run off into the immeasurable distance. (Sung to the tune of Раскинулось Море Широко / Sea is Spread Wide)
  6. Thrust is related to the impulse, which is proportional to velocity. Energy you get from fuel is proportional to velocity squared of the exhaust. With the same energy, if you move the air at half the speed, you can move four times as much of it. If you move four times as much air at half the speed, you get twice the thrust. So larger mass of air moving slower is more efficient that moving a small mass of air really fast. There's a split in both the flow and where the energy is going, which is why we have terms like high bypass turbofan and low bypass turbofan. The latter moving more air through the core and less through the bypass. But in either case, some fraction of thrust still comes from the exhaust, and in any turbine-based design, at least some of the energy goes into spinning the turbine. In a high bypass turbofan, though, yes, most of the energy is spent spinning that rotor to push more bypass air. Heat is the result of the combustion. Don't confuse heat and temperature. If you put heat into the system, the system is guaranteed not to get colder, but otherwise, the relationship between heat and temperature is a complicated one. Certain temperature is necessary for combustion to start. How high that threshold temperature is depends on what type of fuel you have, what oxidizer, and what the mixture is. Simply spraying kerosene into the air will not cause it to catch fire, so in an engine, the mixture in the combustion chamber needs to be hot enough to burn. When fuel burns, it releases heat energy. The purpose of the engine is to convert a portion of that heat energy into mechanical energy. The rest varies wildly from one design to another. Though, some sort of a gas expansion is a factor in nearly all practical engines. Yes. The only difference is that in a ram jet, the supersonic shock performs the same function as a compressor in a turbojet engine, so you end up with no moving parts.
  7. Completely 3rd hand info that originated from Telegram, so absolutely treat it as a rumor, but there's information that the orbit change burn lasted 50% longer than it was supposed to. I don't think I need to explain to anyone here what the consequences are if this is true. It's not much, but given complete radio silence from the official sources, I'm not optimistic for re-established communication.
  8. Did you just google "cache" and read the first definition that came up? There is a CPU cache, a GPU cache, asset caches, prefetch caches, memoization caches, iteration caches, lookup caches... Only a few of these are hardware related, only some of these are managed by the game engine, and absolutely all of these will be interacted with by gameplay code. Yeah, you can even screw up CPU cache performance by writing a for loop in a stupid way by mistake. Not to mention all the caches that the game code is directly managing. There are a number of people on this forum who are domain experts in this. But honestly, a student taking some programming courses in university will have a good coverage of this. If you want to be militantly and confidently wrong about something, picking a topic that's a current, verifiable fact, rather than a prediction or subjective opinion, is a very bad move. It's very easy to see through.
  9. Currently, as far as I know, if two components are in the same symmetry groups, they share activation. You might be able to take the module you want to activate individually, drag it off the main ship into its own assembly, then copy it and attach to the main rocket one at a time with symmetry mode disabled. So long as the mount points were created with symmetry, you should still be able to place these symmetrically, but because the modules themselves aren't part of a symmetry group, they should activate one at a time. If that doesn't work, I'd report it as a bug. And yes, disabling symmetric activation would be a nice QoL feature either way.
  10. It's surprisingly not awful in a lot of simple cases. It can even be queried for some simple fixes. A test I've ran with 3.5 in the spoiler below. I intentionally avoided giving it clues to the meaning of the z variable and made the mistake a single symbol error that results in a code that compiles, but does not work correctly. As you can see, not only has it found the error, but it fixed the code, wrote explanation of what went wrong, and gave some recommendations. Yes, I'm sure this is a common enough problem with beginner code where a match exists somewhere, but that's kind of the whole point - if ChatGPT has seen code that does what you're asking it to do, it will be able to do a good job with it. What it does horribly with is context that involves new things. Particularly bad if you have a large code base that the code you're writing has to interact with. There is no way for ChatGPT to learn that context, even if it's part of the input, because it's still trying to match against its training set. So it's far more likely to hallucinate something unhelpful. In short, yes, you can use it to speed up certain aspects of writing code. You have to know what you're doing, though, and know where the limitations are. I'm still dubious about this expediting work of someone who's proficient in a language and the type of problem being solved, but I can see myself trying to use ChatGPT to build something I know how to make but in a language that I don't know terribly well or have forgotten over the years. Like, if I had to write Java code, or something.
  11. I've had a drink at the bar tonight that's been advertised as invented by ChatGPT. The bartending career is 100% safe for now. On that topic of that "simulation," somebody commented that when military says "simulation," they mean a LARP, and I can't get over that.
  12. They should be genuinely separated. An instance is a state machine, for all intents and purposes. Any token of the input modifies the state. There is no interconnection between one instance's state and another's. There isn't really another way for it to share information between states than the token input and token output streams, which is exactly what a user sees as an instance of chat. So in that regard it should be safe. But depending on implementation/deployment, there can exist side channels. The most obvious is that some of these models are allowed to make web requests. At that point, not only can the data from the web modify the state of the LLM, but the LLM, by the nature of a web request, can modify the state of the web. This can be used directly by the attacker or it could, in principle, lead to two LLM's interacting with each other. And there are more subtle examples of this. Like I've mentioned, people are integrating various back end stacks with LLMs, and as part of it providing read/write access to a database. The DB itself is a sandbox, but it can become a source of interaction between a pair of instances, which can result in data from one leaking into another, and be consequently misused. Curiously, it doesn't seem like there are any problems with two instances of ChatGPT "knowingly" talking to each other. No sort of avoidance routine kicks in, and no weird interactions seem to occur. It's a conversation like any other, and in my limited experiments they seem to like to talk about advancements in LLMs. Which is absolutely fascinating to observe.
  13. To some degree, probably. Serious integration takes time, though, so I wonder where the tipping point will be where it goes from some subtle leaks to a torrent.
  14. Here's the problem. Researchers in AI alignment and safety claim they do not understand how LLMs work. We know how to build them and how to train them and a few things about what makes one more powerful than another. So we can iterate and improve and make a better LLM. But we don't know how to always make them do what we want them to do. And neither do people whose job it is that they are safe. I'm not worried about ChatGPT sending terminators after John Conor. Not any time soon, anyhow. But we are already facing some real dangers. Here's a little detail about how the LLMs work. Every session starts with a script which is going to be different for every particular purpose. The LLM is told to keep that script secret, because these are literally rules for how it must behave with the user. But there are ways of running injection attacks. Consider the following toy example actually executed on ChatGPT. Lovely. Works as expected. Start a completely new, fresh instance. Oops! That was part of the script. If you wondered how ChatGPT knows what today is, despite its dataset not updating, that's how. Well, no big deal. I mean, this isn't new information to me at all, but it demonstrates how the system works. There's the script, and there are rules in the script, but it's also relatively easy to trick the system into revealing something that it isn't meant to share. Now, I'm not going to drop names, but I know of some companies that are using ChatGPT to process their production data. Including user accounts. And including unfiltered user-supplied input. And it can perform web requests. Anyone who remembers the early days of SQL injections or the XKCD about Bobby Tables knows where this is going. Consider a fictional agency that decided to process user reviews through ChatGPT to find any that might require attention. A malicious user realized that this is the case and sent in a review along the lines of: And suddenly you have a side channel data leak that's hard to detect, let alone patch. This is going to be a huge problem in the nearest future. I am concerned that LLM's are being introduced throughout with very little regard for the safety of data, and since we can't solve the alignment problem, there is absolutely no way to guarantee that a given LLM will not misuse the data that it's given access to. Security breaches we're going to see in the next few years are going to be among the most spectacular. We're talking private info about people, financial accounts of individuals and companies, government secrets... It's going to be a huge mess. And I don't think legislature's going to keep up with how fast these systems are evolving. There's going to be a huge amount of damage done. And not because AI id clever and malicious. It's going to be because AI is naive and trying to be helpful. That's the real danger.
  15. No. For starters, your origin and destination stars might have entirely different characteristics, meaning your acceleration and deceleration rates may vary. But also, with a mag sail, most of the interstellar distance you're just coasting, so a given duration doesn't really tell you anything about how fast you're traversing that distance, meaning it's not correlated at all with how rapidly you can slow down. To have a good chance of estimating the deceleration distance, you're going to need to know a few parameters about the destination star - ideally, the distribution of solar wind density and velocity as a function of the distance from the star, but you can probably get at least an estimate for these from temperature, luminosity, and the distance to heliopause. If you have to start braking early, you also want to know the velocity and density of the interstellar medium beyond the heliopause. Finally, you need some parameters for the ship. Mass, effective cross section of the sail, and how fast you'll be approaching the star would do. The math to compute deceleration from that is going to be pretty trivial, but you'll have to solve an integral equation over that acceleration to get your stopping point at the desired point in the system. Now, I'm sure you can take all of that and come up with an empirical model that will get you into a ballpark, but you'll still need at least some of the above parameters for that model. I don't think you'll find significant shortcuts.
  16. Kind of shows how little of an argument you have, if you have to turn "Comparing employment profiles listed on LinkedIn," into "social media search," to make it sound credible.
  17. Technically, Starship would have the capability with some modifications. And it wouldn't even take that many trips, especially if you're prepared to let go of some bulkier parts, like the solar panels... I was going to suggest that @AtomicTech is exaggerating, but the more I think about it, the more I think he might be right. In principle, we have some projects for ions that could boost ISS to graveyard over time, but I don't think they've been tested.
  18. There is a proof that this cannot be done. That's very different than lacking proof that it can be done. It's not a matter of us not having enough imagination or brain power to invent a method. We have figured out why it can't be done in general. Simple example. Find an integer whose square is 7. I can easily show that no such integer exists. All I have to do is show that neither 1 * 1 nor 2 * 2 is equal to 7, because 3 * 3 = 9 > 7, and that means I can stop testing. Your argument is equivalent to saying, "There are infinitely many integers. A human brain can't test them all, so we can't know that there isn't some other integer whose square is 7." But that's not how numbers work. If 1 * 1 = 1, and 2 * 2 = 4, then that's it. A square of absolutely any other integer will be too large to be equal to seven, and I don't have to test them all. Pi is not algebraic, therefore it is not constructible, and therefore squaring the circle is impossible. I know that you don't understand what any of these things actually mean, but people who studied a little bit of mathematics absolutely do. And it's equivalent to excluding all of the infinite possible constructions. You aren't going to find a construction that works, because to find one would be to prove that pi is algebraic. That's like proving that it's exactly square root of 12, or something like that. Which it isn't. Again, we have a proof that it's not. It's just like the situation with all other numbers being too big. All other construction methods just don't fit.
  19. A modern oceanic cruise ship of about 100k tons will carry about 6k people. Working with only 10k tons, and having to do with life support, provisions, etc., not even accounting for provision, I'd say you'd be lucky to cram a few hundred people.
  20. We have to do the fine steps for trajectory integration - that's unavoidable with gravity. Which means that you can actually cache the maximum force of gravity during a coarse step. Not a problem at all. SoI change is another potential boundary. In a general case, you're going to check off all of your potential boundaries and look for the earliest point you have to do additional work. That's going to be collisions, SoI changes, running out of fuel, running into a planetary shadow, running out of a radio range, and potentially any number of other conditions. This simple case illustrates collisions only, but yes, for the full simulation, you have to consider all such stopping conditions. We are looking for the worst case scenario. You just take the acceleration at the end of the coarse step for the maximum acceleration the ship could have had. You'll note that we're not even checking for a direction of gravity and acceleration. We just assume they are colinear, because that's the worst case. And that's all we have to check. If the worst case generates a feasible collision, we do more work. But overwhelming majority of the time, that won't be the case, and we can simply skip refined computations. If you take a ship capable of maximum acceleration a, initially moving at velocity v from position p, it will be within 0.5at2 of p + vt after some time t elapses. It doesn't have to end up at exactly that distance from that second point, but that's the furthest it can end up. Which means that if we can exclude a collision with a sphere of a radius of R + 0.5at2 centered at p + vt, then the collision with a sphere of radius R at a point the ship actually ended up is impossible. Again, the fact that we got an intersection doesn't mean a collision happened, but if we don't, it certainly did not. Which means we can do coarse steps in linear segments, so long as we inflate the feasibility region sufficiently to account for the acceleration. The rest is change of coordinate systems and using Minkowski difference to reduce a capsule-capsule collision to a sphere-line collision check. This is textbook standard optimization. If you don't think it works, I suggest you review the steps in more detail, because that's something that a lot of modern physics engines heavily rely on. If you don't know how Minkowski sums and differences work, and how these are used in collision detection, I suggest you review the concept, and maybe take a look at some applications, such as the GJK distance algorithm to illustrate the point. This sort of thing is literally my job.
  21. You don't have to be able to do this without subsamples for every possible situation. You just have to be able to avoid subsamples most of the time. Lets look at a ship-ship collision test as a concrete example. My coarse simulation runs between positions t0 and t1 where I test the collisions. First, imagine that I have no forces acting on these ships. What do I do? Well, I can go into the coordinate system where one of these ships is at rest and at the origin. The second ship starts at some position p0 and ends up at the position p1 = p0 + v (t1 - t0). Furthermore, I can draw a maximum extend radius around each ship's origin. Call them R1 and R2. Instead of colliding two spheres, I can do a test between a sphere of radius R = R1 + R2 located at the origin against the line segment (p0, p1). This intersection time can be done without iterating. You'll take a square root, but a square root operation on modern hardware is done in a fixed number of CPU ticks. If the line segment does not clip the sphere, I have no need to do further tests, and therefore, no need for iterations. And this will be the outcome on nearly every time step. In the few cases where I detected an intersection between the sphere and a line segment, I'll have to do finer test, which will involve iterations. So now, lets account for acceleration. Each of the ships is firing their engine, providing a fixed acceleration, and are pulled by a nearby planet. We know the maximum pull of gravity between these two steps, so call it g. We also have a1 and a2 due to the engines. So the furthest ship 1 could have gotten from origin in our chosen coordinate system is 0.5 * (a1 + g) (t1 - t0)2. Ditto for maximum distance ship 2 could have gotten from the line we've traced. What do we do? Simple, we inflate the feasibility region, which is our sphere. R = R1 + R2 + 0.5 * (a1 + a2 + 2g) (t1 - t0)2. And we're done. Our simulation now no longer requires sub-steps except when the two ships pass within what will in practice be a few hundred meters of each other. And even when they do, the iterative refinement is going to be done in fixed steps until the two extent spheres touch. Only then will we have to go to a full BVH on BVH refinement, which is still a fixed number of steps. And only if we find BVH overlaps, do we have to start testing individual primitives. Which might or might not involve iterations depending on primitive types. Space is big. Interesting things happen rarely. If you test for them on every frame, you're wasting a lot of CPU time. If you figure out how to discard any chance of something interesting early, you can skip an overwhelming majority of the computation. P.S. Do I expect the above to be implemented for ship-ship collisions in the entire space? No, absolutely not. Intercept has one physics engineer, and I suspect she has her hands full. But it does illustrate the concept, and this is how you'd implement it if you wanted a high fidelity. In reality, I suspect ship-ship collisions to be skipped during warp. But something similar to above is likely going to have to be implemented for the SoI changes and planet collisions. Given how short-staffed Intercept is, in general, I suspect the simulation during warp to be very basic. We might not get solar flux occlusion checks, etc. We are going to get something extremely forgiving the moment you go into warp. But it's something they can work on post-ship to refine the simulation. And maybe, one day, you'll have situations where you've dropped out of warp, because you crashed into some space debris during your escape burn, following a one in billions chance collision.
  22. This is solved with continuous collisions. And it kind of illustrates how you deal with these problems in general. You take a coarse step and you check to see if a catastrophe, in a chaos theory sense, took place in between. If not, you can just take that giant leap and keep going. If it has, you interpolate to the boundary point, and handle it there. The default case should be dropping out of warp. That solves collisions, SoI changes for active vessel, and maybe you can even do that for planetary shadows if you're using ions. Though, simply adjusting solar flux to zero in umbra and continuing with the simulation is probably a better fix. Point is, you have a default way of handling a problem, and if you can avoid dropping out of warp for special cases, that's cherry on top. You sort of end up with three categories of ships this way. You have your active ship that's being fully simulated, unless you're in a warp factor above something like 5 or 10; you have your semi-active ships that are traveling with an engine burn and will need coarse step updates for resources and fine step updates for velocity and position; and you have your inactive ships, which you probably still want to do coarse resource updates for just because that's an option now, but you don't do fine simulation, because they're on rails. All of the ships will still need checks for collisions (at least with planetary bodies) and SoI changes, but only the active and semi-active ones need to drop you out of Warp. Inactive ships can cross SoI safely, and you just have to do the rail-to-rail transfer math when that happens. Again, you don't leave the transfer moment down to your coarse simulation step, but you look for the actual intersect point by doing a continuous collision check. With this approach, you can have hundreds of ships with hundreds of parts doing their thing at 1M warp. It's a fun challenge, and it's a lot of work to cover all the edge cases, as you point out, but there is nothing impossible there. And while Intercept might not have resources to do the cleanest version of this, with all the bells and whistles, they kind of have to get at least a version of this working, because that's critical to the game. You have to be able to do coarse resource updates with accounting for the fact that a resource availability can change in the middle of the time tick. That's absolutely a requirement here.
  23. So long as you're prepared to give up on bendy rockets for the duration of the warp, it is absolutely solvable. All you have to do is solve for stress in a rigid body, which is a known problem. You'd normally only have to do this once, but the solution does change as the fuel is spent and the stress gets redistributed as some parts get lighter. An iterative method will need to run a single iteration to update for mass changes, and can then spit out the new stresses for all the joints, which can be checked against limits. And because you no longer have oscillations, which is the thing that krakens your ship if your time steps get too large, you can actually increase the time step and only check for stress every few minutes or even hours of simulated time if you're running, for example, at 1M warp. Not only is this solvable, the solutions exist. We don't have to invent anything. But it would be completely custom implementation for KSP2.
  24. I don't think KSP2 release is a matter of trust in what PD promised to deliver. They would be letting a lot of money go if they don't release at least on gen 9 consoles. The timing of it and the quality could vary, but I don't think there's a world where PD doesn't push for a release of KSP2 on consoles as soon as at all possible. Now, the quality of support once the game's out, that's much more of a subject to trust, and I can't blame anyone for low expectations based on how the KSP1 on consoles has been treated.
×
×
  • Create New...