Everything posted by K^2

  1. Which isn't actually correct. A closed timelike curve (CTC) is a feature of GR. There is, however, a conjecture stating that a stable metric allowing for CTCs cannot arise from a positive definite energy density. Meaning, you need at least some negative energy to go back in time. Two notable points here are that it is a conjecture, meaning no complete proof exists, and that it only applies to stable solutions. Unstable solutions to the Einstein Field Equations allowing for CTCs are known. The naked singularity case of the Kerr metric is the best-known example. So while we're pretty sure you can't create anything stable that permanently allows time travel, catastrophic events allowing a brief window are a definite possibility. The parameters for such a hypothetical event are not known, but if the Kerr case is anything to go by, something like a black hole collision at a minimum.
  2. The only mainstream(ish?) show I can think of that did this correctly was Babylon 5. Fighters were stationed on the rotating section that provides artificial gravity to the ship or station, so they were just released and effectively "fell" away from the carrier. By the time the fighters need resupply, combat is effectively over, so they are collected through the main bay in an orderly fashion. And, of course, the way hyperspace works in the show allows fighters to follow capital ships during retreat without having to dock, which removes the need for slamming the deck right before the emergency jump, as in BSG and the like.
  3. Ah, but there is immediately a problem with that. Two events that are simultaneous in one coordinate system aren't in another. Just as the heir apparent on Earth becomes monarch the instant the Queen dies on Mars, the UK Embassy on the Moon swings around its orbit and happens to be moving away from Mars at nearly 1 km/s. From their perspective, the heir apparent became monarch about 5 nanoseconds before the Queen died. That's not a lot of time, but it's still high treason. They declare that a traitor cannot possibly have become a monarch legally, and call for a different successor. Australia, moving in the opposite direction to the UK on Earth's surface, sees a smaller time difference, but agrees on the premise, setting up an international incident, with the UK and Australia having different monarchs as head of state while still claiming to be under the same monarchy. I think, to fix this, you can work with "There must always be a monarch," but then you have to concede that the exact identity of the monarch is frame-dependent. Which can add to the confusion about orders given, as now not only is it hard to measure whether they were given before or after, but the answer might actually differ depending on how rapidly and in what direction the person happens to be moving when receiving these orders. Fortunately, I don't see this making a practical difference until we start building much faster ships.
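For anyone who wants to check the numbers, the offset comes straight from the Lorentz transformation. A quick sketch (the magnitude depends on the spatial separation between the two events and on the observer's velocity, so plug in whatever geometry you like):
```latex
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```
Two events simultaneous in the unprimed frame ($\Delta t = 0$) and separated by $\Delta x$ along the direction of motion are offset in the moving frame by
```latex
\Delta t' = -\gamma\,\frac{v\,\Delta x}{c^2}
          \approx -\frac{v\,\Delta x}{c^2} \qquad (v \ll c),
```
which is minuscule at km/s velocities but strictly nonzero, and it flips sign with the direction of motion, which is why observers moving differently can disagree on the ordering.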
  4. I did add the "in practice" clause...
  5. No. Entanglement doesn't affect communication speed. It can be used to push more data through a channel that already exists, but it can't be used to establish a different line of communication. See the No-Communication Theorem and Quantum Teleportation for a bit more detail. The actual problem is that light speed is a local limit, because space-time curvature is a thing. Curvature is all about distances between two points, and that can be path-dependent. Gravitational lensing is a good example. Light travels along the optimal path between two points, so if you see an object, the length of the path the light took is a good definition of the distance to it. So what happens if light from a distant quasar passes by a galaxy with a black hole in its center and you end up with something like the Einstein Cross? Now you have five different images of that quasar. Five different paths light was able to take to get to you. Five different lengths, all candidates for the distance. Some of these are going to be shorter than others. Even if you choose the smallest value as "the distance", the galaxy in between moving across can shift that ratio. You might measure the distance, send a message, and by the time the message makes it to the lensing galaxy, the galaxy would have moved out of alignment enough to allow for an even shorter path, resulting in the message getting to you faster. Is that FTL? It's a stretch, but it gets you what you want: a message arriving faster than you expected based on the measured distance and the speed of light. If we are talking pure theory, there is no real limit. You can fold the space-time in such a way that any two points end up arbitrarily close to each other, send the message, then unfold the space-time again. In practice, of course, we have no such capability and, based on our current understanding of physics, might never have it. But natural events that make space-time behave in very strange ways do exist out there, so a blanket statement about a speed-of-light limit on communication is technically false.
  6. Yup. Wasn't trying to contradict you. Just expanding on it.
  7. In theory, but only if it can really provide a clean 400 W. Some of the cheaper brands become very unstable at their peak rating, and you can reliably count on only about 3/4 of it or less, which is cutting it awfully close with your proposed setup. Given that PSU failures can damage your other components, I personally wouldn't want to risk it. If you can spare the cost, I would go for something in the 450-500 W range from a brand with a good reputation and a model with good reviews. Also, when looking at PSU reviews, don't just look at the score. Look at the actual bad reviews. I've encountered one model a few years back that had very few bad reviews, but nearly all of them were "started pouring smoke" or "caught fire." Even with the best reviews, you aren't guaranteed that your PSU won't die on you, but fire is definitely a failure mode you want to avoid.
  8. Yeah, there isn't a big enough market for it to offset the costs. If you are in the food business, anything that makes your food obviously engineered is likely to drive sales down, not up, and few will pay extra for a novelty like this in their food. We do have non-food novelty GMOs, though. You can buy GloFish at Walmart in the pet section in most US states. They are common aquarium fish species genetically modified to produce fluorescent proteins in many different colors. They tend to luminesce a little even under common lights, but they positively glow under a black light. They started out with a zebrafish (danio) population that was modified for research purposes, found a marketable use as aquarium fish, and have since expanded into other species and a multitude of colors. I know that some places, notably California, have strict laws prohibiting the sale of live GMOs, so you won't find them here, and I don't know what the situation is in other countries, but GloFish are still the most accessible novelty GMO I'm aware of.
  9. This statement should always come with the words "locally" or "in flat space-time".
  10. It varies greatly with what you want to call "learn", but even in the loosest sense, there are some languages that take getting used to. Prolog is my favorite example. Programs are written very backwards compared to how you'd normally think about them. Rather than laying out a sequence of steps to get to the goal, you bring together all possible ways of getting there. Not to mention that actions are effectively encoded as challenges. Rather than saying, "Print this text," you say, "Prove that you can print this text," which, of course, the program does by printing. That makes sense within the language paradigm, but it doesn't jump out at you naturally.
  11. You kind of contradict yourself. The problem isn't that a shot to the CNS isn't an instant kill, which it basically is for any practical round, but that hitting the CNS is hard for all the reasons you list. That said, it's the only guaranteed way to stop the target dead, both literally and metaphorically. I'm not going to comment on the hostage scenario, because I have neither training nor relevant expertise, but in the more general context of firearm use, there is a reason why things like the Mozambique Drill exist. A shot to the torso is the most reliable, as it presents the largest area with high stopping power and is the hardest to accelerate unpredictably, being tied to the center of mass. But the stopping power of a torso shot is primarily due to pain response, so there are both psychological and pharmacological causes that can make it fail to stop the target in some rare cases. If the first two shots to the torso haven't stopped the target, the third and fourth aren't going to either, and you might as well try to place a head shot. Best case, you are wasting a round on a target that's already incapacitated and is going to bleed out anyway. Worst case, you might as well roll the dice on hitting the brain, because if the target is charging you with a melee weapon, they'll probably do significant damage to you if you don't.
  12. The ray tracing you get with RTX is generic and can be used for anything you want, including processing sound reflections and, if you really wanted to, calculating trajectories for projectiles. Moreover, there is nothing special about how the rays are actually traced, and almost identical ray-tracing shaders have been used in mainstream games for a while, typically limited to screen-space effects. Screen Space Reflections (SSR) are done with a kind of ray tracing technique, where the ray is traced against the depth buffer.
The actual bit of new hardware in RTX and the upcoming RDNA 2 cards is dedicated BVH traversal hardware. This is critical if you want to be able to cast a lot of rays against level geometry. For example, if you want to see a reflection of something that isn't otherwise in the shot, the ray traversal has to extend past the camera frustum, and then you have to rely on the BVH to test only the likely intersections rather than testing the ray against every single triangle in your entire scene. This is what takes ray tracing from niche use for a few FX to "Holy crap, we can render the whole scene with rays." In practice, however, you're limited in how many rays you can cast per frame, and that limit gets stricter as you increase scene complexity. There is some poorly-optimized geo in Control, for example, that really tanks the frame rate in specific shots. So while a game with simple geometry can be fully rendered in real time with ray tracing, in any modern AAA game you'll have to budget your ray casts. This is why most games currently use ray tracing for reflections, soft shadows, and global illumination (GI). These are the areas where you can see the biggest improvements with the fewest rays. It's not because the hardware is particularly good at these specific tasks. The graphics card doesn't care what you use a ray intersection test for. It just finds the nearest piece of geometry along the ray and gives you surface information at the impact point.
With all of that in mind, back to the original question. Yes, you absolutely can set up a compute shader that runs ray tracing for every projectile in flight and reports impacts pretty much instantly. You'd probably want to set it up as part of your general rendering flow, which means you'll technically be getting results with a one-frame delay, but the impact on performance will be absolutely negligible. But unless you have millions of projectiles in flight on every single frame, there's just no reason to do it. The performance impact of tracing projectiles is already negligible in a modern game. On average, an FPS game is going to do more ray casts checking what ground everyone's walking on to decide which FX should be played. And yeah, that can also be offloaded to the GPU. In fact, because of how efficient these cards are at traversing a BVH, you can offload a lot of collision detection to the GPU. The problem is, most games aren't CPU-bound these days. So if your CPU is already not running at 100%, why would you move work over to the GPU, which is usually the bottleneck?
And @wumpus makes a good point about multiplayer. Except it's even worse. No self-respecting FPS is going to do hit detection on the client. Anything you don't want people cheating on has to be verified by the server, and most game servers don't even have a graphics card. So anything to do with core game logic still needs to live on the CPU, and outside of that, there isn't a whole lot you need to do client-side that isn't rendering, usually.
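To make the projectile case concrete, here's a minimal CPU-side sketch. It's plain Python, and `scene.raycast()` is a hypothetical stand-in for whatever nearest-hit query your engine (or the GPU's BVH hardware) actually exposes; the point is just that each projectile is one segment test per frame:
```python
from dataclasses import dataclass

@dataclass
class Projectile:
    pos: list   # [x, y, z] position at the start of the frame
    vel: list   # [x, y, z] velocity in units per second

def handle_impact(projectile, hit):
    """Game-specific placeholder: damage, decals, impact FX, etc."""
    pass

def update_projectiles(projectiles, scene, dt):
    """Sweep each projectile along the segment it covers this frame.

    scene.raycast(origin, direction, max_dist) is assumed to return the
    nearest hit within max_dist, or None. That nearest-hit query is
    exactly what BVH traversal hardware accelerates.
    """
    surviving = []
    for p in projectiles:
        speed = sum(v * v for v in p.vel) ** 0.5
        direction = [v / speed for v in p.vel]
        hit = scene.raycast(p.pos, direction, max_dist=speed * dt)
        if hit is not None:
            handle_impact(p, hit)   # projectile is consumed by the hit
        else:
            p.pos = [x + v * dt for x, v in zip(p.pos, p.vel)]
            surviving.append(p)
    return surviving
```
Batch those same segments into a single GPU dispatch and you have the "results one frame later" version; either way the cost is a rounding error next to rendering.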
One more note addressing something that @wumpus said about DLSS. I would argue that while nVidia has been pushing that tech heavily, the reason they have hardware for it on RTX has way more to do with denoising than supersampling. It goes back to having a limited number of rays to cast, so nearly all of ray tracing is stochastic. For example, say I want to render a soft shadow cast by a large glowing sphere. A typical technique is to cast a ray from the point on the screen you're trying to light towards the glowing sphere. If you hit the sphere, the point you're drawing is at least partially lit. Otherwise, it's at least partially shadowed. But how much shadow and how much light? Well, that depends on how many rays hit. In theory, if you could cast hundreds of rays from every point, you'd have a pretty good light map. In practice, you sometimes have to live with just one ray per pixel on the screen, and you cast it towards a random spot on the glowing sphere. That makes the penumbra hella noisy. And it's the deep learning denoiser that turns this dither map into the nice soft shadow you see on the screen.
And within these constraints, denoising does incredibly well. In fact, major animation studios are starting to switch to denoising for their production. Last year at Siggraph, I got to see noisy frames from Toy Story 4, taken from the stage of the pipeline right before the denoiser. They looked so bad! It's like somebody intentionally put a noise filter over the image, then decided it wasn't enough, and cranked up the gain. But because the denoiser has full access to the raw color, normal, and depth information, it can do an amazing job reconstructing a softly lit scene with no discernible noise in the final frame.
So yeah, that's the main purpose of including this piece of hardware on any ray tracing graphics card. In terms of how useful it is outside of that particular task, I don't really know. It sounds like DLSS was just an attempt to sell RTX cards in a market with so few games making use of the technology so far. Hopefully, with the next gen consoles, ray tracing will become commonplace. I don't know if it makes games look all THAT much better, but it can save us so much time, effort, and money developing games.
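A toy version of that stochastic shadow sampling, for the curious. Pure Python; `occluded()` is a hypothetical visibility query that reports whether any geometry blocks the segment between two points:
```python
import math
import random

def random_point_on_sphere(center, radius):
    """Uniformly random point on the glowing sphere's surface."""
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d))
        if 0.0 < n <= 1.0:  # rejection-sample inside the unit ball
            return [c + radius * x / n for c, x in zip(center, d)]

def shadow_sample(point, light_center, light_radius, occluded):
    """One-ray shadow estimate for one pixel: 1.0 lit, 0.0 shadowed.

    In the penumbra this flips randomly between 0 and 1 from frame to
    frame -- the noisy dither map a denoiser smooths into a gradient.
    """
    target = random_point_on_sphere(light_center, light_radius)
    return 0.0 if occluded(point, target) else 1.0

def reference_shadow(point, light_center, light_radius, occluded, n=256):
    """Many-sample average: the ground truth the denoiser approximates."""
    return sum(shadow_sample(point, light_center, light_radius, occluded)
               for _ in range(n)) / n
```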
  13. The best result I've seen published so far was actually based on an electrochemical reaction changing water potentials in an electrolyte, making many tiny bladder cells expand and contract. It has OK range, strength mostly limited by the elastic materials used, and while it's still a little slow, it'd probably be enough for moving at a walking pace, and I'm counting the faster corrections you need for keeping balance. That's a great improvement over anything we've had before, and there might be room for improvement on speed. The biggest problem keeping these from being practical, as far as I can tell, is that every contraction is basically a charge cycle of a supercapacitor, and that tends to come with a lot of limitations on lifetime. A thousand-charge limit for a battery might not be terrible. A thousand-contraction limit on a muscle is garbage outside some really niche uses. But again, that's something that can probably be greatly improved on, so I'm cautiously optimistic about this avenue.
  14. Power is the easy part. The problem is turning it into linear motion. There are three ways people have tried: mechanical, electrical, and hydraulic. None of them work well for power armor.
Let's get the easy one out of the way. Mechanical transmission of power to limbs is a nightmare. You can do it with clutches and linkages, but you're either burning your clutches or not getting the fidelity you need. It's extremely inefficient and extremely unreliable. It might work for some very narrow range of motions, like if you need to be running in a straight line at a fixed pace, but that's about it. It's not impossible in theory to build a power armor where nearly all of the power is delivered through mechanical systems with almost no losses, but that would be the most complex mechanical system ever built, by a very wide margin. I don't see it happening.
Hydraulic transmission has a lot of good qualities if you aren't interested in speed. It's pretty efficient and will deliver the power where you need it. With the right pump design you won't have a lot of power overhead and can still deliver fine precision in motion. Unfortunately, the moment you need to start moving fast, it breaks down. To deliver significant forces at anything like an acceptable pressure of the working fluid, pistons need to be chunky. That's a lot of extra weight by itself, but then you start factoring in how much fluid you have to move to fill these pistons. And when you have to move it fast, you are suddenly wasting all your power accelerating hydraulic fluid, and any time you need to start or stop rapidly, you have to deal with hydraulic hammer, which risks rupturing your already overstressed hydraulic lines. (I'll put some rough numbers on this after the post.)
Due to the above limitations, the go-to for military uses has to be electric. And it's an absolutely perfect way to transfer power. In theory. The problem is that there are only two forces that can convert electric power into mechanical: electrostatic and magnetic. Electrostatic forces would be ideal, but the voltages required increase with the size of the moving parts. Unless you have nano-motors pulling countless fibers to actuate your limbs (hm, sounds familiar), generating the required forces takes voltages that break down any insulation you might reasonably have on hand. So while this is a great contender for future tech, if we figure out practical nanomachines, with modern tech it's a non-starter. Which leaves magnetic forces, and these have a tiny little flaw. The efficiency of electromagnets in converting electric power into mechanical is directly proportional to the movement speed of the magnets and/or coils. This is not a problem for rotational motion - just up the RPMs - but it's a problem for linear motion. Linear magnetic drives would be the best solution, but you just can't get the forces necessary for armor to support its own weight, let alone do anything useful. So people go with servos. Take fast spinning motion, convert it into slow spinning motion with a gearbox, then use linkages to get the linear motion you actually want. Yay! It's efficient, it's precise, it can be fast, and it can apply a lot of force. The problem is trying to get these last two at the same time. The specific power of servo motors sucks, because not only do you need chunky magnets to get good torque, but also a massive gearbox to convert this into even more torque. And all of this adds to the inertia of the system, so you need even more torque to get things going...
You'll end up with more weight in motors than armor and will probably still be unsatisfied with performance. At the end of the day, servos and hydraulics are still the best we've got. And yeah, there are plenty of projects that use a combustion engine with either one of these. Boston Dynamics does a lot of their testing with batteries, but pretty much everything they build with military in mind is designed to work with a motor and generator. We're already doing more with this tech than seemed possible a couple of decades ago, but in terms of using it for power armor, it's still entirely impractical.
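Those rough numbers on the hydraulic objection, as promised. A sketch with illustrative values, not specs from any real system:
```latex
F = pA \;\Rightarrow\; A = \frac{F}{p}
\qquad \text{e.g. } 5\,\mathrm{kN} \text{ at } 20\,\mathrm{MPa}
\text{ needs } A = 2.5\,\mathrm{cm}^2
```
Driving that piston at speed $v_p$ takes a volumetric flow $Q = A v_p$, and in a supply line of cross-section $a$ the fluid moves at $v_f = Q/a$, carrying kinetic energy $\tfrac{1}{2}\rho v_f^2$ per unit volume. Thin, light lines mean small $a$, so the faster you move, the more power goes into accelerating fluid, and the worse the pressure spike when flow stops suddenly (roughly $\Delta p \sim \rho c\, v_f$, with $c$ the speed of sound in the fluid) - the hydraulic hammer mentioned above.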
  15. At some point, it ought to make more sense to just have an auxiliary ox tank.
  16. There's a lot you get to keep with inverse-distance gravity, but orbits stop being closed. If you're in a nearly circular orbit, yeah, it will work pretty similarly to a normal one. But orbits that aren't circular are going to precess all over the place. You still get energy and angular momentum conservation, so objects still stay within a distance range, but you no longer get fixed points where they reach these distances.
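A quick numerical sketch, if you want to see it (toy units, force magnitude k/r instead of k/r²):
```python
import numpy as np

# Leapfrog integration of a particle under a central force |F| = k/r
# (inverse distance) instead of k/r^2, to show that eccentric orbits
# stay bounded but no longer close on themselves.
k = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.3])          # circular speed would be 1.0, so eccentric
dt = 1e-3

def accel(r):
    d = np.linalg.norm(r)
    return -k * r / d**2          # a = -(k/d) * r_hat

peri_angles = []
prev_rad_vel = float(np.dot(r, v))
a = accel(r)
for _ in range(300_000):
    v += 0.5 * dt * a
    r += dt * v
    a = accel(r)
    v += 0.5 * dt * a
    rad_vel = float(np.dot(r, v))  # radial velocity: sign flips at apsides
    if prev_rad_vel < 0 <= rad_vel:            # periapsis passage
        peri_angles.append(np.arctan2(r[1], r[0]))
    prev_rad_vel = rad_vel

# For a 1/r^2 force these angles would repeat every orbit; here they
# drift each revolution, i.e. the apsides precess and the orbit never
# closes, even though the radius stays within a fixed range.
print(np.degrees(peri_angles))
```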
  17. What they'll do to the leak is probably not a lot more sophisticated than that, but the trick is finding where to stick the tape or equivalent. Slow leaks are hard enough to find when you aren't inside the pressurized vessel you're testing, breathing the mixture you're testing with, and surrounded by the vacuum of space.
  18. Neutrino detectors are the ones that will give us precise timing on that. In fact, that's the only way we've gotten advance warning of a supernova in the past.
  19. I'm not sure that we would. We've never had direct observations, but according to models, a dying star of that size switches the primary fuel it's burning in its core a few times. That is the only real warning we get, as each switch is a fairly significant event. The general consensus is that Betelgeuse is still burning helium in its core and has enough of it to keep going for thousands of years. When it runs out, the core will begin to shrink until it heats up enough to start fusing heavier elements. This should produce changes in the star significant enough for us to see. If it does, we will, indeed, know that the supernova is about a century out. But if we miss it and only catch the next event, or if we were completely wrong about what Betelgeuse is burning now, we'd be just years out. The final switch, to burning silicon, is expected to happen less than a year from supernova for Betelgeuse. The actual final stage, where it is definitive, is pretty quick. The star burns through the last of its silicon and the core begins to cool and shrink. From what I've read, the process is shockingly rapid, taking just months before the core collapses. And because this process begins within the core, I'm not sure we'd even have time to properly register changes in the star's atmosphere. If it were to happen right now, our first and final definitive warning would come from two experiments. NOvA at Fermilab would register a powerful neutrino flux, automatically triggering thousands of emails to people watching for it. The other is LIGO detecting something very unusual coming from the direction of the Orion constellation and updating their Twitter. The correlation would be spotted almost immediately, and we would know that Betelgeuse is going supernova, with just hours before the star rapidly gains brightness. At this point, the core collapse has already happened and the supernova is on its way.
  20. The forum was really cross with me about necroposting when I thought about just bumping the old thread, so here's a new one instead. Why are we kicking this horse again? Well, just this month a paper finally got published explaining why the star dimmed and recovered so rapidly. Case closed, mov... And then Betelgeuse goes and starts dimming again. This time completely unexpectedly, not even remotely matching the cycle established over the years prior, and the current trend is just as rapid as when the news posts started showing up at the end of last year. The star is now rapidly approaching the brightness we saw at the end of November last year, when everyone started looking at the skies. It's a little early to start speculating about why this is happening, but even if it's just another mass ejection, that only raises more questions. If the first one was really due to an alignment of circumstances at the right point in the star's cycle, why is this happening again, when the star is in a completely different part of its cycle? And what amuses me the most is that all of this happened hundreds of years ago, and the light from the second dimming is reaching us just as the conclusive paper on the first one is published. That's some astronomically bad timing.
  21. There were some cheaper opportunities in the past. Right now, the lowest I've seen quoted for a 1U cube is ~$100k. However, you do still need to do some testing, including shake tests and outgassing in a vacuum chamber, to demonstrate that your payload is safe to fly alongside all the other payloads. And they probably won't trust tests you do in the garage. So in practice, including paying a lab to do the tests for you, expect double that. To me that says "a little expensive," but my cost-perception is badly skewed by living in Silicon Valley. Anywhere else in the US that's what, the cost of a nice house? So upgrade that to "very expensive" if it's just a vanity project.
It's doable, though. We've knocked around the idea of crowd-funding a cubesat launch on this forum before. It quickly outpaced in cost and complexity what we were at all likely to put together, so it got abandoned. Interestingly, compared to the cost of certification and launch, it's actually not that hard to find quality parts to make it into a real mission. Nothing terribly complex, but an actual experiment that you can download data from is within the realm of the possible. The biggest challenge is that to get rad-hard parts at a reasonable cost, your CPU is basically 80s tech. You can, of course, throw a Pi Zero into the sat and have it handle absolutely everything, but the MTTF is pretty short, and prior practice from various university projects shows that you might get lucky and have it last a month, or get unlucky and have it dead after a day. So anything you need for communicating with the sat should be wired through a rad-hard CPU, and so the most budget-friendly option was to get something like a rad-hard 6502 for a few grand to handle stability and comms, with cameras and sensors handled by an Arduino or Pi. You can also get solar panels that are just the right size to fit on a side of a 1U cube, also in the range of a few hundred to a few thousand USD. Compared to launch costs, all very reasonable prices.
Of course, making it all reliable in the harsh environment of space would involve a lot of work to design, build, and test the thing. If you were to outsource that to experts, it would easily surpass the cost of the launch in labor. That said, again, if it is just a vanity project, and you were going to throw $200k at a launch and lab tests anyway, you might as well just bolt a Raspberry Pi to that thing with a radio and hope it survives more than a day so you can download some neat pictures from it.
  22. Because there is radial motion, and we're not looking at the full cycle of particles through their orbit. Let's rephrase the problem. You have a force field that repels things from the center as some smooth, monotonic function of radius. (The function will be different for each body, depending on form factor, but let's focus on just one object in orbit for now.) We can clearly assign a spherically symmetric potential to it; call it U_p. The Hamiltonian for a body in this potential is H = p²/(2m) + U_G + U_p, where U_G is due to gravity, and this Hamiltonian is entirely time-independent. If the Hamiltonian is independent of time, total energy is conserved, by Noether's Theorem. (You can derive that in other ways, but this is simplest.) Yes, there can be some trade between the gravitational potential, this pressure potential, and kinetic energy. But the total energy is conserved, which means that orbits will be quasi-periodic. To be precise, quasi-periodicity means that for any finite neighborhood around a starting point, the object will pass through that neighborhood again after some finite amount of time. This is true for any central potential problem. So there is no way for an object to just continually gain energy and escape.
As another example, to square this with intuition, imagine a vertical harmonic oscillator in a gravitational field. Yes, as the weight moves down, gravity does positive work on it, and the weight will move further down. But this is perfectly canceled as the weight moves up and gravity does negative work. Lacking damping, you'll get periodic movement up and down with exactly the same frequency and amplitude as without gravity, but with gravity, the equilibrium point will be slightly lower. Certainly, the net work done by gravity through a cycle has to be zero. If a planet orbiting a star experiences outward pressure due to solar wind and radiation, that will allow it to orbit very slightly higher at the same energy and angular momentum, but it's still going to be a (nearly) closed orbit that's not going to gain or lose energy over time, on average, due to that pressure.
Now, the fact that the solar wind moves at a finite and greatly sub-relativistic speed means that an object moving towards the Sun will experience slightly more pressure than an object moving away, but because the difference always opposes motion, it basically acts as an additional drag force. Which means that the solar wind will help dampen the radial oscillation, forcing orbits to become more circular over time, as well as contributing to planets slowly spiraling in due to drag on angular velocity. But this is no different from moving through a static medium, albeit with a somewhat higher drag coefficient.
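The oscillator example in equations, since it's the cleanest case:
```latex
m\ddot{x} = -kx - mg
\quad\Longrightarrow\quad
x(t) = A\cos(\omega t + \varphi) - \frac{mg}{k},
\qquad \omega = \sqrt{k/m}
```
Same frequency and amplitude as the weightless case; gravity only shifts the equilibrium down by mg/k, and the work it does over any full period integrates to zero.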
  23. It's a spectrum. Databases usually take their code pretty seriously. Then come servers and cloud infrastructure. Operating systems have also been migrating into this territory. B2B shops in general keep their workflows more organized. But then you get closer to consumer software, especially apps, and things go a little wild west. Games are actually not the worst. The "**** it, we'll do it live" attitude of game development is at least (usually) balanced by the skill and experience of the people working on it. There are some tech startups that lack both the discipline and the experience, and then things go bad. The most "serious" company I've worked for was Google, and it was fine, but very boring. I worked on 3D reconstruction, and it's hard to be more back end than that. The wildest was a startup making chat middleware for games. (We got bought out and restructured, so unless you were really into Clash Royale in early 2018, you probably never heard of it.) But mostly I stick to games, because that's exactly the right balance of structured chaos that keeps things exciting while letting you feel like you're making an impact.
  24. It's not that it's an unreasonable expectation in itself. It's... How do I explain game development to a sane person? Have you ever seen Wallace and Gromit? Now picture game developers making games like Wallace makes his breakfast. Except with deadlines and requests from marketing. I don't know if it's the environment making the people or the people making the environment, but it's a unique industry. And part of it is the huge contrast between "This is how you do X" and "These are the specific tools you use for X". The former is at the foundation of almost all design - it can get pretty dogmatic about how things are done. But then you get to actual implementation, and there are going to be almost as many of these as there are teams working on it. There are some rare exceptions. Pretty much everyone uses Photoshop for 2D assets. Pretty much everyone uses Visual Studio for code. There's just a handful of 3D editing tools that everyone is using. That sort of thing. But a standard editor for a tech tree? Unless you're working with a specific engine, like Unity or Unreal, we don't even get a "standard" way to draw UI.
For tech trees specifically, there are additional considerations. If all you had was a collection of nodes forming a dependency graph, where each node contains a list of items it unlocks, you could make a generic drag-and-drop UI for it. The problem is, you're unlikely to ever be happy with just moving the tech between nodes. If I make ion engines more accessible, maybe I should make them consume a bit more power to keep things balanced? Or adjust the Isp? How many little corrections do you need to actually make a balanced tech tree? All of these adjustments don't technically have to be accessible from the tree editor itself, but they have to be accessible somewhere, and if you plan to distribute the settings, these things will have to be stored together. So a generic solution here doesn't seem like it'd cut it. (See the sketch after this post for what I mean.)
But like I outlined above, the issue is greater. There are elements that are far more common in games that don't have anything remotely like a standard implementation. If you ever make a game, you will never be satisfied with the limitations of your tools or your systems. There will always be special cases you want to handle. Some limitations you'll have to just accept. If you're working with Unity, you're not going to get too involved in changing how simulation or rendering work. But there's still so much you can do by modifying individual behaviors. Sure, you can probably find a character movement script on the Unity marketplace, but will you be satisfied with what it does for your game? Probably not. And that leads to a culture of reinventing the wheel at every step along the way. We try to be practical as much as possible, but in the end, every game ends up being a Rube Goldberg contraption made up of salvaged components made to do something they were never meant to, plus a bunch of ad-hoc solutions.
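That sketch: a minimal picture of what even a simple tech tree node ends up carrying. All the field names here are hypothetical, in the spirit of KSP's tree rather than its actual format:
```python
from dataclasses import dataclass, field

@dataclass
class PartTweaks:
    """Balance knobs that travel with a part's placement in the tree.

    Hypothetical fields: if you move ion engines earlier, you probably
    also want to touch these, so they have to live next to the graph.
    """
    cost_multiplier: float = 1.0
    power_draw_multiplier: float = 1.0
    isp_multiplier: float = 1.0

@dataclass
class TechNode:
    node_id: str
    science_cost: int
    requires: list = field(default_factory=list)   # parent node ids
    unlocks: dict = field(default_factory=dict)    # part id -> PartTweaks

# The dependency-graph part is trivially generic...
tree = {
    "ion_propulsion": TechNode(
        node_id="ion_propulsion",
        science_cost=120,                          # moved earlier in the tree
        requires=["electrics"],
        unlocks={"ion_engine": PartTweaks(power_draw_multiplier=1.5,
                                          isp_multiplier=0.9)},
    ),
}
# ...but the balance data hanging off every unlock is what a generic
# drag-and-drop editor can't anticipate.
```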
  25. No, it's like expecting a game designer to know how to write a simple script. A skill which, I can assure you, they are expected to have. Very few, if any. From personal experience: if you aren't going to take the time to learn how to write a simple script for a mod, you aren't going to put in the time to properly evaluate, balance, and test something as complex as a tech tree. Design work has a lot of creativity in it, granted; it might even be the main quality. But the trade also involves a lot of critical thinking, logic, and even some fairly gnarly math in the form of statistical analysis. The best designers I've worked with had no trouble jumping in and making modifications to the engine code, and I've never met a successful one who'd be blocked by the requirement to make a simple mod. If you're looking at someone with no prior experience, but a lot of ideas and a desire to learn how to implement them, tech trees aren't the best place to start. KSP isn't the toughest case for a tech tree by a long shot, but even here, this is something fundamental to the entire progression system and how the game is presented. I don't think Squad did a great job with it, and somebody with an overly complex editor with lots of moving parts isn't going to do any better, unless they do have experience designing game systems. So again, no problem at all with it having to be a mod.