Everything posted by K^2
-
If the object is spinning about a maximum or minimum principal axis, applying a normal torque causes precession of the craft. That is, the axis of rotation will turn in space, but stay put relative to the craft. Think gyroscope. Any other possibility yields tumbling, which we want to avoid once the experiment is running.
-
Oh, the precession stuff is pretty easy. Picture the angular momentum vector as the radius of a circle, and the torque moving its tip along the circumference. So given some torque T, to change the direction of angular momentum L by an angle phi (in radians), you need to apply it for t = L * phi / T. So a 90° turn takes pi/2 times longer than it would take to spin the sat up to that RPM in the first place. This works just like changing inclinations. The difference is that you can't exploit the law of cosines, because we need to maintain constant angular velocity for the experiment, so you have to go with normal torque. All in all, if we stick with a dipole antenna, this limits communication windows to an hour or two just after sunset or just after sunrise. Making major adjustments on the day side just doesn't make sense in terms of power usage and heat production. But on the night side, we'll probably be using the coil heat to maintain temperature anyway, so we might as well make attitude corrections there. This might still be a better option than an omnidirectional antenna. After all, the sat will pass the terminator twice every 90 minutes. If we have more than one ground station around the world, and we get something like an ISS orbit to start with, that's plenty. Of course, all of these maneuvers will have to be scheduled in advance. But that's just to beam data down. The sat will be able to receive messages in any orientation, day or night.
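To put rough numbers on that, here's a quick sketch of the estimate; the torque, moment of inertia, and spin rate below are made-up placeholder values, not figures anyone has quoted in this thread.

```python
import math

# Minimal sketch of the precession-time estimate above.
# All numbers are assumed placeholders, not values from this thread.

torque = 1e-6                       # N*m, assumed average magnetorquer torque
inertia = 2e-3                      # kg*m^2, assumed moment of inertia about spin axis
spin_rate = 2 * math.pi * 10 / 60   # rad/s, assumed 10 RPM spin

L = inertia * spin_rate             # angular momentum magnitude
phi = math.pi / 2                   # 90 degree reorientation of the spin axis

t_turn = L * phi / torque           # time with torque held normal to L
t_spinup = L / torque               # time to spin up to the same rate

print(f"turn time: {t_turn/3600:.2f} h ({t_turn/t_spinup:.2f}x spin-up time)")
```

The ratio comes out to pi/2 regardless of the numbers, which is the point of the estimate; the absolute time just scales with how weak the coils are.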
-
MBobrick, did you compute that with the assumption that you basically pack a certain volume/weight fraction with coil? That's... kind of insane for Q = 1. But it's nice to know what the upper limits are, at least. Well done. Keep in mind that we can't simply pack the whole thing with a single coil. We will need multiple coils with different directions. I was thinking of winding a coil along each of the faces. That would give the sat 3-axis control with some redundancy. I was also looking at a much lower weight fraction. Finally, 10W is definitely doable in bursts, but it will cook the experiment if used for any significant amount of time, or require a lot of extra power to pump that heat out. All in all, you can probably see how that 1 minute quickly becomes closer to 1 hour for something a bit more practical. We'll have the magnetic field and GPS to work with even if the sat is totally blind. The separation is up to the launch provider. They usually have launch tubes that simply kick out 1-3 cubes when the target orbit is reached.
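For a sense of scale, here's a rough face-coil sizing sketch along those lines; every number in it (turns, resistance, power, field strength) is an illustrative assumption rather than anything from MBobrick's calculation.

```python
import math

# Rough magnetorquer sizing sketch for a single face-wound coil.
# All values below are illustrative assumptions.

n_turns = 200            # assumed turns per face coil
side = 0.10              # m, CubeSat face dimension
area = side * side       # coil area
power = 0.5              # W, assumed steady power into one coil
resistance = 30.0        # ohm, assumed coil resistance
b_field = 30e-6          # T, typical field magnitude in LEO

current = math.sqrt(power / resistance)     # I = sqrt(P/R)
moment = n_turns * current * area           # magnetic dipole moment, A*m^2
torque_max = moment * b_field               # best case, moment perpendicular to B

print(f"I = {current*1e3:.0f} mA, m = {moment:.3f} A*m^2, "
      f"max torque = {torque_max*1e6:.2f} uN*m")
```

Plugging a few micronewton-meters of torque back into the precession estimate above is what turns "1 minute" into "closer to 1 hour".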
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
The Alcubierre Drive corresponds to a specific geometry dictated by the Alcubierre Metric. It is based around a spherical warp bubble. For this particular geometry, the bulk of the energy density is located in a ring around the ship. This doesn't mean that the drive has to be ring-shaped. In fact, nobody's quite certain what you need in order to put that ring-shaped energy density there. But a ring-shaped drive seems intuitive because of that. Can you have a different shape? In principle, yeah. There are infinitely many possible configurations. Alcubierre's was just the first one, and nowhere close to being the most efficient. As you've pointed out, elliptical configurations are more common in modern research, since they require less energy. They still result in a ring of energy around the ship, though. But that doesn't mean that the only efficient configurations will result in a single ring. -
Most of the plasma is generated in the ionosphere, while most of the braking happens much lower. It'll help, but it's cheaper and lighter to just add a bigger heatshield.
-
Yup. One of the main functions of the flight computer will be looking after the spin axis. This will be accomplished with a suite of sensors and magnetorquers. We might be able to get additional info from the media computer and its camera. Sounds good. I could find creative uses for that extra memory, too. Like, writing to multiple chips at once and having RAM error-checked that way. You can use this to quickly flip the craft, but not change the actual axis of rotation. Conservation of angular momentum means that you need external torque to change the rotation axis. Without RCS, magnetic torque is by far the most significant, and even that will take a while. demonstrates the concept very well. Since the flight computer will be looking after rotation, we could make it intentionally unstable, and quickly re-orient the cube relative to the axis of rotation, but that doesn't seem to be terribly helpful if the axis itself stays put. It's also going to be trouble if we have deployable panels that we cannot retract prior to such a maneuver.
-
All of the ones I've found are like that. Which matches the specs of the original 8051. So yes, rad-hard RAM would be nice. We definitely need PROM that can stand up to radiation. Something old-school, probably. Modern EEPROMs tend to be pretty rad-sensitive. Power isn't an issue, though. I'd write the low-level routines in machine code directly, so it will be plenty powerful enough to do everything it needs to. Not agile at all. For large angle adjustments, think more in terms of hours than minutes. The field is pretty weak, and anything that generates substantial torque from it would be too heavy. We'll need either an omnidirectional antenna, or multiple dipoles. Might be possible to switch between dipoles as the thing spins, though? That could save on power. But it's likely to create dead zones, so omnidirectional might be better despite the horrible waste.
-
Just came back from Eastern Europe (long trip), so my brain's a bit of a mess. MBobrick, I'll go through the numbers on the transceiver. Unfortunately, I have some blanks in that area, but I can do some rough estimates to at least give a ball-park confirmation. Yes, both the Pi and the flight computer will need direct access to the transceiver. In terms of pre-programmed actions, I think I'd like to have a job queue of some sort that new messages are added to. That way, we can pre-queue a bunch of reports, so if something happens to the receiver, we have the sat reporting in every once in a while. But that's something we can look at much later on, when we have a hardware mockup and start playing with software. Speaking of, I think I'd like to add an optional 3rd CPU to the list. I have some PIC MCUs I've worked with before that are fantastic for doing USB communications. I want to have a slot for one on the board. That slot will be empty in the live sat, but the PIC sitting there will be able to mimic all of the sensor input. That way, we can do full hardware simulation on the live board. Finally, on the power requirements. I still have one big item to account for, which is thermal regulation. I'll need a few days to go through all of the configurations we've talked about and come up with the budget. I have a feeling that this will be the biggest "always on" draw. Naive estimates result in much too much power required to prevent the sat from freezing during the 45-minute "night". But we do have spin-stabilization on our side. I'm going to play with different albedo and insulation schemes. I have a pretty good idea what to do with the folding panels configuration to keep it at constant temperature, but it's also the one that has the most power to spare. I'll have to go through the other options to see if they are even viable. Expect a full report some time later this week.
-
I would like to support C/S band, but that requires an expensive transceiver and a proper tracking station. If we get the budget for it, it would be nice. A good stretch goal? But we need to plan to make do with UHF. That will work as a fallback in either case. Cannot hurt to plan support for broadband ops on the media CPU, though.
-
Is that GPU good for general computations, though? Is there something like CUDA that could be used with it? At any rate, I am not sure what we need all that power for. Well, other than video compression. The basic ability to use the ARM as an FPU is nice. But since the primaries must be capable of running independent of the co-processor, all flight nav and control will probably be fixed point anyhow. So applications of these capabilities are limited. But I am sure we can find some secondary function for them. I second optocouplers. Power, there are options on. A transformer requires a converter. I would rather just fuse it, and maybe drop in a low-pass for interference. The camera need not be rad-hard for the same reason that the Pi does not: not mission-critical, and redundancy is easier/cheaper. And off-the-shelf cameras with JPEG support are cheap and plentiful. Anyhow, I will now look more into Pi/ARM/other options for the "media" co-processor.
-
This would be secondary. Not critical.
-
I was picturing this with a camera that has an integrated compression chip. The 8051 would only have to grab compressed data and package it. But a co-processor approach has merit. So long as it is only powered on for media operations, and no mode of failure, including a total short, can fry the whole sat. In that case, might as well have video as an option, as well as a whole bunch of other features. Is the Pi the best option, then? I agree that it should be ARM-based. But a custom solution would allow for better resource cross-use. If you want two CPUs, for example, you definitely do not need two boards. But I do appreciate the simplicity of simply grabbing a board and libs.
-
Why the half-measures? Might as well go with Chlorine Trifluoride.
-
Shannon-Hartley. We won't have a clean enough signal to allow for that sort of bitrate. The sat will be directly overhead for seconds. Most of the data would be beamed over distances of hundreds of kilometers, using a HAM-licensed transmitter or similar. We'll have to use error correction as it is, which tends to be complicated with general compression algorithms. The great thing about JPEG is that the 8x8 pixel blocks are essentially independent. Each one is Huffman-coded, but I plan to pack each into its own packet. Packets and essential data will be guarded with error correction codes. The actual JPEG data will not be, to save bitrate, but like I said, worst case scenario, it's an 8x8 block that's missing, not an entire stream. (If you've ever seen a JPEG image that just turns into garbage at a particular line, that's what happens when raw JPEG data is off by even one bit.) I am not aware of any way we could handle a video stream like that without going to a custom solution. A 50-100kbps stream, which we'd be lucky to get, is already next to useless. Take one of these that's corrupted on pretty much every frame, and it is useless.
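As a sanity check on that bitrate ceiling, here's a minimal Shannon-Hartley estimate; the bandwidth and SNR are assumed values just to show the shape of the calculation, not link-budget numbers for our actual hardware.

```python
import math

# Back-of-the-envelope Shannon-Hartley check for the UHF downlink discussed above.
# Bandwidth and SNR values are assumptions for illustration only.

bandwidth = 25e3        # Hz, assumed narrow UHF channel
snr_db = 6.0            # dB, assumed signal-to-noise ratio over most of a pass

snr = 10 ** (snr_db / 10)
capacity = bandwidth * math.log2(1 + snr)   # bits per second, theoretical ceiling

print(f"channel capacity ~ {capacity/1e3:.1f} kbps")
# Real throughput sits well below this once error correction
# and framing overhead are taken out.
```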
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Assuming operation in vacuum, inviscid flow, and a massless bell, you want it running to infinity to get 100% of the thermal energy turned into thrust. In practice, the gains start getting insignificant pretty fast. But this "infinite bell" approach lets you compute the absolute maximum ISP that a particular fuel mixture can have. Simply find the amount of heat produced, and assume that every molecule of exhaust has equal energy. Then compute total impulse per mass of material. If you do this for real fuels, you'll find that this maximum is somewhat optimistic. (I believe it works out to well over 500s for LH2/LOX. Don't feel like doing the math right now.) The main reason is that the exhaust does not have time to fully thermalize between the degrees of freedom, and that's because practical limitations require one to cut the nozzle short. Of course, there is some turbulence and drag as well that reduce efficiency. Operation in atmosphere is a different story. Not only does extending the bell past a certain cutoff not give you extra thrust, it actually reduces it. Which is why engines designed for operation in atmosphere are so much shorter.
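If anyone wants to check the LH2/LOX figure, here's the "infinite bell" ceiling worked out as a quick sketch; the heat-of-reaction value is an assumed textbook-style number for a stoichiometric mix.

```python
import math

# Sketch of the "infinite bell" ISP ceiling described above: assume all of the
# heat of reaction ends up as directed kinetic energy of the exhaust.
# The heat-of-reaction figure is an assumed value for stoichiometric LH2/LOX,
# quoted per kilogram of propellant.

heat_per_kg = 13.4e6        # J/kg, assumed chemical energy release
g0 = 9.80665                # m/s^2, standard gravity

v_exhaust = math.sqrt(2 * heat_per_kg)   # all heat -> kinetic energy
isp_max = v_exhaust / g0                 # theoretical ceiling, seconds

print(f"max exhaust velocity ~ {v_exhaust:.0f} m/s, max Isp ~ {isp_max:.0f} s")
# Real engines land well below this because the exhaust never fully
# thermalizes and the nozzle has to be cut short.
```
-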
For external cameras? I don't know. It really depends on what we want to capture with it. It'd be purely for cool points, so whatever we all decide would be more interesting to have. A good-looking ordinary still, or a wide-angle shot. Given LEO and it being unlikely that we'd be able to get a very high quality sensor up there, we're pretty much limited to taking snaps of Earth's surface.
-
And mixes, as we all know, have much broader temperature ranges, which is probably one of the big reasons for that fuel mix.
-
How accurate is the KSP interstellar Alcubierre Drive?
K^2 replied to SpaceLaunchSystem's topic in Science & Spaceflight
*Sigh*. Nobody is saying that the worlds of the MWI are real. Nobody cares if they are, either, because they are impossible to interact with by definition. Again, the only use of a theory is its predictions. Quantum Mechanics has the most precise predictions out there. Quantum Mechanics predicts that time paradoxes get resolved in a certain way. MWI is merely the simplest way to understand these. But if you are an armchair philosopher who has no grasp of what a theory is or what the value of interpretations is, let alone any actual scientific understanding, then it won't make any difference. You will not understand how any of it works, because all you do is use your own ignorance as a shield. Which looks really silly to everyone else, I can assure you. -
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
In order to make an invisibility cloak/shield that works in air/vacuum, which have optical density near 1, you need a material with optical density less than 1. In other words, a material in which the speed of light is greater than the speed of light in vacuum. Fortunately, you only care about phase velocity for optical properties, not the group velocity. And so metamaterials with such a property do exist. Unfortunately, nobody has managed to make them into something that can actually be shaped into a solid object. So invisibility cloaks remain firmly in the realm of science fiction. There are plenty of people working on this, however, and some aerogel approaches are promising. If you happen to have a medium with a high index of refraction, like water, the problem is much easier. There are plenty of plastics with a lower index of refraction, and transparent spheres that make their contents invisible when submerged under water do exist. Don't know why they aren't commercially available, to be honest. Would be a cool thing for hiding stuff in aquariums. -
Picosatellites have even shorter lives than cubes. But again, it's a question of odds. I don't see a need to go for a different type of CPU, and with an 8051-equivalent, we can go rad-hard. Why not cut our risks? The camera isn't a single point of failure. We don't expect it to live very long, but we don't need it to. It's also easy to have some redundancy. Little reason not to have two front-facing cameras, for example. My complaint was that using a camera specifically designed for the Pi is a bad idea if it limits us to the Pi. Because we can't get a rad-hard Pi. But if we can access it from whichever rad-hard CPU we end up choosing, then I'm fine.
-
That has not occurred to me at all. We're going to be lucky to get 50kHz out of what we can reasonably use for comms. That isn't enough for a video feed. Period. So why would we want video compression, if we can't send even compressed video? Earlier in this thread we've established that over the life span of a CubeSat launched from an ISS-like orbit, the odds of the CPU going dead are significant. Depending on space weather, we might get unlucky and lose it within days. Now, do you really feel like risking tens of thousands of dollars worth of equipment, not to mention launch costs, turning into an orbital paperweight because we went cheap on the CPU? We can buy a rad-hard version of the 8051 for $1.3k. That's cheaper than each solar panel is likely to cost us. The 8051 is also a breeze to work with. It has a simple enough instruction set, but has more than enough power for what we need. The CPU just needs to decode instructions received from Earth, manage attitude and sensors, and occasionally beam down data. Among other advantages are low power consumption, just 125mW, and the fact that garden-variety 8051s can be bought for less than $5 each, so we can burn through a dozen of them during early prototyping, or sacrifice a few for any sort of testing we might have to go through.
-
Video compression is useless. We won't have the bit rate for it. And we need a rad-hard option. I don't think there is one for Pi. There are plenty of camera options that will not require a specific CPU.
-
I want to compare the precision of doing orbits in Newtonian vs Hamiltonian formalism. Newtonian is trivial integration of all forces along the trajectory. I should probably refocus on that so that we get some basic functionality sooner. But the more interesting case is Hamiltonian. The reason for that is that thanks to Canonical Transformations, it's possible to treat forces as perturbations on a central potential, a lot like the analogous procedure in Quantum. Unfortunately, in Classical, it's way more algebra.

The basic idea is that we write the central potential problem: H0 = p²/(2m) - μ/r. Then we add in other forces, including those due to gravity not being perfectly spherical, as a local potential: H = H0 + U(r). The real trick is that instead of using the original coordinates, we choose a set of generalized coordinates and momenta Qi, Pi, such that ∂H0/∂Qi = 0 = ∂H0/∂Pi. In other words, Qi and Pi are constants in the unperturbed Hamiltonian. These are the constants of motion for a particular orbit. Starting in spherical coordinates, these work out to be (via the Hamilton-Jacobi Equation) time of periapsis passage (T), argument of periapsis (ω), longitude of ascending node (Ω), energy (E), angular momentum (L), and z-component of angular momentum (Lz). These are easily related to standard Orbital Elements. But the super nifty part is that in the perturbed Hamiltonian, these are no longer constants. And they are governed, as one would expect from Hamilton's Equations, by dQi/dt = ∂U/∂Pi and dPi/dt = -∂U/∂Qi. Naturally, we don't know the exact shape of U, but we do know the forces. In other words, ∂U/∂r = -Fr, ∂U/∂θ = -r Fθ, ∂U/∂φ = -r sinθ Fφ. Chain rule insanity ensues. I've been able to verify that the dPi/dt equations do work out to be the work and torques, as appropriate. What I need some sanity checks on are the equations for the Qi elements.

Ok, so why all of this? The disadvantage, clearly, is complexity. Both in derivation and in computation. That's going to contribute to errors. However, by far the largest force that has to be integrated over in the Newtonian formalism is gravity. And by far the strongest component of that is the spherically symmetric part. And the perturbations approach takes that out completely. So while the computations are heavier, we are integrating over much, much smaller corrections. So I expect overall precision to be dramatically improved. I'll make a write-up of all of this with details in the near future. I'd be happy if people to whom it isn't all Greek would take a look and see if there are any obvious gremlins in my math.
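For reference, here's a minimal sketch of what the Newtonian baseline looks like: direct integration of the full gravitational acceleration with a hook for perturbing forces. The integrator choice, step size, and test orbit are illustrative assumptions, not part of the actual comparison code.

```python
import numpy as np

# Minimal sketch of the "Newtonian" baseline mentioned above: direct integration
# of all forces along the trajectory, here just the central mu/r^2 term plus a
# placeholder perturbation hook. RK4 and the example orbit are illustrative
# choices, not anything specified in the thread.

MU = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter

def accel(r, v):
    """Central gravity plus any perturbing accelerations you care to add."""
    a_central = -MU * r / np.linalg.norm(r) ** 3
    a_perturb = np.zeros(3)   # e.g. J2, drag, third bodies would go here
    return a_central + a_perturb

def rk4_step(r, v, dt):
    """One classical Runge-Kutta step for the state (r, v)."""
    def deriv(state):
        rr, vv = state
        return vv, accel(rr, vv)

    k1 = deriv((r, v))
    k2 = deriv((r + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]))
    k3 = deriv((r + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]))
    k4 = deriv((r + dt * k3[0], v + dt * k3[1]))

    r_next = r + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v_next = v + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return r_next, v_next

# Circular-ish LEO as a test case: energy drift here gives a feel for the
# error that the perturbation (Hamiltonian) formulation is meant to avoid.
r = np.array([6778e3, 0.0, 0.0])
v = np.array([0.0, 7668.6, 0.0])
e0 = 0.5 * v @ v - MU / np.linalg.norm(r)

dt = 10.0
for _ in range(int(5400 / dt)):   # roughly one orbit
    r, v = rk4_step(r, v, dt)

e1 = 0.5 * v @ v - MU / np.linalg.norm(r)
print(f"relative energy drift over one orbit: {(e1 - e0) / abs(e0):.2e}")
```

The point of the comparison is that in the Hamiltonian version the big central term never enters the integration at all, so the same step size should buy considerably more precision.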
-
Game Maker isn't bad, but you are still better off using a proper engine. The learning curve with something like Unity is only slightly steeper (it's very user-friendly, and there are tons of tutorials), but what you get out of it is so much more. Your first game will probably be equally buggy and iffy-looking with Game Maker or Unity. But the experience you'll get from Unity is far more valuable. As I've pointed out, many real game developers use Unity. And it's not just indie studios, like our favorite Squad, but even giants such as Blizzard. Hearthstone is a Unity game. It has a lot of flourish and custom libraries to make it work, but underneath it all, it's just a lot of polish on the same engine you can use at home to make your own game.