Everything posted by K^2
-
I need someone help me do some math for launch optimization
K^2 replied to SaturnV's topic in Science & Spaceflight
Yes, it's basically Pontryagin without discount. What I didn't realize is that a) this is equivalent to the Lagrangian approach, and b) that it resolves a lot of the issues with the latter. Sorry, I missed the part of the discussion where you talked about that. This was new to me, so I thought I'd share. There are still a lot of problems with it, and I'm starting to get a feel for why. Usually, if you do reverse shooting with Pontryagin, you start near the steady-state solution and evolve back. This problem doesn't have a (useful) steady state. And it also has a lot of regions where it's really, really unstable. So I can't find good boundary conditions for forward or reverse shooting.

I do have a completely different sort of result, however: an analytic approximate solution for optimal 2D ascent with the KSP drag model. There are a few main assumptions. Acceleration of the craft should be small, the centrifugal effect negligible, and changes in gravity, mass of the ship, and efficiency of the engines are neglected. This is going to hold moderately well up to about 20km, but will get rather bad from there on. The solution requires the craft to move at terminal velocity regardless of angle, with the angle itself satisfying the following equation.

cos(θ) = c exp(y/(2h))/sqrt(1 + sin(θ))

Here, y is the altitude, h is the scale height, and c is a constant that determines the boundary conditions. So for example, if you intend to have θ = 45 degrees at 20km, then c = 0.125. Angles at other altitudes can be easily solved for with the power method. (It converges rather well for this formula.) So for the same 45 degrees at 20km example, the following is the chart of ascent angles with altitude. (Apologies for Excel.)

This is pretty consistent with typical ascent profiles. The question is, of course, what sort of angle the rocket should be holding at the upper bound of this approximation to do an optimal ascent. I picked 45 degrees pretty much at random for this graph. Though it should be in the ballpark, at least. With the correct figure, this profile can be patched together with other approximations for higher velocities.

Edit: Looking at some of alterbaron's results, 20km was probably too generous. Centrifugal effects start to alter effective terminal velocity by then. But up to about 10-15km, all of the assumptions seem reasonable.

Edit2: I finally got around to checking the above solution against the Euler-Lagrange equation. As suspected, it's extremely close to the optimal ascent up to about 10km, and then rapidly gets worse thereafter.
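For anyone who wants to reproduce the chart, here is a minimal sketch of the fixed-point iteration the post calls the power method. I'm reading θ as pitch above the horizon (which reproduces the near-vertical start at sea level), and using h = 5000 m and the c = 0.125 boundary condition from the 45°-at-20km example; the function name and defaults are mine, not from the thread.

```python
import math

def ascent_angle(y, c=0.125, h=5000.0, theta0=math.radians(45), iters=60):
    """Solve cos(theta) = c*exp(y/(2h))/sqrt(1+sin(theta)) by fixed-point iteration.

    y: altitude in metres, h: scale height, c: boundary-condition constant.
    Returns theta in radians, read here as pitch above the horizon.
    """
    theta = theta0
    for _ in range(iters):
        rhs = c * math.exp(y / (2.0 * h)) / math.sqrt(1.0 + math.sin(theta))
        theta = math.acos(min(1.0, max(-1.0, rhs)))  # clamp to keep acos defined
    return theta

if __name__ == "__main__":
    for km in range(0, 21, 2):
        print(f"{km:3d} km -> {math.degrees(ascent_angle(km * 1000.0)):5.1f} deg")
```

With these numbers the iteration settles within a few steps, giving roughly 85° at sea level and 45° at 20km.
-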
Calculating Aerobraking/Captures with Math
K^2 replied to LexiSilva's topic in Science & Spaceflight
The only reliable ways I've seen involve integrating forces over the trajectory. There might already be some tools on the net for doing this for KSP. There is no simple formula you can just plug numbers into.

Edit: Here, I'm talking about computing the actual aerobraking maneuver only. For computing how much delta-V you need to shed, see PakledHostage's answer below. There are simple formulae for periapsis/apoapsis velocities of hyperbolic and elliptic orbits.
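As a pointer to those "simple formulae", here is a minimal vis-viva sketch. Kerbin's gravitational parameter and the example radii below are my own assumptions for illustration, not figures from the thread.

```python
import math

MU_KERBIN = 3.5316e12  # m^3/s^2, Kerbin's gravitational parameter (assumed value)

def visviva_speed(r, a, mu=MU_KERBIN):
    """Speed at radius r on an orbit with semi-major axis a (vis-viva equation).
    Use a < 0 for a hyperbolic orbit."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Example: elliptic orbit with 700 km periapsis radius and 3000 km apoapsis radius
rp, ra = 700e3, 3000e3
a = (rp + ra) / 2.0
print(visviva_speed(rp, a), visviva_speed(ra, a))  # periapsis / apoapsis speeds
```
-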
Stealthy Super Hornets in the pipeline?
K^2 replied to MaverickSawyer's topic in Science & Spaceflight
Where are you getting that from? This sort of thing works for AEWs and SAM sites, because they have the range and radar power to either get out of the way or shoot you down first. For an air-superiority fighter, active radar is just an invitation to be shot down. Most modern systems will let you lock onto the enemy's active radar without turning your own radar on at all. The active radar on the missile only kicks in when it has closed most of the range or if the radar it was homing on goes silent. Flying with active radar is a way to get shot down with minimal warning. Hell, pilots started flying with active radar off before they even had passive radars to fall back on, for this very reason.

If they're going in for the kill, it's going to be seconds between the bays opening and the missile's active lock coming alive. Seconds can be valuable too, but it doesn't make that much of a difference in this case. If a stealthy fighter snuck up to within passive radar range, it got its advantage, whether it gets detected opening bay doors or not. For starters, a weak signature on radar that you can't get an NCTR on is not much to go on. If you get Doppler off it, you might recognize it right away as a hostile fighter. Otherwise, you have to track its range for a while, which can take several seconds as well if it's not heading straight towards the radar. In the end, the RWR will still be the first thing the defending side reacts to. So you have the offensive, you have the first missiles in the air, and you are still flying a stealth aircraft with a better ability to shake the missiles.

See, now we've gone from one-on-one combat to AF-on-AF, with full support. Which is good. That's the only way a real combat between F-35 and Typhoon can happen. But you are neglecting the advantages of networked stealth aircraft. Yes, the moment an F-35 fires, or perhaps even as soon as it opens its doors, it reveals its position. Now it has to run and hide. The difference is that it's networked with any number of other F-35s which now have all the same targets as the first one did, but which aren't on enemy radar yet. While the Typhoons are reacting to the one jet and one attack they can see, the F-35s are adjusting their position based on that reaction for a second attack. And more importantly, they don't have to run, because they aren't being fired on. You can use this advantage to either try and trap the air-superiority fighters, or simply push them out of the zone of interest long enough to hit whatever ground targets you need and pull out. That's the modern stealth advantage.

Fire-and-forget isn't a magic bullet. Granted, you can be facing completely the wrong way, fire the missiles, and they'll turn around and head where they need to go. But they don't exactly turn on a dime. It depletes the missile's range and gives the enemy a solid head start on running away. Given that the F-35 would have fired its missiles already, and already be turning about for a run, you are at a huge disadvantage. I mean, granted, if you are going to run right away, you might as well go ahead and fire. No reason not to. But again, we aren't talking about anything like 50-50 here. The F-35 got you in its sights from passive range and lined up its shot. You might get lucky with fire-and-forget. Switching to active radar, lining up your shot, and firing your missiles then will improve your odds of hitting that F-35 dramatically. But it also screws up your chances of escaping. Either way, a huge advantage to the stealthy aircraft once more. -
Stealthy Super Hornets in the pipeline?
K^2 replied to MaverickSawyer's topic in Science & Spaceflight
I don't know where you are getting your information from, but this is absolutely wrong. It might have come from some lop-sided study putting the two under "equal conditions", but that is absurd. The two are never going to fight under equal conditions.

In a real combat situation, the F-35 is going to fly with passive radar and in full stealth. Now, with CAPTOR in active mode, the Typhoon can indeed pick up an F-35 from a good range. Unfortunately, flying with active radar is like flying with a bull's eye painted on you. The F-35 will be able to pick up and lock onto that radar long before it's within the radar's actual detection range. Some of the newer versions do have a passive mode. However, it's nowhere near as good, and when you take into account the cross-section difference, it's not even a competition. The F-35 will be able to lock and fire before it is detected. Fact is, the first the Typhoon pilot will know of the F-35's location will be from an RWR warning.

Now, this does, in principle, create an opening. With the active-lock alarms going off, the pilot should, without delay, switch the radar into active mode and get a lock on the F-35 before it closes its bays. Since the Typhoon is within the F-35's passive range, CAPTOR in active mode should be able to maintain lock on the F-35 even after it closes its bay doors and goes full stealth again. Now, provided that the F-35 pilot doesn't do anything to shake that lock, the Typhoon pilot can turn as necessary and fire back on the F-35. All of this while two AIM-120 AMRAAMs are already on the way. Keeping in mind that the low cross-section and passive mode make the F-35's countermeasures far more effective, the F-35 pilot will actually have a good chance of escaping this counter-attack. The Typhoon's pilot, in contrast, must sacrifice valuable seconds he needs for escape, and furthermore make himself a much easier target by going active, in order to counterattack. So given a suicidal Typhoon pilot and the best possible situation of 1-on-1 combat, this is not even close to 50-50. Given that very few pilots are even going to think about a counterattack before doing evasives, and that F-35s are going to operate at least in pairs, where they have a huge radar advantage, this whole 50-50 thing is just absurd.

Of course. That's a problem with absolutely any radar-guided missile. But when you first announce yourself on RWR, that's a hell of an advantage. Like you said, it's going to be all about turning and trying to outrun missiles. So when your RWR lights up, do you first check your radar, establish a lock, fire back, then run, or do you start running right away? -
Stealthy Super Hornets in the pipeline?
K^2 replied to MaverickSawyer's topic in Science & Spaceflight
This rationale only makes sense in Congress. In reality, it cost a lot of money, hurt military readiness, and set us back a decade if we do decide to replace it with something functional and future-proof. Of course, when the only air force that can compete with the USAF belongs to the US Navy, things aren't all that bad overall. -
Stealthy Super Hornets in the pipeline?
K^2 replied to MaverickSawyer's topic in Science & Spaceflight
To be fair, the F-35 was never meant to be as good an air-superiority fighter as the F-22. It only needs to be capable of dealing with MiGs and Sukhois, which it can do, so long as it stays out of their range. But as far as cost/quality goes, it's definitely an expensive turd. I just wouldn't blame the guys at Lockheed for it. The specs from the military were silly to begin with. It really is an example of what happens when you design a vehicle by committee.

All this really tells us is that this particular project failed. The important bit here is that for there to be a failed project, there had to be an interest. And if there were several competing projects, which is likely, they might still have the selected project under wraps somewhere. Unless, of course, the F-35 was that competing project, in which case somebody somewhere made a bad call.

The F-35 can target and fire on a Typhoon before it is a blip on the Typhoon's radar. The Typhoon also won't be able to go up without casualties against the newer Sukhois. It's all about BVR, and if you aren't showing up on the enemy's radar while you have them ready for a lock, it doesn't matter that your fighter has terrible flight characteristics. They'll be eating your missiles. -
A note on chaos. Even if the human brain is subject to chaos, all it means is that we can't make a simulation that, when running alongside the real brain and receiving all the same stimuli, produces an identical thought process. But that's not necessary. Look at it this way. If all of the aspects of your persona were subject to chaos, then everything you do or think would be completely random. Even if that were the case, which it obviously isn't, there would be no need to simulate a particular person, because all people would be completely random and indistinguishable anyhow. Of course, we can note certain aspects of personality in each individual, which tells us that the core aspects of what makes a person that person are not subject to chaos. Sure, when you wake up in the morning and decide which pair of socks to put on, that might be subject to chaos, and the simulation will make a different choice. But does choosing one pair or another make you a different person? Of course not.

So we are looking at two aspects here: the exact thoughts and choices, which might be subject to chaos, and the broader personality traits that make critical decisions, which are not. The latter can certainly be simulated with finite resources. And that's good enough for us. The only pitfall I can imagine with an artificial brain is that it might end up not being subject to sufficient chaos, and be too deterministic and predictable. But this is very easy to fix by introducing a bit of numerical noise into the neural network simulation, until the simulation behaves the same way as the real thing, qualitatively speaking.

The final point is that while both classical chaos and quantum uncertainties might play a role in the thought process, neither is an obstacle to simulating the human brain. These effects can be replicated, and the fact that they prevent us from making an exact replica just says that we are truly simulating a personality, and not just carbon-copying a working mind.
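To show where that "bit of numerical noise" would enter, here is a deliberately stripped-down toy sketch. The neuron model, names, and constants are entirely hypothetical and only illustrate injecting a small random perturbation into an otherwise deterministic update.

```python
import random

def noisy_neuron_step(potential, input_current, leak=0.1, noise_sigma=0.02):
    """One toy leaky-integrator update with a small Gaussian perturbation added.
    Everything here (the model, the constants) is illustrative only."""
    potential += input_current - leak * potential   # deterministic part
    potential += random.gauss(0.0, noise_sigma)     # the "bit of numerical noise"
    return potential
```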
-
Stealthy Super Hornets in the pipeline?
K^2 replied to MaverickSawyer's topic in Science & Spaceflight
Makes sense. Low radar cross-section is the name of the game in modern air combat. While the US was the only one making extensive use of the tech, it wasn't critical, but Russia and China have jumped on board, and it's not going to be long before third-world air forces can afford to buy a few such planes. It'd be a shame to have to retire such a magnificent F/A as the Super Hornet just because of its cross-section when all it needs is a face lift. -
Reaching a target orbit with control of nothing but thrust direction
K^2 replied to Zander's topic in Science & Spaceflight
If TWR is sufficiently low, burning continuously prograde puts your ship on a gentle spiral, which carries you from one circular orbit to another. No need to invent anything there. If, however, there is an eccentricity change involved, you'll need a more clever control algorithm. But it is doable. So have fun with it.
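Here's a minimal sketch of that gentle spiral: a 2D point mass with constant low thrust along the velocity vector, integrated with semi-implicit Euler. The gravitational parameter (roughly Kerbin's), the starting radius, and the thrust acceleration are all illustrative assumptions.

```python
import math

mu = 3.5316e12          # m^3/s^2, roughly Kerbin's gravitational parameter (assumed)
r0 = 700e3              # start on a circular orbit at 700 km radius
x, y = r0, 0.0
vx, vy = 0.0, math.sqrt(mu / r0)   # circular orbit speed
accel = 0.05            # m/s^2 prograde thrust acceleration -> very low TWR
dt = 1.0

for step in range(10_000):
    if step % 1_000 == 0:
        print(f"t={step*dt:7.0f} s  r={math.hypot(x, y)/1000:7.1f} km  "
              f"v={math.hypot(vx, vy):6.1f} m/s")
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    ax = -mu * x / r**3 + accel * vx / v   # gravity + thrust along velocity
    ay = -mu * y / r**3 + accel * vy / v
    vx += ax * dt
    vy += ay * dt
    x += vx * dt                            # semi-implicit Euler keeps the orbit stable
    y += vy * dt
```

The printed radius creeps steadily outward while the speed tracks the local circular velocity, i.e. the orbit stays nearly circular as it spirals out.
-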
I need someone help me do some math for launch optimization
K^2 replied to SaturnV's topic in Science & Spaceflight
Sure, if you can pastebin it, I'd appreciate a look.

This figure also has some good pedigree in optimization theory. A rule of thumb for minimal delta-V is orbital velocity + drag/gravity losses on a vertical ascent (4gH/vt), and the latter works out to be just shy of 2km/s for Kerbin. So any optimal ascent method should show something in the neighborhood of 4.3km/s to orbit. If you are getting much more than that from the method, then it's not converging properly. There could be local minima in this problem that can throw off even a good optimization scheme.
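For reference, a quick numeric check of that rule of thumb. The scale height, sea-level terminal velocity, and orbital speed below are my assumed Kerbin-like values, not figures from the thread, but they reproduce the numbers quoted above.

```python
# Back-of-the-envelope check of the 4*g*H/v_t rule of thumb.
g, H, v_t = 9.81, 5000.0, 100.0     # assumed Kerbin values (old drag model)
losses = 4 * g * H / v_t            # drag + gravity losses on a vertical ascent
v_orbit = 2300.0                    # rough low-Kerbin-orbit speed, m/s (assumed)
print(losses, v_orbit + losses)     # ~1960 m/s and ~4.3 km/s
```
-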
We are taking apart a dead brain here. Electrical potentials are at ground level. There is no way to do any of this in vivo, so it's not something we can get around. Fortunately, this only resets the current thought process. And as we've discussed earlier, even getting just the topological information would be sufficient for recovering all of the personality, skills, and most of the memories. It'd be equivalent to waking up from severe head trauma, but that's acceptable if the other alternative is plain death.

Well, based on your numbers I need snapshots with 5nm resolution. An electron microscope can be built to be entirely digital, none of that archaic phosphorescent stuff. Effectively, we can use a modern digital camera matrix almost without modifications. These things have about 2k pixels packed per cm. So a single 1x1 cm cell can image a 10x10 micron area of the brain. That isn't much, but since we can control the electron energy as precisely as needed, at least 10kHz is doable on all of this. So a single cell manages to give us 1mm² per second. To give us some room for error, let's say we cover a 20x20 cm area. That's 400 camera sensors and steering/focusing magnet assemblies to image a single layer at 5nm resolution in one second. 2×10^7 layers, 2×10^7 seconds, and we're done in well under a year. All based on existing technology, with a system that's effectively just 400 modern electron microscopes tied into one machine. And we can probably use the same electron beam to evaporate off layers.

So again, when I say "a matter of scale," I do mean just a matter of reasonable scale. I do make these mental estimates any time I throw out an approach or a figure. I suggest you learn to do the same.
-
Sure. As soon as you tell me how you plan to slice the brain into 6000 layers without disturbing anything at the interface and losing information on crucial connections. Any known method of slicing is going to cost you the loss of fractions of a micron at the very best. We are talking physical limits here. If you wanted an example of something that's a conceptual problem, rather than an engineering one, here it is.

Electron microscopes don't exist? Or are we talking about one with a sufficiently large matrix? Because that's just a matter of scaling. When I say an engineering problem, it literally means we just need to sit a bunch of engineers down, and they'll put together a project. Just want to be clear on that.
-
I'll let you cram these heads in every 0.5mm. That's already way smaller than any AFM/STM head I've ever used or seen. And we'll do that across the entire plane. We definitely want resolution on the nm scale, but if you insist that 1nm is too precise, let's do 5nm. Certainly we can't be doing worse than that. And that agrees well with your "every few nm" estimate. So with heads every 0.5mm, according to your numbers, we'll be moving across the entire row in 1s. Then we need to shift the heads across by 5nm and repeat. And again, we need to cover 0.5mm. That means that to scan a layer with 5nm resolution we need 10^5 seconds with this incredible AFM array. That's over a day. For one layer. Just 2×10^7 layers to go. You'll be done in, oh, 60 thousand years, give or take. Have fun with that. I'm going to stick with my electron microscope approach.

I keep trying to impress on you just how mind-bogglingly vast the number of points you need to sample is, and you keep giving me things that improve on it by factors of a few hundred. I mean, yeah, this is a huge improvement over trying to do this with a single head, but we're still talking geological time scales. You can try to compress these heads to 100 per square mm. I mean, that's almost as dense as pixels on your screen. (Or denser, if it's a large screen.) And we'd still be talking thousands of years to scan the whole thing. This is far beyond plausible for any sort of a project, and we've pushed the tech far past what can actually be built. I'm fine with it being a long project. But we should be talking years. Decades at the worst.

And it's not like we don't have the tech to do that. We can literally take a picture of an entire layer at once with an electron microscope. If you really need it to be 3D, there are techniques for getting relief using an electron microscope. That's not a problem. And unlike the AFM, we can make the matrix as large as we want, because the matrix can be much larger than the sample. If we have to break up the beam and image different portions of the plane with different matrices, that's fine too. We can do that. In fact, we can do almost anything we can do with an ordinary microscope and imaging, but at much higher resolution. An electron microscope is definitely the way to go here. It's just a matter of building one that's capable of imaging such large objects with such fine resolution in one go. Which would be ridiculously expensive, but it's purely an engineering problem. Not a conceptual one.
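The arithmetic in the first paragraph, spelled out with the numbers taken straight from the post:

```python
# Rough check of the AFM-array scan time quoted above.
head_pitch = 0.5e-3      # m, one head every 0.5 mm
step = 5e-9              # m, 5 nm lateral resolution
row_time = 1.0           # s to sweep a full row (figure from the post)
layers = 2e7

passes_per_layer = head_pitch / step          # 1e5 shifts of 5 nm each
layer_time = passes_per_layer * row_time      # ~1e5 s, a bit over a day
total_years = layers * layer_time / 3.156e7   # seconds in a year
print(passes_per_layer, layer_time / 86400, total_years)  # ~1e5, ~1.16 days, ~6e4 years
```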
-
You are picturing time in the context of GR completely wrong. The two important concepts are map time and proper time.

Map time is just a coordinate. It depends on your coordinate system of choice. It's not quite as arbitrary as, say, the "forward" direction in free space, but it can mix with any spatial coordinate to a great extent. So it's entirely up to the observer what to choose as the time direction.

Proper time is time along your world path. This is literally the direction in space and time that you are traveling in. And while you always travel at the speed of light, the direction in which you travel through space and time can change. Which, as an outcome, lets you cover a different amount of space per unit of time. Ergo, have a different velocity. But just as you can vary your velocity through space, your velocity through time also varies with that change of direction. And that's all there really is to time running slower or faster.

Now, as far as coordinate systems go, they aren't tied to any particular objects. We often choose coordinate systems related to some object, but they don't have to be. A larger, more massive object doesn't automatically get a "better", or "stronger", or "larger", or in any way whatsoever different coordinate system. You can choose to measure your coordinates with respect to a black hole or an electron just as easily. (Ok, so there are some quantum effects with the latter, but you can take the expectation value.)

Finally, time doesn't go "faster" near a light object far from the influences of gravity. Nor can you pick an object that's moving "slow". All speeds are relative. And so are the time dilations. Yes, if you are in deep space and you receive a signal from Earth, it will appear to you that Earth's time is running a touch slow. And that is, indeed, due to Earth's gravity. Gravity is absolute. But it doesn't mean that all of the universe is running slower than you. It's just the gravitational time dilation of Earth that's involved. Wherever you are, you are moving really fast relative to something in the universe, and from their perspective, it's your time that's really slow.
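To put a number on "a touch slow": the usual first-order weak-field estimate is dτ/dt ≈ 1 − GM/(rc²) for a clock at radius r relative to one far away. Standard constants; the calculation itself is just the textbook approximation.

```python
# Weak-field estimate of how much Earth's gravity slows its surface clocks
# relative to a distant observer: d(tau)/dt ~ 1 - GM/(r*c^2).
GM_earth = 3.986e14   # m^3/s^2
r = 6.371e6           # m, Earth's surface radius
c = 2.998e8           # m/s
print(GM_earth / (r * c**2))   # ~7e-10, i.e. roughly 60 microseconds per day
```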
-
You can simulate most of the conditions in a wind tunnel. It's not a replacement for flight tests, but it's the difference between going off figures on paper alone and basing your performance estimates on actual data. So when they get a working prototype on the ground, I'm willing to accept it as a working engine and not just a cool idea. As I've pointed out, I don't think Scimitar has much of a future, but I'm completely with you on SABRE. If they make it work, it has the potential to be a component of many future spacecraft, regardless of how well the Skylon does against the competition.
-
To image individual atoms, you need an AFM, which is going to scan the surface a few angstroms at a time. I don't think you comprehend just how long it would take to scan a volume. The Sun will be big and red before you're remotely done. The only way to get this done in half-reasonable time is taking full 2D snapshots at a time, and the only tech that can do that with sufficient resolution is an electron microscope. I just don't think we have any with a sufficiently large matrix to take a snapshot of an entire slice in one go. Even as is, you'll need to get through 10cm at about 1nm a go, which means you'll have to capture 10^8 images. At 100 images per second, it will take you two weeks. That should put it into a bit of perspective.
-
I need someone help me do some math for launch optimization
K^2 replied to SaturnV's topic in Science & Spaceflight
I've managed to advance on the analytical approach. This will probably require a numerical solver in the end, but it could still be the closest to a true answer.

Let's start by looking at the simple problem of purely vertical ascent. I'm going to ignore gravity variations with altitude, as well as variations in ISP, and even the change in mass of the ship during ascent. These can be added, but I want to keep the math clean for the purpose of explanation. What we are after is minimizing the use of fuel, which is equivalent to minimizing the integral of thrust.

∫ [g + k(y)y'(t)² + y''(t)] dt

Here, k(y) is the drag coefficient at altitude y, such that sqrt(g/k(y)) gives the terminal velocity at altitude y. And since this is KSP, I've set m = 1. Plainly, this is an Euler-Lagrange problem with L = g + k(y)y'(t)² + y''(t). The solution, if it exists, must satisfy the Euler-Lagrange equation.

∂L/∂y - (d/dt)(∂L/∂y') + (d²/dt²)(∂L/∂y'') = 0

Fortunately, in this problem ∂L/∂y'' = 1, which goes away after differentiation with respect to time. This leaves us with fairly simple terms.

k'(y)y'(t)² - 2k'(y)y'(t)² - 2k(y)y''(t) = 0, or k'(y)y'(t)² + 2k(y)y''(t) = 0

It is fairly straightforward to verify that y'(t) = (c/k(y))^(1/2) satisfies this equation. This presents us with a family of potential solutions, one of which, c = g, is the ascent at terminal velocity that we all know to be optimal for the strictly 1D problem. So the Euler-Lagrange method works extremely well. Even if we did not have a means of finding this solution analytically, we could have integrated over it numerically with a good guess for initial conditions.

But as soon as we go into 2D, we end up with a serious problem. The algebra there is significantly worse, but let's just look at the acceleration term in the Lagrangian.

L = ... + (x''(t)² + y''(t)²)^(1/2)

Unlike the 1D case, the (d²/dt²)(∂L/∂y'') term of the Euler-Lagrange equation for this Lagrangian is not zero. Worse, it contains y^(4)(t) terms! That means that even after doing the monumental work of writing out the full differential equation, it will be fourth order in time. Not to mention extremely unstable to initial conditions. Solving such an equation numerically does not seem feasible. (I have made many different attempts.)

But there is light at the end of this tunnel. Instead of analyzing the Euler-Lagrange problem, we can construct an equivalent Hamiltonian problem. First, let us re-write the initial Lagrangian in terms of individual variables T(t), y(t), and v(t), where T(t) is thrust and v(t) is vertical velocity. I'm going to define λ and λ_v as my undetermined multipliers. Also, for convenience of signs, I'm going to maximize the integral over -T(t), rather than minimize one over +T(t).

L = -T(t) + λ(y'(t) - v(t)) + λ_v(v'(t) - T(t) + g + k(y)v(t)²)

With constraints y'(t) = v(t) and v'(t) = T(t) - g - k(y)v(t)². Substituting the constraints directly into the above equation and getting rid of the undetermined multipliers brings us back to the initial problem, so it looks like a needless complication, but it allows us to change the formulation of the problem. Stated thusly, we are looking at a control problem, where T(t) is the control, and y(t) and v(t) are the variables. As such, we can find the conjugate variables, ∂L/∂y' and ∂L/∂v', and write the Hamiltonian for this problem.

H = (∂L/∂y')y'(t) + (∂L/∂v')v'(t) - L

After doing all the math and making the substitution y'(t) = v(t), we arrive at a very nice and clean expression.

H = T(t) + λv(t) + λ_v(T(t) - g - k(y)v(t)²)

What's more amazing, the undetermined multipliers have become the conjugate variables to the variables of our problem, as they satisfy the following properties.

y'(t) = ∂H/∂λ = v(t)
v'(t) = ∂H/∂λ_v = T(t) - g - k(y)v(t)²

Which agree with our constraints. And that means we can apply the Hamilton equations to the conjugate variables.

λ'(t) = -∂H/∂y = λ_v k'(y)v(t)²
λ_v'(t) = -∂H/∂v = 2λ_v k(y)v(t) - λ

And, of course, the actual optimization problem is stated very simply.

∂H/∂T = 1 + λ_v = 0

This works because there is no T' or higher-order dependence. Other than that, the conditions are the same as in the Euler-Lagrange equations. At any rate, we can solve the above by taking a time derivative and using one of the Hamilton equations.

(d/dt)(1 + λ_v(t)) = λ_v'(t) = -∂H/∂v = 2λ_v k(y)v(t) - λ = 0

Here, we can make use of the fact that 1 + λ_v = 0 to simplify it a bit. (I'm also going to flip the sign for convenience.)

λ + 2k(y)v(t) = 0

Which I can differentiate with respect to time once more and use the other Hamilton equation.

λ' + 2k'(y)v(t)² + 2k(y)v'(t) = λ_v k'(y)v(t)² + 2k'(y)v(t)² + 2k(y)v'(t) = 0

Using 1 + λ_v = 0 one last time, we arrive at the final differential equation.

k'(y)v(t)² + 2k(y)v'(t) = 0

This is clearly the same differential equation as earlier, with solution v(t) = (g/k(y))^(1/2). So the Hamiltonian method works. But that's not the best part. The best part is that we've completely gotten rid of the y'' terms from the start.

Of course, what we are really interested in is the 2D case with all the bells and whistles. And while the equations that the above procedure produces for the 2D Hamiltonian are absolutely monstrous, the highest-order terms are the Tx''(t) and Ty''(t) ones. In other words, the above procedure lets us cast the entire problem in terms of a system of non-linear second-order differential equations. The sheer count of terms involved means that there is little to no chance of ever finding an analytical solution, but this is entirely within the reach of numerical methods.

Unfortunately, the solution is extremely unstable to initial conditions. This has not improved. Where this is miles better is that the time-reversed solution is stable. What does that mean? In simple terms, we start with the ship in orbit at the very edge of the atmosphere, and we run the time backwards on the numerical integration. Thrust and the variables of motion are adjusted continuously until the ship hits the ground, and that gives us the initial conditions for the launch. To reproduce this in an actual launch, we just run a PID controller trying to maintain the programmed position and velocity from above, and it should perform an optimal ascent to a stable orbit.

There is still a lot of work here. The numerical solvers in Mathematica have not been of much help, because they miss a few optimizations I can do, and I have no idea how to point the built-in solver in the right direction. So I'll have to write my own solver, export the monstrous equations into C++, clean these up, run it, and then find a way to test it on real ships in KSP. But what I have so far looks promising.

Anyways, comments and questions are welcome.
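As a quick sanity check on the 1D result that both methods produce, here is a short numerical comparison of the gravity-plus-drag losses, ∫(g/v + k(y)v) dy, for climbing at various multiples of terminal velocity. The exponential k(y) and the Kerbin-like constants (h = 5000 m, 100 m/s sea-level terminal velocity) are my assumptions, not figures from the thread.

```python
import math

g, h, vt0, ytop, dy = 9.81, 5000.0, 100.0, 10_000.0, 1.0
k0 = g / vt0**2                              # so that sqrt(g/k(0)) = vt0

def losses(s):
    """Gravity + drag losses for a climb at v(y) = s * terminal velocity."""
    total, y = 0.0, 0.0
    while y < ytop:
        ky = k0 * math.exp(-y / h)
        v = s * math.sqrt(g / ky)            # s = 1 is the terminal-velocity profile
        total += (g / v + ky * v) * dy       # gravity loss + drag loss per metre
        y += dy
    return total

for s in (0.8, 0.9, 1.0, 1.1, 1.2):
    print(f"s = {s:.1f}  losses ≈ {losses(s):7.1f} m/s")
```

The s = 1 profile comes out cheapest, and extending the integral over the whole atmosphere recovers the 4gH/vt figure quoted earlier in the thread.
-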
"Camera" does mean "chamber" in Latin, yes. Both words came to Russian from Latin. As did a lot of other roots, and most of the grammar.
-
Sorry, Russian for "camera" and "chamber" is the same word, and I've learned most of what I know about conventional rocket propulsion from Soviet texts.
-
How do you imagine memory works? Certainly, things you learn to do, and very long-term memory, are topological. And very short-term memory can be dynamic. But you can memorize something short-term, not think about it at all, and then recall it. And that's only possible by altering the weights of existing connections. And in fact, we do know that there are some chemical concentrations that can linger at a particular synapse and affect neurotransmission. I don't know if there is any research that demonstrates conclusively that this is the very thing that does short-term memory, but I don't see any other way. And at the very least, we should assume that it's relevant.

Of course, it might not matter too much. We might be happy enough doing a reset on all of these. At worst, we'll have the effect of a person who awoke from a short coma, having lost some of the recent memories. Since the procedure discussed here is terminal, I'd guess that it'd only be performed on someone who just kicked the bucket, and given the choice between waking up as if from a head-trauma-induced coma or just being dead, I'd choose the former. So I would say that it's very likely that we'll need to simulate these, and again, to within the effect of a mind-altering substance, we certainly can, but you might be right that it's not critical that we get these from the scans. Which would make things considerably easier.

We could probably build an electron microscope that takes sufficiently high-resolution images of the slices. (We have good pictures of dendrites thanks to electron microscopes; the challenge is taking the picture of an entire slice and processing it.) As for going from slice to slice, we'd probably need to find a way to evaporate a thin layer of freeze-dried brain, take a picture, and repeat.
-
2 Cannon balls are dropped at the same time...
K^2 replied to travis575757's topic in Science & Spaceflight
Energy is not the best way to think about this. This isn't strictly wrong, but the object that falls faster is going to experience greater resistance, and therefore have more work done on it by drag than the slower object. So the energy loss to drag is not equal. Because energy and drag are both monotonic functions of velocity, and in fact both are roughly quadratic, the object with more mass still ends up traveling faster despite losing more energy, so you're not wrong, but it's needlessly complicated. It's much easier to just look at accelerations. Given equal sizes and shapes, the heavier object experiences higher acceleration, because the same drag force produces less deceleration on the larger mass, and so it ends up falling faster.
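A minimal sketch of the "just look at accelerations" point: same size and shape (so the same drag constant), different masses, integrated with quadratic drag. All numbers are illustrative only.

```python
# Two balls with equal size/shape (same drag constant c) but different mass,
# dropped together with quadratic drag: m*v' = m*g - c*v**2.
g, c, dt = 9.81, 0.02, 0.001   # c in kg/m, dt in s; illustrative values

def fall_speed(m, t_end=5.0):
    """Speed after t_end seconds of falling from rest, simple Euler integration."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - (c / m) * v * v) * dt   # gravity minus drag deceleration
        t += dt
    return v

print(fall_speed(1.0), fall_speed(10.0))  # the heavier ball is moving faster after 5 s
```
-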
You are thinking of transcribing a personality from one architecture to another. But that's not a requirement of the question. If you build an artificial neural net which has the same connections as the brain in question, and load it with activation functions which correspond to the synapses in that brain, you don't need to know which neuron is responsible for what. You just run the simulation, and let things happen naturally. Of course, to do this you need to both map the entire brain and measure, very precisely, the concentrations of various chemicals at all of the synapses. We don't have the tech to do this yet, but the basic approach of freeze-and-slice is definitely workable here. So it's a matter of time before we'll be able to get all of this information from an actual brain. If you want to do this in vivo, that's a different question entirely. Of course, once you have a simulation running, working out the actual way the information is stored sounds much more plausible. But "decoding" the brain would require way more processing power than just simulating it. So again, this is not something we're going to do for a long, long time.
-
I can't find a good definition of specific impulse
K^2 replied to travis575757's topic in Science & Spaceflight
Because interstellar rockets are thought up by scientists who use the metric system, and ISP in seconds just doesn't make sense to them.

I was going to, but then I pulled up a piece of paper, made a few scratches, and realized that it's right. Happens. That's why checking yourself with maths is always a good policy.
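For reference, the conversion behind the seconds-vs-metric quibble is just v_e = Isp · g0. The example Isp values below are illustrative, not figures from the thread.

```python
# Isp quoted in seconds is effective exhaust velocity divided by standard gravity.
g0 = 9.80665                                 # m/s^2, standard gravity
for isp_seconds in (350.0, 4200.0):          # chemical-ish vs ion-ish Isp (examples)
    print(isp_seconds, isp_seconds * g0)     # effective exhaust velocity in m/s
```
-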
Again, we aren't talking about SABRE on Skylon. We are talking about Scimitar on an airliner, where the low density of LH2 certainly is going to be a disadvantage.