Everything posted by Yourself

  1. It's actually closer to Texas and Louisiana than it is to Florida.
  2. And in the universe where that CPU command sequence is executing, is time no longer privileged and irreversible? If it weren't, how would such a CPU order and execute commands sequentially? The fact that there's no "rational" reason for it seems to me to indicate that it's completely natural. The universe need not be rational. And if the universe were a simulation, what would it be simulating?
  3. Indeed, if I remember my undergrad classes well enough, pointy objects actually experience higher heating loads than blunt bodies because there's more heat transfer from the air to the vehicle. Something about oblique shocks not doing as much to slow down the flow in the vicinity of the body (that, and the shock is attached to the body, so the very hot, fast-moving air is right next to it).
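      A hedged sketch of the scaling (this is the standard Sutton-Graves stagnation-point correlation, not something from the original post; $k$ is an empirical constant, $\rho$ the freestream density, $R_n$ the nose radius, and $V$ the flight speed):

      $$\dot{q} \approx k \sqrt{\frac{\rho}{R_n}}\, V^3 \quad\Longrightarrow\quad \dot{q} \propto \frac{1}{\sqrt{R_n}}$$

      So the sharper the nose (small $R_n$), the higher the stagnation heating rate, which is why reentry capsules are blunt.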
  4. Technically, atmospheric pressure does the opposite: it lifts you. After all, that's just the buoyant force, and it's directly proportional to the density of the atmosphere. For example, on Earth at sea level the presence of the atmosphere makes you about 0.1% lighter than you would be in a vacuum. All of this is because the air at your feet has an ever-so-slightly higher pressure than the air at your head, so the net effect over your body is an upward force. Of course, it's also possible to get the effect where the atmosphere holds you down. That's how suction cups work.
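      A quick check of that 0.1% figure (assuming sea-level air at about $1.2\ \mathrm{kg/m^3}$ and a human body density close to that of water):

      $$\frac{F_{\text{buoyant}}}{W} = \frac{\rho_{\text{air}}}{\rho_{\text{body}}} \approx \frac{1.2\ \mathrm{kg/m^3}}{1000\ \mathrm{kg/m^3}} \approx 0.12\%$$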
  5. I'm no physicist, but what I do know leads me to believe that this isn't accurate. Firstly, it's both time and space that warp, not merely space, so that's 4 dimensions to account for. Also, even though it's warped, that doesn't necessarily mean that the 4D "surface" that describes our universe is the surface of some 5D shape. If I remember my differential geometry right, that would be called an embedding, which isn't necessary to describe our universe. And, in fact, there are certain things about an embedding of a surface in a higher dimensional space that would be outright impossible for us to measure from within that surface. That means that presuming our universe is embedded within some higher dimensional space postulates properties which we cannot possibly observe, so I'd be reluctant to claim that it's true. I'd rather leave it to someone like @K^2 to comment further, since this seems more up his alley than mine.
  6. I figured that had more to do with being able to keep up with the generation of planetary geometry.
  7. Even simpler: imagine a parabolic orbit. It encloses an essentially infinite area, but it takes only a finite amount of ΔV to change it into a circular orbit, which encloses a finite area.
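      To put a number on it (standard two-body formulas; $\mu$ is the central body's gravitational parameter and $r$ the periapsis radius where the burn happens): a parabolic orbit moves at escape speed, so circularizing at periapsis costs

      $$\Delta v = v_{\text{esc}} - v_{\text{circ}} = \sqrt{\frac{2\mu}{r}} - \sqrt{\frac{\mu}{r}} = (\sqrt{2}-1)\sqrt{\frac{\mu}{r}} \approx 0.414\, v_{\text{circ}},$$

      which for a low Earth orbit ($v_{\text{circ}} \approx 7.8$ km/s) is only about 3.2 km/s.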
  8. Well, humans don't develop in isolation. DNA is mostly just a starting point; from there, every interaction with the environment (which means every moment of your existence) influences what you become. To drive this point home in a rather unsubtle way: one identical twin could lose a limb in a tragic and amazingly unexpected cake decorating accident. The loss of that limb is clearly not defined by that twin's DNA, but it will certainly become part of their identity from then on. Of course, nothing so drastic is required to establish a difference between identical individuals. DNA doesn't work by itself; it needs a whole bunch of machinery and environmental support to actually develop a multi-cellular organism, and all sorts of little differences can get amplified over time.
  9. The rotation is only stable around 2 of the principal axes if each of the principal moments is distinct (the axes with the minimum and maximum moments of inertia will be stable, while the one with the intermediate moment will not be). This is described by the tennis racket theorem. The rotation of the book at the beginning of the video demonstrates this. The first two rotations shown are about the stable axes, while the third is the unstable one (since the book tumbles over a half turn every so often).
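      A sketch of why (this is the standard torque-free Euler-equation argument, not from the original post; $I_1 < I_2 < I_3$ are the principal moments): spinning mostly about axis $i$ at rate $\Omega$, a small perturbation about axis $j$ obeys

      $$\ddot{\omega}_j = \frac{(I_k - I_i)(I_i - I_j)}{I_j I_k}\, \Omega^2\, \omega_j.$$

      For $i = 1$ (minimum moment) or $i = 3$ (maximum) the coefficient is negative, so perturbations just oscillate; for the intermediate axis $i = 2$ it's positive, so perturbations grow exponentially and the book tumbles.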
  10. Is there a way to invert axes on the indicator? As it is right now, the pitch alignment is reversed from the yaw alignment, which actually makes it extremely difficult for me to control the attitude alignment because I have to do the opposite of what I think, but only in the pitch axis. This is also something that aggravates me about KSP's stock control scheme, since the translation controls are reversed in up/down from how they work in left/right (in one axis they accelerate toward the direction you're pressing, in the other they accelerate opposite it).
  11. The shell theorem only applies to spherical shells of uniform mass, not rings. The ring will feel a gravitational force due to the central body if it's not perfectly centered. The shell theorem does allow us to treat the central body as a point mass (under the usual assumptions). See here for why this is unstable: http://larryniven.net/physics/img13.shtml
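      For reference (the classical statement, with the ring contrasted against it): a uniform spherical shell of mass $M$ pulls an external mass $m$ with $F = GMm/r^2$, as if all its mass were concentrated at the center, and exerts zero net force anywhere inside. A ring has neither property: off-center inside a ring, in its plane, the net pull is toward the nearer side, which is exactly why the centered configuration is unstable.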
  12. My mistake. You'd mentioned the life support of the ship in regards to the travel time, but the life support travels with the ship, so the travel time from the perspective of the ship is all that would matter in that case. That's why I assumed you were talking about the ship's proper time rather than an outside observer's.
  13. Fortunately it wouldn't be 4 years from the perspective of the ship itself. From the point of view of a ship travelling at c, it would arrive instantaneously no matter where it was going. It'd take 4 years from the ship's perspective if it were travelling at ~70% c.
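      Checking that 70% figure (standard time dilation; assuming the destination in question is about 4 light-years away): the ship's proper time for the trip is

      $$\tau = \frac{d}{\beta c}\sqrt{1-\beta^2},$$

      and setting $\tau = 4$ years with $d = 4$ light-years gives $\beta = 1/\sqrt{2} \approx 0.707$, i.e. about 70% of $c$.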
  14. How else would you expect it to work? If you don't move at all, you fall down and hit the planet. If you move fast enough you still fall down, but miss the planet. The faster you go, the farther you travel in the time it takes gravity to pull you down, so you end up reaching higher altitudes the faster you go.
  15. Can we get them over the nuclear fear too? It just seems like nuclear energy has so much potential. Not really as a renewable, but we're going to run out of oil eventually and I'd like us to keep our options open when it comes to energy production until we get settled with renewables. I just don't have the confidence that fossil fuels will successfully bridge that gap. Also there's a nuclear heated lake nearby and the warm water extends the season for water-based leisure activities quite significantly. I quite like that. It's also a wildlife refuge because parts of the lake never freeze.
  16. It is neither momentum nor inertia, because the mass of the orbiting object has almost no impact on the behavior of the orbit (at least in the limit where the orbiting object is much less massive than the body it's orbiting). If you have more mass (and therefore both more momentum and more inertia), gravity just pulls proportionally harder. The net effect is that mass is irrelevant.

      What keeps an object from crashing into the surface is really very simple: the surface is curved more than the orbital path. To put it another way, the object in orbit falls like anything else does, it just misses the central body. We don't need to talk about forces or bring any math into this, really; we can make this explanation much more intuitive.

      Let's say you shoot a ball horizontally out of a cannon. The faster you shoot it, the farther the ball will get before it hits the ground, but it'll always take roughly the same amount of time to hit the ground. This is because the force of gravity doesn't care how big the object is or how fast it's going (only how far away it is); it accelerates the ball just the same. So, since we understand that the faster we shoot the ball, the farther it travels before hitting the ground, let's remember that the ground curves downward. We're not standing on a big, flat plane, we're on a big ball. So it should be easy to imagine that if we just shoot our cannonball fast enough, eventually it won't hit the ground anymore, because the ground is curving downwards as fast as the cannonball is falling. That's it. That's all an orbit is. You're moving so fast horizontally that in the time it takes gravity to pull you down, the ground has curved away under your feet and you just keep falling.
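      To put a number on the cannonball (a standard back-of-the-envelope, assuming an airless Earth; $g$ is surface gravity and $R$ Earth's radius): the speed at which the ground curves away as fast as you fall is

      $$v = \sqrt{gR} \approx \sqrt{9.81\ \mathrm{m/s^2} \times 6.37\times 10^{6}\ \mathrm{m}} \approx 7.9\ \mathrm{km/s},$$

      which is exactly the familiar low-orbit speed.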
  17. And perhaps the most important advantage: cloud city was cool as hell.
  18. I'm gonna have to disagree with this somewhat.

      Point 1: There are really two things going on here that I don't necessarily agree with. The first is that the basis of our morals is somehow hardwired. I don't think this is true. At least, I don't think it's true to the extent that you seem to indicate. I think morals are an emergent behavior that aids in the functioning of groups of humans. Basically, the morals themselves aren't what's hardwired (since that would imply that there's a natural morality common to all humans, and I think it's pretty clear that this isn't the case), but the mechanism that allows morals to arise might be hardwired. Simply put, we're hardwired to have rules, since that benefits group cohesion and survival, but the specific rules themselves are not hardwired and are subject to change. The second bit of this point is more subtle. There seems to be an unintended implication that morals themselves must be based on instinct rather than reason. I don't so much disagree with this as doubt that it's necessarily true. I don't think you intend to imply it, but the language used kind of carries this meaning with it.

      Point 2: This may be more of a semantic disagreement, but an AI certainly needs a "body". At least in the sense that it needs something in which it can actually exist and function. It certainly needs physical form, and I would argue that that amounts to the same thing as a body. Granted, it would certainly be easier to move it between various "bodies". Certainly far easier than it would be to move you between bodies. But if you start to run with the idea of humans moving between bodies, things start to get a lot fuzzier. So I'm going to go with a hypothetical here. Let's say at some point we start repairing brain damage with prosthetics: small pieces of machinery that are able to interface with and behave like neurons. Maybe we invent some kind of nano machine capable of replacing the brain one neuron at a time. So very gradually the whole brain becomes synthetic. Do you think this will have changed the person?

      Also, re: neural networks that speak languages: yeah, they sort of speak the languages. They can't compare to native speakers, though. And language changes. Pretty quickly, I might add. A system that doesn't constantly adapt to this through learning is going to find it difficult to communicate.

      Point 3: They could forget stuff. It depends on the nature of their memory. Certainly we could engineer memory that's far superior to our own. Actually, we kind of did that already; it's what we use digital media for: remembering stuff. So we humans essentially already have the capacity to never forget anything and to access far more information than we could ever use. And that's kind of the natural limitation there. Even if an AI could accurately remember every detail of its own existence, it may not necessarily have quick access to all of it. If it wants to remember something, it has to search for it, and since the number of memories any sort of intelligence could possess grows essentially without bound, so too do the resources required to access them.
  19. Most large helicopters already have all these things. Most modern aircraft (even GA aircraft) in general already have a pretty sophisticated IMU anyway (the so-called Attitude Heading Reference System, i.e. AHRS). The complexity and weight mostly come from the servos used to drive the controls without pilot input. Helicopter autopilots are definitely complicated, since there's a lot of cross-axis coupling in the different modes (changes in one control axis require changes in other axes to keep the helicopter stable). The control dynamics also have to be tuned for different flight regimes (flying a helicopter in a hover is completely different from forward flight). That said, helicopter autopilots aren't terribly uncommon. They can hover, they can hold altitude, they can follow a GPS flight plan; they can do just about everything. The major reason you probably don't see the sorts of video game controls is that they would be counter-intuitive to every helicopter pilot out there. The skills involved in flying that way also wouldn't be the least bit transferable to a "dumb" helicopter (and "dumb" helicopters are what pilots usually first train on). Also, the fact that these sorts of systems are generally very expensive means there will always be dumb helicopters around, because they're cheaper.
  20. "Common knowledge" is not really the first place I'd look if I wanted to find out the current state of the art in neuroscience. Actually, "common knowledge" is about the last place I'd look for any kind of scientific information. And, as far as I understand it, the low-level physics of individual neurons or small groups of a handful of neurons is pretty well understood and doesn't invoke any sort of quantum weirdness in its explanation. The stuff we don't understand is how exactly the individual neuron behaviors add up into actual behaviors. And all of that mystery seems to be well above the scale where quantum effects would be at all important for its operation. I'm always drawn to the comparison of an actual computer. You may use quantum effects to explain the operation of individual transistors or other semiconductors within the computer, but the quantum effects in the end don't have any impact on the actual operation of the machine. They're just a means to an end to make a really tiny switch. What would entanglement explain in the human brain?
  21. [quote name='SomeGuy12']You could in principle pause your application after starting up and copy the data in memory to disk directly. Startup would occur the opposite way. This would take 5-10 seconds for users on conventional hard drives if there is a gigabyte of memory in use (100 megs a second, sequential read) and possible 2-3 seconds for SSD users. I don't know of any applications that do this, although Windows does this.[/QUOTE]

      In fact that's one of the things we're going to try doing. At least in a sense. It won't be an exact memory dump (which isn't practical for various reasons), but the idea of caching the data in a form closer to what's needed by the application is something we've thought of. There is some trickiness here because the original data can change between runs of the application, so we have to build in some kind of timestamp that allows us to determine when we should read the processed data and when we should read the unprocessed data (a minimal sketch of this is at the end of this post). But now we're starting to make things more complicated.

      Of course, then you might wonder why we don't just pre-process all the data and read it that way. That comes with its own costs, because now we have to develop a new tool chain to support it. And if we decide to change our minds about what the processed data looks like later, it's not so easy to do, because we'd have to bake that change into our tool chain and processes. I can tell you that anything that is more difficult or time consuming to develop is definitely not going to catch on. I don't think I've met an engineer that didn't want to spend all the time they could making something perfect, but at some point you have to finish something. Not to mention that an engineer's time is expensive. Really, really expensive. So the most important feature of just about any system is engineering efficiency. It ultimately all boils down to minimizing development and maintenance time (especially maintenance, because it is, by a wide margin, the largest time sink).

      [quote name='Jouni']GPUs are a good example of this. Five years ago, they were the future. Then we equipped our supercomputers and computer clusters with them and learned their limitations. They were essentially computers with a large number of slow cores and a small amount of fast memory. Programming them was slow and difficult, and the hardware kept changing all the time. In the end, ordinary computers with a smaller number of faster cores and a larger amount of memory were better for most tasks.[/QUOTE]

      I'd say that future came true. GPUs are the go-to hardware for massively parallel computation, because that's exactly what they're good at; this is why we have compute shaders now. GPUs have gone general purpose. I don't recall there being any particular notion that they'd replace CPUs entirely, but they have taken on a lot of heavy computation that CPUs used to do. Modern GPUs are a pretty critical component of a computer. Even the OS UI is hardware accelerated nowadays. You'd be hard pressed to find a software renderer in anything anymore. In fact, one of the things I'd like to try is developing a world-wide weather model for our flight simulators that runs on the GPU. The GPU is probably the only practical option we have. It may also be worthwhile to move our weather radar model over to the GPU as well. Right now we have to dedicate a whole CPU core to it, and even then we're barely meeting our required frame times.
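      Following up on the timestamp idea above, a minimal sketch in Python (hypothetical names throughout; pickle and a file-mtime comparison stand in for whatever serialization and versioning scheme the real application would use):

      [code]
      import os
      import pickle

      def load_data(raw_path, cache_path, parse_raw):
          """Return processed data, rebuilding the cache only when the
          raw file has changed since the cache was last written."""
          if (os.path.exists(cache_path)
                  and os.path.getmtime(cache_path) >= os.path.getmtime(raw_path)):
              with open(cache_path, "rb") as f:
                  return pickle.load(f)      # fast path: read preprocessed form
          data = parse_raw(raw_path)         # slow path: reprocess the source
          with open(cache_path, "wb") as f:
              pickle.dump(data, f)           # cache it for the next startup
          return data
      [/code]

      A real version would also want a format-version field in the cache, so that changing the processed layout later just invalidates old caches instead of breaking them.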
  22. [quote name='SomeGuy12'] 1. I think a rigid, careful design of the software in a computer system would result in stuff that is a lot more reliable. [/quote]

      Well, that's pretty much true regardless of the underlying hardware. The problem is it isn't cost effective or practical. Software engineering has deadlines and budgets, and that's what cuts into our ability to create reliable and robust software. Another thing to keep in mind is that the cost of testing software does not grow linearly with the complexity of that software. It's a combinatorial problem: if I stick two modules together, the number of states everything can be in doesn't double, it squares (quick arithmetic at the end of this post).

      [quote]2. Applications that are loaded into a chip would start up truly instantaneously, within the next video frame[/quote]

      I'm working on an application right now at work that takes about 20 seconds to start up. We haven't really optimized it, so we could probably cut that time down immensely, probably into the realm of 10-15 seconds. I don't imagine it'll get much faster than that, though, because there's stuff it just [b]must[/b] do before starting, and that includes reading through about 1GB worth of data, processing it into appropriate searchable data structures, and building geometric primitives for displaying it (on a map). I don't see how your solution can have enough of an impact on execution time or I/O to improve that necessary load time.

      [quote]3. It would be a lot more secure. It would literally be impossible to do most forms of computer hacking today. Data used internally by one application cannot be accessed by [I]anybody[/I] - not the OS, not other applications, nothing, because the internal data is literally not connected by LUT routing to anything else. (well, not directly - applications could still leak information due to faulty design but they get a chance to process the data before outputting it)[/QUOTE]

      If it takes input and produces output, it is not automatically secure. Security is hard. Like, really hard. It's borderline impossible to do it right because of the combinatorial problem of test coverage. The effort required to test software increases faster than the complexity of that software. Even with separate modules tested to 100% with unit tests, there are still integration tests to consider, because unit tests don't tell you how modules can interact with each other. We have that exact sort of design at work; most of our new software architectures are highly modular, because that's the best you can do to simplify complicated software. It is definitely a more maintainable approach, but it's not a magic bullet. You can be absolutely sure that each module works 100% correctly in isolation, but as soon as you integrate them into a larger system, the modules can start interacting with each other in unforeseen ways.
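      The arithmetic behind that "it squares" remark (just counting, nothing application-specific): if module $A$ has $n_A$ reachable states and module $B$ has $n_B$, the composed system has up to

      $$n_{AB} = n_A \times n_B$$

      states. Two modules with 1,000 states each are 2,000 states to test in isolation, but up to 1,000,000 in combination.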
  23. [quote name='*Aqua*']Wait, are we talking about the same bitmap? I was under the impression that a bitmap image (.bmp) was meant.[/QUOTE]

      Yes, that's what I'm referring to.

      [quote name='jf0']There is a 'maths trick' that came to be called 'Tuppers self referential formula', that when plotted, apparently appears to draw a picture of itself. But it is actually just a method of decoding a given constant into a bitmap image. given the right constant it would 'draw' any possible picture of a certain size. If you want to go from a 'number' to a bitmap, just take the bits that represent that number and interperet them as the bits of the bitmap. That is all a bitmap (image made of pixels) is. You will be dissapointed though, most will just look like random noise.[/QUOTE]

      Indeed. In fact, any file can be encoded as an integer. We sometimes joke at work that software engineering is just the practice of discovering large integers that happen to do what you want when you execute them as a program.
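      A minimal sketch of that file-to-integer correspondence in Python (hypothetical helper names; the length argument is needed on the way back because leading zero bytes would otherwise be lost):

      [code]
      def file_to_int(path):
          """Interpret a file's raw bytes as a single unsigned integer."""
          with open(path, "rb") as f:
              return int.from_bytes(f.read(), "big")

      def int_to_file(n, length, path):
          """Write the integer back out as `length` bytes (the inverse map)."""
          with open(path, "wb") as f:
              f.write(n.to_bytes(length, "big"))
      [/code]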
  24. [quote name='*Aqua*']That is already ruled out. You would need 500 TB storage for it.[/QUOTE]

      It is 100% impossible to assign a unique integer to every possible image without using at least as much memory as the bitmaps themselves. A bitmap is the smallest generic representation capable of encoding all possible images. If you're only interested in representing a small subset of all images, it's easily possible to do better, and there are various techniques for doing so; just about any lossless compression algorithm will do. However, there is no single algorithm which is guaranteed to encode an arbitrary image using less space than a simple bitmap (counting argument at the end of this post).

      [quote name='*Aqua*']My dictionary doesn't know "absed". What does it mean?[/QUOTE]

      It's a typo of "based".
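      The counting argument behind that claim (plain pigeonhole, assuming 24-bit color for concreteness): a $W \times H$ image has $N = 24WH$ bits, so there are $2^N$ distinct images, while the number of bitstrings shorter than $N$ bits is

      $$\sum_{k=0}^{N-1} 2^k = 2^N - 1 < 2^N.$$

      So any lossless encoding must map at least one image to a string of $N$ bits or more.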
  25. We already have a way of uniquely identifying every possible image with a specific number. It's called a bitmap.