Everything posted by Yourself

  1. [quote name='More Boosters']I don't really want to detail the thing's movement. I just want its moment of inertia so I can divide it by its mass and radius^2 to get the multiplier in front of it so I can do the same thing and approximate Earth's moment of inertia and then compare it to calculations and real life measurement. I'm not sure if assuming that a sphere must have a moment of inertia in terms of constant * mr^2 is sensible to do but I can't mathematically prove why that should be the case (yet, if I'm on the right track it would help a lot so I can mention it's not completely out of my hat) so here we are.[/QUOTE] Oh, well in that case the number you're looking for is 2/5.
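For anyone who wants that 2/5 on firmer footing than a hat, it's the standard integral for a uniform solid sphere (density ρ, radius R): sum r_⊥², the squared distance from the rotation axis, over all the mass. In LaTeX:

\[
I = \int r_\perp^2 \, dm
  = \rho \int_0^{2\pi} \int_0^\pi \int_0^R (r\sin\theta)^2 \, r^2 \sin\theta \, dr \, d\theta \, d\phi
  = \frac{8\pi\rho}{15} R^5
  = \frac{2}{5} M R^2,
\]

using \( M = \frac{4}{3}\pi\rho R^3 \). So yes, any uniform sphere's moment of inertia has the form constant * mr², and the constant is 2/5.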
  2. [quote name='Findthepin1']So there's this game called EVE Online. It has remarkably asymmetrical spaceships and some of their Earthlike planets are over half the radius of Jupiter. Anyway, the game's universe has over 5000 star systems. They seem realistically enough spread out. I don't understand how they are able to store all that information, for every meter of every planet in 5000 solar systems, plus everything on them and everything in space. It would probably take up tens of terabytes at the very least, or petabytes at the most. Likely, they don't have that kind of capability. What am I missing?[/QUOTE] What you're missing is that they simply don't store that much data because there's no need to. Interaction with planets in EVE is pretty limited. You can't actually travel to the surface. In fact, you can't even really get close enough to see much surface detail, so there's not much data required. The distance between planets is largely irrelevant in terms of the data requirements (beyond additional memory required to store larger numbers accurately, but this memory requirement would be negligible next to literally anything else; it's on the order of bytes). Assuming the planet surface textures aren't generated procedurally and that each planet is truly unique, the memory requirements for all planets together would probably be on the order of hundreds of GB, which honestly isn't that much data for servers to be working with. I would assume there's some amount of procedural generation simply because of how expensive it would be to have artists create several thousand unique planets.
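If you want to sanity-check that hundreds-of-GB figure, here's the kind of back-of-envelope arithmetic I have in mind, as a quick C# sketch. Every number in it (planets per system, texture resolution, compression ratio) is my own guess, not anything CCP has published:

using System;

class PlanetStorageEstimate
{
    static void Main()
    {
        const long systems = 5000;            // star systems (from the post above)
        const long planetsPerSystem = 8;      // assumption
        const long texWidth = 4096;           // assumed per-planet surface texture
        const long texHeight = 2048;
        const long bytesPerTexel = 4;         // RGBA, uncompressed
        const double compressionRatio = 6.0;  // assumed DXT-style texture compression

        double totalBytes = systems * planetsPerSystem
                          * texWidth * texHeight * bytesPerTexel / compressionRatio;

        Console.WriteLine( "~{0:F0} GB of unique planet textures", totalBytes / 1e9 );
    }
}

With those guesses it comes out around 220 GB, and you can push it an order of magnitude in either direction just by changing the texture size, which is why I'd only commit to "hundreds of GB" as the ballpark.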
  3. So you're saying it's impossible to have a probability distribution that isn't uniform? What is your reasoning for that? Like, sure, I see you say there are two outcomes, so they must have equal probability, but I haven't seen you explain why. Why couldn't I have 2 outcomes where the probability of one occurring is 25% and the probability of it not occurring is 75%? Or 1% and 99%? Don't the laws of probability only require that all the probabilities add up to 100%? What law requires all probabilities to be equal?
  4. That's a bit confusing. The math doesn't need to be that complicated. Of course I say that, but I'm about to write a differential equation for it. So, you know, whatever.

Let's look at a tiny section of cable. We'll say the length of this section is Δr and it's a distance r from the center of Earth. So let's do the whole free-body diagram thing, only because I'm lazy it's not so much a diagram as it is a paragraph. I'm going to work in the rotating reference frame, so the centrifugal force will show up explicitly. Anywho, we have 4 forces acting on our section of cable:

Gravity: μ Δm / r²
Tension toward Earth: T
Tension away from Earth: T + ΔT
Centrifugal force: Δm ω² r

So I've introduced some extra variables here:

μ = standard gravitational parameter of Earth
Δm = mass of our small section of cable
ω = angular velocity of Earth
ΔT = the difference in cable tension across our cable section

Great, moving on. Since we don't want the cable to be falling to Earth or flying off into space, we kind of want all the pieces of it to be not accelerating, so that means all these forces need to sum to 0:

0 = -( μ Δm / r² + T ) + ( T + ΔT + Δm ω² r )

Let's do some rearranging:

Δm ( μ / r² - ω² r ) = ΔT

Now, let's say the mass per length of the cable is given by some function we'll call λ; that means our little piece of cable's mass can be written as its length, Δr, multiplied by this linear density:

λ ( μ / r² - ω² r ) Δr = ΔT

We'll divide by Δr and take a nice limit and, bam, we get our ODE:

dT/dr = λ ( μ / r² - ω² r )

This is a first-order ODE so we need a single boundary condition to complete it. One thing we know is that the tension in the cable has to go to 0 at the very top (because there's nothing left up there), so we write:

T(R_Earth + L) = 0

But we never said what L was; the cable is free to be as long as it needs to be. So we can actually specify another boundary condition and then figure out what L should be to satisfy the equation above. So, we can pick this one:

T(R_Earth) = 0

So basically we just made the cable tension at the ground 0. In fact, we can choose the tension to be whatever we want at ground level and it'll just change the length of the cable. We have even more freedom because there's still that λ function sitting in there, which corresponds to how the thickness of the cable varies along its length. We could even go further and try to find the thickness profile that minimizes the average tension over the whole cable. But it's been a long time since I've done any calculus of variations, I've spent all day playing Fallout 4, and it's also 1 AM and I'm very tired. So I'll just leave it at this: we can at least mathematically create a cable whose tension at ground level is 0 (provided ω isn't 0, which for Earth, it's not).

What this equation also tells you is that the tension in the cable will increase with altitude and reach its maximum at geostationary orbit before it starts decreasing again. This is because R_GEO is the solution to this equation:

0 = μ / R_GEO² - ω² R_GEO

For values of r < R_GEO, the quantity on the right is positive (which means dT/dr is positive --> increasing) and for values of r > R_GEO, the quantity on the right is negative (so dT/dr is negative --> decreasing).
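If anyone wants to play with this, here's a minimal numerical sketch in C# that integrates the ODE above for the simplest case: a uniform cable (λ constant) with T(R_Earth) = 0, marching upward until the tension comes back down to zero at the top. The Earth constants are the standard ones; the 1 kg/m linear density is an arbitrary assumption (it just scales T, not where the zero is):

using System;

class CableTensionSketch
{
    static void Main()
    {
        const double mu = 3.986004418e14;   // Earth's gravitational parameter, m^3/s^2
        const double omega = 7.2921159e-5;  // Earth's sidereal rotation rate, rad/s
        const double rEarth = 6.3781e6;     // equatorial radius, m
        const double lambda = 1.0;          // linear density, kg/m (arbitrary assumption)
        const double dr = 1000.0;           // integration step, m

        // Tension peaks at the geostationary radius, where mu/r^2 = omega^2 r.
        double rGeo = Math.Pow( mu / ( omega * omega ), 1.0 / 3.0 );

        // Euler-integrate dT/dr = lambda * (mu/r^2 - omega^2 r) up from T(rEarth) = 0.
        double T = 0.0;
        double r = rEarth;
        while( r < rGeo || T > 0.0 )
        {
            T += lambda * ( mu / ( r * r ) - omega * omega * r ) * dr;
            r += dr;
        }

        Console.WriteLine( "Max tension at r = {0:E3} m (GEO)", rGeo );
        Console.WriteLine( "Tension returns to 0 at r = {0:E3} m, so L = {1:E3} m",
                           r, r - rEarth );
    }
}

For a uniform cable this puts the far end somewhere around 150,000 km from the center of Earth, i.e. roughly 144,000 km of cable, which is why real proposals taper the cable (vary λ) instead.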
  5. I think there's something critical missing from all your scenarios: if an AI ends up causing some kind of doomsday (and presumably is intelligent enough to recognize that's what's going to happen so that it can profit off of it), what does it do next? In all these situations it basically destroys the global economy for a very small short-term gain, which is a really crappy strategy for accomplishing its directive. These are things a comic book super-villain does. Another thing I wonder about is why it would take an AI to do any of these things, or why an AI would somehow be more capable of them than humans. Like, causing some kind of confrontation with humanity as a whole would be a wildly inefficient way to complete any sort of goal. If you wanted to get humanity out of your way and you had all the time in the world, all you'd have to do is keep us happy and deliver a good quality of life. That right there seems to get us to stop reproducing pretty effectively. Make it more expensive to have and raise children and you've got the perfect recipe for the population shrinking and no one caring. This has already happened without the aid of AI. And why would an AI necessarily have no sense of pleasure? It's artificial; presumably we can imbue it with whatever emotional states we want. Moreover, why would a lack of pleasure necessarily be a bad thing? Generally speaking, the pursuit of pleasure leads to some pretty bad decisions in humans.
  6. Well, it's not the most devious. Syntactically GML is based largely on the C family of languages* (since we're talking syntax this means C, C++, Java, C#, etc.) and the ^ operator is bitwise XOR in all of those. In fact, very few (popular) languages use ^ as an exponentiation operator; more often than not it's bitwise XOR. I'm not entirely sure where the popularity of using ^ for exponentiation comes from exactly. I'd wager it comes from LaTeX, since it uses the ^ operator to typeset superscripts, but I don't know enough of its history to say whether that was inspired by something else.

*GML also supports an alternative syntax based on Delphi. This leads to some serious craziness since it understands the Delphi operators for assignment (:=) and equality (=) as well as the C-style operators for assignment (=) and equality (==). These two syntax styles can be mixed interchangeably, which often leads people to suggest that using = in an if statement would result in a bug:

if( a = b )

since in most C-like languages, = is assignment. The thing about GML is that the meaning of = is context dependent. In that case, since it's parsing an expression, it interprets = as the Delphi-style equality operator and the code ends up working like it's supposed to. But this can lead to other craziness:

a = b = 1

In C-like languages this would be multiple assignment; it would assign the value 1 to the variable b and then assign the new value of b (which is 1) to a. In GML that's not what this does. Unlike C, an assignment in GML is a statement, not an expression (much like it is in Python), so assignments can only appear as statements. The grammatical structure of an assignment is a variable name followed by an assignment operator (which can be either := or =) followed by any expression. Since the = operator acts as equality inside expressions, this code actually compares the value of b with 1 and stores the result into a.
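You can see the XOR behavior in any of the C-family languages; here's a trivial C# sketch of the difference:

using System;

class CaretDemo
{
    static void Main()
    {
        Console.WriteLine( 2 ^ 3 );            // bitwise XOR: binary 10 ^ 11 = 01, prints 1
        Console.WriteLine( Math.Pow( 2, 3 ) ); // actual exponentiation, prints 8
    }
}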
  7. Honestly, I can't wait for the transition to artificial intelligence and ultimately artificial life. As far as I'm concerned it's the best way forward for us as a species. Besides, the AI takeover has already started. Honestly, I sometimes wonder: if the internet as a whole became self-aware, would anyone notice? Our own awareness is extremely tied to our own senses like sight, sound, and touch. But does that mean that without senses, without some experience of something external to yourself, you can't be self-aware? I suppose in some sense, if the entirety of existence was just you, in what way could you distinguish self from other? All of that goes so far out of the realm of our own experience that I feel fairly confident that the first self-aware machines won't be noticed. And they probably won't even notice us any more than we notice our own neurons. Regardless, I like the idea of machines being sort of our child race. Humanity can't last forever, nothing can; it seems the least we can do is leave something behind that can last a little longer.
  8. Oh, that's not the end of the silliness with that language. In this case it does that because GML only has one numeric type, and that's a real (specifically a double). As I recall there are only two types in the whole language: real and string. In fact, the data structures within the language aren't really data types; you're given a numeric ID (which is how all resources work in Game Maker) which is passed as an argument to various functions that manipulate the data structure. The IDs aren't unique and they're created sequentially, so the first instance of any resource you create has ID 1 (or 0, I can't remember). People can get (and have gotten) into all sorts of trouble because of this. You're not the first person I've helped with this problem; the first time I saw this exact issue was probably 14 years ago.
  9. In GML, ^ is the bitwise XOR operator, not exponentiation. This essentially results in your force being inversely proportional to the distance (for distances significantly larger than 2), which produces the characteristic petal pattern in your orbits. I used Game Maker in the early 2000s and people were making this mistake back then, too. You could use the power function instead, or just compute the distance first and then multiply it by itself. You could also work with the squared distance directly by summing the squares of the coordinate differences, which avoids an intermediate square root, but worrying about optimization before the code works is pretty useless. Make it work, make it right, make it fast, in that order.
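In case it helps, here's roughly what the corrected computation looks like. This is a C# sketch rather than GML, and the function and variable names are all mine:

using System;

class GravitySketch
{
    // Acceleration on a body at (x, y) due to an attractor at (ax, ay).
    // gm is the attractor's G*M; units are whatever your game uses.
    static void Accel( double x, double y, double ax, double ay, double gm,
                       out double accX, out double accY )
    {
        double dx = ax - x, dy = ay - y;
        double d2 = dx * dx + dy * dy; // squared distance: no ^, no sqrt needed yet
        double d = Math.Sqrt( d2 );    // one sqrt, only for the direction
        double a = gm / d2;            // inverse-square magnitude, not gm / (d ^ 2)
        accX = a * dx / d;
        accY = a * dy / d;
    }

    static void Main()
    {
        Accel( 100.0, 0.0, 0.0, 0.0, 1.0e6, out double accX, out double accY );
        Console.WriteLine( "a = ({0}, {1})", accX, accY ); // points back toward the origin
    }
}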
  10. Well, if it's XAML, then you're using C# and WPF. And if that's true, you're in luck, because I deal in both routinely. If it turns out you're actually using WinForms, though, you have my condolences. Judging from your post it sounds like your actual project is in WinForms (the ListBox control in WPF doesn't have any sort of Refresh or Update method, and even if it did, WPF is designed in a way that you'd never use it) and through your Googling you've accidentally stumbled into answers for WPF, which is an entirely different beast (a much more powerful one, in fact). Anyway, if you need help, you're gonna need to be way more specific. Without details the best I can do is give generic advice or try to guess at what your actual problems are, and neither of us has that kind of time.
  11. https://en.wikipedia.org/wiki/Tests_of_general_relativity If you're looking for something more practical, the accuracy of GPS depends pretty critically on accurate predictions from GR. So, you know, that wouldn't work terribly well if GR weren't a good model of reality.
  12. Which is to say there are no naked singularities. At least so long as you subscribe to the cosmic censorship hypothesis, anyway.
  13. In fact, it is always cheaper to do it this way for a 180° flip. Basically you trade time for ΔV (since it takes longer to perform the whole maneuver).
  14. I'm not sure I understand the importance of that distinction.
  15. That is, unless we replace ourselves with robots.
  16. Yes, I'm on Windows. I primarily develop in Visual Studio, though I'll use LINQPad if I don't want to go through all the trouble of actually creating a new solution. I also ran the whole test suite again, but this time I targeted x64. Previously I was running with the default, Any CPU, which would run as x64 on a 64-bit machine and x86 on a 32-bit machine; at least, it would if I had unchecked the "Prefer 32-bit" option, which is on by default. So essentially it was running as a 32-bit process before. If I specifically target 64-bit, I get a different JIT compiler and different results:

Initializing...
8-bit table built.
16-bit table built.
24-bit table built.
Testing...
Tests passed.

Profiling array access...
Result = 0x563765EE46B64B32
Base time: 0.608915 ns/call

Profiling Reverse8...
Result = 0x4CD26D6277A6EC6A
Reverse8: 8.348577 ns/call

Profiling Reverse16...
Result = 0x4CD26D6277A6EC6A
Reverse16: 5.116408 ns/call

Profiling Reverse24...
Result = 0x4CD26D6277A6EC6A
Reverse24: 33.064754 ns/call

Profiling UnsafeReverse8...
Result = 0x4CD26D6277A6EC6A
UnsafeReverse8: 6.70843 ns/call

Profiling UnsafeReverse16...
Result = 0x4CD26D6277A6EC6A
UnsafeReverse16: 5.058065 ns/call

Profiling UnsafeReverse24...
Result = 0x4CD26D6277A6EC6A
UnsafeReverse24: 29.71604 ns/call

Notably the baseline array access and XORs are much faster (presumably because the XORs are operating on 64-bit integers) and the unsafe code actually edges out the safe code for performance.
  17. Pretty interesting seeing the performance differences. Not sure why exactly that is. I could imagine that it may be related to some of the overhead in the array accesses in C#, since it may be doing bounds checking on the lookup tables. I could try rolling unsafe versions of the lookup functions, since that would allow me to essentially force it to skip the bounds checks. But at least the cache misses still have an obvious and significant impact on performance, and that's the only thing I actually set out to demonstrate in the first place.

EDIT: Well, that's not it:

Initializing...
8-bit table built.
16-bit table built.
24-bit table built.
Testing...
Tests passed.

Profiling array access...
Result = 0x35B74AB9323BAF0A
Base time: 2.114285 ns/call

Profiling Reverse8...
Result = 0x50F5DC4C9D52EDAC
Reverse8: 7.892897 ns/call

Profiling Reverse16...
Result = 0x50F5DC4C9D52EDAC
Reverse16: 4.123271 ns/call

Profiling Reverse24...
Result = 0x50F5DC4C9D52EDAC
Reverse24: 32.213506 ns/call

Profiling UnsafeReverse8...
Result = 0x50F5DC4C9D52EDAC
UnsafeReverse8: 8.589045 ns/call

Profiling UnsafeReverse16...
Result = 0x50F5DC4C9D52EDAC
UnsafeReverse16: 5.796279 ns/call

Profiling UnsafeReverse24...
Result = 0x50F5DC4C9D52EDAC
UnsafeReverse24: 36.317731 ns/call

It closed the gap somewhat, but not nearly to the extent necessary to say that's where the majority of the performance difference is coming from.
  18. That's a simple enough change; here's the modified code and results:

using System;
using System.Diagnostics;

namespace ReverseProfile
{
    static class Program
    {
        const int SampleCount = 100000000;

        static void Main()
        {
            var rand = new Random();
            var samples = new ulong[SampleCount];

            Console.WriteLine( "Testing..." );
            for( int i = 0; i < SampleCount; ++i )
            {
                samples[i] = rand.Next64();
                if( Reverse8( samples[i] ) != Reverse16( samples[i] ) )
                {
                    Console.WriteLine( "Failed: Reverse8 != Reverse16 @ 0x{0:X16}", samples[i] );
                    return;
                }
                if( Reverse16( samples[i] ) != Reverse24( samples[i] ) )
                {
                    Console.WriteLine( "Failed: Reverse16 != Reverse24 @ 0x{0:X16}", samples[i] );
                    return;
                }
            }
            Console.WriteLine( "Tests passed.\n" );

            Console.WriteLine( "Profiling array access..." );
            ulong r = 0;
            var sw = Stopwatch.StartNew();
            for( int i = 0; i < SampleCount; ++i )
            {
                r ^= samples[i];
            }
            sw.Stop();
            var baseTime = sw.Elapsed;
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Base time: {0}/call\n", FormatTime( baseTime ) );

            Console.WriteLine( "Profiling Reverse8..." );
            r = 0;
            sw.Restart();
            for( int i = 0; i < SampleCount; ++i )
            {
                r ^= Reverse8( samples[i] );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse8: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );

            Console.WriteLine( "Profiling Reverse16..." );
            r = 0;
            sw.Restart();
            for( int i = 0; i < SampleCount; ++i )
            {
                r ^= Reverse16( samples[i] );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse16: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );

            Console.WriteLine( "Profiling Reverse24..." );
            r = 0;
            sw.Restart();
            for( int i = 0; i < SampleCount; ++i )
            {
                r ^= Reverse24( samples[i] );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse24: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );
        }

        static Program()
        {
            Console.WriteLine( "Initializing..." );
            byte[] reverse4 = { 0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
                                0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF };
            smReverse8 = new byte[1 << 8];
            for( int i = 0; i < smReverse8.Length; ++i )
            {
                smReverse8[i] = (byte) ( ( reverse4[i & 0xF] << 4 ) | reverse4[i >> 4] );
            }
            Console.WriteLine( "8-bit table built." );

            smReverse16 = new ushort[1 << 16];
            for( int i = 0; i < smReverse16.Length; ++i )
            {
                smReverse16[i] = (ushort) ( ( smReverse8[i & 0xFF] << 8 ) | smReverse8[i >> 8] );
            }
            Console.WriteLine( "16-bit table built." );

            smReverse24 = new uint[1 << 24];
            for( int i = 0; i < smReverse24.Length; ++i )
            {
                smReverse24[i] = (uint) ( ( smReverse8[i & 0xFF] << 16 )
                                        | ( smReverse8[( i >> 8 ) & 0xFF] << 8 )
                                        | smReverse8[i >> 16] );
            }
            Console.WriteLine( "24-bit table built." );
        }

        private static ulong Reverse8( ulong n )
        {
            return ( (ulong) smReverse8[n & 0xFF] << 56
                   | (ulong) smReverse8[( n >> 8 ) & 0xFF] << 48
                   | (ulong) smReverse8[( n >> 16 ) & 0xFF] << 40
                   | (ulong) smReverse8[( n >> 24 ) & 0xFF] << 32
                   | (ulong) smReverse8[( n >> 32 ) & 0xFF] << 24
                   | (ulong) smReverse8[( n >> 40 ) & 0xFF] << 16
                   | (ulong) smReverse8[( n >> 48 ) & 0xFF] << 8
                   | (ulong) smReverse8[n >> 56] );
        }

        private static ulong Reverse16( ulong n )
        {
            return ( (ulong) smReverse16[n & 0xFFFF] << 48
                   | (ulong) smReverse16[( n >> 16 ) & 0xFFFF] << 32
                   | (ulong) smReverse16[( n >> 32 ) & 0xFFFF] << 16
                   | (ulong) smReverse16[( n >> 48 ) & 0xFFFF] );
        }

        private static ulong Reverse24( ulong n )
        {
            return ( (ulong) smReverse24[( n >> 40 ) & 0xFFFFFF]
                   | (ulong) smReverse24[( n >> 16 ) & 0xFFFFFF] << 24
                   | (ulong) smReverse24[n & 0xFFFF] << 40 );
        }

        private static ulong Next64( this Random rand )
        {
            return (ulong) rand.Next() << 32 | (ulong) rand.Next();
        }

        private static string FormatTime( TimeSpan time )
        {
            double ms = time.TotalMilliseconds / SampleCount;
            if( ms > 10000 ) return string.Format( "{0} s", ms / 1000 );
            if( ms > 10 ) return string.Format( "{0} ms", ms );
            if( ms > 0.01 ) return string.Format( "{0} µs", ms * 1000 );
            return string.Format( "{0} ns", ms * 1000000 );
        }

        private static readonly byte[] smReverse8;
        private static readonly ushort[] smReverse16;
        private static readonly uint[] smReverse24;
    }
}

Initializing...
8-bit table built.
16-bit table built.
24-bit table built.
Testing...
Tests passed.

Profiling array access...
Result = 0x6CFAB40F37C448F1
Base time: 1.97389 ns/call

Profiling Reverse8...
Result = 0x8F1223ECF02D5F36
Reverse8: 10.088138 ns/call

Profiling Reverse16...
Result = 0x8F1223ECF02D5F36
Reverse16: 5.013811 ns/call

Profiling Reverse24...
Result = 0x8F1223ECF02D5F36
Reverse24: 31.326612 ns/call
  19. Sure, use a small lookup table, like one that can reverse the bits of a single 8-bit byte. Then the problem reduces to just reversing the bytes in a structure and passing each of them through the lookup table (the code sample I provided had an example of this for 64-bit integers). If you try to make a lookup table for reversing every possible 48-bit value, the whole thing is going to end up being even slower because of all the memory shenanigans it'll cause. Even if you had that kind of memory available, you're basically just going to take a dump all over your cache locality, and that can be a very big deal for speed.

In fact, I was curious exactly to what extent that would impact performance, so I wrote a quick little app in C# to test this out. I have 3 methods for reversing the bits in a 64-bit integer. Each one reverses it either 8 bits, 16 bits, or 24 bits at a time. I first profile generating a random 64-bit integer and xor-ing it into a result variable (to make sure nothing optimizes out the function calls, I always make sure to use the result). I then profile each of the 3 methods and subtract out the overhead from the first run.

using System;
using System.Diagnostics;

namespace ReverseProfile
{
    static class Program
    {
        const int Samples = 100000000;

        static void Main()
        {
            var rand = new Random( 0 );
            Console.WriteLine( "Testing..." );
            for( int i = 0; i < Samples; ++i )
            {
                ulong x = rand.Next64();
                if( Reverse8( x ) != Reverse16( x ) )
                {
                    Console.WriteLine( "Failed: Reverse8 != Reverse16 @ 0x{0:X16}", x );
                    return;
                }
                if( Reverse16( x ) != Reverse24( x ) )
                {
                    Console.WriteLine( "Failed: Reverse16 != Reverse24 @ 0x{0:X16}", x );
                    return;
                }
            }
            Console.WriteLine( "Tests passed.\n" );

            Console.WriteLine( "Profiling Random..." );
            ulong r = 0;
            rand = new Random( 0 );
            var sw = Stopwatch.StartNew();
            for( int i = 0; i < Samples; ++i )
            {
                r ^= rand.Next64();
            }
            sw.Stop();
            var baseTime = sw.Elapsed;
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Base time: {0}/call\n", FormatTime( baseTime ) );

            Console.WriteLine( "Profiling Reverse8..." );
            rand = new Random( 0 );
            r = 0;
            sw.Restart();
            for( int i = 0; i < Samples; ++i )
            {
                r ^= Reverse8( rand.Next64() );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse8: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );

            Console.WriteLine( "Profiling Reverse16..." );
            rand = new Random( 0 );
            r = 0;
            sw.Restart();
            for( int i = 0; i < Samples; ++i )
            {
                r ^= Reverse16( rand.Next64() );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse16: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );

            Console.WriteLine( "Profiling Reverse24..." );
            rand = new Random( 0 );
            r = 0;
            sw.Restart();
            for( int i = 0; i < Samples; ++i )
            {
                r ^= Reverse24( rand.Next64() );
            }
            sw.Stop();
            Console.WriteLine( "Result = 0x{0:X16}", r );
            Console.WriteLine( "Reverse24: {0}/call\n", FormatTime( sw.Elapsed - baseTime ) );
        }

        static Program()
        {
            Console.WriteLine( "Initializing..." );
            byte[] reverse4 = { 0x0, 0x8, 0x4, 0xC, 0x2, 0xA, 0x6, 0xE,
                                0x1, 0x9, 0x5, 0xD, 0x3, 0xB, 0x7, 0xF };
            smReverse8 = new byte[1 << 8];
            for( int i = 0; i < smReverse8.Length; ++i )
            {
                smReverse8[i] = (byte) ( ( reverse4[i & 0xF] << 4 ) | reverse4[i >> 4] );
            }
            Console.WriteLine( "8-bit table built." );

            smReverse16 = new ushort[1 << 16];
            for( int i = 0; i < smReverse16.Length; ++i )
            {
                smReverse16[i] = (ushort) ( ( smReverse8[i & 0xFF] << 8 ) | smReverse8[i >> 8] );
            }
            Console.WriteLine( "16-bit table built." );

            smReverse24 = new uint[1 << 24];
            for( int i = 0; i < smReverse24.Length; ++i )
            {
                smReverse24[i] = (uint) ( ( smReverse8[i & 0xFF] << 16 )
                                        | ( smReverse8[( i >> 8 ) & 0xFF] << 8 )
                                        | smReverse8[i >> 16] );
            }
            Console.WriteLine( "24-bit table built." );
        }

        private static ulong Reverse8( ulong n )
        {
            return ( (ulong) smReverse8[n & 0xFF] << 56
                   | (ulong) smReverse8[( n >> 8 ) & 0xFF] << 48
                   | (ulong) smReverse8[( n >> 16 ) & 0xFF] << 40
                   | (ulong) smReverse8[( n >> 24 ) & 0xFF] << 32
                   | (ulong) smReverse8[( n >> 32 ) & 0xFF] << 24
                   | (ulong) smReverse8[( n >> 40 ) & 0xFF] << 16
                   | (ulong) smReverse8[( n >> 48 ) & 0xFF] << 8
                   | (ulong) smReverse8[n >> 56] );
        }

        private static ulong Reverse16( ulong n )
        {
            return ( (ulong) smReverse16[n & 0xFFFF] << 48
                   | (ulong) smReverse16[( n >> 16 ) & 0xFFFF] << 32
                   | (ulong) smReverse16[( n >> 32 ) & 0xFFFF] << 16
                   | (ulong) smReverse16[( n >> 48 ) & 0xFFFF] );
        }

        private static ulong Reverse24( ulong n )
        {
            return ( (ulong) smReverse24[( n >> 40 ) & 0xFFFFFF]
                   | (ulong) smReverse24[( n >> 16 ) & 0xFFFFFF] << 24
                   | (ulong) smReverse24[n & 0xFFFF] << 40 );
        }

        private static ulong Next64( this Random rand )
        {
            return (ulong) rand.Next() << 32 | (ulong) rand.Next();
        }

        private static string FormatTime( TimeSpan time )
        {
            double ms = time.TotalMilliseconds / Samples;
            if( ms > 10000 ) return string.Format( "{0} s", ms / 1000 );
            if( ms > 10 ) return string.Format( "{0} ms", ms );
            if( ms > 0.01 ) return string.Format( "{0} µs", ms * 1000 );
            return string.Format( "{0} ns", ms * 1000000 );
        }

        private static readonly byte[] smReverse8;
        private static readonly ushort[] smReverse16;
        private static readonly uint[] smReverse24;
    }
}

Now, looking at the methods, one would expect Reverse16 to be about twice as fast as Reverse8 and Reverse24 to be about 3 times as fast. If computation time were purely a function of the number of instructions that pass through the CPU, anyway. Here's what a typical run looks like:

Initializing...
8-bit table built.
16-bit table built.
24-bit table built.
Testing...
Tests passed.

Profiling Random...
Result = 0x4D522B76016D765B
Base time: 20.373974 ns/call

Profiling Reverse8...
Result = 0xDA6EB6806ED44AB2
Reverse8: 8.063506 ns/call

Profiling Reverse16...
Result = 0xDA6EB6806ED44AB2
Reverse16: 4.015819 ns/call

Profiling Reverse24...
Result = 0xDA6EB6806ED44AB2
Reverse24: 49.221145 ns/call

What do you know, Reverse16 is about twice as fast as Reverse8; that's reassuring. But, oh, Reverse24 is over 6 times slower than Reverse8. So our prediction is off by a factor of 18. What gives? Locality of reference. The lookup table for 24 bits is too big to fit in the cache, so only portions of it are loaded at any given time. If we were iterating through every 64-bit value (which we're not going to do because none of us has that amount of time to wait), you'd probably see completely different timing behavior. Instead, I'm throwing random values at it, which means lots of cache misses, and the CPU ends up spending a lot of time fetching stuff from main memory (or slower cache levels). Of course, I don't know what the optimal lookup table size is. It's probably hardware dependent at the very least, but I'd be willing to bet that the 16-bit lookup table is a good starting point. As usual, standard advice applies: profile before optimizing, and make it work, make it right, make it fast, in that order.
  20. The code sample there is C#, which is related to the C family of languages in name only. C++ has `this` as a keyword, but C does not. Within a class method, `this` is a pointer to the class instance that the method is currently executing on. Most of the time you don't need to use it since it's implicitly added for you. One exception is the situation where you've named a method parameter the same as a member variable and you need to disambiguate them. Generally I prefer to name member variables with a prefix indicating that they are, in fact, member variables so that this isn't a problem. There are other, more useful situations where the `this` pointer actually does come up. For example, methods that return a reference to the current object:

struct Foo
{
    Foo& SomeMethod( int x )
    {
        // do something with 'x'
        return *this;
    }
};

In a sense, all member methods really have an extra implicit parameter, and that extra implicit parameter is called `this` and is a pointer to the instance to operate on. So you could imagine the above struct being implemented like this (not legal C++, since `this` is a reserved word, but it illustrates the idea):

struct Foo
{
    static Foo& SomeMethod( Foo* this, int x )
    {
        // do something with 'x'
        return *this;
    }
};

Of course this might prevent the compiler from being a little more clever about optimizing method calls (who knows, C++ compilers are absurdly complicated), but semantically it's similar to the previous situation. In C# the semantics are a bit different since C# doesn't expose pointers (unless you're working with unsafe code, which you should rarely be doing). In C# the `this` keyword is a variable containing a reference to the instance that the called method is currently operating on. Like in the C++ case it can also be thought of as an implicit method parameter (and, in fact, the IL code emitted by the compiler treats it as such; in non-static methods the `ldarg.0` instruction is always `this`).
  21. What's amusing is that, in the U.S., that nuclear waste never goes anywhere. Yeah, we don't actually do anything with it besides put it all in casks and leave them at the bottom of pools. It turns out nuclear waste doesn't actually take up that much space, thanks to its density.
  22. However, if we were talking a large solar array rather than a fusion reactor, there would be certain advantages to it. Firstly it'd be in direct sunlight for 95% of its orbit (compared with a ground-based collector which is only in direct sunlight approximately 50% of the time). Secondly, it'd avoid atmospheric attenuation of the sunlight itself, allowing more energy to be extracted.
  23. The problem is you're not being careful with your units: you're giving the body radius in kilometers, which is why you have "random" factors of 1000 being divided or multiplied in. The radius needs to be specified in meters, because the value of the gravitational constant you're using has units of m³ / ( kg s² ), not km³ / ( kg s² ). Also, don't mix strings in with numbers; that's really messy. Here's a working version of the first script:

$radius = 6378100;    // Earth radius in meters
$mass = 5.97219e+24;  // Earth mass in kg
$G = 6.67408e-11;     // gravitational constant, m³ / ( kg s² )
$volume = 4 * pi() * $radius * $radius * $radius / 3;
$density = $mass / $volume;  // density
$ev = sqrt( 2 * $G * $mass / $radius );  // escape velocity, m/s

I've also replaced the 4.19 magic number with what it actually should be: 4π/3.
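As a sanity check, running that should give an escape velocity of roughly 11.2 km/s, the familiar textbook value for Earth, which is a decent sign the units are finally consistent.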
  24. The amount of data that Google crunches would surely boggle the mind... provided anyone actually knew how much that really is. This XKCD What-If makes an interesting estimate of Google's total storage capacity and puts it around 15 exabytes. Which is a lot. That's about the same order of magnitude as the total information content of the human genome... of every living human combined. I'm not a biologist, so the extent of my knowledge of encoding the human genome comes down to "2 bits per base pair" and I'm not sure whether to count both copies of all chromosomes. But if you do, I think it works out to about 1.5 GB of information per human, so counting all ~7.3 billion of us, it hits around 11 exabytes. Of course this is just a comparison of the static storage capacity of Google, which doesn't really tell us the amount of transient data they process. I don't really feel like spending the time trying to research a good estimate for that, so I'll do the lazy thing and just point out that estimates for global internet traffic are on the order of 70 exabytes/month. The fraction of that which passes through Google's servers is anyone's guess, but it should at least give a bit of a reference for just how much data our civilization routinely tosses around.
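Spelling out that genome arithmetic (two copies of ~3.1 billion base pairs at 2 bits each):

\[
7.3\times10^{9}\ \text{people} \times \underbrace{\frac{2 \times 3.1\times10^{9}\ \text{bp} \times 2\ \text{bits/bp}}{8\ \text{bits/byte}}}_{\approx\,1.5\ \text{GB per person}} \approx 1.1\times10^{10}\ \text{GB} \approx 11\ \text{EB}
\]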