Everything posted by Nuke
-
being scratch resistant is completely different from being scratch proof (which is as far as i know impossible). abuse the device enough and it will scratch.
-
i seem to recall reading about another capacitor based reactionless engine (operating off of the Woodward effect), where the dielectric within a capacitor would undergo mass fluctuations, and by syncing this up with a mechanical oscillation you could produce thrust. that big hunk of plastic could be operating as a dielectric in a capacitor (especially with all that copper around it).
-
i dont think they make rom anymore, certainly not the write-once, if-there-are-bugs-in-your-firmware-then-deal-with-it kind. or those cool eproms with the little window in the chip that you had to erase with uv light. modern eeproms are pretty good, and i sometimes wonder why they arent used instead of flash. i know from working with arduino that you can overwrite the eeprom about 10x as often as the flash before it degrades. its probably a density thing. most of the modern eeproms ive used have a serial interface, and so would be somewhat slower than a typical storage device. older mobos used to have those removable eeprom/flash chips with a parallel interface, but a fairly recent mobo had a socketed 8-pin dip spi eeprom, which i thought looked out of place on a modern board. then again you really dont need to run much code off of it, just enough to start a bootloader. once the computer is up and running it relies very little on the bios. in the old days the computer depended on a lot of bios code, but i think those dependencies have been solved in other ways.
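a minimal arduino-style sketch of the kind of eeprom use i mean, assuming the stock EEPROM library on an avr board; the Settings struct and the address are made up for illustration:

```cpp
// wear-friendly use of the avr's internal eeprom via the arduino EEPROM library.
// EEPROM.put() only rewrites bytes that actually changed, which helps stretch
// the roughly 100k erase/write cycles each eeprom cell is rated for (vs ~10k
// for the flash, hence the "about 10x" figure above).
#include <EEPROM.h>

struct Settings {          // illustrative struct, not from any real project
  uint32_t bootCount;
  uint8_t  brightness;
};

void setup() {
  Serial.begin(9600);

  Settings s;
  EEPROM.get(0, s);        // read the struct starting at eeprom address 0
  s.bootCount++;
  EEPROM.put(0, s);        // write it back, skipping unchanged bytes

  Serial.print("boots: ");
  Serial.println(s.bootCount);
}

void loop() {}
```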
-
where i live its a nice and toasty 39f. thanks for stealing all our cold.
-
there is not enough room in this one to be a psychopath. i think id wreck the big ship, then use the little ship to get nachos.
-
being the psycho that i am i would have to pick the second option, except id do it whilst eating popcorn. then when the cops come i can say: 'not my fault'.
-
he did in his 2006 google talk. he wanted to do a rather ambitious jump up to a 3 meter reactor, which wasnt that much bigger than what they had already been running up to that point; he saw taking an intermediate step as pointless. but i guess after he died emc2 decided to take a more conservative approach and try to prove high beta was the way to go (which they have done). their next step is a 3 year commercial research program which will cost about 30 million. im not sure where they are getting their funding from, but this program will pretty much determine if polywell is viable or not (this is about equivalent to the iter reactor in that it is the last step before a breakeven demo system).
-
found a new polywell talk from last month. http://research.microsoft.com/apps/video/default.aspx?id=238715&r=1 sounds like they are looking for money again.
-
i had a laptop many years ago that had pre-installed software to do this. its one of the first things i uninstalled.
-
the reason fusion research has taken so long is the tokamak. those things are beasts: huge, and hungry for time and money, of which they consume vast amounts. that is why fusion is always 10 years away, because the evil tokamak always wants more. you always need a bigger one. lasers will never reach breakeven, they are just too inefficient. i hear you can do the same job with particle accelerators more efficiently, but thats another huge expensive machine. its really gonna be between lockheed, dpf, and polywell. these machines are small and are subject to fast iteration of experiments. if your computer models reveal a flaw in the design, it can be corrected quickly without costing a fortune. you can experiment with geometry cheaply (images of lockheeds machines show coils on rails, so they can be swapped out and adjusted pretty much at will). you also dont need any exotic custom built research facilities, just your typical lab space. then you start looking at the science behind them and realize they are on to something.
-
just send crazy astronauts.
-
New study: Cheapest forms of energy in the future
Nuke replied to AngelLestat's topic in Science & Spaceflight
tesla was pretty cool but not that much, he didnt even believe in electrons. there are a number of ways to transmit electricity: electromagnetic coupling, rf coupling, capacitive coupling, directed microwave/laser, etc. wireless power transfer is well understood (thanks for the most part to tesla's research). usually the efficiencies are very low over anything but short distances. tesla's successes mostly come from the fact that he was using insanely high voltages, insanely high currents, or both.
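as a rough sketch of why distance matters so much, assuming a resonant inductive link (k and the Q's are the usual coupling coefficient and coil quality factors, not numbers from any of tesla's setups):

$$\eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2}, \qquad k \propto \left(\frac{r}{d}\right)^3 \ \text{for coil separation } d \gg \text{coil radius } r$$

for weak coupling the best-case efficiency goes roughly as k squared, i.e. it collapses as something like the sixth power of distance, which is why these links only really work over short ranges.
-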
i have a right to arm bears. *insert appropriate jpeg here*
-
before you know it we will be writing for virtual machines running on virtual machines.
-
thats kind of the way i see things. if you are going to go through the trouble of programming a virtual machine, why not program a real machine instead? im not sure portability is worth the performance penalty. while programming my own game engine (in c++) ive probably moved development back and forth between linux and windows about 4 times, and i think the only thing i had to change in my entire codebase was my file system paths from windows style to linux style, and eventually all the os specific stuff was put into a single header file, which could be swapped with a preprocessor directive. this is because early on i made a decision to use only cross platform libraries, like opengl and sdl. you dont have to give up performance to be portable, you just have to cross compile.

there are a few places i use virtual machines though. i love lua. it has one of the faster bytecode interpreters in the interpreted world. its also a simple language that doesnt try to force any particular paradigm on you (you can go functional, or oop if you want). normally you compile the interpreter into a c/c++ application, but i find that it is fast enough to run stand alone (and its very easy to build gui apps with it). i also wrote my own lightweight bytecode interpreter, which runs on an avr, so that i could run code from its internal eeprom or an external device like an sd card (since you can only run machine code from the built in flash on that architecture).

one thing i do find disturbing is how many coders depend on big black code blobs, or simply #include a large fraction of their application. in my game i did not #include Physx.h, i wrote my own set of physics classes, my own collision detection and response code. collision detection is fast becoming the biggest performance hog in gaming, because of all the n^2 complexity algos you need to use. why would you want to blob that in? its better to write it yourself so you can shave every scrap of slow off of it.
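for example, a lot of that n^2 goes away with a simple broad phase pass before the exact tests. here is a minimal sweep-and-prune sketch in c++ (the Box type and the numbers are just illustrative, not code from my engine):

```cpp
// broad-phase collision culling: sort boxes by min-x and sweep, so the exact
// overlap test only runs on pairs whose x-intervals actually overlap instead
// of on every one of the n^2 combinations.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Box { float minx, maxx, miny, maxy; };

static bool overlap(const Box& a, const Box& b) {
    return a.minx <= b.maxx && b.minx <= a.maxx &&
           a.miny <= b.maxy && b.miny <= a.maxy;
}

int main() {
    std::vector<Box> boxes = {
        {0.0f, 1.0f, 0.0f, 1.0f}, {0.5f, 1.5f, 0.5f, 1.5f}, {5.0f, 6.0f, 5.0f, 6.0f}
    };

    // sort indices by minx
    std::vector<int> order(boxes.size());
    for (int i = 0; i < (int)order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return boxes[a].minx < boxes[b].minx; });

    // sweep: once the next box starts past this box's maxx, nothing further
    // along the sorted list can overlap it either, so break early.
    for (size_t i = 0; i < order.size(); ++i) {
        for (size_t j = i + 1; j < order.size(); ++j) {
            if (boxes[order[j]].minx > boxes[order[i]].maxx) break;
            if (overlap(boxes[order[i]], boxes[order[j]]))
                std::printf("possible collision: %d vs %d\n", order[i], order[j]);
        }
    }
    return 0;
}
```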
-
"man these people were idiots"
-
we had touchscreens prior to sttng. the hp-150 computer had one, and it was made in '83. sttng didnt come out till '87. prior treks had traditional control panels. pretty sure automatic doors were around before the 60s. portable military radios, the precursor to cell phones, were around too. i think roddenberry just kept up to date on technology: he looked at devices we already had, made them smaller and cooler looking, and put them everywhere.
-
downloading xfce and kde versions, dont know when ill get around to playing with them. need to see if anything on my linux box needs to be backed up first (i doubt it).
-
i might try it out next time im in a linux mood. i hear it has more market share than ubuntu now.
-
i mean only to speculate on what the differences in architecture performance would be if the two companies were on equal footing in terms of process size. amd has on more than one occasion outperformed intel chips despite intel having the process advantage. yes, there are things you can do with the process other than make it smaller to get better performance: new materials, or perhaps 3d chip design (last i checked intel is already doing 3d transistors, and stacked packages are common in system on chip devices). one of the reasons computers have progressed so well is the scalability of semiconductor fabrication. i have a feeling quantum computing wont follow as progressive a curve as moore's law has allowed. there is always going to be a faster computer.
-
eventually we just wont be able to go to a lower process, and the battle will revolve around architecture. id be curious what amd could do at the 22nm process point (or the newer 14nm process).
-
those are just the joys of interrupt based programming. something triggers an interrupt and the computer has to do a task switch to run the interrupt service routine, then task switch again to go back to what it was doing. this is why old computers froze when you opened the cd drive: the isr had to wait for mechanical components to move to do its job, blocking execution. fortunately we have removed most of the mechanical parts from the computer, almost everything is solid state now, except for the cooling fans (and low power devices get rid of those too). you also have additional cores to handle the isrs as they are needed. but you still need to handle interrupts, which most of the hardware requires. phones cram in sensors and communications devices (most now have at least 3 different radios), and on top of that there is all the chip to chip communication across the mobo and the usb traffic. thats a lot of buffering to deal with. i dont think there is anything the programmers can do to speed things up (but to be fair ive never seen software as bad as what can be found in an "app store").
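a minimal avr-gcc sketch of the usual way to keep an isr from blocking like that: the handler just sets a flag and the slow work happens in the main loop. the timer and register choices are illustrative (atmega328-style), not tied to any particular device:

```cpp
// keep the interrupt service routine short: flag the event, return, and do
// the heavy lifting in the main loop where it can itself be interrupted.
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t tick_pending = 0;      // shared with the isr, hence volatile

ISR(TIMER0_OVF_vect) {
    // runs in interrupt context: do the bare minimum and get out, so the
    // task switch back to the main program is cheap.
    tick_pending = 1;
}

int main(void) {
    TCCR0B = (1 << CS02) | (1 << CS00); // timer0 running at clk/1024
    TIMSK0 = (1 << TOIE0);              // enable timer0 overflow interrupt
    sei();                              // global interrupt enable

    for (;;) {
        if (tick_pending) {
            tick_pending = 0;
            // the slow work lives here, outside the isr, so it blocks
            // nothing but itself while other interrupts keep getting served.
        }
    }
}
```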
-
i was still doing the boot disk thing on my pentium rig. many dos games could run in windows, but if you wanted performance you absolutely needed to run your games in dos. some dos games didnt work in windows at all, and getting them to work in dos was also a chore. they needed to load their code within the 640k limit before expanded memory could be initialized, so you had to cull the tsrs and drivers you weren't using, everything you didnt need. after you loaded the sound and joystick drivers there was not a scrap of memory left for anything but the game. it was tedious.
-
i was poor so i used obsolete tech junk throughout the early 90s. i had an old apple 2 and a ti-99 console that i got from school for helping clean out their closets. the ti-99 had a basic interpreter in rom and so it was probably one of the first computers i programmed (but it had no storage so i had to type in the program each time i turned the machine on). it had a cartridge slot but the only cart it came with was parsec. the apple 2 only had a few disks, the usual games and typing programs you would find in the school library. most were self booting applications but i did have a prodos disk to mess around with. i couldn't do any programming on it unfortunately. also tried to talk them into letting me have an old mac classic, but they wouldn't let one of those go.

around that time i also learned ibm dos inside and out after spending a summer with the grandparents (grandpa was an electrician who fixed monitors, so there was always a computer or 5 in his workshop). when grandpa died grandma brought me a 386 that he was working on at the time. unfortunately it didnt work. i think it just needed an os, but before i could find one at the local computer shop, my older brother started messing with the hardware and messed it up more. finally i talked my grandma into getting me a 120 mhz pentium machine around '97, and around that time i took a programming class in high school, so thats where my geekdom officially started.