Arugela
Everything posted by Arugela
-
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
I said alternative because you can accomplish the same thing with massive RAM and charts of the total end results of all calculations combined across all potentials. The limit is RAM capacity if you can't refresh fast enough, and that can also be worked around. I'm not derailing anything: this is how you get massive amounts of RAM now. Higher bit rate is the mechanical means to accomplish an alternative.

The point is that quantum computing is logically (in terms of the data's end result) the equivalent of a 3D chart of results representing complex data outputs (i.e. real-life data results). This is the 2D version. If you know the range of things involved and the potential end results, you can combine and display all combinations and get the same results. You could even test quickly for things outside your parameters. Quantum computing allows massive amounts of data to be worked on simultaneously. So does this. It could even be RAM for quantum computer processors. It has potential for greater data flushes, etc., to go with more powerful future PCs using much greater data flows. And I'm sure there are more efficient ways to do it.

Sorry, but this is how I started the thread. You guys really hate new concepts. You would think people would like the idea of massive VRAM and RAM in your computer. And yes, this can get up to speeds for modern computers. The downside is probably power or longevity. Maybe heat. Or price. But with how much we can do to compensate for the downsides, it can probably be made much faster than current technology. And you can apply any trick from monitors, RAM, or HDDs to it, at minimum. You could probably even use it as a CPU if you wanted. Maybe a 3D layered CPU. Size is less important when you achieve greater bit rate and throughput.

If the downside is price, it would be massively useful in areas where that is less of a concern. You could speed up very large workloads exponentially. It would allow the bus systems on motherboards to be maximized at all times and systems to maintain maximum throughput. We did run computers like this once; this would just be going back to some old tricks. Not to mention the ease of compression and mass storage. It would make all server loads exponentially cheaper and lighter. You wouldn't need anywhere near the current permanent storage. Or you make the current storage exponentially greater. https://portal.research.lu.se/ws/files/2711949/2370907.pdf

The other advantage is that you could start applying techniques like this to something as simple as your phone. You could get expensive diagnostic results from your phone and send them to your doctor as a cheap or free initial scan (probably possible now), with better equipment used later in a lab if needed. Anything using any aspect of this tech or similar could be made available at home.

You could also store the state for reading with the power off, in the manner of an HDD/DVD/floppy drive. But it's probably easier to send the data as image data, since that's smaller and faster. It could also be used as dockable data, or as a way to deal with power-offs or low-power modes or other things. Very useful if used as cache. To speed up processing you could use tricks to manipulate light directly at some point and get it into the data form that is needed. Or any other aspect of the device. If you had fast write, you could even change the light value as a way of computation. Many things would be possible if it has multiple ways to keep track of read/write and so on. Bend or change the light and get to the part of the data you need.
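To make the "chart of end results" idea concrete, here's a minimal Python sketch (all names and ranges are made up): precompute every output of an expensive function over a known, bounded input range once, then replace live calculation with indexed retrieval. The storage cost is the product of the input ranges, which is exactly the trade being described.

```python
import math

# Stand-in for a costly live calculation with a known, bounded input range.
def expensive_result(a: int, b: int) -> float:
    return math.sin(math.radians(a)) * math.cos(math.radians(b))

A_RANGE, B_RANGE = range(360), range(360)

# Precompute the full "chart" of end results once, up front. Storage grows
# with the product of the input ranges -- the trade described above: spend
# capacity (RAM, or the proposed display) to skip live calculation later.
chart = [[expensive_result(a, b) for b in B_RANGE] for a in A_RANGE]

def lookup(a: int, b: int) -> float:
    return chart[a][b]  # retrieval replaces computation

assert lookup(45, 90) == expensive_result(45, 90)
```
-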
Multiplayer in KSP 1.8
Arugela replied to popos1's topic in KSP1 Suggestions & Development Discussion
But the problem of the world rotating around multiple things on each machine is not really an issue. That was one of the main drawbacks people couldn't figure out, from what I've read. The problem is moot: multiplayer can be done logically, like I said. So it's doable. It's more a matter of lag at that point, like all multiplayer. And the physics doesn't even need to be live. Your computer could run the physics like it does for other objects. It should be more a matter of how much data is sent vs. how much has to be processed live, which is doable no matter what. Only the n-body physics is a potential problem then, with the exponential physics cost from parts count in the way of multiplayer. Remove that, put in n-body physics and on-rails planets, and you can do multiplayer even if you can't now. Which you could anyway, if you don't care about lag when getting close to other players. Then it's probably a matter of game content for multiple people being added. Which shouldn't be that bad regardless, as you could stay away from each other during the game. Or use smaller ships if needed. Or not, if you want to mess with somebody! ><

Those are your words, not mine. You edited my statement to "just" and insinuated. The base logic stands!

Technically lag is less important in this game, as you have fewer points where it matters. You don't have to spend time near other players. In fact it's much less likely unless you really want to, which I'm sure would be common. But I would imagine that is not as huge an issue unless you have very large ships. And you can always have settings to stop the physics of other players on your machine to avoid it in many ways. You could probably also simplify their physics bursts. You don't need perfect timing in most cases, just close proximity. Although more accuracy would open more doors. But you can have settings to allow people to adjust for different circumstances. Most, if not all, of the stuff to do this is in the game.
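For what it's worth, "on rails" bodies are exactly the multiplayer-friendly case: their state is a pure function of universal time, so every client computes the same position independently with nothing to synchronize. A minimal sketch, assuming circular orbits and illustrative names:

```python
import math

MU_KERBIN = 3.5316e12  # Kerbin's gravitational parameter, m^3/s^2

# "On rails" = position is a pure function of universal time. Two clients
# that agree on the orbit and the clock agree on the state with zero
# physics traffic. Circular orbits only, to keep the sketch short.
def on_rails_position(radius_m: float, t_s: float, phase0: float = 0.0):
    n = math.sqrt(MU_KERBIN / radius_m**3)   # mean motion, rad/s
    theta = phase0 + n * t_s                 # angle along the orbit
    return (radius_m * math.cos(theta), radius_m * math.sin(theta))

# Every machine evaluates the same function at the same universal time:
print(on_rails_position(700_000, 3600.0))    # 100 km above Kerbin, t+1h
```
-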
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
Either way, the real point is a higher-bit conversion machine (and instant mass-read functions). In this case, using something common and getting lots of RAM/storage out of it. It only needs enough space to display some information semi-permanently, and it could naturally have large amounts of space from the high-bit representation. Like I said, one 1920x1080 display at 10.8 inches is a petabyte of information at 32-bit depth. It grows rapidly from there, so a few bits of extra depth increases it exponentially.

If you simply load all info at the start of a program, this would give huge data for VRAM addition. Basically extended VRAM. They already put those needless displays on their expensive motherboards and cards. Why not make them useful? They could add them to motherboards too for extended system RAM. Or doubly extended system RAM. It could also display system info like a screen between uses, or temporarily if needed, if you can use it without making the pixels invisible. This would work in tandem with other dynamic system RAM by having a quicker pull or greater space than an HDD, depending on use. Programs could write into massive displays to store info for the CPU/GPU. Especially if it's faster than an HDD or M.2 or other permanent storage, it could be invaluable for system designs. And lots of stuff can be done with excessive system RAM. Especially if it can access and dump from all points of storage simultaneously. If each node can be read and transmitted at once, that is massive gathering time saved. You could maximize bus usage without a problem. You might even want to add fiber-optic or other faster buses along with the regular ones.

OLED even has 0.1 ms refresh. It's not the same as other parts of the machine, but it's pretty good considering you don't need to change the display for basic functions. If you do, then you can even use it as base system RAM. Outside of part longevity, but that could probably be dealt with.

Interestingly, 24-bit makes it equivalent in capacity to modern HDDs. It could be a dump for HDD access when the system is on, if it has faster access times. 32-bit could be the larger capacity for super use. Especially if you can use it to compress at a higher bit rate. This means normal HDDs could store data for any higher bit rate as image data and then have it used for things we can't do today. Like displaying petabytes of instantly readable static RAM for software. Or doing smaller backups converting to 24-bit for HDD backup, or even 32-bit compression. I'm sure more could be done. Then run them through the system for live instant display and read of massive amounts of data, and even backup utilities. Backups would be very different if you didn't need 1:1 storage for them. Unlike the limited modern compression via removing spaces, etc. Although those could also be used, and these tools could help speed that up with the right logic.

The write functions don't have to be fast, as you can compensate with mass data. The read functions and getting the data out do. And if you can read all nodes, that can do things RAM and HDDs can't as easily. Plus, even if HDDs like M.2s got close to as fast, you could use them as write objects so as not to wear out the other storage. I would assume monitor tech, even heavily modded, would have longer-lived write functions. They could even use it as cache for M.2 drives and place them on the back as tiny displays with multiple purposes. It might need faster write speeds, though. Or use the excess size for tricks to simulate higher write speeds.
Then you could write less to the final drive. Although having faster actual display speeds might be good in that case. It could be very good supplementary tech. https://www.ncbi.nlm.nih.gov/pubmed/18699476 Not sure how relevant this is, but if it can't read light, there could be a second reader on the other side of the light node, used as a display-off read or similar depending on application. You could use stuff besides light. The point is to make a greater-than-binary bit conversion/function for data. You can get stuff out of it for modern computers. It could help extend modern tech beyond its current limits.

If an OLED were used at 0.1 ms response and put out half a gigabyte per node, isn't that logically capable of being faster than RAM? Or made to work at the same speeds? You could use tricks with the size of data, the response time, and the ability to flush mass data to outdo RAM in calculations, or sync up with it. It's effectively faster in certain situations if you consider it from the standpoint of sending things at a 1-bit rate: the bit rate is effectively multiplied by that factor. And mass data points can also make up for small data grabs with accompanying logic. Even the slower 0.1 ms can be played with similarly to create an equal or faster RAM-like function. Although that may cut into the effective capacity. But it would still probably be larger and more versatile than RAM. Although I would assume RAM might be more stable in the long run or have other advantages. Probably power and longevity. Half a gigabyte of data at 32-bit depth, against roughly 0.01 ms response times for RAM (is that correct?), could be logically equivalent if the display is 0.1 ms. I don't think that is correct. But it is close. Even 24-bit would be close node-to-node if you think about total data sent.

One trick for slower devices could be to layer the logic in color patterns like an HDD, so it's split between all nodes and you can take parts of it with each read device quickly and put it together to get much faster reads or seeks. It would have a lot in common with a spinning HDD. That layering could be logically modified into the image, even from non-shifted data, fairly quickly, as you don't need to change one point at a time like an HDD. You can change all points at once, or relatively fast. This could speed up something slower than RAM into an equivalent if needed. Or relatively speed it up regardless, for faster functioning in any situation. https://etd.ohiolink.edu/!etd.send_file?accession=miami1375882251&disposition=inline Would any of this help? https://www.chromedia.org/chromedia?waxtrapp=mkqjtbEsHiemBpdmBlIEcCArB&subNav=cczbdbEsHiemBpdmBlIEcCArBP

If you have static solutions and you could excite them fast enough, maybe you could use a dead-read function to get the data out, with lots of little LCD-like things with single colors. Or can you use this to change the light to make varied colors, or change other aspects of the color, to change the read data? And even if the change in fluorescence is in the milliseconds, can't the data depth be used to effectively get faster data? It could have its own cache, or be attached to system cache, to help deal with the data flushes and get the correct data out. And that would only be to overcome the inability to change data as fast as RAM, which is unnecessary if you rely on mass reads of data rather than write speed. But if both can be achieved, more could be done.
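That HDD-style layering is essentially striping. A minimal sketch (round-robin split across nodes so parallel readers can each grab a slice and reassemble; no display hardware involved, names illustrative):

```python
def stripe(data: bytes, n_nodes: int) -> list[bytes]:
    """Round-robin split, RAID-0 style: node i holds bytes i, i+n, i+2n..."""
    return [data[i::n_nodes] for i in range(n_nodes)]

def unstripe(slices: list[bytes]) -> bytes:
    n = len(slices)
    out = bytearray(sum(len(s) for s in slices))
    for i, s in enumerate(slices):
        out[i::n] = s          # interleave each node's slice back in place
    return bytes(out)

payload = b"all end results laid out across the grid for parallel readers"
assert unstripe(stripe(payload, 8)) == payload
```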
The really fun part of this would be that those wild RGB computers with multicolors would literally demonstrate the power of the PC. The more rainbow capability, the more you can do!! >< RGB WILL RULE THE WORLD!! You could literally buy the equivalent of a crappy RGB strip, stick it in your computer tower, and upgrade your PC. You could wear crappy RGB wrist straps at a rave and dance around like an idiot while your PC does the calculations for your experiments. And the brighter you are, the more you can do. It would be a whole new world! Just cover yourself in strips, have a small pocket PC, and go on your way. VR in your pocket. Just need full-on normal glass displays for real prescription glasses.

Can stuff involved in spectrometry or similar read light or anything in the form of a high bit depth currently? I'll assume this much heat would not occur. Or I hope not. Monitors don't get this hot, but I guess some variations might. I wonder how hot the stuff in those articles about spectrometry gets. Some were using temperature to adjust the lights or something. -
Multiplayer in KSP 1.8
Arugela replied to popos1's topic in KSP1 Suggestions & Development Discussion
If you only display the other person from your machine, can't you just run them from your physics perspective? Why do we need to change the base game to get multiplayer? Each person runs the world so that it revolves around them. Then just display the other person as an asset from their game's perspective, like all other objects. Add physics when needed to simulate the same results. Why is multiplayer not achievable? Is it too hard to do that? The game already does this with any craft you are not controlling. What is the difference? Live updating?! -
Multiplayer in KSP 1.8
Arugela replied to popos1's topic in KSP1 Suggestions & Development Discussion
Can you not have each person render the game the way it is now and then simply report their position to the other? If the visuals around you are processed on each machine anyway, you just need to keep the other's position and needed data up to date. Why does it matter that the other person renders by moving the world around them? You can do that locally. Only neutral data needs to be exchanged to render the other player on your machine. Their physics can be computed on their machine and yours on yours... Why does the game even need to change how it does anything for multiplayer? All you need to do is represent the other player accurately at close range, at minimum. This does not require you to have the game world move around the other player at all!! 8\

Could you do a limited- or no-physics representation that accurately displays just the results of their actions? Then to each their own as far as physics goes. You only need to see the results, not calculate them on your PC. And calculating the interactive physics should not require you to calculate the world-rotation issue either. Just enough info for collisions, etc., of the base model of the vehicle. In fact, you could use that logic to get rid of parts-count problems for multiple vehicles, if this is not already implemented.

The oddity then would be two people in multiplayer with different physics mods: one with normal, the other with more realistic aerodynamics. Although that could be cool. You could show the differences live, side by side. So you could leave that open on purpose. You could also have an option to live-compute the other player's physics to some greater extent if you wanted to. Or go without.
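A minimal sketch of that report-position-only model, assuming a small JSON snapshot per craft (the field names are made up): each client simulates only its own craft, broadcasts snapshots, and dead-reckons everyone else from their last report instead of re-simulating them.

```python
import json, time

# Each client simulates only its own craft and broadcasts a small snapshot;
# remote craft are drawn from snapshots, never re-simulated. Field names
# here are made up for the sketch.
def make_snapshot(craft_id: str, pos, vel) -> bytes:
    return json.dumps({"id": craft_id, "t": time.time(),
                       "pos": pos, "vel": vel}).encode()

def remote_position(snapshot: bytes, now: float) -> list:
    s = json.loads(snapshot)
    dt = now - s["t"]          # dead-reckon forward to hide network latency
    return [p + v * dt for p, v in zip(s["pos"], s["vel"])]

pkt = make_snapshot("popos1_craft", [0.0, 0.0, 70000.0], [2200.0, 0.0, 10.0])
print(remote_position(pkt, time.time() + 0.1))   # ~220 m downrange in 100 ms
```
-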
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
All things are measurable, which means they can be represented by another type of data. 3D is the easiest. Figure out the bounds of the potential data. It may be less than the set, but your representation has to be more. Then take the model used and turn it into another model. If it can't be translated to another thing, it literally means it doesn't exist. It's fantasy. If not, you misunderstood it. There is no thought experiment involved. The example is faulty. If it's quantifiable, it's translatable. Period. Else it literally can't be quantified: it has no parameters to be measured and understood. That is the literal requirement for use as a computation. You are completely mistaken. And if you use it to compute, the only part you are using is the part you can predict, which means you can quantify it. That is the very definition of computing something. If you don't know enough to translate it, you literally don't know enough to use it to compute something. That is literally how you do it. -
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
That is about how I imagined this when I first thought of it. It was probably from reading about those at some point. 8) Originally I wanted a permanent state that could persist even powered off, to save electricity, and a read (light-detection) on-state for error checking. That would make it RAM and HDD. And if you do it with enough bit length and enough data, you don't need constant refresh. You can refresh at the start of a program and not need to do anything until the end of the program.

Basically, you could read from both ends. The permanent state could be retrieved, and the light detector could read for error correction in real time. Assuming it could be done at the correct speeds. Although error correction would be more complicated than in normal RAM, etc. Unless the light read time is faster. I guess it would depend on the hardware.

Maybe they could design monitors to let the GPU read data from them as extended video RAM. Although you might want unused parts around the edges or some other hidden place. Maybe extra pixels around the edge: the frame allows sight of the normal resolution and the rest is hidden for computational purposes. They could even hide sensors in the frame and use software to detect accuracy in the light range and correctly filter out data from the surrounding image. Or just read the last state. That is a lot of free RAM potentially. If you use it for slow- or non-changing data, it might help deal with certain problems. Although I guess it could be the opposite.

NVM, you said buffer. I was thinking the state was preserved in the LED. Either way, that would be cool. Have they considered adding mass extra buffers and using them like this? Or is that a part of modern GPU compression? Would you still need to read from a visual device to get compression/decompression to extend storage?! I wonder which would be cheaper. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.580.2521&rep=rep1&type=pdf Definitely the same basic method to handle data. 8)
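A minimal sketch of that two-ended check, with the light-sensor read-back simulated as plain bytes (real hardware would obviously differ): compare a hash of what was written against a hash of what the detector reads back.

```python
import hashlib, random

# The written frame is one "end", the sensor read-back is the other; a
# hash mismatch flags corruption without comparing byte by byte on the host.
def digest(frame: bytes) -> str:
    return hashlib.sha256(frame).hexdigest()

frame = bytes(random.randrange(256) for _ in range(1024))   # written state
readback = frame                                            # clean sensor read
assert digest(frame) == digest(readback)

flipped = frame[:-1] + bytes([frame[-1] ^ 0x01])            # one bad bit
assert digest(frame) != digest(flipped)                     # detected
```
-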
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
I think you open the calculator. Jokes aside: it has lots of processors on it. You could use any part of it, or any predictable part, to compute anything. Otherwise, you need a way to read it. If your phone camera can pick up that data, you could write an app to figure out how to deal with it and then have it display data for the phone to process. You could literally display a program in its entirety and have the phone use software to treat the data as a game, translating the needed info as possible. It's computation from a giant static sheet of data. Or that is the simplest version.

If you want rotating pixels, you have to move the image to stop burn-in issues and keep the reading end up to date on the change in pixel placements, or whatever else is done to manipulate it. You could even simply take a photo of the image and save it for referral. Have it check with the monitor or something as a data check, and then use the image data to act as the program by translating the data. I'm assuming a proper device reading constantly would be faster. But any combination of methods could be used to get better response times in different situations. So you want as robust a combination as possible.

That could be a quick way to download even very large data. A single 1920x1080 image could represent a petabyte at 32-bit; a photo of it would technically be an instant petabyte of data being downloaded. Then you could simply upload software as image references and check via compressed 32-bit data for faster checking. Not sure if the photo or 32-bit compression over normal lines is faster. (Reminds me of something about remote GPUs.) But there could always be uses for both. Just like those codes you can shoot with your phone for prizes all over the place. There will always be use cases, and lots of them.

That could be a way to make mobiles competitive with desktops. If you use the camera and computation to deal with the image, you could hypothetically drop the calculation difficulty down to the phone's ability. Although desktops could increase even more. But it could be a big difference in the types of programs usable. At some point the differences might not affect gameplay. Like if it's a matter of how far the world loads past the visible distance. This sort of thing could help with complex digital transmission for camera work. Computation could be done with the image data in some manner.
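Decoding a photographed color grid is, at heart, what 2D barcodes already do. A minimal sketch assuming a perfect capture (a real photo would need alignment, color calibration, and error correction), packing three bytes into each 24-bit RGB pixel:

```python
import numpy as np

def encode_to_pixels(data: bytes, width: int) -> np.ndarray:
    """Pack 3 bytes per pixel into an (h, width, 3) uint8 RGB image."""
    padded = data + b"\x00" * (-len(data) % (3 * width))
    return np.frombuffer(padded, np.uint8).reshape(-1, width, 3)

def decode_from_pixels(img: np.ndarray, length: int) -> bytes:
    return img.tobytes()[:length]   # a camera pipeline would decode here

msg = b"three bytes per 24-bit pixel, like a giant QR code"
img = encode_to_pixels(msg, width=8)
assert decode_from_pixels(img, len(msg)) == msg
```
-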
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
Anything in math can be translated to anything else in math. It's translatable. Quantum computing is using 3D positioning to dictate data. This can be converted to 2D in any number of ways. You can always make an equivalent. Otherwise it literally couldn't communicate and would be of no use to a normal computer.

And I'm using "bit" correctly. I'm assuming color dictates the exact value of the bit. It's a true 24-32 bit representation. The sensor reading it would spit out the data, and other things compute it as fast as possible into usable form. When not using binary to represent data, you have a much larger volume of data translating into binary, or spitting out a truncated bit with the correct data values in the correct spots as efficiently as possible. So every bit of data is half a gig at 32 bits. I'm not mistaken. That is for a pure color representation using modern monitor technology. And yes, I know error checking must be done. But there are lots of ways to do that, like I said.

The simple application is massive effective RAM. RAM that can be checked at each 32 bits or otherwise. Translate with hardware and drivers, spit it out as quickly as possible, and use it to find data fast in very large pools of answers. Instead of live calculation, you find the end result on a chart and send it to the CPU or GPU as a variable (or variables). This would be easily usable with very complex, large data sets. Basically, looking up the result of multiple calculations as a single answer, or otherwise as something that must be output together. If needed, use 0 states as representations and bulk-send from read data.

And as I said, everything can be modelled with classic computers. That is the point of binary. It just takes more data and work to accomplish. That is the point of high-bit data representation too: removing some of that. That is exactly what I am referring to! The same thing can be done with sufficiently large amounts of RAM, effectively, and enough bandwidth with quick enough effective retrieval times. You could do it now with RAM, but it might be limited in use depending on specifics. Servers could be good for this. And there are lots of ways to use this sort of thing. You can also speed up the search by effectively reducing the search criteria, etc. Lots of ways to accomplish this in all directions. It would be part of the tech. If you read it 1:1 per node, you can narrow it down further for post-processing of info and changing it into binary. Or whatever is done afterwards to compute and use the results. This could be done on the device with special hardware, or on the rest of a GPU, or sent to the CPU, etc. It depends on the task and how it's programmed.

http://web.media.mit.edu/~achoo/nanoguide/ This could be designed to solve their problems with computation of information. This increased data using a miniaturized version could give a means to compute, if it even needs to be miniaturized. It's effectively a bit-size translator and can compress/decompress down to the bit difference. This means higher-bit data translation and quick computations. If it's all translated directly into images from a static image and read constantly enough. Assuming it can be done that quickly. Or if its state can be changed fast enough, it could simply be displayed via translation, increasing the effective computational power even more. -
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
I mean the logical end result. The point is, in essence, to get complex results quickly. I'm assuming this can be counted as a software pseudo-equivalent using large amounts of RAM and some software logic using charts. Basically taking 3D retrieval calculations and flattening them onto a 2D plane with mass storage. (Then combining multiple complex results to make it more efficient.) If I'm understanding it correctly. Basically you have an alternative method to the same goal. Take the 3D-represented space and turn it into a 2D or 1D value. Same with 2D. Compare storage and various retrieval task speeds. I'll assume quantum is much faster. But it's like an early version to some extent. And it acts as RAM and other things (great for storing massive static data for games, etc.). And it could possibly be done now.

Trying to get the math down correctly for cost and volume: at 32-bit color depth it's half a gigabyte of data per pixel. That allows 0.0125 cents per pixel to match the price of modern permanent storage at 0.025 cents per GB. OLEDs only cost about $0.00048 (0.048 cents) per pixel based on the new ASUS monitor cost. That is about a four-times price difference, not including the cost of other parts, so for total package cost. That means VRAM in a small package, potentially the size of a very small display, on a GPU or something. That would mean a current price of VRAM costing 0.1 cents per 8 GB. That is a good supplementary VRAM cost! And it could fit in a tiny chip, depending on the size of the pixel being used and its implementation.

After I fixed the wrong calculations I did (hopefully), it can afford to be half as cost-efficient as the modern price per GB. That is much better than VRAM prices. So it's not as good as HDD storage without special implementation, but not expensive either. Much better for any specialized processing it has to be part of. A specialized 1920x1080, 10.8-inch display used for processing could cost up to 259.2 dollars and have a capacity of 1.113255523e15 bytes. About a petabyte of storage for the same cost as modern permanent storage! An OLED pixel is about twice the cost of a gigabyte, so it's about four times in total cost. So, 1,000 dollars per petabyte for an OLED RAM device, minimum.

Either way, it should still have practical applications as a form of RAM/VRAM and a compression processor at minimum, as long as the file is not larger than the display's storage capacity. Assuming similar processing abilities. Although that could be overcome, based on the GPU it's on and its attributes, if it's a fixed GPU compressor.

Read a little of the wiki. Not sure why they are trying to turn optical into a binary processor. Use it as native high-bit-rate transmission, then combine it with a modern computer! Also, you could use normal RAM for bootup and then run the system on this new RAM for the most part, and have it as an optical version of static RAM. Just based on having so much storage, you wouldn't need to refresh often. Except on real events like program loading. And, technically, if you can read more than one node with less than one sensor, you can also use fewer pixels to trick a sensor into acting like multiple pixels because of color combinations. (Remember: static display, not dynamic. It changes only prior to program loading.) This reduces size and price more. High pixel count or sensor count would probably only need to be there for error correction, or possibly faster retrieval from multiple sensor nodes. It would depend on implementation and what you could do with it.
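To make this checkable, here is the same arithmetic as a small Python sketch. The key assumption is this thread's own counting scheme (a 32-bit pixel treated as 2^32 bits, about 0.5 GiB, of selectable results) plus the ~$0.00048-per-pixel OLED figure derived from the ASUS monitor:

```python
# Reproducing the post's arithmetic. Key assumption (the post's scheme, not
# a property of real displays): one 32-bit pixel counts as 2**32 bits, i.e.
# ~0.5 GiB of selectable results.
bits_per_pixel = 2**32
bytes_per_pixel = bits_per_pixel // 8          # 536,870,912 ~ 0.5 GiB
pixels = 1920 * 1080                           # 2,073,600
total_bytes = pixels * bytes_per_pixel
print(f"{total_bytes:.9e} bytes")              # 1.113255523e+15 ~ 1 petabyte

# Cost side, using the ASUS-derived figure of ~$0.00048 per OLED pixel:
panel_cost = pixels * 0.00048                  # ~$995 for the whole panel
print(f"${panel_cost:,.0f} per ~petabyte under these assumptions")
```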
Probably lots of tricks to cross-check nodes with multiple sensors at the same time while cross-checking or retrieving chunks of data and sending them to the CPU or GPU. A multiple-sensor method could allow all surrounding light nodes to read data and break it down with attached computation devices, dump it in needed chunks as fast as possible where needed, and even store the data. This could be customized massively per program, depending on the hardware, for many, many results and tasks. Every tiny detail of the light emitter and sensor could result in very different computational abilities. Many sensors and lights might be good, like in the eye, for complex data reading and analysis/conversion.

Fun thing: if you are talking storage of data for large loading maps in games, you could afford the 1-second refresh. You could potentially load in a way that makes you never see it. It depends on the density of data in the environment, though. Assuming it's not so much storage that you even need to load data during the application's lifespan. You could also do weird things combining bit lengths (same device or a different parallel device) to get some more quantum-computer-like effects. Just need to apply some logic. You can also rotate the image if you keep the reading device effectively in sync or compensate for it somehow. That would be to stop burn-in. Unless you burn in the correct bit length and don't worry about it.

Wouldn't OLED's 0.1 ms refresh time make it similar to DDR system RAM refresh times? NVM, it's 100,000 times slower. But it's still useful for large-scale storage purposes. It could complement the faster normal RAM on the system, helping with massive volume and hopefully fast read times. Not sure if this counts: https://ieeexplore.ieee.org/document/6984925 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263804/ <- Too tired to read this atm. Or whether it would help. I'm proposing to detect static color in a non-changing environment. And if this even detects color. If it does, it could be used if and when light can be shifted at similar speeds (assuming you can't now) for active dynamic RAM. Assuming you can't get better light sources. Maybe you could use many lights to act like different colors, or use something more complex to reproduce the bit density.

If you have 0.1 ms speeds but subtly shift the colors so it in essence updates the entire grid at nanosecond time, by timing the changes in a sequence, you could achieve a faster response time. Especially if the sensors can translate the slower color change or other aspects. If not, maybe a different light source than an LED-type device. I'm not sure what exists. Something that could change at this thing's read ability could be cool, though. The same principles apply. I wonder what bit depth could be achieved in the real world? What else could be used to make a high-bit-density transmitter? The point is basically native high-bit representation. Anything accomplishing that would work. Although this still might work: bit rates for RAM are getting higher per cycle. All you need is effective throughput, even if you need 100,000 times the input. If you can get 1:1 in other areas, a 1920x1080 panel has over 2 million nodes transmitting, so the difference could be made up in flushed data. Let alone in read times.
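That last point is a width-times-rate bandwidth argument. A quick sketch with the post's own numbers (0.1 ms per-cell refresh, one 24-bit color value per cell, every cell read in parallel):

```python
# Width-times-rate: a 0.1 ms cell is ~100,000x slower than DRAM, but a
# 1920x1080 grid reads ~2 million cells at once.
nodes = 1920 * 1080                  # 2,073,600 cells read in parallel
refresh_hz = 1 / 0.1e-3              # 10,000 updates per second per cell
bits_per_read = 24                   # one 24-bit color value per cell

aggregate = nodes * refresh_hz * bits_per_read       # bits per second
print(f"{aggregate / 8 / 1e9:.1f} GB/s aggregate")   # ~62.2 GB/s
```
-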
OLED/displays as alternative/supplement to quantum computing!?
Arugela replied to Arugela's topic in Science & Spaceflight
No, you are missing the point (maybe). It only has to display one color combination representing massive color depth. The depths are massive bit lengths of static data. You only have to read the correct node, like RAM, to get your result and write it into your program. It's all retrieval, just like RAM. You could literally write and design a game with all outcomes of all combinations as a massive file, display it on the screen, have it read by the reading device, and play from nothing else. The trick would be the reading device reading the correct color value, and some interpretive software being run through the rest of the PC. It would be literal quantum-style computing done via GPU.

You wouldn't change the values (colors) on the screen unless avoiding something like burn-in. Unless you got a really fast display, you would want to simply display massive volumes of information as high-bit-depth data on a grid. In this case, 24-32 bit data represented in colors. You could display all outcomes in one display and never change the monitor values. That is how it is like an HDD. Then you store the data in a simpler form on the HDD, representing values to display. The display at its simplest would only ever display one thing: one combination per piece of software. Possibly assigned across the monitor and segmented like a partition on an HDD. However that would need to go.

Not sure why you couldn't just process the information from simplified color-depth data and turn it into something similar with software retrieving quickly; then you wouldn't need hardware. It's expansive logic, basically. All representative data being put back into the software. Basically making it possible to display complex data as visual references in monitor display data. Technically, you could just have the GPU spit it out. Not sure if the hardware would be needed. Maybe it helps somewhere. Could you logically isolate the same data with just a GPU/CPU/RAM quickly, or would hardware reading with actual fiber-optic/light reading be faster for retrieval?!

At minimum, this is how you store data at a lower than 1-bit value. This makes all data storage exponentially larger. Literally, as it's in bit form! And this could logically be taken much, much further!! You only have to store the specific color grid data of the screen resolution to represent all the data of a single display instance on an HDD or other medium. You just might want to run it through something to isolate the color depth/bit value down to the needed info and sort it out very efficiently. Preferably instantly. This is why you could display values in different ways: say, a short one for instant retrieval and a long full-bit one for slower retrievals. Then it is a matter of efficiency. The display, being a massive RAM display, could have anything put on it/loaded when the program starts. The rest is a matter of software logic and the forethought of the programmer, displaying as needed for retrieval or whatever else he has to do.

The reason for the display is the potentially quick retrieval of large data via light, and the natural color depth inherent in the information. You're basically translating between two mediums to cut the overall processing of the rest of the system. Or can you get that much data through a CPU these days? I'm not familiar enough. The hope would be to squeeze the data into a smaller form on an HDD, with simplified display data representing a single static display, then put it through the monitor to physically expand the data into much larger data. Sort of a compression effect.
Plus, this could provide a much larger data array, more easily, to display the complex (combined) outcomes of other things. It should provide a massively expanded version of RAM to play with. Basically petabytes of light-based RAM to go with the current system, attached through very simplified display logic in your GPUs to supplement it. This could be used for lots of things. I would assume it would be quicker to read off a display, as it's basically specialized hardware, than trying to run it full tilt through RAM and CPU and calculate the outcome. Or I would hope. If not, it could hold massive data for things like large maps in a video game, like how expanded RAM usage on AMD is supposed to help hypothetical game design with larger surface areas needing more volume of RAM.

I would assume if you did a physics thing with it, you could combine multiple physics results into a single outcome. If you know all potential results, store them as a single result for retrieval, and if needed, splice out the specific data. Retrieval could be sped up by knowing only so many things are present, and by using deductive logic and large numbers of sensors to pull data quicker via accurate retrieval, to hopefully be quicker than a calculation. I would assume that would be more useful as the combined effects get more complex. I could be wrong, though. Maybe to make it more useful you would want to massively increase either color depth or display size, to push it far enough past current hardware throughput to be worth it. And use it in ways where it's fundamentally helpful and gives you options you couldn't have otherwise. Probably used with higher-bit-rate computers. As you are effectively maximizing the bit size of the computer, maximizing RAM to its maximum with light displays.

You could hypothetically make permanent displays too. This could allow error checking, as you could check on two ends and just apply electricity. Then it's a floppy disk made of light. Just a matter of lifespan and reading, etc. But it could have a permanent state and a read function for ECC naturally in the storage device. And a third check if it were a security device and the thing it's hooked into was also checking it. Is that weaker than a current 128-bit signature? I think that would logically expand into more. You could also hide data in it as an extra security method that is very hard to detect without prior knowledge. (Potentially complex changes in the data read over time as a security feature. Non-static bit signing!)

If anything, it's light-based extended VRAM. I guess it then comes down to cost vs. function. I mean, many petabytes could be useful somewhere, to someone. Maybe in supercomputers, or in places where they do heavy graphics design professionally. Maybe if it's directly on the GPU as a VRAM addition. Extended VRAM! Unless something is in the way, petabytes of VRAM would be pretty nice on a modern GPU. Could be like a new massive cache level for the GPU. Maybe it's a fiber-optic coprocessor using the GPU for low-volume storage of highly compressed data. It's a translator for larger data compression. Could be used as a quick data compressor then, too. If anything, advanced GPU compression. I honestly think there could be uses for this: data easily stored on modern computers and translated into sizes way past modern HDDs for temp use like RAM. Major increases in RAM/data capacity in one thing. The rest is imagination. Or can software logic do this currently? I was assuming direct computation would be better.
If not, maybe it's nice as a physical RAM device to spare current VRAM or something. Like I said, a coprocessor. And one using short-length fiber optics or similar light-based technology. Yeah, I'm probably making false assumptions about the practical differences between fiber optics/light and copper... Either way, if copper is better, make a similar device to translate the logical data if possible. Isn't there an advantage to translating massive bit lengths, though? I think it naturally shortens the time to send data. And you could increase bit depth to increase transmission speeds. Assuming a lot. It's native 32-bit transmission at modern monitor capabilities. Assuming you can read it all. So, it's a binary-to-higher-bit translator. That could be very powerful and useful in a modern application. Outside of cost and whatnot. Functionally, I would assume it's very useful. Nice long post! ;p

Edit: http://www.fiber-optic-tutorial.com/latency-whats-differences-fiber-copper.html I don't know the specifics, but light in a vacuum! 8D That might not be needed if the tech is based on keeping a single display, though. Unless it helps that much with reading the data. The rest would be translating from the reader to the rest of the system faster. Hopefully increased by the bit-rate change, at least. Mind you, it would not be designed to be physically changed without a total program change. Its advantage would be massive display of large volumes of data, making up for slow changing of the data. If you can display all outcomes, you don't need to change it. You just store and retrieve. This would also allow maximum use of the motherboard or other bus transmission. If speed isn't the problem with retrieval (which could be done with a 1:1 data point to the display, or fewer for cheapness), then it will be transmission of large volumes of data at once. You could literally flush petabyte throughput.

I guess you could also use it as a cache for network data. That link has a comment about taking NIC data directly. As it's naturally high-bit data, you could do multiple things to simplify data transmission and storage. Basically, native light-based burst data. Maybe it could be used with motherboards with fiber-optic native bus lines for larger data or specialized use cases. Or node-to-node fiber-optic interconnectivity between parts. Or multi-GPU with massive VRAM attached... Could that help make up for multi-GPU capabilities, if you change the data and the way it's dealt with?! That could be for very different software, obviously.

BTW, 10x10, 100 pixels at 32-bit color depth would be 429 gigabits or 53.7 gigabytes of VRAM. 100 pixels!!!! For modern prices of 0.025 cents per gigabyte, you could spend 0.0125 cents per gigabyte and not be spending that much. If I did the math correctly. https://www.anandtech.com/show/14123/asus-proart-pq22uc-4k-oled-monitor-5150-usd 4,000 dollars for 3840x2160 is only about $0.00048, or 0.048 cents, per pixel. About twice the cost of a gigabyte. What does a 0.048-cents-per-pixel OLED, or anything like it, cost? What else would be needed? Would you need that for simple displays like this? I would think whatever gives true 32-bit with longevity would be better. But you could always read anything.

Actually, I would think the advantage of OLED would be literally growing it for the sake of recycling, if you made a pure bio version of it... Maybe even growing it at home. Then it could be made with different resources and in different ways for convenience.
And you could use stuff off a farm or a compost heap as a way to get rid of it. You could potentially literally feed it to your dog. Not to mention pure bio could be powered like a plant or animal. Complete replacement of electricity, or a massive reduction in power needs from a line, depending on the combination of components. You could have a pet computer! It could play with you always!! >< If not, it could be more power-efficient per bit of data. And it's way more compact! You would at least get some form of VRAM up to the space of current permanent storage.

And think of it this way: we are still storing primarily in binary. The most inefficient form of storage. That is why storage amounts are so low! Computers would literally benefit exponentially from this type of tech development. Also, how well can we read color atm? I'm not familiar with that at all. What bit length could we detect with sensors, and how accurately? -
This would take some things to make work correctly, but theoretically, why not use OLEDs or displays as a GPU version of a quantum CPU? To my understanding, quantum computing is, in effect, a hardware method to find the results of a complex grid of results. Why not do this with other hardware? You could do this with RAM and a pure software application now, with the right logic. But here is the fun of a display: direct GPU connection.

Base idea: 1. Color depth is the bit rate. The display could be driven by the GPU like a normal monitor. It potentially needs a powerful GPU, as it could be a literal secondary monitor. 2. It shoots out to an enclosed monitor (possibly smaller or of odd resolutions) and is read on the other end by the monitor. (Is this pointless without some way of making it preserve the results?) This result is basically a giant sheet of data. 3. It reads the data and finds the value off a very complex, large grid to get complex results in fairly simple prebuilt bits. 4. Software does the rest.

It's basically a giant light-based HDD with instant potential retrieval. If not isolating each pixel, you could even read and do advanced calculations, like in normal monitor-correction logic, to find the surrounding bit rates. As the bits don't have to be linear or logically displayed like a monitor, lots of cool things could be done with the detectors and reading surrounding bits, etc. Then logically, in a game, you could put all calculation results (end results) of all possible combinations for the entire game in a massive display grid and display it from the GPU to be read and used. Turn off any unneeded nodes. Rotate them to avoid burn-in, etc. All done under the hood! The read side would need a special port on the GPU, or a return method like a back port or USB or something of sufficient bandwidth, to return data. But logically, the idea could be done. I think we do half the logic with current technology. I'm assuming the problem is tech and latency or something.

Example: a 1920x1080 display with 16,000,000-color depth. Each 16,000,000 bits is 2,000,000 bytes. 1920x1080 = 2,073,600 pixels x 2,000,000 bytes = 4.1472e12 bytes, about 4.15 terabytes of reference (worked through in the sketch below). Perfect for maximizing our current 64-bit world! ;p

And if the only thing the 1 ms or faster refresh rate does is move nodes to stop burn-in (unless you want it to do more), you don't have to worry about latency except in your ability to read the node. The idea is that it does not need to move nodes for live calculations. Just display an entire grid of complex results for full, instant feedback. All reading would be in the detection devices for retrieval.

How much data would you need for modern games? Could you use your smartphone with a device as a calculator for physics or other complex computations? How much would you need for various computations? I would think this would be more interesting if OLED could be printed at home as easily as with a printer or something. Then fixed and reprinted as needed. Maybe the read device is also a printer, and it can correct the screen in combination with your GPU and whatnot.

So, basically, it's a huge HDD/RAM/quantum chip run by the GPU. Hopefully with the latency and other characteristics of light itself, and in essence or literally fiber optics or better. As you only need to read in one application, and not write except for having the capacity to store/display the data in the light grid initially! You might also have to translate 16,000,000-bit data into binary. But I'm assuming that can be done. Maybe a custom GPU just for this.
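The example's arithmetic as a checkable sketch (the counting scheme is this post's assumption, not a property of real displays: a pixel showing one of ~16,000,000 colors is credited with 16,000,000 bits of selectable results):

```python
# The post's counting scheme (an assumption, not a property of real panels):
# a pixel showing one of ~16,000,000 colors is credited with 16,000,000 bits
# (2,000,000 bytes) of selectable results.
colors = 16_000_000
bytes_per_pixel = colors // 8        # 2,000,000
pixels = 1920 * 1080                 # 2,073,600
total = pixels * bytes_per_pixel
print(total)                         # 4147200000000 = 4.1472e12 bytes
print(total / 1e12, "TB")            # ~4.15 terabytes under this scheme
```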
Or extra VRAM or specialized RAM. Also, why has nobody implemented a pure RAM version of this for modern games? It could hypothetically speed things up even for long physics runs and basic game or other software calculations. Especially simpler ones. They could even compare to a grid on their servers for fast cross-checking and other complex calculations. The entire game could be run in this. And you could also use it to supplement calculations. Then partial calculations fed by a grid of results could be combined into very custom calculation methods for who knows what or how many things. You could use it as a base for auto-figuring ideal calculations for any hardware combo, etc. Lots of security uses, etc. You name it. Modern servers could be fantastic for this potentially. 32-bit color depth at 7680x4320 would be less than a modern 64-bit depth needs: 2^32 = 4,294,967,296 x 7680 x 4320 = 1.42496707e17, while 64-bit is 1.844674407e19. This would basically take more limited hardware and use it to translate into higher bit depth, and read and give complex results quickly back into a smaller, weaker system. An entire complex answer could be instantly read and put into a variable in a computer program. Literally quantum-style computing. This could be done now with large amounts of RAM, basically, or a very fast, very big HDD. Maybe this is a future use for DDR and more complex-sized RAM as RAM becomes bigger and less like SDRAM. If we start going way beyond DDR into QDR and higher, this could be a use for RAM. Or specialized RAM slots for other computations or computational referencing!
-
Yes, but with that, how many bubbles would it take, at what rate, without regard to the practical means of producing them? If you even stopped one small point, how much would it hypothetically take?
-
One of the thoughts was basic initial measurements. If you had infinite resources and abilities, how many bubbles would it take, on what time scale and at what average release rate, to be able to stop it? Say, starting with basic bubbles from a store. Even the laziest of measurements. I'm curious what scale it would involve. Also, they are technically both bubbles. So, even if very small, how much could you estimate a bubble could stop? Minimally, how many, at what rate, would have to be released to stop it, given a nuke's normal expansion time, etc.? This would be going past realistic limits. I found it interesting that the video made it sound like small bubbles were more effective. Plus the gas moving slowly. That might help the problem in some aspects. Not sure what the bubbles should be filled with. You could compare hypothetical air to hydrogen, I guess. I didn't think about the gas inside originally. I was thinking just of the bubble substance. Quick google search: https://www.google.com/search?client=firefox-b-1-d&q=thermal+conductivity+of+a+bubble https://www.researchgate.net/figure/Thermal-conductivity-decrease-when-bubbles-of-increasing-size-are-taken-into-account_fig5_265295677 <- Not sure what type of bubbles this is referring to. https://www.sciencedirect.com/science/article/abs/pii/S0375960107013011 https://link.springer.com/article/10.1134%2FS1028335814120088 <- Here's a direct-looking one. I don't know enough to understand it, though. 8p
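For the laziest of measurements, here's an order-of-magnitude sketch. All the inputs are assumptions picked for scale, not measured values: a 5 cm store-bought soap bubble with a ~1 micron water film, credited with warming 80 K and then fully evaporating, against a 1-megaton device:

```python
import math

# All inputs are assumptions picked for scale, not measured values.
YIELD_J = 4.184e15                   # 1 megaton of TNT, in joules
r, film = 0.025, 1e-6                # 5 cm bubble: radius and film thickness (m)

water_mass = 4 * math.pi * r**2 * film * 1000.0   # kg of water, ~8 mg
c_p, L_vap = 4186.0, 2.26e6          # J/(kg*K) and J/kg for water
absorbed = water_mass * (c_p * 80 + L_vap)        # warm 80 K then boil: ~20 J

print(f"{YIELD_J / absorbed:.1e} bubbles")        # ~2e14 bubbles
```

So on these assumptions you would need on the order of hundreds of trillions of bubbles just for the thermal energy, ignoring delivery, blast, and radiation entirely.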
-
Let's take a funny hypothetical. Something very strong vs. something very weak. Could you, with strong enough or rapidly enough applied sources, stop a nuclear blast with bubbles? And how many bubbles, how quickly, would need to be applied to do so? Would any other techniques work? The nature of the material and the nuke are up in the air!! ;d (As in up for debate. They could also be up in the air.) The explosion could be at any distance; modify heat and energy. Basically, at any point, could this be made to work? Materials could also be changed, to change the nature of the bubbles or whatever is needed, along with the nuke and the city structure/materials. Anything. Even being as absurd as possible and changing the substance of the earth or even the atmosphere. ANYTHING! Although the goal is on earth. If you want to be silly about it, maybe the bubbles are a person named Bubbles. But that would be pretty much moot, as you can use any materials already. Good thing I didn't make the title "by blowing bubbles!" ><

Edit: I know! What if the materials are partially liquid concretes or something that hardens, or that naturally has very high heat-containing aspects, making thin layers of resistant material that just get blown apart as they heat up? Or, on top of this, it is blown with massive amounts of hydrogen or helium? One of the highest thermal-intake materials I saw somewhere was hydrogen, but it was flammable. What if you took a missile or other device in abundance, blew some sort of special bubble with hydrogen in the middle and a substance with high heat absorption around it, and blew it into the air so it could absorb heat and blow up as a high-thermal-absorbing explosion/bubble to counter it? Then on the ground you could have different types, from a city water system, blowing to help lower it and maybe absorb or deal with other things like radiation, for containment if possible.

Also consider the heat capacity and thermal conductivity of magma and naturally hot materials in a high-heat state. They are already hot in some stages, but how much capacity do they have? You could use them in this state to start with. Or assume they will go into such a state. Do any gain attributes at higher levels, or already have them naturally? Does the conductivity increase as it heats up? Say, something like volcanic rocks or other common earth materials. It would be going against 100,000,000-degree heat already! BTW, how much of the nuke is that hot, and how hot is it at smaller points? How does it lose heat, etc.? Maybe specialized methods could be developed, or attributes of it exploited, for instance. Is only a thin layer that hot, with a dramatic reduction afterwards, or anything like that? Just the plasma layer, or also the inside? Or would this only lead to making a nuke to counter a nuke?! And if so, could a specialized one be made with special attributes, or other materials used, to counter the other nuke without being counterproductive? A nuke bubble to counter nukes? Could you get away with less? Hell, if a smaller, safer, cheaper nuke could be used to counter a bigger, more expensive one, it would make nukes a little less desirable. Especially if you can make them safer from a radiation standpoint.
-
It might. If not, can you control the substance so it's thick enough to take the rolling? You might be able to adjust the liquid to different levels of play. What about heavier (or lighter) balls than normal, so the rolling creates more impact? Or oversized balls and really big pins? It might just work as is. I'm assuming you would be adjusting your liquid to not make a mess. 8) If not, it would make ball retrieval a whole new thing. Maybe add a giant claw like in those games at stores for getting plushies and junk. A quarter per turn to retrieve a ball if it sinks. Then they can make tons of money off bowling. You might need to adjust technique a little, though. I would think you want as much forward momentum with as little angle as possible. So high-popping the ball would cause problems. Although it would be funny. I think I used to bowl that way. Low and to the groundish. Although you might want to maximize it in this case.
-
Make both the lane and the walkway out of this and try to bowl on it. Obviously the pins need a solid surface (or about a mm or so of non-Newtonian surface to sit on). Could it become a sport?! You might want to go with the small-hole option for the balls, though. Or is it safer if you get stuck, so the ball doesn't go flying? Accidents will happen more often. Maybe it should be done in a closed chamber, or with non-Newtonian wall protectors?! Just make sure not to lose the kiddies in the goop! I had a child once. He had a promising career in non-Newtonian bowling. But then, one day, he just disappeared!
-
Wouldn't they go out the hole on their own? It's in a vacuum. Or was that a joke? I think it's fishy. I think they just don't want to admit they are still arming astronauts in space and someone accidentally shot one off. 8) I'll assume the next or last round of laws was passed to make sure astronauts are armed in case of a school shooting, in the hidden parts of the law. Just in case one of those school-trip laws is passed and we start letting kids on the station. I wonder if they would implement a bandsaw and finger-drilling run-up to make sure no fingers can make holes either. If only we improved surgery and healing times on lost limbs! 8o Teacher: "Everybody get your fingers ready. We can't take them into space attached. Keep them in their baggies and cold at all times. And make sure to bring your astronaut lunches!" If there are no fingers, you can't easily shoot a gun. Maybe make that a general policy on the space station. No finger left attached!
-
Multiplayer in KSP 1.8
Arugela replied to popos1's topic in KSP1 Suggestions & Development Discussion
If it had an easy lobby to let KSP players connect with each other, that would be interesting. Assuming they resolve the issues with lag and parts count, etc. They just need to make sure to put in stuff for white/blacklisting players, just in case. Making any multiplayer under the assumption of no problems would be a massive mistake. And it's easy to do on a basic level. -
I just read something where someone said PhysX is now open source. They mentioned things now being more easily offloadable to the GPU for AMD. Does this in any way affect this game, or mean more things can be loaded onto the GPU? Say, the craft part or joint logic, to allow bigger parts. Or anything that would allow the physics to be done differently, to get more performance or functions in the game? Maybe customizable physics options. You choose where to offload your physics! 8) Or would Kerbals just get wavy, flowing hair? Thinking about it: if you can get wavy hair in the game easily, doesn't that bode well for air-based physics simulations for aircraft parts? What can we do with this? 8o
-
Cupcake's Dropship Dealership...
Arugela replied to Cupcake...'s topic in KSP1 The Spacecraft Exchange
Do you need a mod to get to the ocean floor, or is there a setting for it in stock? I thought they changed it in stock so you could go to the ocean floor. But when I tried, I blew up at around -300 m. -
Veering off the runway might have to do with it being nose-heavy. If it puts weight on the forward wheels, it may veer off. Make the COM and COL closer together, or maybe adjust the balance on the wheels so the nose is pointing up and not down.
-
Super large cargo plane question. COM position.
Arugela replied to Arugela's topic in KSP1 Gameplay Questions and Tutorials
I was more confused about whether there is partial thrust blockage and how to test it. Either way, I removed all the engines and added 64 Panthers, 8 nukes, and 28 Rapiers. It's now 700 parts and 2,880 tonnes, with a minimum weight of 580 tons (1,580 tons with cargo) instead of 1,200 (2,200 tons with cargo). I'm hoping to slowly get it into orbit and on to Laythe. If not, I'll add mining drones to let it skip to destinations. I was able to fly my 100 Rapiers up to 400 kN of thrust. I'm hoping to push them to some lower-altitude high thrust and get out of orbit. The Panthers are there to get it off the runway and aimed up in time. I'll have to see. If it works, it will have some very nice high-ISP cruising engines for Laythe. I think afterwards I'll make a smaller 500-ton cargo version (if I can get this one to orbit) that halves everything and reduces the parts count to around 350 so it's manageable. It will be a nearly identical half-stats version with half the weight, half the engines, etc. Should be easier to fly, and probably without the bad lag. -
Super large cargo plane question. COM position.
Arugela replied to Arugela's topic in KSP1 Gameplay Questions and Tutorials
I've been confused about whether partial blocking affects engines or not. I was assuming it does not, and so was ignoring it. I think several people have told me it no longer partially blocks, but is all or nothing. There seems to be some confusion on this. I still can't tell from flight. I couldn't think how to test this, as partial and full blockage would be hard to differentiate. The point of the craft is to get 1k tons to orbit, then to Minmus to mine, and then go as many places as possible. I have a smaller version with only 64 engine pods that might get to orbit. But not to Minmus. The craft is a personal milestone thing. Seeing how far I can take the 1k tons of cargo. Without using the cargo fuel, that is. Although with a mining cargo plane that is a bit stupid. But it probably helps if the cargo doesn't possess large amounts of fuel.

I'm actually a little confused about whether the Dart and Nerv are interfering with each other, or whether the burn-time indicator is messed up. I'm going to have to use the cheats and take it to orbit to test. The Dart is technically in one half-space. It might be being affected somehow. Could be something else entirely, though.

Edit: NVM, I forgot you could see the total thrust in the AeroGUI. Assuming it's always accurate... According to the AeroGUI, it's not losing thrust in its current configuration. So hopefully that is true. Here is an empty version with updated aerodynamics. Hopefully it has less distance between the full-weight and empty COM positions. It flies very nicely regardless. Still a bit hard to stop, though. Craft: https://www.dropbox.com/s/pk4xuwr4aeqgqnl/KB-52 1_5_1_2335 x100 empty.craft?dl=0

Edit: Yep, much better COM when full. I will update the original craft file when I test-fly it. (Updated, but not test-flown.) I think the one issue this has is that I put in too much LF+OX, and not the normal larger or equal amount of LF I use on my planes with this engine setup. -
Super large cargo plane question. COM position.
Arugela replied to Arugela's topic in KSP1 Gameplay Questions and Tutorials
I added a craft file. I'll add more pics after I'm done flying it. It's almost to orbit. By adjusting the nose, I meant readjusting it during flight to lower the angle of attack, or to pull up to gain altitude. It's fairly strong as is, but I wasn't sure if a lower or higher COL helped with the strength of the wings. I'm assuming from what you said that I need a higher COL, as it needs the most help with re-entries atm. Added some pics from flight. Not sure which angles help explain the craft.