
OLED/displays as alternative/supplement to quantum computing!?


Arugela


This would take some things to make work correctly, but theoretically, why not use OLEDs or displays as a GPU version of a quantum CPU? To my understanding, quantum computing is just a hardware method to find the results of a complex grid of results. Why not do this with other hardware? You could do this with RAM and a pure software application now, with the right logic.

But here is the fun part of a display: a direct GPU connection.

1. Base idea: color depth is the bit rate. The display could be sent out from the GPU like a normal monitor. It potentially needs a powerful GPU, as it would literally be a secondary monitor.

2. The GPU shoots it out to an enclosed monitor (possibly smaller, or of odd resolution) and it is read on the other end by a sensor. (Is this pointless without some way of making it preserve the results?) The result is basically a giant sheet of data.

3. The reader reads the data and finds the value off a very complex, large grid, to get complex results as fairly simple prebuilt bits.

4. Software does the rest.

It's basically a giant light-based HDD with potentially instant retrieval. If you aren't isolating each pixel, you could even read and do advanced calculations, like normal monitor-correction logic, to find the surrounding bit values. As the bits don't have to be linear or logically displayed like a monitor image, lots of cool things could be done with the detectors, reading surrounding bits, etc.

Then, logically, in a game, you could put all calculation results (end results) of all possible combinations of outcomes for the entire game in a massive display grid and display it from the GPU to be read and used. Turn off any unneeded nodes, rotate them to avoid burn-in, etc. All done under the hood! The read side would need a special port on the GPU, or a return method like a back port or USB or something of sufficient bandwidth to return data. But logically the idea could be done. I think we already do half of this logic with current technology. I'm assuming the problem is tech and latency or something.


Example: a 1920x1080 display with a 16,000,000-color depth.

Each pixel, at 16,000,000 bits, is 2,000,000 bytes.

1920x1080 = 2,073,600 pixels; 2,073,600 x 2,000,000 bytes = 4.1472e12 bytes, about 4.15 terabytes of reference data. Perfect for maximizing our current 64-bit world! ;p
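A quick back-of-the-envelope check of that arithmetic in Python, taking the premise above at face value (each pixel's number of color states counted directly as bits of storage; a reply further down questions exactly that step):

```python
# Sketch only: follows the thread's premise that a pixel with N color states
# stores N bits. A later reply argues it really stores log2(N) bits.

PIXELS = 1920 * 1080            # 2,073,600 pixels
STATES_PER_PIXEL = 16_000_000   # "16,000,000 color depth" as stated above

bits_per_pixel = STATES_PER_PIXEL        # the thread's assumption
bytes_per_pixel = bits_per_pixel / 8     # 2,000,000 bytes
total_bytes = PIXELS * bytes_per_pixel   # ~4.15e12 bytes

print(f"{total_bytes:.4e} bytes ~ {total_bytes / 1e12:.2f} TB")
# Under the standard interpretation, a 16,777,216-color pixel carries
# log2(16_777_216) = 24 bits, i.e. 3 bytes, so a full frame is only ~6.2 MB.
```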

And if the only thing the 1 ms or faster refresh rate does is move nodes to stop burn-in (unless you want it to do more), you don't have to worry about latency except in your ability to read the node. The idea is that it does not need to move nodes for live calculations, just display an entire grid of complex results for fully instant feedback. All reading would be in the detection devices for retrieval. How much data would you need for modern games? Could you use your smartphone with such a device as a calculator for physics or other complex computations? How much would you need for various computations?

 

I would think this would be more interesting if OLEDs could be printed at home as easily as with a printer or something. Then fixed and reprinted as needed. Maybe the read device is also a printer, and it can correct the screen in combination with your GPU and whatnot.

 

So, basically, it's a huge HDD/RAM/quantum chip run by the GPU. Hopefully with the latency and other characteristics of light itself, and in essence or literally fiber optics or better. As you only need to read in this application, not write, except for having the capacity to store/display the data in the light grid initially!

You might also have to translate 16,000,000-bit data into binary, but I'm assuming that can be done. Maybe a custom GPU just for this. Or extra VRAM or specialized RAM.
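As an illustration of the kind of translation meant here (purely a sketch; the helper names are made up), packing ordinary bytes into 24-bit pixel values and back is straightforward in software:

```python
# Illustrative only: one way a software "translator" between binary data and
# pixel colors could look, packing 3 bytes into each 24-bit RGB value.

def bytes_to_pixels(data: bytes) -> list[int]:
    """Pack raw bytes into 24-bit pixel values (3 bytes per pixel)."""
    padded = data + b"\x00" * (-len(data) % 3)   # pad to a multiple of 3
    return [int.from_bytes(padded[i:i + 3], "big")
            for i in range(0, len(padded), 3)]

def pixels_to_bytes(pixels: list[int]) -> bytes:
    """Unpack 24-bit pixel values back into raw bytes."""
    return b"".join(p.to_bytes(3, "big") for p in pixels)

message = b"hello oled ram"
pixels = bytes_to_pixels(message)
assert pixels_to_bytes(pixels).rstrip(b"\x00") == message
print(pixels[:4])   # each value is one pixel's 24-bit color, e.g. 0x68656C == "hel"
```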

 

Also, why has nobody implemented a pure RAM version of this for modern games? It could hypothetically speed things up even for physics uses and basic game or other software calculations, especially simpler ones. They could even compare against a grid on their servers for fast cross-checking and other complex calculations. The entire game could be run in this. And you could also use it to supplement calculations. Then partial calculations fed by a grid of results could be combined for very custom calculation methods for god knows what or how many things. You could use it as a base for auto-figuring ideal calculations for any hardware combo etc. Lots of security uses etc. You name it. Modern servers could be fantastic for this potentially.
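A minimal sketch of that "pure RAM version": precompute every outcome of some calculation once, then retrieve results instead of recomputing them. The damage formula and input ranges here are invented for illustration; the point is the lookup, and also where it starts to hurt:

```python
# Precomputed lookup table in place of live calculation (illustrative only).
import itertools

def damage(strength: int, weapon: int, armor: int) -> int:
    # stand-in for an "expensive" game calculation
    return max(0, strength * weapon - armor)

# Precompute all combinations over the known input ranges ("all outcomes").
table = {
    (s, w, a): damage(s, w, a)
    for s, w, a in itertools.product(range(100), range(50), range(200))
}

# At runtime the result is a single retrieval, not a calculation.
print(table[(42, 7, 30)])   # 264

# The catch the replies point at: the table already has 100*50*200 = 1,000,000
# entries for three small inputs, and grows exponentially as inputs are added.
```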

 

A 32-bit depth of color at 4K would still be less than what a modern 64-bit value can address: 2^32 = 4,294,967,296, and 4,294,967,296 x 7680 x 4320 ≈ 1.425e17, while 2^64 ≈ 1.845e19.
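A two-line check of that comparison:

```python
# Verifying the 4K figure above.
states_32bit = 2 ** 32                       # 4,294,967,296
total = states_32bit * 7680 * 4320           # ~1.425e17
print(f"{total:.3e} vs 2^64 = {2**64:.3e}")  # 1.425e+17 vs 1.845e+19
```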

 

This would basically take more limited hardware and use it to translate into a higher bit depth, then read and give complex results quickly back into a smaller, weaker system. An entire complex answer could be instantly read and put into a variable in a computer program. Literally quantum computing. This could be done now with large amounts of RAM, basically, or a very fast, very big HDD. Maybe this is a future use for DDR and more complex-sized RAM, as RAM becomes bigger and less like SDRAM. If we start going way beyond DDR into QDR and higher, this could be a use for RAM. Or specialized RAM slots for other computations or computational referencing!

Edited by Arugela

I'm not an expert at this at all, so don't take my word for this, but... I don't think you get the idea.

 

Behind that monitor would have to be a normal computer to calculate what to put on the screen, so it would be no faster than a regular computer at solving these. In fact, it would be slower, since it would have to then analyze the data from the screen on top of that.

 

Also, why do you need a screen? Would regular wiring not work? Just hook it directly to the lower computer, directly send the binary code that would have corresponded with the color depth to begin with, and... well, you have a completely normal computer, just with extra stuff on it.

 

It's not that quantum computers can technically do much more than regular computers (besides, say, intrinsic randomness), it's that, due to their both-1-and-0 nature, they should be able to perform certain tasks much quicker and more efficiently than normal computers.

 

...as far as I understand, anyway. Again, not the person to ask.

Edited by ThatGuyWithALongUsername

No, you are missing the point (maybe). It only has to display one color combination representing a massive color depth. The depths are massive bit lengths of static data. You only have to read the correct node, like RAM, to get your result and write it into your program. It's all retrieval, just like RAM.

 

You could literally write and design a game with all outcomes of all combinations as a massive file, display it on the screen, have it be read by the reading device, and play from nothing else. The trick would be the reading device reading the correct color value, and some interpretive software being run through the rest of the PC. It would be literal quantum computing done via GPU.

 

You wouldn't change the values on the screen (colors) unless avoiding something like burn-in. Unless you got a really fast display, you would want to simply display massive volumes of information as high-bit-depth data on a grid. In this case, 24-32 bit data represented in colors. You could display all outcomes in one display and never change the monitor values. That is how it is like an HDD. Then you store the data in a simpler form on the HDD, representing the values to display.

 

The display at its simplest would only ever display one thing: one combination per piece of software. Possibly assigned across the monitor and segmented like a partition on an HDD, however that would need to go.

 

Not sure why you couldn't just process the information from the simplified color depth and turn it into something similar with software retrieving it quickly; then you wouldn't need the hardware. It's expansive logic, basically. All representative data being put back into the software. Basically making it possible to display complex data as visual references in monitor display data. Technically, you could just have the GPU spit it out. Not sure if the hardware would be needed; maybe it helps somewhere. Could you logically isolate the same data with just a GPU/CPU/RAM quickly, or would hardware reading be faster with actual fiber-optic/light reading for retrieval?!

At minimum, this is how you store data at a lower-than-1-bit cost per value. This makes all data storage exponentially larger, literally, as it's in a bit form! And this could logically be taken much, much farther!! As you only have to store the specific color grid data of the screen resolution to represent all the data of a single display instance on an HDD or other medium.

 

You just might want to run it through something to isolate the color depth/bit value down to the needed info and sort it out very efficiently, preferably instantly. This is why you could display values in different ways: say, a short one for instant retrieval and a long full bit string for slower retrievals. Then it is a matter of efficiency. The display, being a massive RAM display, could have anything put on it/loaded when the program starts. The rest is a matter of software logic and the forethought of the programmer to display as needed for retrieval or whatever else he has to do.

 

The reason for the display is the potentially quick retrieval of large data via light, and the natural color depth inherent in the information. You're basically translating between two mediums to cut the overall processing of the rest of the system. Or can you get that much data through a CPU these days? I'm not familiar enough. The hope would be to squeeze the data into a smaller form on an HDD, with simplified display data representing a single static display, then put it through the monitor to physically expand the data into much larger data. Sort of a compression effect.

 

Plus this could provide a much larger data array more easily, to display the complex (combined) outcomes of other things. It should provide a massively expanded version of RAM to play with. Basically petabytes of light-based RAM to go with the current system, attached through very simplified display logic in your GPUs to supplement it. This could be used for lots of things.

I would assume it would be quicker to read off a display, as it's basically specialized hardware, than trying to run it full tilt through RAM and CPU and calculate the outcome. Or I would hope. If not, it could hold massive data for things like large maps in a video game, like how expanded RAM usage in AMD is supposed to help hypothetical game design with larger surface areas needing a greater volume of RAM.

 

I would assume if you did a physics thing with it, you could combine multiple physics results as a single outcome. If you know all potential results, store them as a single result for retrieval and, if needed, splicing for specific data. Retrieval could be sped up by knowing only so many things are present, and using deductive logic and large numbers of sensors to pull data quicker via accurate retrieval, to hopefully be quicker than a calculation. I would assume that would be more useful as the combined effects got more complex. I could be wrong though.

 

Maybe to make it more useful you would want to massively increase either color depth or display size, to push it far enough past current hardware throughput to be worth it. And use it in ways it's fundamentally helpful and gives you options you couldn't have otherwise. Probably used with higher-bit-rate computers, as you are effectively maximizing the bit size of the computer, pushing RAM to its maximum with light displays.

 

You could hypothetically make permanent displays also. This could allow error checking, as you could check on two ends and just apply electricity. Then it's a floppy disk made of light. Just a matter of lifespan and reading etc. But it could have a permanent state and a read function for ECC naturally in the storage device. And a third check if it was a security device and the thing it's hooked into was also checking it. Is that weaker than a current 128-bit signature? I think that would logically expand into more. You could hide data in it also, as extra security methods that are very hard to detect without prior knowledge. (Potentially complex changes in the data read over time as a security feature. Non-static bit signing!)

 

If anything, it's light-based extended VRAM. I guess it then comes down to cost vs. function. I mean, many petabytes could be useful somewhere to someone. Maybe in supercomputers, or in places where they do heavy graphics design professionally. Maybe if it's directly on the GPU as a VRAM addition. Extended VRAM! Unless something is in the way, petabytes of VRAM would be pretty nice on a modern GPU. Could be like a new massive cache level for the GPU.

Maybe it's a fiber-optic coprocessor using the GPU for low-volume storage of highly compressed data. It's a translator for larger data compression. It could be used as a quick data compressor then, too. If anything, advanced GPU compression.

I honestly think there could be uses for this. Data easily stored on modern computers and translated to sizes way past modern HDDs for temp use, like RAM. Major increases in RAM/data capacity in one thing. The rest is imagination.

 

Or can software logic do this currently? I was assuming the direct computation would be better. If not, maybe it's nice as a physical RAM device to spare current VRAM or something. Like I said, a coprocessor, and one using short-length fiber optics or similar light-based technology.

 

Yea, I'm probably making false assumptions about the practical differences between fiber optic/light and copper...

 

Either way, if copper is better, make a similar device to translate the logical data if possible. Isn't there an advantage to translating to massive bit lengths, though? I think it naturally shortens the time to send data. And you could increase bit depth to increase transmission speeds. Assuming a lot. It's native 32-bit transmission at modern monitor capabilities, assuming you can read it all. So, it's a binary-to-higher-bit translator. That could be very powerful and useful in modern applications, outside of cost and whatnot. Functionally, I would assume it's very useful.

 

Nice long post! ;p

 

Edit:

http://www.fiber-optic-tutorial.com/latency-whats-differences-fiber-copper.html

 

I don't know the specifics, but light in a vacuum! 8D

That might not be needed if the tech is based on keeping a single display, though. Unless it helps that much with reading the data. The rest would be translating from the reader to the rest of the system faster, hopefully helped by the bit rate change at least. Mind you, it would not be designed to be physically changed without a total program change. Its advantage would be a massive display of large volumes of data making up for fast changing of the data. If you can display all outcomes you don't need to change it; you just store and retrieve. This would also allow maximum use of the motherboard or other bus transmission. If speed isn't the problem with retrieval (which could be done with a 1:1 data point to the display, or less for cheapness), then it will be transmission of large volumes of data at once. You could literally flush petabyte throughput.

 

I guess you could also use it as a cache for network data. That link has a comment about taking NIC data directly. As it's naturally high-bit data, you could do multiple things to simplify data transmission and storage. Basically, native light-based burst data.

Maybe it could be used with motherboards with fiber-optic native bus lines for larger data or specialized use cases. Or node-to-node fiber-optic interconnectivity between parts. Or multi-GPU with massive VRAM attached... Could that help make up for multi-GPU capabilities, if you change the data and the way it's dealt with?! That could be for very different software, obviously.

 

BTW, 10x10, 100 pixels at 32-bit color depth would be about 429 gigabits, or 53.7 gigabytes, of VRAM. 100 pixels!!!!

At modern prices of 0.025 cents per gigabyte, you could spend 0.0125 cents per gigabyte and not be spending that much, if I did the math correctly.

https://www.anandtech.com/show/14123/asus-proart-pq22uc-4k-oled-monitor-5150-usd

$4,000 for 3840x2160 is only about $0.00048, or 0.048 cents, per pixel. About twice the cost of a gigabyte.

Is 0.048 cents per pixel what OLED or anything like it costs? What else would be needed? Would you need that for simple displays like this? I would think whatever gives true 32-bit with longevity would be better. But you could always read anything. Actually, I would think the advantage of OLED would be literally growing it for the sake of recycling if you made a pure bio version of it... Maybe even growing it at home. Then it could be made with different resources and in different ways for convenience. And you could use stuff off a farm or a compost heap as a way to get rid of it. You could potentially literally feed it to your dog. Not to mention pure bio could be powered like a plant or animal. Complete replacement of electricity, or a massive reduction in power needs from the line, depending on the combination of components. You could have a pet computer! It could play with you always!! ><

If not it could be more power efficient per bit of data. And it's way more compact! You would at least get some form of Vram up to the space of current permanent storage.

And think of it this way. We are still storing primarily in binary. The most inefficient form of storage. That is why storage amounts are so low! Computers would literally benefit exponentially from this type of tech development.

 

Also, how well can we read color atm? I'm not familiar with that at all. What bit length could we detect with sensors and how accurately?

Edited by Arugela

I mean the logical end result. The point is, in essence, to get complex results quickly. I'm assuming this can be counted as a software pseudo-equivalent using large amounts of RAM and some software logic using charts. Basically taking 3d retrieval calculations and flattening them to a 2d plane with mass storage. (Then combining multiple complex results to make it more efficient.) If I'm understanding it correctly. Basically you have an alternative method to the same goal. Take the 3d-represented space and turn it into a 2d or 1d value. Same with 2d. Compare storage and various retrieval task speeds. I'll assume quantum is much faster. But it's like an early version to some extent. And it acts as RAM and other things (great for storing massive static data for games etc.). And it could possibly be done now.

Trying to get the math down correctly though for cost and volume.

At 32-bit color depth it's half a gigabyte per pixel of data. That is so far 0.0125 cents per pixel to be the same price as modern permanent storage at 0.025 cents per GB. OLEDs only cost 0.00048 cents based on the new Asus monitor cost. That is a 26x price difference, not including the cost of other parts, so for total package cost. That means VRAM in a small package, potentially the size of a small display, on a GPU or something. That would mean a current price of VRAM would cost 0.1 cents per 8 GB of VRAM. That is a good supplementary VRAM cost! And it could fit in a tiny chip depending on the size of the pixel being used and its implementation.

After I fixed the wrong calculations I did (hopefully), it can afford to be half as cost-efficient as the modern price per GB. That is much better than VRAM prices. So, it's not as good as HDD storage without special implementation, but not expensive either. Much better for any specialized processing it has to be a part of.

A specialized 1920x1080, 10.8-inch display used for processing could cost up to $259.20 and have a capacity of about 1.113e15 bytes, roughly a petabyte of storage, for the same cost as modern permanent storage! An OLED is about twice the cost of a gigabyte, so call it about four times that in total cost: roughly $1,000 per petabyte for an OLED RAM device, minimum.
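The same arithmetic in code, still under the thread's premise that a 32-bit pixel holds 2^32 bits; the prices are the rough figures quoted above, not checked against real hardware:

```python
# Capacity and cost under the thread's premise (2**32 bits per 32-bit pixel).
PIXELS = 1920 * 1080
bytes_per_pixel = 2 ** 32 / 8                  # ~0.537 GB per pixel (premise)
capacity = PIXELS * bytes_per_pixel            # ~1.11e15 bytes, roughly 1 PB
cents_per_pixel = 0.0125                       # budget figure quoted above

print(f"{capacity:.3e} bytes, ${PIXELS * cents_per_pixel / 100:.2f} budget")
# -> 1.113e+15 bytes, $259.20 budget
```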

 

Either way, it should still have practical applications as a form of RAM/VRAM and a compression processor at minimum, as long as the file is not larger than the display's storage capacity, assuming similar processing abilities. Although that could logically be overcome based on the GPU it's on and its attributes, if it's a fixed GPU compressor.

 

Read a little of the wiki. Not sure why they are trying to turn optical computing into a binary processor. Use it for native high-bit-rate transmission, then combine it with a modern computer!

 

Also, you could use normal ram for bootup and then run the system on this new ram for the most part and have it as an optical version of Static ram. Just based on having so much storage you wouldn't need to refresh often. Except on real events like program loading.

And, technically, if you can read more than one node with less than one sensor, you can also use fewer pixels to trick a sensor into acting like multiple pixels because of color combinations (remember, a static display, not dynamic; it changes only prior to program loading). This reduces size and price more. A high pixel count or sensor count would probably only need to be there for error correction, or possibly faster retrieval from multiple sensor nodes. It would depend on the implementation and what you could do with it. There are probably lots of tricks to cross-check nodes with multiple sensors at the same time while cross-checking or retrieving chunks of data and sending it to the CPU or GPU. A multiple-sensor method could allow all surrounding light nodes to read data, break it down with attached computation devices, dump it in needed chunks as fast as possible where needed, and even store the data. This could be customized massively per program depending on the hardware, for many, many results and tasks. Every tiny detail of the light emitter and sensor could result in very different computational abilities. Many sensors and lights might be good, like in the eye, for complex data reading and analysis/conversion.

Fun thing: if you are talking storage of data for large loading maps in games, you could afford the 1-second refresh. You could potentially load in a way that makes you never see it. It depends on the density of data in the environment, though. Assuming it's not so much storage that you even need to load data during the application's lifespan.

You could also do weird things combining bit length(same device or different parallel device.) to get some more effects like a quantum computer. Just need to apply some logic.

You can also rotate the image if you keep the reading device effectively in sync or compensate for it somehow. That would be to stop burn in. Unless you burn into the correct bit length and not worry about it.

Wouldn't OLED's 0.1 ms refresh time make it similar to DDR system RAM refresh times? NVM, it's about 100,000 times slower. But it's still useful for large-scale storage purposes. It could complement the faster normal RAM on the system, helping with massive volume and hopefully fast read times.

Not sure if this counts: https://ieeexplore.ieee.org/document/6984925

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6263804/  <- Too tired to read this atm.

Or if it would help. I'm proposing to detect static color in a non-changing environment, and I'm not sure this even detects color. If it does, it could be used if and when light could be shifted at similar speeds (assuming you can't now) for active, dynamic RAM. Assuming you can't get better light sources, maybe you could use many lights to act like different colors, or use something more complex to reproduce the bit density. If you have 0.1 ms speeds but subtly shift the colors in a timed sequence so that, in essence, the entire grid updates on a ns timescale, you could achieve a faster response time. Especially if the sensors can translate the slower color change or other aspects. If not, maybe a different light source than an LED-type device; I'm not sure what exists. Something that could change to match this thing's read ability could be cool though. The same principles apply.

I wonder in real world what type of bit depth could be achieved?

What else could be used to make a high bit density transmitter. The point is basically native high bit representation. Anything accomplishing that would work.

Although this still might work. Bit rates for RAM are getting higher per cycle, and all you need is effective throughput, even if you need 100,000 times the input. If you can get 1:1 in other areas, a 1920x1080 display could have over 2 million nodes transmitting, so the difference could be made up in flushed data, let alone in read times.

Edited by Arugela

You are confusing the number of states with the number of bits. A color depth of 16,777,216 means that the pixel can be in one of 2^24 states (which happens to be the first power of 2 which can represent more colors than the human eye can distinguish in most cases). This is only 24 bits of information because it can't be in an arbitrary combination of those 16 million+ states simultaneously, it can be in only one of them. Put another way, you translated 2^24 information into 16,777,216^1 information, but mistakenly took this to mean you had 2^16,777,216 information. This confusion is inflating your numbers by quite a lot. Also, because you are going from digital to analog and back to digital, you need to add in some error correction to account for the fact that both the displaying and reading of these pixels will be imperfect.

Using more than one state or level to represent more than 1 binary bit of data per unit is not uncommon in computer hardware, though it tends to be used selectively where the increased density per unit outweighs the higher chance of one state being so close to another state that it is read as the wrong one. One example is multi-level cell solid state drives which use different charge levels to represent more data per cell (2, 3, and 4 bit per cell variants exist). You'll also see it used in data transmission, particularly wireless communications.
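To put the states-versus-bits point in numbers, a minimal sketch (the same log2 rule gives the multi-level-cell figures just mentioned):

```python
# Information content of a multi-state unit: log2(number of states) bits.
from math import log2

for states in (16_777_216, 2**32, 4, 8, 16):
    print(f"{states:>12} states -> {log2(states):g} bits per pixel/cell")
# 16,777,216 colors -> 24 bits; 4/8/16 charge levels -> 2/3/4 bits per cell.
```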

 

Quantum computing really is its own thing and can't be modeled with classical computers. They are probability based and work with gates that shift the probabilities in various ways, but ultimately don't guarantee a specific outcome. For specific problems they have a high chance at arriving at the correct answer, but you need to either re-run the calculation several times or confirm it with a classical computer to know if it is the right answer. They excel in cases where the search space is vast and simply cannot be brute forced in a practical period of time.


Anything in math can be translated to anything else in math. It's translatable. Quantum computing is using 3d positioning to dictate data. This can be converted to 2d in any number of ways. You can always make an equivalent. Or it literally couldn't communicate and would be of no use to a normal computer.

And I'm using bit correctly. I'm assuming color dictates the exact value of the bit. It's a true 24-32bit representation. The sensor reading it would spit out the data and other things compute it as fast as possible to usable form. When not using binary to represent data you have a much larger volume of data translating into binary or spitting out a truncated bit with the correct data values in the correct spots as efficiently as possible.

So every data point is half a gigabyte at 32 bits. I'm not mistaken. That is for a pure color representation using modern monitor technology. And yes, I know error checking must be done. But there are lots of ways to do that.

Like I said, the simple application is massive effective RAM. RAM that can be checked at each 32-bit value or otherwise. Translate with hardware and drivers, spit it out as quickly as possible, and use it to find data fast in very large pools of answers. Instead of live calculation, you find the end result on a chart and send it to the CPU or GPU as a variable(s). This would be easily usable with very complex, large data sets. Basically, looking up the result of multiple calculations as a single answer, or whatever else must be output together. If needed, use 0 states as representations and bulk-send from read data.

And as I said. Everything can be modelled with classic computers. That is the point of binary. It just takes more data and work to accomplish. That is the point of high bit data representation also. Removing some of that.

44 minutes ago, satnet said:

 They excel in cases where the search space is vast and simply cannot be brute forced in a practical period of time.

That is exactly what I am referring to! The same thing can be done with sufficiently large amounts of RAM, effectively, and enough bandwidth with quick enough effective retrieval times. You could do it now with RAM, but it might be limited in use depending on specifics. Servers could be good for this. And there are lots of ways to use this sort of thing.

You can also speed up the search results by having ways of effectively reducing the search criteria etc. Lots of ways to accomplish this in all direction. It would be part of the tech. If you read it 1:1 per node you can narrow it down farther for post processing of info and changing it into binary. Or whatever is done after to compute and use the results. This could be done on the device with special hardware or the rest of a gpu or sent to cpu etc. Depends on the task and how it's programmed to be done.

http://web.media.mit.edu/~achoo/nanoguide/

This could be designed to solve their problems with computation of information. This increased data density, using a miniaturized version, could give a means to compute, if it even needs to be miniaturized. It's effectively a bit-size translator and can compress/uncompress down to the bit difference. This means higher-bit data translation and quick computations, if it's all translated directly into images from a static image and read constantly enough, assuming it can be done that quickly. Or, if it can be changed in state fast enough, it could simply be displayed via translation, increasing the effective computational power even more.

Edited by Arugela

I think you open the calculator.

Jokes aside:

It has lots of processors on it. You could use any part of it or any predictable part to compute anything.

Else, you need a way to read it. If your phone camera can pick up that data, you could write an app to figure out how to deal with the data and then have it display data for the phone to process. You could literally display a program in its entirety and have the phone use software to treat the data as a game, translating the needed info as possible. It's computation from a giant static sheet of data. Or that is the simplest version. If you want rotating pixels, you have to move the image to stop burn-in issues and keep the reading end up to date on the change in pixel placements, or whatever else is done to manipulate it.

You could even simply take a photo of the image and save it for referral. Have it check with the monitor or something as a data check and then use the image data to act as the program by translating the data. I'm assuming a proper device reading constantly would be faster, but any combination of methods could be used to get better response times in different situations. So, you potentially want as robust a combination as possible.

That could be a quick way to download even very large data. A single 1920x1080 image could represent a petabyte at 32-bit; a photo of this is technically an instant petabyte of data being downloaded. Then you could simply upload software as image references and check via compressed 32-bit data for faster checking. Not sure if the photo or 32-bit compression over normal lines is faster (reminds me of something about remote GPUs), but there could always be uses for both. Just like those codes you can shoot with your phone for prizes all over the place. There will always be use cases, and lots of them.

That could be a way to make mobiles competitive with desktops. If you use the camera and computation to deal with the image, you could hypothetically drop the calculation difficulty down to the phone's ability. Although desktops could increase even more, it could be a big difference in the types of programs usable. At some point the differences might not affect gameplay, like if it's just a matter of how far the world loads past the visible distance.

This sort of thing could help with complex digital transmission for camera work. Computation could be done with the image data in some manner.

Edited by Arugela

well it's just the display. right now it's connected to an esp32 and an imu board. the inertial reference code needs work but the screen works great. 

there was the time back when they used crt as memory (a williams tube). think of it as optical dram. you charge the phosphors, sort of like the cap in dram. writing was accomplished by an electron gun which illuminates the phosphors and changes the electric charge on the surface. the x,y coordinates of the gun act as the memory address. this can be read back with a thin metal plate on the surface.  it had the added bonus that you could see the contents of the memory which certainly helped with debugging. they could store about 2.5kbits.

modern screens pretty much are memory at the low level. if you didn't have a lot of ram to work with you could get away with single-buffering your render; once you send it to the screen it stays there until you write to it again. 
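A toy model of that addressing scheme, sketching the idea only, not the real electronics:

```python
# Williams-tube-style storage as a toy model: the beam's x,y position is the
# address, the "phosphor charge" is the stored bit, and reading disturbs the
# spot so it gets rewritten (a refresh), much like DRAM.

class WilliamsTube:
    def __init__(self, width=50, height=50):
        # 50 x 50 dots ~ 2.5 kbit, the capacity mentioned above
        self.charge = [[0] * width for _ in range(height)]

    def write(self, x: int, y: int, bit: int) -> None:
        self.charge[y][x] = bit          # the electron gun sets the charge

    def read(self, x: int, y: int) -> int:
        bit = self.charge[y][x]          # the pickup plate senses the charge...
        self.write(x, y, bit)            # ...and the spot is rewritten (refresh)
        return bit

tube = WilliamsTube()
tube.write(3, 5, 1)
print(tube.read(3, 5))   # 1 -- and on the real tube you could see the bit glow
```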

Edited by Nuke

That is about how I imagined this when I first thought of it. It was probably from reading about those at some point. 8)

Originally I wanted a permanent state that could even be in power off to save electricity and a read(light detection) on state for error checking. That would make it ram and hdd. And if you do it with enough bit length and enough data you don't need constant refresh. You can refresh at start of a program and not need to do anything until the end of the program.

Basically you could read from both ends. The permanent state could be retrieved and the light detector could read for error correction in real time. Assuming it could be done at the correct speeds. Although error corrections would be more complicated than in normal ram etc. Unless the light read time is faster. I guess it would depend on the hardware.

Maybe they could design monitors to allow the GPU to read data from them as extended video RAM. Although you might want unused parts around the edges or some other hidden place, maybe extra pixels around the edge. The frame allows sight of the normal resolution and the rest is hidden for computational purposes. They could even hide sensors in the frame and use software to detect accuracy in the light range and filter out data from the surrounding image correctly. Or just read the last state. That is a lot of free RAM potentially. If you use it for slow- or non-changing data it might help deal with certain problems. Although I guess it could be the opposite.

NVM, you said buffer. I was thinking the state was preserved in the led. Either way, that would be cool. Have they considered adding mass extra buffers and using them like this? Or is that a part of modern GPU compression? Would you still need to read from a visual device to get compression/decompression to extend storage?! I wonder which would be cheaper.

 

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.580.2521&rep=rep1&type=pdf

 

Definitely the same basic method to handle data. 8)

Edited by Arugela

1 hour ago, Arugela said:

Anything in math can be translated to anything else in math.

We aren't talking about maths, we're talking about quantum superposition. 

Here's a thought experiment to explain why you cannot simulate a quantum computer using a classical computer. 

You have a room, which you can place into quantum superposition at will, including all of that room's contents. You give a friend a phone book and a phone number, and tell them to open to a random page in that book and see if they can find the owner of that number. If they do find the owner, they are to leave the room and tell you immediately, but if they don't, they should wait in the room for a few minutes. They go into the room, and you put the room into superposition. You can be fairly confident that in a few seconds your friend will re-emerge, telling you who owns the number, because your friend's quantum state has collapsed into the one you wanted, as you set that particular state to collapse first. 

That's hugely simplified, but it gets the point across. It's not that qubits can store a large number of states, but that they can be in a huge number of states simultaneously, and you can hopefully collapse those down into whichever of those states you wish.


All things are measurable, which means they can be represented by another type of data. 3d is the easiest. Figure out the bounds of the potential data. It may be less than the set, but your representation has to be more. Then take the model used and turn it into another model. If it can't be translated into another thing, it literally means it doesn't exist. It's fantasy. If not, you misunderstood it. There is no thought experiment involved. The example is faulty.

If it's quantifiable, it's translatable. Period. Else it literally can't be quantified: it has no parameters to be measured and understood. That is the literal requirement to use it as a computation. You are completely mistaken. And if you use it to compute, the only part you are using is the part you can predict, which means you can quantify it. That is the very definition of computing something. If you don't know enough to translate it, you literally don't know enough to use it to compute something. That is literally how you do it.

Edited by Arugela

1 hour ago, Arugela said:

Quantum computing is using 3d positioning to dictate data.

It's using quantum phenomena.

Examples of quantum phenomena being utilized:

- Spin

- Tunneling

- NMR (basically spin)

All the existing methods utilize one of those phenomena, which are quantum effects. The readout always comes out as standard bits, even if the process was carried out in quantum bits, because observation "collapses" the probabilities.

Quantum computing is only useful if you're trying to simulate a quantum interaction - if you're just trying to browse the internet, it's almost useless.


Either way, the real point is a higher-bit conversion machine (and instant mass read functions). In this case, using something common and getting lots of RAM/storage. It only needs enough space to display some information semi-permanently, and it could naturally have large amounts of space from the high-bit representation. Like I said, a 1920x1080 display at 10.8 inches is a petabyte of information at 32-bit depth. It rapidly grows from there, so a few bits of depth difference increases it exponentially. If you simply load all info at the start of a program, this would give huge data for a VRAM addition. Basically extended VRAM. They already put those needless displays on their expensive mobos and cards; why not make them useful? They could add them to mobos also for extended system RAM. Or double extended system RAM. It could also display system info like a screen in between uses, or temporarily if needed, if you can use it without making the pixels invisible.

This would work in tandem with other dynamic system RAM by having a quicker pull or greater space than an HDD, depending on use. Programs could write into massive displays to store info for the CPU/GPU. Especially if it's faster than an HDD or M.2 or other permanent storage, it could be invaluable for system designs. And lots of stuff can be done with excessive system RAM. Especially if it can access and dump from all points of storage simultaneously. If each node can be read and transmitted at once, that is massive gathering time saved for information. You could maximize bus usage without a problem. You might even want to add fiber-optic or other faster buses along with the regular ones.

Oled even has 0.1ms refresh. It's not the same as other parts of the machine but it's pretty good considering you don't need to change the display at basic functions. If you do then you can even use it as base system ram. Outside of longevity of parts. But that could probably be dealt with.

Interestingly, 24-bit makes it equivalent in capacity to modern HDDs. It could be a dump for HDD access when the system is on, if it has faster access times.

32-bit could be for larger capacity for super use. Especially if you can use it to compress at a higher bit rate. This means normal HDDs can then store data for any higher bit rate as image data and then have it used for things we can't do today. Like display petabytes of instantly readable static RAM for software. Or do smaller backups, converting from 24-bit for HDD backup, or even 32-bit compression. I'm sure more could be done. Then run them through the system for live instant display and reading of massive amounts of data, and even backup utilities. Backups would be very different if you didn't need 1:1 storage for them. Or the limited modern compression via removing spaces etc. Although those could also be used, and these tools could help speed that up with the right logic.

The write functions don't have to be fast as you can compensate with mass data. The read functions and getting the data out are. and if you can read all nodes that can do things ram and hdds can't as easily. Plus, even if HDD's like M.2's got close to as fast you could use them as write objects as to not wear out the other storage. I would assume monitor tech, even if modded heavily, would probably have longer living write functions. They could even use it as cache for m.2 drives and place them on the back as tiny displays with multiple purposes. Might need to get it to faster write speeds though. Or use excess size to do tricks to simulate higher write speeds. Then you could write less to the final drive potentially. Although having faster actual display speeds might be good in that case. It could be very good supplementary tech.

 

https://www.ncbi.nlm.nih.gov/pubmed/18699476

Not sure how relevant this is, but if it can't read light it could be a double reader on the other side of the light node, to use as a display-off read or similar depending on application. You could use stuff besides light. The point is to make a greater-than-binary bit conversion/function for data. You can get stuff out of it for modern computers. It could help extend modern tech beyond its current limits.

 

If an OLED was used at 0.1 ms response and it displayed at 0.5 GB/s, isn't that logically capable of being faster than RAM, or of being made to work at the same speeds? You could use tricks with the size of data and response time and the ability to flush mass data to outdo RAM in calculations or sync it up. It's effectively, potentially, faster in certain situations if you consider it from the standpoint of sending things at a 1-bit rate: the bit rate is technically sped up by that factor. And mass data points can also make up for small data grabs with accompanying logic. Even the slower 0.1 ms can be played with similarly to create an equally fast or faster functioning RAM form. Although that may cut into the capacity effectively. But it would still probably be more, and more versatile, than RAM. Although I would assume RAM might be more stable in the long run or have other advantages, probably power and longevity.

Half a gigabyte of data at 32-bit depth is 0.5e-8, and 0.01 response times for RAM (is that correct?) could be equivalent logically if the display is 0.1 ms, or 0.1e-6?! I don't think that is correct, but it is close. Even 24-bit would be close node to node if you think about total data sent. One trick for slower devices could be to layer the logic in color patterns like an HDD, so it's split between all nodes and you can take parts of it with each read device quickly and put it together to get much faster reads or seeks. It would probably have a lot in common with a spinning HDD. That layering could be logically modified into the image, even from non-shifted data, fairly quickly, as you don't need to change one point at a time like an HDD; you can change all points potentially at once, or relatively fast. This could speed up something slower than RAM into something equivalent if needed. Or relatively speed it up regardless, for faster functioning in any situation.
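For what it's worth, here is what those throughput numbers look like if the thread's premise is taken literally (2^32 bits read out of every pixel each 0.1 ms refresh); the DDR4 figure is a rough ballpark for a single channel, and every step of the premise is exactly what the replies dispute:

```python
# Throughput under the thread's premise vs. a ballpark RAM figure (sketch only).
PIXELS = 1920 * 1080
frame_bytes = PIXELS * 2**32 / 8        # ~1.11e15 bytes per "frame" (premise)
refresh_s = 0.1e-3                      # 0.1 ms OLED response quoted above

premise_bw = frame_bytes / refresh_s    # ~1.1e19 bytes/s under the premise
ddr4_bw = 25e9                          # ~25 GB/s, one DDR4 channel (ballpark)
print(f"{premise_bw:.2e} B/s vs ~{ddr4_bw:.1e} B/s")
```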

https://etd.ohiolink.edu/!etd.send_file?accession=miami1375882251&disposition=inline

Would any of this help?

https://www.chromedia.org/chromedia?waxtrapp=mkqjtbEsHiemBpdmBlIEcCArB&subNav=cczbdbEsHiemBpdmBlIEcCArBP

If you have static solutions and you could excite it fast enough, maybe you could use a dead-read function to get it out, and have lots of little LCD-like things with single colors. Or can you use this to change the light to make varied colors, or other aspects of the color, to change the read data?

And even if the change in fluorescents is in the milliseconds, can't the data depth be used to effectively get faster data? It could have its own cache, or be attached to the system cache, to help deal with the data flushes or something and get the correct data. And that would only be to overcome the inability to change data as fast as RAM, which is unnecessary if you don't rely on write speed but on mass reads of data. But if both can be achieved, more could be done.

The really fun part of this would be that those wild rgb computers with multicolors would literally demonstrate the power of the PC. The more rainbow capability the more you can do!! >< RGB WILL RULE THE WORLD!!

You could literally buy the equivalent of a crappy RGB strip and stick it in your computer tower and upgrade your PC. You could wear crappy RGB wrist straps at a rave and dance around like an idiot while your PC does your calculations for your experiments. And the brighter you are, the more you can do. It would be a whole new world! Just cover yourself in strips and have a small pocket PC and go on your way. VR in your pocket. Just need full-on normal glass displays for real prescription glasses.

Can stuff involved in spectrometry or similar read light or anything in the form of a high bit depth currently?

 

 

I'll assume this much heat would not occur. Or I hope not. Monitors don't get this hot but I guess some variations might. I wonder how hot the stuff in those articles about spectrometry get. Some were using temperature to adjust the lights or something.

Edited by Arugela

2 hours ago, Arugela said:

the real point is a higher bit conversion machine(and instant mass read functions).

So not quantum computing ?

You're derailing your own thread.


I said alternative because you can accomplish the same thing with massive RAM and charts of the total end results of all calculations combined in all potentials. The limit is how much RAM, if you can't refresh fast enough. That can also be worked around. I'm not derailing anything. This is how you get massive amounts of RAM now. A higher bit rate is the mechanical means to accomplish an alternative.

The point is that quantum computing is logically (in the data's end result) the equivalent of a 3d chart of results representing complex data outputs (i.e. real-life data results). This is the 2d version. If you know the range of things involved and the potential end results, you can display all combinations and get the same results. You could even test quickly for things outside of your parameters.

Quantum computing allows massive simultaneous data being worked on. So does this. It could even be RAM for quantum computer processors. It has potential for greater data flushes etc. to go with more powerful future PCs using much greater data flows. And I'm sure there are more efficient ways to do it.

Sorry, but this is how I started the thread. You guys really hate new concepts. You would think people would like the idea of massive VRAM and RAM on your computer. And, yes, this can get up to speeds for modern computers. The downside is probably power or longevity. Maybe heat. Or price. But with how much we can do to compensate for the downsides, it can probably be made much faster than current technology. And you can apply tricks from monitors, RAM, or HDDs to it, at minimum. You could probably even use it as a CPU if you wanted. Maybe a 3d-layered CPU. Size is less important when you achieve greater bit rate and throughput potentially.

If the downside is price it would be massively useful in areas where that is less of a concern. You could speed up very large workloads exponentially. It would allow bus systems on mobos to be maximized at all times and systems to maintain maximum throughput potentially.

We did run computers like this once. This would just be going back to some old tricks.

Not to mention the ease of compression and mass storage. It would make all server loads exponentially cheaper and lighter. You wouldn't need anywhere near the current permanent storage. Or you make the current exponentially greater.

https://portal.research.lu.se/ws/files/2711949/2370907.pdf

The other advantage is you can start applying techniques like this to something as simple as your phone. You could get expensive diagnostic results from your phone and send the results to your doctor as a cheap or free initial scan(probably could now.). Then better equipment could be used later in a lab if needed. Anything using any aspects of this tech or similar could be made available at home.

You could also store the state, potentially, for reading with the power off, as a form of HDD/DVD/floppy drive. But it's probably easier to send the data via image data, as it's smaller and faster. But it could be used as dockable data, or as a way to deal with power-offs or low-power mode or other things. Very useful if used as cache.

To speed up processing you could use tricks to manipulate light directly at some point and get it to the data form that is needed. Or any other aspect of the device. If you had fast write you could even change the light value of the light as a way of computation. Many things would be possible if it has multiple ways to keep track of read/write and so on. Bend/change the light and get to the part of data you need.

Edited by Arugela

49 minutes ago, Arugela said:

I said alternative because you can accomplish the same thing with massive ram and charts of the total end results of all calculation combined in all potentials.

That's not how eigenstates work... or at least I think so...

I've no idea of quantum mechanics, and I've only got eigenstates from structural dynamics... which isn't really probabilistic at all.

Guess I'm out of bounds. I know that QM can be simulated with classical computers - that's what we've been running so far - but I also hope we all have heard about floating-point error. Though you only have to do better than an actual quantum computer's noise.

Edited by YNM

53 minutes ago, Arugela said:

Quantum computing allows massive simultaneous data being worked on.

No, that is not why quantum computers are interesting. Instead, they allow for some really fast algorithms to be used.

One example is this one:

  • On a classical computer, searching for a single element in an unsorted list of N elements takes O(N) time, i.e. the time taken is approximately proportional to the length of the list. If the list is sorted, this can be improved to O(log(N)), i.e. the time taken is approximately proportional to the logarithm of the length of the list.
  • On a quantum computer, something weird happens for this problem: Grover's algorithm can search an unsorted list and find the element with high probability in just O(√N) time, i.e. quadratically faster than the classical linear scan, without any sorting.

So, on a classical computer, searching for one element in an unsorted list with 1 trillion elements will take a million times longer than for an unsorted list with 1 million elements. On a quantum computer, it would only take about a thousand times longer.

What you are describing is still a classical computer, so it gets no such speedup: searching an unsorted list of a googol (10^100) elements will always cost it on the order of a googol lookups, however the data is laid out, which is far more than could be done before the end of the universe. A quantum computer running Grover's algorithm would need only on the order of 10^50 queries for the same list — still a huge number, but smaller by a factor of 10^50, and that is a gap no amount of classical lookup hardware can close.
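A small table of query counts makes that scaling concrete (these are counts of lookups, not wall-clock times):

```python
# Query counts for searching a list of N elements:
# classical linear scan, binary search on a *sorted* list, and Grover's
# quadratic speedup on an unsorted one.
from math import isqrt, log2

for n in (10**6, 10**12, 10**18):
    print(f"N={n:.0e}: linear ~{n:.0e}, sorted ~{log2(n):.0f}, Grover ~{isqrt(n):.0e}")
```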


I am assuming there are ways to whittle down this sort of thing, though. If you have read nodes that are not isolated but can read across the spectrum, and you can change the light, you can ultimately apply a very large number of methods to keep it up to par and get close. You could logically design the device to do something like that specifically, potentially.

Although I would design for maximum usability and function, as its one benefit is massive data storage and representation. You could apply filter logic in a structured environment to get the things you need. The idea is you would design its use around knowing, at the start, the total of what you are doing with it. Plus 1:1 or even greater read ability per node, depending on how you made it. I would think there are very interesting mechanical and other means to process light and EM and other things. One thing you could do is read multiple aspects of the light, or more than one thing at a time, as a way to store data and use it. It's not limited to just light. Reading multiple qualities (or deciphering them) of non-physical things as data could be combined and read at once. How many ways can you read light, or other aspects needed to produce it? Every point could be read and used as analysis to speed up seeks. Not to mention manipulation as calculation, or anything else usable. There would be a hell of a lot of things you could do to it.

Plus this would have many other uses. If anything, if you can read the screen physically and store the data, you could still use it as a display. You just have to keep the read device up to date on screen changes so it can still read the data in it. It's basically combining all the things in a current computer into one device. So, it should be usable by all devices in a current computer and a good companion to them, on top of anything else added. If it's done well, it could be a good companion to quantum computers also, as it could be made to do heavier computations than a binary device in a smaller space, since the bit depth usage should add more functionality and hence power.

This kind of tech should scale with computers also. It should be able to use most advances that other devices benefit from in some way or another. If it's equal as ram to modern HDD in space it might stay that way over time. At least as far as the end results. There would be more dimensions to play with to keep it up to par. And how this works obviously already exist all over the place. It's just applying it.

https://en.wikipedia.org/wiki/Nanosensor

https://en.wikipedia.org/wiki/Nanophotonics

Combine all applicable levels and you can increase processing or other abilities. There are no end of ways to deal with the data and get what you want out of it.

It would be even cooler if it could melt parts of itself down and remake them with lithography internally. Some parts could be regenerated and corrected for mistakes.

This stuff doesn't even have to be accurate. It just needs to be predictable.

https://ieeexplore.ieee.org/document/8409924

Is one advantage of fiber optics or other light-transmitting data methods the lack of need for a hard joint? I.e., if it breaks you can replace it, because it's sending light and doesn't need a soldered connection? That would make the tech easier, as it's replaceable by definition, as long as the light transmitter can be aligned or whatnot. And you could send light along the whole thing and manipulate it to remove certain issues. Or add to it potentially. I guess it depends on when you translate the info.

You could combine it with old-fashioned soldered joints if you needed to send some of it by cable, and have them hold the fiber-optic lenses in place to send the data. Or something odd.

Actually, a downside might be security: https://www.google.com/search?client=firefox-b-1-d&q=Steganography

Although if you can secure that, it could be a nice tool. You would have to anyway.
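To make the steganography worry concrete, here is a toy least-significant-bit sketch (a generic illustration in Python/numpy, not anything specific to this device): arbitrary data can ride along in the low bit of each color byte of an image-like buffer, which is exactly why a big readable color grid is a place to hide things.

```python
# Toy LSB steganography sketch (illustration of why image data can hide things,
# not a design for the device discussed above): stash payload bits in the
# lowest bit of each color byte, then read them back out.
import numpy as np

def hide(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Write payload bits into the least significant bit of each byte."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > pixels.size:
        raise ValueError("payload too large for this image")
    out = pixels.reshape(-1).copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def reveal(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes back out of the least significant bits."""
    bits = pixels.reshape(-1)[:length * 8] & 1
    return np.packbits(bits).tobytes()

image = np.random.randint(0, 256, size=(1080, 1920, 4), dtype=np.uint8)
secret = b"hidden in plain sight"
stego = hide(image, secret)
assert reveal(stego, len(secret)) == secret
```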

Edited by Arugela

So, what if you just went with the representative data instead of the hardware? What types of things could you process? Where are the limits?

1. Pretend you have an image file of a 1920x1080 monitor at 32-bit color depth. This is logical data representing the screen display.

You could then apply logic to analyze it and pull data out with modern hardware. This is an immediate increase in a computer's size and power based on software logic alone.

2. Can you put an image inside an image? This just changes the data, which makes the data depth a matter of software translations. It adds both depth and breadth to data, and a natural form of encryption with each layering; the deeper it goes, the more complex the encryption. It also means all modern networks could send far more data than previously. To get the desired data, you would simply translate it twice, or treat it as a deeper layer of logic inside the existing image data. And each layer could change the bit depth to disguise the data. That could mean an enormous number of data patterns to search through to find the correct data: natural, practically unbreakable encryption based on computational abilities, especially if time is no object, and all on top of modern encryption methods. (A rough sketch of the packing half of this follows below.)
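As a rough toy sketch of the "image as raw data" half of that (my own illustration in Python/numpy, not a real implementation): a 32-bit pixel is 4 bytes, so one 1920x1080 frame carries about 8.3 MB of arbitrary bytes, and a nested "image" is just more bytes packed the same way.

```python
# Toy sketch: treating a 1920x1080, 32-bit "frame" as raw storage.
# Each pixel holds 4 bytes (R, G, B, A), so one frame holds
# 1920 * 1080 * 4 bytes, roughly 8.3 MB of arbitrary data.
import numpy as np

WIDTH, HEIGHT = 1920, 1080
FRAME_BYTES = WIDTH * HEIGHT * 4  # 4 bytes per 32-bit pixel

def pack_into_frame(data: bytes) -> np.ndarray:
    """Store raw bytes in the pixel values of a WIDTH x HEIGHT RGBA frame."""
    if len(data) > FRAME_BYTES:
        raise ValueError("data does not fit in one frame")
    buf = np.zeros(FRAME_BYTES, dtype=np.uint8)
    buf[:len(data)] = np.frombuffer(data, dtype=np.uint8)
    return buf.reshape(HEIGHT, WIDTH, 4)  # looks like an ordinary image

def unpack_from_frame(frame: np.ndarray, length: int) -> bytes:
    """Read the first `length` bytes back out of the frame."""
    return frame.reshape(-1).tobytes()[:length]

payload = b"any binary data at all, including another packed image"
frame = pack_into_frame(payload)
assert unpack_from_frame(frame, len(payload)) == payload
```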

 

What are the limits and problems with such tasks? Would quantum computers help find the data, or would they not be needed? I would think there would be ways, if you know the structure of the data ahead of time, to find the exact data quickly based on pre-known circumstances.

How much could this be applied right now to all software?

https://wccftech.com/microled-vs-oled-everything-explained/

https://www.displaydaily.com/article/display-daily/microled-emerging-as-next-generation-display-technology

If nothing else, microLED is supposed to get actual nanosecond response times with no burn-in and a long lifespan. It just might be pricey. But if it gives you a petabyte of VRAM, then why not? If it can give you effectively unbreakable encryption and RAM-like speeds, then who cares? If you get enough response you can simulate even deeper bit depth and get much more interesting data retrieval and other functions, maximizing its use. You could even use it as a processor, depending on the light-manipulation methods and whether you can make up for the relative difference to CPU speeds.

And if power savings is a thing, the price could be worth it. Maybe. Imagine a hybrid microLED/HDD or microLED/M.2 drive. With the data manipulations, I think that at one layer, at 1920x1080 and 32-bit, you could get on the order of 66 thousand times the data storage of a 4 TB drive with compression. And more complex things could probably be applied. You are talking about near-unlimited space and potential computational abilities.

Imagine 128-bit layered image data that is translated. Each layer has a random bit depth to translate through. Each layer is also encrypted individually!!! You need to apply each layer correctly to get the data, or you get nothing!! And any other method could be applied to make it more complex. I'm sure ways around it would be found in abundance, but it could be hard where and when it needs to be hard, and this from one basic image. I wonder if the weakness is in the end image and its predictability. I imagine, though, that with nanosecond response times decrypting the data, you could add thousands upon thousands of layers of encryption. Let alone applying stronger forms of encryption on each layer; it keeps getting more complicated fast.
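A toy sketch of the layer-per-key idea (my own illustration only; XOR with a hash-derived keystream standing in for real per-layer encryption):

```python
# Toy sketch of "every layer has its own key": wrap a payload in N layers,
# each XORed with its own keystream; every layer must be stripped with the
# matching key. (XOR layers happen to commute, so order doesn't matter here;
# a real cipher per layer would also force the order.) Not real cryptography.
import hashlib
import itertools

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeatable pseudo-random keystream from a key (toy only)."""
    out = bytearray()
    counter = itertools.count()
    while len(out) < length:
        out += hashlib.sha256(key + str(next(counter)).encode()).digest()
    return bytes(out[:length])

def xor_layer(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

keys = [b"layer-1", b"layer-2", b"layer-3"]      # one key per layer
plaintext = b"the payload hidden under several layers"

wrapped = plaintext
for k in keys:                                    # apply each layer
    wrapped = xor_layer(wrapped, k)

unwrapped = wrapped
for k in reversed(keys):                          # strip each layer with its key
    unwrapped = xor_layer(unwrapped, k)

assert unwrapped == plaintext
```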

Any native machine logic that goes over the image layer could also be applied simultaneously with the computation of the LED device, for multiple purposes. You're taking binary into a 3D realm of processing functions.

I wonder if you could use compression tricks and these devices to speed up, or lower, the overall processing load for monitors and VRAM, to let displays render more easily without bogging down the GPU when driving multiple displays. You would probably have to get past the response times. That could be a use for fast reads and large displays. You could use a fixed, deliberately over-built image to translate data, like a cipher used just for enlargement, or something.

Maybe security is the issue.

 

Der... OK, I was thinking you could represent a screen color combo in less than the full 32-bit data size, as a smaller bit length, and that it would reduce the size without doing anything special... Not sure why I was thinking that. But there should be tons of ways to reduce the data count, especially with combined color representations, or deductive notation. OK, it can be reduced, since it can be represented by 32 bits.

1. Consider the color result of mixing multiple colors. This can be stored as a single color to reduce the overall size, if you know what to translate it back into. Then you just need a cipher to translate everything back. Reduction in size. (Possibly any size of combination.)

2. A notation system. Break the colors down into grids. If you break them in half, the grid can be displayed more quickly, since it uses half the colors, and then you find the value within the grid. A 32-bit value could, at minimum, be cut to half the data size with a single extra bit. Each reduction of the color spectrum then produces a smaller bit count to go with it. This reduces the size of the data. (An exponential cipher.)

Example: 32 bits represented as two 64 KB-sized numbers ((64x1024)^2)(1x1). One represents the notation and one represents the color grid. 32-bit colors represented in two 64 KB chunks. You could even store that easily in a CPU's cache at that size. A gain of 32,768 times the storage; 32^3 times, actually. (Maybe a fast emergency on-CPU display logic for startup if you don't have a GPU present?)

You could do the same with two 64-bit values. A gain of 2,147,483,648 times the storage. And 64^4 is 24 bits' worth, which could make for easy conversions.

3. Could you break the exponential ciphers into smaller exponential or other ciphers?! How about 2x2 8-bit values?! Which could be reduced down to 4x4 4-bit values?! That could be broken down into 8x8 2-bit values? And finally 16x16 1-bit?! Can you do less than one bit?

8^8 is 24 bits' worth again, potentially making conversions fast, depending on the color breakdown. Or do it logically instead of by changing colors. Or lay it out differently on the screen for conversion. The values could be displayed in different spots in order to collect, or change, the color value for reading in many different ways.
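Here is a toy sketch of the splitting itself (my own illustration): one 32-bit value broken into 2x16-bit, 4x8-bit, 8x4-bit, 16x2-bit, or 32x1-bit fields and reassembled. Note that this is plain bit manipulation; the grouping changes but the total bit count does not.

```python
# Toy sketch: split a 32-bit value into smaller fixed-width fields and
# reassemble it. Only the grouping changes; the total number of bits is fixed.

def split_bits(value: int, total_bits: int, field_bits: int) -> list[int]:
    """Split `value` into total_bits/field_bits fields, most significant first."""
    assert total_bits % field_bits == 0
    mask = (1 << field_bits) - 1
    count = total_bits // field_bits
    return [(value >> (field_bits * i)) & mask for i in reversed(range(count))]

def join_bits(fields: list[int], field_bits: int) -> int:
    """Reassemble the original value from its fields."""
    value = 0
    for f in fields:
        value = (value << field_bits) | f
    return value

color = 0xAABBCCDD                      # an example 32-bit "color" value
for width in (16, 8, 4, 2, 1):          # the 2x16, 4x8, 8x4, 16x2, 32x1 splits
    parts = split_bits(color, 32, width)
    assert join_bits(parts, width) == color
    print(f"{len(parts)} fields of {width} bit(s): {parts}")
```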

Would the physical devices help? At minimum you could store more data and have it held in smaller chunks on an HDD. Or could you get the equivalent from a CPU/GPU now? I would think the advantage is easy display, and then reading/manipulation of large color spectrums, for fast processing. Or is the advantage only once you are past certain limits of a current system, like maybe its bit width, or a convenient bit width to process with the current system's resources, assuming it can't be done with smaller notation?

I'm forgetting: the advantage is copper translated into light for quick transmission. Plus the connector could be non-soldered and connected through something as easy as a video connector from your GPU, then transmit or receive with optics so no soldering is needed, as long as you can affix it properly.

Am I missing something else? I might be messing that up again.

Would this not change Moore's law from doubling to something even faster with each iteration?! It would potentially be an increasing rate, at an increasing rate, over time.

If so, screw Moore. He was an idiot.

BTW, do we use this method anywhere now? I would think GPUs might.

Edited by Arugela

  • 3 weeks later...

Is this what is used in sound files now? If so, can't we just do this in software to shrink file sizes for download and upload?

 

https://www.bbc.com/bitesize/guides/z7vc7ty/revision/4

Couldn't we just use our audio equipment to accomplish this to some extent, then? If not, have the CPU use its cache RAM to help decode things. I think 32-bit only needs, at worst, two chunks of less than 64 KB each to decompress a file.
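For the pure-software version of "shrink it before sending", a minimal sketch with Python's built-in zlib (unrelated to the audio hardware, just ordinary lossless compression):

```python
# Minimal software-compression sketch: shrink a payload before upload/download
# and restore it losslessly on the other side. Uses only the standard library.
import zlib

payload = b"some highly repetitive data " * 1000   # compressible example data

compressed = zlib.compress(payload, level=9)
restored = zlib.decompress(compressed)

assert restored == payload
print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes")
```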

I wonder if we could use dial-up again if the info is small enough. 8)

 

Back to slowly downloading massive files and waiting eagerly to see the results!! ><

Edited by Arugela

1) Big walls of text put off most of your audience. You can’t sell an idea if no one buys it.

2) There's no such thing as 32-bit color representation on a monitor. Every time you mention those 32 bits, you lose the audience, because they think you don't understand important implementation details.

 

Explain it so a six-year-old would understand it. Then work out the details.


It doesn't matter what the actual means of producing the light is. The point is the ability to read, and effectively produce, a range of colors so that one bit location has the range of an actual 32-bit value. (Although this is just an example; 24 bits seems too small for a minimum, though any and all bit lengths can be used, and even switched between, for any useful purpose.) As long as it can in any way, or multiple ways, produce the light, it's fine; in fact, the more the merrier. I didn't think this was complicated enough to need much explaining. If you can display or calculate a bit location with a range of colors, you get that amount of data representation. What is there to explain?!

Once you get the correct colors, or represented data, you can then use it to collapse and translate data into actually smaller bit counts, packing and unpacking data from much smaller data. Then you can expand current data, or optimize current tech, in new ways if necessary.

Either way you need ways to get the data out efficiently. So then you have fun with logic. You need to figure out all the ways to do this anyway, so it's kind of par for the course. Not to mention deciding when, and whether, to translate between optical and other means. But it could open things up.

Instant or near-instant ways to do current tasks that take a long time. If the logic is faster, or can be made faster, you can use it with current tech to handle data faster, shrink download sizes, encrypt, or whatever.

The last idea was to use sound. If it's already using actual 24-bit audio, it's an easy translation, and you can use more of your hardware to complete current tasks.

The idea of much larger functional amounts of RAM opens up a lot of things, including more complicated physics calculations and things effectively similar to quantum-computer output, if desired.

If you display a light that falls within an effective 32-bit color range, that color then represents a large string of binary. That is data. With sufficiently fast hardware you can actively keep up with the light display, or with some effectively fast means; it depends on the use case and what can or can't be done. If you can't change the data, it still allows large amounts of data to be displayed without a refresh or change of data. There is a lot you can do with that, given that it allows expansion of data into a 32-bit representation live. You can also try to do the same thing with cache RAM in a CPU, with two 64 KB chunks of cache or similar.

It's not about selling something. It's a matter-of-fact issue. I'm just not sure where the downsides are. I'm sure there are some, or many, but I'm not sure how many ways there are around them either at the moment. It might be surprising. It usually is.

The oddity is that the things I've seen on light-based computing are all low-value, binary-based. I don't see why it's not used at a higher bit depth. That is one of its potential strengths. This concept allows any system to be maximized: no more partially used buses, etc., and much more profound software, even with current hardware.

Edit:

Maybe this is where the confusion is coming from. I'm saying represent the 32-bit color range with colors, each representing a string that is half a gigabyte, or 4 gigabits, in size. So unpacking 32 bits into a larger value. Each string would be a unique block of data representing a unique half-gigabyte string of 0's and 1's, i.e. the full possible range of 32 bits as a large, fully fleshed-out range of data combinations. Then you translate back and forth as needed, or pull data out. Each color in the spectrum represents a large string of data. That allows much larger data to be used. And if it can be packed into 32 bits, or whatever size, it could extend existing systems past their bit limits and give them more RAM in essence, enhancing existing computers.
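A toy sketch of that codebook idea (my own illustration, with 1 KB blocks standing in for the half-gigabyte strings): both ends hold the same pre-agreed table, and the small code just says which big block to look up.

```python
# Toy sketch of "each small code stands for a big pre-agreed block of data":
# a shared codebook maps small codes to large blocks. Both sides must already
# hold the same codebook; the code only tells you which block to look up.
import os

BLOCK_SIZE = 1024          # 1 KB blocks here instead of half a gigabyte
CODEBOOK_ENTRIES = 16      # a tiny codebook for the example

# Pre-agreed table shared by sender and receiver (random blocks for the demo).
codebook = {code: os.urandom(BLOCK_SIZE) for code in range(CODEBOOK_ENTRIES)}

def encode(blocks: list[bytes]) -> list[int]:
    """Replace each known block with its small code."""
    reverse = {block: code for code, block in codebook.items()}
    return [reverse[b] for b in blocks]

def decode(codes: list[int]) -> list[bytes]:
    """Expand each code back into its full block via the shared codebook."""
    return [codebook[c] for c in codes]

message = [codebook[3], codebook[7], codebook[3]]    # data made of known blocks
codes = encode(message)                              # e.g. [3, 7, 3]
assert decode(codes) == message
```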

One fun thing you could do is not calculate advanced physics at all, but display the colors and shapes of the results straight to the screen. Then, with enough knowledge of the results, the screen's result data could let the program respond in a much simpler way. (This would have to be done to the display when sending the image, too.) Bypassing complex physics results altogether. I'm not sure how to accomplish it, though, and I'm sure it's been gone over in the past. If anything, it could be combined with normal physics to supplement it, if not entirely replace it. It would probably be circumstantial.

And if the last 8 bits of 32-bit color are for transparency, it doesn't matter, as long as it's a unique readable color, or effectively is one.

Another trick is to have a way to read the lights individually, over multiple nodes, or over a single node if that's enough, and then use that as an effective computation device to get the equivalent of a larger data string. This allows potentially instant computation, depending on circumstances and the size of the data, which is one of the main points of such a device: the potential to do all calculations within a single instant, or as close to it as possible. Having multiple read sensors could help accomplish this, meaning writes are minimally needed but still ultimately useful. Anything predictable can be used for calculation where needed, so you can customize software to get the needed results. So, the more read points with unique outcomes, the better.

Then you can do the same with things besides the light itself. Reading heat or other qualities of the light or device, which can also serve to check the accuracy of the data, could be used as a calculation method and purposely exploited depending on circumstances. The more ways to read, the better.

For instance: say you have a node made of 4 LEDs. If you can read each individual LED's light, read the combined end result, and read across multiple nodes, you can use prediction to calculate something, both from the combination of lights being used and from the outcome (or multiple outcomes) based on how you display it. This can be done by writing/displaying fast and reading, or by having large amounts of written/displayed data sitting there to be read from the correct place in the device to find the results. Either way it works. The potential problem with slow writes is that you are more likely to run out of effective space, since it's limited to the device's maximum size and usable space, depending on the application.

You could also use this to maximize multi-core processors by keeping them flooded at 100% at all times, using a more predictable flow of data so that large chunks of data can be strung together into an outcome. If all the data is one big chunk and it's tagged internally, you should be able to string the outcomes together more easily and keep multiple cores in use, even if (large) parts of the data are nothing but 0's and empty fill data. So you will probably want more power-efficient CPUs, or some other way to reduce the power consumption. Maybe the predictability will lower CPU usage; I guess it depends on the software. There could be tags to skip calculating certain pools of 0's so no power is actually spent on empty data, while other 0's would still be part of the calculated data. This could massively reduce power use. The data is also never used differently, which keeps it predictable without spitting out nothing, so it's still easy to find in the same manner with less power usage. Or specialized CPUs could treat the tagged 0's in a way that uses no power. This could lead to simpler CPU designs, or more specialized ones meant to reduce power usage.
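A toy sketch of the "tag the pools of zeros and never touch them" part (my own illustration in Python/numpy):

```python
# Toy sketch: split a buffer into blocks, detect the all-zero blocks once,
# and skip them during the (pretend) expensive per-block work.
import numpy as np

BLOCK = 4096

def process_block(block: np.ndarray) -> int:
    """Stand-in for some expensive per-block calculation."""
    return int(block.sum())

def process_sparse(buffer: np.ndarray) -> int:
    """Process only blocks that contain non-zero data."""
    total = 0
    for start in range(0, len(buffer), BLOCK):
        chunk = buffer[start:start + BLOCK]
        if not chunk.any():          # the "tag": an all-zero pool, skip it
            continue
        total += process_block(chunk)
    return total

data = np.zeros(BLOCK * 100, dtype=np.uint8)   # mostly empty fill data
data[BLOCK * 10 : BLOCK * 10 + 5] = 7          # a little real data in one block
assert process_sparse(data) == 35
```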

That could help to literally, or effectively, do multiple calculations at once to get complex results. It's all a matter of software design, or of hardware abilities to whatever extent they allow.

Edited by Arugela

TL;DR. You do not understand what quantum computing is really for. This is not just a "normal computer, but faster". This is not optical computing. This is not some extreme sort of multithreading. This is designing a computer that goes beyond standard elements from circuit theory. The advantage of quantum computers is that they are probabilistic, not deterministic. Every classical computer is 100% deterministic and cannot implement an algorithm which requires something beyond that. You can't even get a real random number without an external generator. You could pool all the computing resources on Earth, connect them however you want, and you still wouldn't be able to solve the problems you need quantum computers for, because all those machines are still made out of a bunch of transistors. To get around determinism, you need true quantum circuit elements.

Edited by Guest
