
Geometric mobos (SRAM)


Arugela


I've been randomly reading up on why we don't use SRAM anymore. I had always heard it might make a comeback with more recent developments, but that seems not to have happened.

One argument centered on wire length, among various other considerations.

This brought me back to an idea I had a long time ago: geometric mobos, made up of Lego-like blocks and geometrically designed for shorter wires. Instead of flat 2D boards, this could shrink the space taken up by many system parts, or at least by the core of the system. How feasible would it be to have, instead of a normal mobo (or on top of a normal mobo), a cube, pyramid, or sphere holding either the entire system or core components (RAM, bridges, CPU, and the other connectors), cutting down wire length to speed up the system and enable better features? If you could also split the entire system, including the GPU, into segments instead of one whole card, the engine of the system could be slotted in and customized by position within the shape instead of along a long 2D board. I would think the main issue would be heat. But clever design, and possibly shrinkage, could make for interesting mini systems, or much more complex parts that still fit on 2D or other boards. I've been playing too much STO, but what about a Dyson-sphere computer with these shapes inside it, or a Borg-ship-shaped computer! 8)

And a Lego-like computer where you stick on the CPU and other, possibly varied, system parts could be interesting. If it's no bigger than a baseball or golf ball, or smaller, how much could that change about modern computer design? A whole computer the size of a baseball, on a baseball stand that provides power, sitting on a shelf, would be cool.

What about cylinders, like that one ship in that one Star Trek movie? It's long, but you can maintain a more minimal distance to the central core from any other component...

 


90+% of your CPU die is SRAM. The problem with SRAM is that it's very big: six transistors per cell, as opposed to one transistor and a cap for DRAM. That makes SRAM not only very expensive but also physically large. How deep your RAM is also adds to memory latency, because of that whole speed-of-light thing. SRAM is good stuff, though: very low idle power, very fast, and quite easy to work with.
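To put that density gap in rough numbers, a minimal back-of-the-envelope sketch: the per-bit costs are the textbook cell topologies (6T SRAM vs. 1T1C DRAM), and the cache size is an arbitrary example, not a real part.

```python
# Rough transistor budget for a given amount of on-die storage.
# Assumptions: the textbook 6-transistor SRAM cell vs. the 1-transistor,
# 1-capacitor DRAM cell; real cell areas also depend on the process node.

def transistors(megabytes: float, per_bit: int) -> int:
    bits = int(megabytes * 8 * 1024 * 1024)
    return bits * per_bit

cache_mb = 32  # e.g. a large L3 cache
print(f"{cache_mb} MB as 6T SRAM  : {transistors(cache_mb, 6):,}")
print(f"{cache_mb} MB as 1T1C DRAM: {transistors(cache_mb, 1):,} (+ capacitors)")
```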

One thing you see in the embedded world is chip-on-chip packages; the Raspberry Pi uses one. You can get a CPU that has pads on both sides; the top pads are there so a memory chip can be installed on top of the CPU, and a trip through the reflow oven bonds it all together. You can't really do the same thing with a high-performance CPU like an i7, because you need a block of metal in contact with the die to act as a heat-transfer medium into the heat sink, with a thermal interface paste of sorts in between.

I often wonder why we have our RAM way out in the boonies rather than right next to the CPU die. Like ringing the chip socket with SODIMM slots to minimize signal paths to the chip. I don't think the case modders would like that very much; you can't stick a hefty cooling block on your CPU if you have RAM in the way, even though the closer RAM would probably be worth more, performance-wise, than a few hundred MHz of overclock. Unfortunately, that's the group you have to market your boards to.

When I build a computer, less is usually more: all your performance comes from your CPU, RAM, and SSDs (and your GPU). You don't need a mobo you could use for a coffee table, or a case the size of the Apollo Guidance Computer, to get good performance. Smaller is always going to be better; that's why I'm fond of the mini-ITX form factor. You can cram a lot of performance into a tiny computer. They seldom take more than one video card and seldom more than two sticks of RAM, maybe a mini-PCIe or an M.2 slot. Power supplies and video cards are really behind the times with regard to miniaturization; again, it's high-end system builders demanding massive video cards with two pounds of heat sink, which they then remove and replace with a liquid cooling block they don't need. At least there are half-length cards with pretty good performance, but I'd rather have a half-height card to get my case size down. Power supplies are finally coming down: I saw a 500 W modular SFX power supply, so at least the PSU people are getting with it. I'd like to get my system down to about half a cubic foot next time (realistically I'm probably reusing my Elite 110).


5 hours ago, Elthy said:

Cooling is the primary space-related problem in PCs:

Powerful hardware
Silent operation
Small form factor

Pick two of those...

I don't think it's like that at all. Making your rig silent is more about selecting the right fan bearings and minimizing obstructions in the case. With TDP on the decline, you only need one big case fan in addition to the CPU and GPU fans. Water cooling ain't the answer, because you still have to have fans running. My rig is completely silent when idle and only gets a little bit loud under load: a 4790K at default settings (overclocking brings noise and unnecessary heat, and the heat shortens the life of caps and other components), 8 GB of RAM, and a 750 Ti. It's not quite the newest rig on the block, but it's still what I call a powerful, quiet rig, and it doesn't take up much more than a cubic foot of space (Cooler Master Elite 110). If you want dual video cards, water cooling, and to overclock everything, you will only get one of those things. But I find those builds silly.


You decided against powerful hardware, which is a viable option. But to get GPUs with >250 W power usage cooled properly without losing your hearing, you need very big heatsinks; it's simply physics.


3 hours ago, Nuke said:

I don't think it's like that at all. Making your rig silent is more about selecting the right fan bearings and minimizing obstructions in the case. With TDP on the decline, you only need one big case fan in addition to the CPU and GPU fans. Water cooling ain't the answer, because you still have to have fans running. My rig is completely silent when idle and only gets a little bit loud under load: a 4790K at default settings (overclocking brings noise and unnecessary heat, and the heat shortens the life of caps and other components), 8 GB of RAM, and a 750 Ti. It's not quite the newest rig on the block, but it's still what I call a powerful, quiet rig, and it doesn't take up much more than a cubic foot of space (Cooler Master Elite 110). If you want dual video cards, water cooling, and to overclock everything, you will only get one of those things. But I find those builds silly.

However, any tower PC is a large form factor in this context, and as you say, you can get them to run quiet.
The geometric idea is nice if done on a smaller scale, as in chips. For parts like graphics or memory I would rather go fiber optic, because of the increased speed.
This also increases flexibility, as you just connect power and the fiber.
Chips are going more and more 3D. The Pi solution is one of the simplest; next is gluing chips on top of each other, which is done in high-capacity memory cards. Last is building the chip itself in 3D. The issue with all of this is heat, which makes it easier to do with flash memory than with a high-performance CPU.


High Bandwidth Memory is probably the closest thing today (on the AMD Fury and soon Vega GPUs) to Arugela's idea without sacrificing cooling performance or modularity: it has stacked DRAM chips on an interposer together with the graphics chip.


11 hours ago, Nuke said:

I often wonder why we have our RAM way out in the boonies rather than right next to the CPU die. Like ringing the chip socket with SODIMM slots to minimize signal paths to the chip.

You get 30cm of wire for 1ns latency. You don't get that much distance round-trip on most boards. Given that RAM latency is absolutely glacial on that time scale (~15ns), it's simply not worth the effort to migrate northbridge into the CPU. You get significantly more bang out of dedicating more of the die to cache instead.
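A quick sketch of that wire-delay arithmetic (a minimal illustration: the ~30 cm/ns figure is light in vacuum, and the trace velocity factor below is an assumption, since signals on copper travel at only about two-thirds of c):

```python
# Propagation delay along a PCB trace.
# The 30 cm/ns figure is light in vacuum; signals on copper traces travel
# at roughly 0.6-0.7 c, so the velocity factor below is an assumption.

C_CM_PER_NS = 29.98      # speed of light in vacuum, cm per nanosecond
VELOCITY_FACTOR = 0.66   # assumed for a typical FR-4 trace

def trace_delay_ns(length_cm: float) -> float:
    return length_cm / (C_CM_PER_NS * VELOCITY_FACTOR)

# ~10 cm out to a DIMM and back (20 cm round trip):
print(f"{trace_delay_ns(20):.2f} ns")  # ~1 ns, vs. ~15 ns DRAM latency
```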


12 minutes ago, K^2 said:

You get 30cm of wire for 1ns latency. You don't get that much distance round-trip on most boards. Given that RAM latency is absolutely glacial on that time scale (~15ns), it's simply not worth the effort to migrate northbridge into the CPU. You get significantly more bang out of dedicating more of the die to cache instead.

On modern CPUs the northbridge is included in the CPU, so the CPU has direct pins to memory and PCIe. The reason might mostly be to simplify design, and to let you combine CPU and GPU in one chip; not having to go through the northbridge to reach memory or other units is also a speedup.
 


18 minutes ago, K^2 said:

You get 30cm of wire for 1ns latency. You don't get that much distance round-trip on most boards. Given that RAM latency is absolutely glacial on that time scale (~15ns), it's simply not worth the effort to migrate northbridge into the CPU. You get significantly more bang out of dedicating more of the die to cache instead.

AFAIK the reason for shorter distances isn't the signal delay but signal quality. It seems easier to maintain high frequency (and thus high bandwidth) over short distances; similarly, it's easier to run extreme numbers of parallel data lines over a shorter distance.


1 hour ago, Elthy said:

AFAIK the reason for shorter distances isn't the signal delay but signal quality. It seems easier to maintain high frequency (and thus high bandwidth) over short distances; similarly, it's easier to run extreme numbers of parallel data lines over a shorter distance.

This is solved to a large degree with PCIe; my fiber idea was mostly to give more flexibility.


1 hour ago, magnemoe said:

On modern CPUs the northbridge is included in the CPU, so the CPU has direct pins to memory and PCIe. The reason might mostly be to simplify design, and to let you combine CPU and GPU in one chip; not having to go through the northbridge to reach memory or other units is also a speedup.
 

That's how Intel integrated graphics gets its performance. It's not a lot for a gamer, but it is a pretty hefty GPU core, and it gets much of its performance from its proximity to the CPU.

 

And the 1 ns K^2 noted is about four cycles on my rig's CPU. A memory fetch can take hundreds of cycles to complete, so four doesn't seem like a whole lot. But considering that most of what a computer does is fetch data from memory, do an operation, and put it back, it's going to be doing that a lot, and every time, that's four cycles less your CPU has to wait. Interestingly enough, modern CPUs will speculatively execute work before the inputs are actually confirmed, so that when the data shows up, the result is often already computed. It's really quite nifty.
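The cycle arithmetic here is just time multiplied by clock frequency; a minimal sketch with assumed numbers (a 4 GHz core, roughly a stock 4790K, and an illustrative DRAM fetch cost):

```python
# Converting latency into "wasted" CPU cycles at a given clock speed.
# Assumptions: a 4 GHz core (roughly a stock 4790K) and an illustrative
# ~50 ns cost for a fetch that misses every cache and goes out to DRAM.

CLOCK_GHZ = 4.0

def ns_to_cycles(ns: float) -> float:
    return ns * CLOCK_GHZ  # cycles = seconds * Hz = ns * GHz

print(ns_to_cycles(1.0))   # 1 ns of wire delay -> 4.0 cycles
print(ns_to_cycles(50.0))  # ~50 ns DRAM fetch  -> 200.0 cycles to hide
```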


4 hours ago, magnemoe said:

However, any tower PC is a large form factor in this context, and as you say, you can get them to run quiet.
The geometric idea is nice if done on a smaller scale, as in chips. For parts like graphics or memory I would rather go fiber optic, because of the increased speed.
This also increases flexibility, as you just connect power and the fiber.
Chips are going more and more 3D. The Pi solution is one of the simplest; next is gluing chips on top of each other, which is done in high-capacity memory cards. Last is building the chip itself in 3D. The issue with all of this is heat, which makes it easier to do with flash memory than with a high-performance CPU.

I look forward to 3D memory: when they figure out how to build up a CPU die to about 1 cm thick, with massive vertical SRAM and DRAM memories built right into the die. Then throw in a GPU and an FPGA while you're at it. Then your motherboard just becomes a big socket for the chip, with the PSU on board and a rack of SSD slots. That would be interesting indeed.


The last group that maximized computer density by limiting wires was likely Seymour Cray (and whatever computer company he was at at the time). Even then, cooling was the critical element, and wildly more so now that Nvidia is producing 300 W GPUs for supercomputing (yours for $15k a pop).

SRAM vs. DRAM: modern computers are based on a memory hierarchy. Small SRAM arrays (burning the most watts) are fastest (~1 ns), followed by larger SRAM arrays (~10 ns) [there are typically two levels of SRAM with different speeds], followed by DRAM [~50 ns], followed by NAND flash [~10,000 ns?], followed by rotating disks [~10,000,000 ns?]. The idea is that as the levels get slower and slower, you have to read from them less often, and typically in larger batches: everything from L1 [~1 ns] down to DRAM is read 64 bytes at a pop, and everything below that has issues reading less than 4 kB at a pop. And of course cost varies with speed (except that your fastest SRAM slows down if you make it bigger: you can't have a large two-cycle L1 at any price).
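A standard way to reason about that hierarchy is average memory access time (AMAT); a minimal sketch below, using the latencies above and hit rates that are purely illustrative assumptions:

```python
# Average memory access time for a toy hierarchy:
# AMAT = t1 + m1*(t2 + m2*t3), paying each level's hit time
# only for the fraction of accesses that actually reach it.

def amat_ns(levels):
    """levels: (hit_time_ns, hit_rate) per level; last level always hits."""
    total, reach = 0.0, 1.0
    for hit_time_ns, hit_rate in levels:
        total += reach * hit_time_ns   # cost paid by accesses reaching here
        reach *= 1.0 - hit_rate        # fraction that misses and goes deeper
    return total

hierarchy = [(1, 0.95), (10, 0.90), (50, 1.00)]  # L1 SRAM, L3 SRAM, DRAM
print(f"AMAT: {amat_ns(hierarchy):.2f} ns")      # ~1.75 ns
```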

3D memory is here, for values of NAND flash (I think 64-layer parts are shipping). Intel's 3D XPoint sounded like an attempt to barge in between DRAM and NAND flash (turning DRAM into something like a cache in HBM form, and turning NAND into a commodity). Until they get the endurance they were claiming no more than a year ago, it isn't going to happen.

Don't forget that programming a computer made out of many processors is tricky [wild understatement] (notice that after the Unity upgrade and everything else, KSP still largely uses only one CPU?). You can trivially make a computer fast by slapping lots of processors together, and that has been true since the late 1980s; software techniques are slow to build up to use such a bounty. Amdahl's law, sketched below, gives a rough sense of why.
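A minimal Amdahl's-law sketch of why extra processors alone don't deliver (the parallel fraction is an illustrative assumption, not a measurement of any real program):

```python
# Amdahl's law: overall speedup on n processors when only a fraction p
# of the work can run in parallel. p below is an illustrative assumption.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    # With 80% of the work parallel, speedup saturates near 5x.
    print(f"{n:4d} cores -> {amdahl_speedup(0.80, n):5.2f}x")
```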


14 minutes ago, Nuke said:

I look forward to 3D memory: when they figure out how to build up a CPU die to about 1 cm thick, with massive vertical SRAM and DRAM memories built right into the die. Then throw in a GPU and an FPGA while you're at it. Then your motherboard just becomes a big socket for the chip, with the PSU on board and a rack of SSD slots. That would be interesting indeed.

The problem would be cooling it, or rather the core of it.


8 minutes ago, Nuke said:

Something about CNT (carbon nanotube) heat pipes.

Diamond is supposed to be a superb conductor of heat; if you make a large part of the chip a block of diamond, it should work. Yes, the diamond heat-sink base would be far larger than the CPU.
A low-tech solution: lots of holes in the block, high-pressure water in, and a vacuum pump on the back so the water turns to steam inside the CPU, might work. In both cases the cooling system is way larger than the computer.
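For a sense of scale, a minimal conduction sketch using Fourier's law, Q = kAΔT/d (the slab geometry and temperature drop are illustrative assumptions; the conductivities are ballpark textbook values):

```python
# Steady-state heat conduction through a slab: Q = k * A * dT / d.
# Ballpark conductivities: diamond ~2000 W/(m*K), copper ~400 W/(m*K).
# The slab area, thickness, and temperature drop are illustrative.

def conducted_watts(k, area_m2, thickness_m, delta_t):
    return k * area_m2 * delta_t / thickness_m

AREA = 0.02 * 0.02   # 2 cm x 2 cm spreader over the die
THICKNESS = 0.003    # 3 mm slab
DELTA_T = 10.0       # 10 K drop across the slab

for name, k in (("copper", 400.0), ("diamond", 2000.0)):
    print(f"{name:7s}: {conducted_watts(k, AREA, THICKNESS, DELTA_T):6.0f} W")
```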


Put it in a fish tank? Make pumps inside to pull water through? It could be made part of the system. ><

Maybe an integrated PC inside the house's water tank and plumbing system...

That might be cool in businesses or places with known long-term computing needs, as long as it's replaceable/upgradable.

The day we see water tanks or water-cooler tanks with CPU advertisements would be funny. You would have convos with people wearing VR/3D helmets/keyboards doing their daily office work while chatting and haphazardly sloshing water all over the place in little cups.

"You need a cup of water joe? No thanks. I just need a better connection on my wifi to send in these documents to the boss."

Room-level mainframes in each water cooler, and the admin/server room moved to the water closet (the building's water tank storage), where the admins are hidden like Quasimodo in a bell tower!! >< That could be bad, or funny, if it's an old building with a rain-collecting water tower on top. Maybe they could get secondary jobs as live weather reporters for the local news. Or maintenance men in the basement. "Basement dweller" would gain a whole new meaning.

Getting back to the subject a little: wouldn't the smaller, more compact size help a little with power over the lines, and with cooling? And if it's small enough, could you do the cooling on the board/Lego pieces and send the heat out to an outer shell, literally like a baseball or golf ball shell that can be removed? Maybe use something electromagnetic on the boards to pull air through or help with cooling?

How hard would it be if the motherboard itself pulled the heat away, and the wires/CPU/components were insulated/non-heat-producing (like the fiber-optics idea)? Maybe a Peltier element or something else as the entire mobo/case? Would there be a purpose for that sort of design in an electronic device? The cooling would then be bigger than the rest of the system, since it would be the majority of the system, assuming the mobo material outweighs the other materials. It could be put in water or liquid cooling too, if designed correctly.

If you knocked out all the unnecessary stuff, shrunk the entire computer down, and made it all connectible in a much smaller fashion (like Lego pieces, potentially), what could you get away with, cooling- and processing-wise, if an entire modern computer were a small pyramid or similar, around golf-ball size?

I think I had a convo a few years ago asking about fiber-optic computers: computers running massively parallel using light, like a monitor produces, with colors carrying the data (that is, color-spectrum detection as hardware storage, and massive bit lengths for data representation, to increase information throughput). The question was how much you could increase processing by using LCD-like tech and light spectra as a way of sending data in bulk, each spectral value being a wide bit-field, and subtle changes in the light being changes in the data.

Say an entire run of LCD cells made a single massive line of data (the line need not be straight), read continuously, possibly not all at once but fairly fast (though I would assume constant read-out would be possible). If each node was a color, the color was displayed and read accurately, and each node encoded a wide bit-field (and/or the entire line of cells formed an even wider word built from the smaller ones), could you process massive amounts of information on a visual/non-visual color spectrum laid out in a line?

Then instead of processing data the normal way, you simply keep track of changes on a massive color display that is storable without power (assuming that's possible): RAM, HDD, and possibly CPU/GPU in one. I wondered if error checking and actual processing could be done by visual scan and by electronic bits separately but simultaneously, for quicker use overall or in special cases, particularly if each node held its own state data. I also wondered if it could have a longer lifespan than normal methods. I think it came down to whether the nanometer scale of a chip beats the color-depth-per-area of the LCD, or something like that. Would something like that be cooler and usable, either as lines (normal fiber optics, or my idea) or as a processing unit of some kind? I'm not sure whether LCDs produce more heat than a CPU of equivalent mass.

I think the origin of the convo was going past binary computing. My reasoning was the potential to change a light's spectrum and hold it with minimal change or wear on parts, with the massive bit depth making up for size or other issues. And whether it had any other advantages/disadvantages, and whether it could take over enough roles of a current computer (RAM/HDD/CPU/GPU/other) as a single multi-use component, given the right configuration or design. If all else worked, I'm assuming the difficulty would be repair/replacement and the associated issues, though you would probably have to look at recycling and reuse to see whether those could be overcome, along with ease or cheapness of manufacturing, especially if you can get enough computing from a small enough area.

http://news.mit.edu/2013/computing-with-light-0704

Here's this. But I was thinking color-spectrum detection, because a single element could, in principle, carry millions of distinguishable levels (on the order of 21 bits each, if you can resolve ~2,000,000 levels), plus possible non-visual parts of the spectrum. Although multiple simultaneous measurements were something I wondered about: whether you could make it do two or more (possibly many more) things at once, or in-between readable functions of one type or another...
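The information-per-element arithmetic is just log2 of the number of distinguishable levels; a minimal sketch (all level counts below are illustrative assumptions, including the hypothetical two-million-level detector):

```python
# Bits per element = log2(number of distinguishable levels).
# All level counts below are illustrative assumptions.

from math import log2

for name, levels in (("24-bit RGB pixel", 2**24),
                     ("30-bit 'deep color' pixel", 2**30),
                     ("2,000,000-level detector", 2_000_000)):
    print(f"{name:26s}: {log2(levels):5.1f} bits per element")
```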

It could make RAID interesting: split a screen into even parts and make them read/write at the same time. You could even reuse old RAID logic by changing the read method.

If anything, you might get way better monitors out of it, like that ELCD (or was it OLED?) stuff from Japan that was supposed to come out with massive color bit depth. Imagine one of those small OLED displays where the front is the screen, but the back or edge of the panel is other components (possibly invisible to the user), using OLEDs as RAM/HDD/CPU/GPU/other, on top of any other very small components etched into the display materials. If the case itself were all OLEDs, with the side facing you as the screen and the rest used for the computer, it might work out nicely, especially if those OLEDs are, or could be made, less power-hungry. If it could hold its state data physically without power, it could go into a low-power mode and use only one read state to run most things, with a small minimal chunk of OLEDs for computing.

https://en.wikipedia.org/wiki/Organic_light-emitting_transistor

https://en.wikipedia.org/wiki/Organic_field-effect_transistor

Lower cost in the future
OLEDs can be printed onto any suitable substrate by an inkjet printer or even by screen printing,[65] theoretically making them cheaper to produce than LCD or plasma displays. However, fabrication of the OLED substrate is currently more costly than that of a TFT LCD, until mass production methods lower costs through scalability. Roll-to-roll vapor-deposition methods for organic devices do allow mass production of thousands of devices per minute for minimal cost; however, this technique also induces problems: devices with multiple layers can be challenging to make because of registration—lining up the different printed layers to the required degree of accuracy.
 
And if we can print our displays one day, we can print a computer or hard drive, if LEDs can be used as all the components. We could print entire computer systems on a sheet and use them as carry-along handheld display computers. You could get print designs online and simply print an entire PC for personal use. 8) If it's recyclable, that removes the entire recycle/repair problem: if we can recycle or fix the OLEDs/OFETs on the printed sheet, we can simply put it in a scanner-like device for repair, or reprint the entire thing, even saving the data if possible.
 
Then one day we will be asking ourselves, "What came first, the printer or the PC!?"
 
And long discussions about the printing press, older writing, Greek computers, and the cross-use of mechanical knowledge across time as computing ability will commence!
 
Because when two things are used for different purposes but perform the identical function mathematically/mechanically/geometrically, when did that function really first begin to be used?
 
http://computer.howstuffworks.com/printing-computer.htm
http://electronics.howstuffworks.com/gadgets/home/dyson-bladeless-fan.htm <- Could something like this (or another principle) be shrunk down or applied to a geometric computer like I mentioned? Maybe not for a large amount of air, but basic flow. Possibly flow for air or water without changing equipment.
 
I would assume the use of light could be adaptable, since you could change an OLED's use and its read meaning and use it for different types of tasks. You could change from a full spectrum to a partial spectrum and have on-the-fly changes in a light's meaning, up to its maximum. So possibly anything you want; you would just need to change how it was handled afterwards, and change the lights to fit the new data types until you were done with them.
 
Would the LEDs get brighter or duller with use? I would think you would start with low light and raise it for detection purposes, depending on how it's read. Then it dims as it dies? Have they run into that with this kind of tech yet?
 
https://3dprint.com/39552/3d-printed-flexible-computer/
 
I guess this kind of tech may beat out the geometric idea, unless the geometric idea is for a large, weird server or a dedicated high-performance part.
 
http://www.printedelectronicsworld.com/articles/7341/improvements-in-transistors-make-flexible-plastic-computers-reality
 
I guess LR-OFETs aren't as good, according to this article.
 
