
Supercomputers


Kiro


I remember hearing stories about how those "dishwasher" drives could jump several feet across the floor (hundreds of pounds of weight, and all) in the event of a head crash. Scary stuff. I never had the pleasure of working with them myself.

I've never seen them 'jump', not from a head crash anyway. For something like that to happen, the spindle and drive motor/flywheel would have to instantly seize - then maybe yes. For a large drive of that type, you'll hear a head crash... for sure... the sound is unmistakable, and unforgettable.

One thing I have seen dance across the floor was a huge magnetic drum unit. We're talking real old skool stuff here. The thing stood vertical and spun at something like 3000 rpm (don't quote me on the speed). Usually they're bolted down to a 1- or 2-inch-thick steel plate, which is also bolted/anchored into the cement flooring. If the bolts shear (or some numb-nut forgot to bolt it down in the first place) or the cement anchors come loose - get the hell outta the way.


I remember hearing stories about how those "dishwasher" drives could jump several feet across the floor (hundreds of pounds of weight, and all) in the event of a head crash. Scary stuff. I never had the pleasure of working with them myself.

Never had one 'jump', but I have seen several 'walk' away from their original position.

On one occasion we actually had one walk on command.

The trick was to find points on the disk that generated the most head-movement, and then find the resonance frequency of the cabinet.

The result was similar to a washing machine on spin cycle with unbalanced load.
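If you want to picture the seek pattern, here's a toy sketch in modern terms (the seek_to() call, the cylinder numbers, and the resonance frequency are all made up for illustration; this is nothing like the actual channel program we used):

    import time

    RESONANCE_HZ = 4.0               # hypothetical cabinet resonance, found by trial and error
    INNER_CYL, OUTER_CYL = 0, 814    # hypothetical drive geometry

    def seek_to(cylinder):
        # Stand-in for whatever command actually moved the heads.
        print("seek -> cylinder", cylinder)

    def shake(duration_s):
        # Alternate full-stroke seeks, paced so the cabinet gets one push per
        # half resonance period - i.e. the forcing matches the resonance.
        period = 1.0 / (2 * RESONANCE_HZ)
        target = OUTER_CYL
        end = time.time() + duration_s
        while time.time() < end:
            seek_to(target)
            target = INNER_CYL if target == OUTER_CYL else OUTER_CYL
            time.sleep(period)

    shake(2.0)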


Typically, "ultra high end" computing parts will do literally nothing for game-type applications. Your dreams would be better served by wishing for KSP x64 with multithreading enabled, running on a 5960X under full-immersion cooling.

Actually, just having multithreading with any efficiency would likely do wonders; most people with gaming PCs have eight effective cores.

Gaming computers in late 2015 typically have at least eight threads.

Some of us, however, have a bit less sanity than others and....

Well maybe I bought this chip last week.

Edited by Camaron

Gaming computers in late 2015 typically have at least eight cores.

Some of us, however, have a bit less sanity than others and....

Well maybe I bought this chip last week.

There's little to no point in having more than 4 cores for gaming, and currently the majority of >4-core 'gaming' PCs are running sucky 8*** chips.


There's little to no point in having more than 4 cores for gaming, and currently the majority of >4-core 'gaming' PCs are running sucky 8*** chips.

Didn't buy a 28-thread Xeon for gamin', friend. (But hopefully KSP will benefit enormously from it regardless.)

Also, Intel has seven six-core chips and an 8-core in its i7 lineup.

Also, I meant "threads" when I said "cores" in my previous post.


There's little to no point in having more than 4 cores for gaming

True that. Standard performance optimizations target 4-core systems. Except for consoles, but the current generation of consoles has manure for CPUs, so if the game was optimized for a console, any decent 4-core CPU will be more than adequate.

P.S. I don't see a point in purchasing a high-performance CPU for anything other than gaming and maybe some AI tasks. For pretty much anything else, you want raw FLOPS, and you can't beat a good GPU for that.

Edited by K^2

Can't argue for 4 cores. Markov Chain Monte Carlo analysis fried my single-core Prescott (although it could have been a motherboard fault); I could not use the computer while the program was running, sometimes for several days. With two cores it was slow but not a problem, and it ran faster even with a slower per-core rate. Windows XP running a virtual DOS emulation mode, with a VT220 window and constant communication over COM1, clobbered GUI function; a dual-core processor with a per-core rate a little over half the speed gave almost seamless, if slower, GUI operation.

Theoretically speaking, if you had a process-intensive program set to run in the background, as long as it had a decent "do events" frequency, you could run 3 programs and feed the results of each to a fourth using file packets: the fourth picks up each packet once the writer has closed it, draws in the data, and deletes the file, and you could triple the processing rate. I would not do this; running CPUs on intense numerical and integer processing for days on end ages the hell out of the processor. But occupying two cores, and letting the environment run on a third, is fine.
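For anyone who wants to picture the file-packet handoff, here's a minimal sketch (the directory name, the .done rename convention, and the JSON format are my own inventions for illustration, not what I actually ran). Each worker writes a packet and closes it; the fourth program only picks up closed packets, reads them, and deletes the files.

    import glob
    import json
    import os
    import tempfile

    PACKET_DIR = "packets"   # hypothetical drop directory shared by all the programs

    def write_packet(worker_id, seq, payload):
        # Worker side: write to a temp name, then rename. The rename is atomic,
        # so the collector never sees a half-written packet.
        os.makedirs(PACKET_DIR, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=PACKET_DIR, suffix=".tmp")
        with os.fdopen(fd, "w") as f:
            json.dump({"worker": worker_id, "seq": seq, "data": payload}, f)
        os.rename(tmp, os.path.join(PACKET_DIR, "%d-%06d.done" % (worker_id, seq)))

    def collect_packets():
        # Fourth program: pick up closed packets, read the data, delete the files.
        results = []
        for path in sorted(glob.glob(os.path.join(PACKET_DIR, "*.done"))):
            with open(path) as f:
                results.append(json.load(f))
            os.remove(path)
        return results

    # Pretend three background programs each produced one result packet.
    for i in range(3):
        write_packet(worker_id=i, seq=0, payload=[i, i * i])
    print(collect_packets())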


True that. Standard performance optimizations target 4-core systems. Except for consoles, but the current generation of consoles has manure for CPUs, so if the game was optimized for a console, any decent 4-core CPU will be more than adequate.

P.S. I don't see a point in purchasing a high-performance CPU for anything other than gaming and maybe some AI tasks. For pretty much anything else, you want raw FLOPS, and you can't beat a good GPU for that.

One fun thing about current-gen consoles is that they have a weak CPU with 8 cores, which should push better multi-core use in games.

There are plenty of programs that are very CPU-demanding. I work with 3D a lot and it's heavy; yes, the GPU is starting to take over the load, but this tends to have the same effect as in KSP: you simply expand the scope :)


True that. Standard performance optimizations target 4-core systems. Except for consoles, but the current generation of consoles has manure for CPUs, so if the game was optimized for a console, any decent 4-core CPU will be more than adequate.

P.S. I don't see a point in purchasing a high-performance CPU for anything other than gaming and maybe some AI tasks. For pretty much anything else, you want raw FLOPS, and you can't beat a good GPU for that.

But what do the FLOPS actually do?

Supercomputers have petaFLOPS, you know...


Very expensive paperweights...

*Sigh* I was seriously considering buying the Met Office's old Cray when they sold it off in the 90s.

It's probably not the model you were thinking of buying, but I'd still jump at the chance to buy a CRAY-1 if I ever got the opportunity. Might be no use as a computer these days, but it'd still be useful as a piece of furniture.


Edited by pxi

P.S. I don't see a point in purchasing a high-performance CPU for anything other than gaming and maybe some AI tasks. For pretty much anything else, you want raw FLOPS, and you can't beat a good GPU for that.

The guy from a bioinformatics institute says hello.

Our computer clusters don't have GPUs, because CPUs are more cost-effective and use less power in the tasks we do. Many of the clusters even have AMD CPUs, because it's the integer performance that matters, and AMD is often more cost-effective than Intel.


The guy from a bioinformatics institute says hello.

Our computer clusters don't have GPUs, because CPUs are more cost-effective and use less power in the tasks we do. Many of the clusters even have AMD CPUs, because it's the integer performance that matters, and AMD is often more cost-effective than Intel.

GPUs are nice for what they can do, primarily rendering and simulations, but it also requires that the software supports the GPU.

For normal server use it's pointless.


Didn't buy a 28-thread Xeon for gamin', friend. (But hopefully KSP will benefit enormously from it regardless.)

Also, Intel has seven six-core chips and an 8-core in its i7 lineup.

Also, I meant "threads" when I said "cores" in my previous post.

Multiprocessor cores for supercomputers are not designed for PC applications; they are designed for parallel processing: for example, processing the data from thousands of weather stations, sharing the data between adjacent processors, and creating a weather model. Or, in the case of the NSA, processing all your personal data looking for 'risks' and then moving it over to a separate machine for analysis. Another example: suppose you have real-time data coming in from a telescope, as with SETI; you could divide the spectrum into overlapping segments and then analyze the signals in those segments, giving some areas a wider slice of spectrum than others.
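As a toy illustration of the overlapping-segments idea (the segment width, overlap, and "analysis" below are all made up; this is nowhere near how SETI actually partitions its data), you can hand each slice of spectrum to a separate worker process:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def analyze_segment(job):
        # Each worker gets one slice of spectrum and reports the strongest bin in it.
        start, segment = job
        peak_bin = int(np.argmax(segment)) + start
        return start, peak_bin, float(segment.max())

    def split_with_overlap(n_bins, width, overlap):
        step = width - overlap
        return [(lo, min(lo + width, n_bins)) for lo in range(0, n_bins, step)]

    if __name__ == "__main__":
        spectrum = np.abs(np.random.randn(1_000_000))   # stand-in for real telescope data
        jobs = [(lo, spectrum[lo:hi])
                for lo, hi in split_with_overlap(len(spectrum), width=65_536, overlap=4_096)]
        with ProcessPoolExecutor() as pool:
            for start, peak_bin, power in pool.map(analyze_segment, jobs):
                print("segment at", start, "-> strongest bin", peak_bin)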

In games you would reach a certain limit, probably somewhere on the order of the number of interacting game objects (trees, sprites, or other features), at which point more cores would not add anything. KSP, for instance, could benefit from processing SOI and surface information separately, so it could keep track of interactions with the atmosphere or other things while your foreground process is warping. It could also wisely create processes for each fused part, and for things like struts (which, BTW, you can walk through).


GPUs are nice for what they can do, primarily rendering and simulations, but it also requires that the software supports the GPU.

For normal server use it's pointless.

GPUs are no longer the special-purpose computers they used to be. Now it's better to think of them as relatively cheap computers with a huge number of cores and a tiny amount of memory. If you're running numerical algorithms, GPUs almost always beat CPUs. If you're running combinatorial algorithms with only a few gigabytes of data, GPUs will probably beat CPUs. If you can represent the problem as a large number of small subproblems, a GPU will probably beat a CPU, especially if nearby subproblems tend to be similar.

If you're running combinatorial algorithms with large amounts of data with large-scale dependencies, GPUs are pretty pointless. If you're running a bunch of random stuff (like a server of a boring generic company), a GPU will also be useless.
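A trivial way to see the two problem shapes (sketched with NumPy on a CPU, since that's easier to post than actual GPU code): per-element work splits into a huge number of independent subproblems, while a recurrence does not.

    import numpy as np

    x = np.random.rand(100_000)

    # Shape 1: every element can be processed independently - exactly the kind
    # of "many small subproblems" that maps well onto thousands of GPU cores.
    independent = np.sqrt(x) * 2.0 + 1.0

    # Shape 2: each output depends on the previous one (a simple recurrence,
    # like an IIR filter), so the work cannot just be chopped into independent pieces.
    dependent = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc = 0.9 * acc + v
        dependent[i] = acc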


The guy from a bioinformatics institute says hello.

Our computer clusters don't have GPUs, because CPUs are more cost-effective and use less power in the tasks we do. Many of the clusters even have AMD CPUs, because it's the integer performance that matters, and AMD is often more cost-effective than Intel.

Without more details, it's hard for me to be particularly assertive on this, but my knee-jerk reaction is that you aren't using GPUs right. Even for tasks like artificial NNs, a GPU is by far more power-efficient*. You just have to make sure none of your CUDA cores ever spin idle or sit locked on memory access. That's an extra challenge in design, but I'm not aware of any branch of bioinformatics where GPU optimization isn't a solved problem. If you are positive that yours isn't, I would actually be very interested in hearing a bit more about it.

* Power efficiency in GPU computations is the only reason they were economically viable for bitcoin mining for quite a while, for example: a CPU consumed more power than you could pay for with the bitcoins you'd mine. Once you have your computations on rails, a GPU is amazingly power-efficient. But if it keeps thrashing for data and memory access, all it's going to generate is waste heat.
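The back-of-the-envelope version of the "thrashing for data" point, with hypothetical hardware numbers picked only to show the ratio:

    # Roofline-style sanity check: does the kernel do enough math per byte moved
    # to keep the GPU busy? All the hardware figures here are hypothetical.
    peak_flops    = 5e12     # assumed GPU peak: 5 TFLOP/s
    mem_bandwidth = 300e9    # assumed memory bandwidth: 300 GB/s
    balance_point = peak_flops / mem_bandwidth   # roughly 17 FLOPs needed per byte

    # Example kernel: y[i] = a * x[i] + y[i], about 2 FLOPs and 12 bytes per element.
    kernel_intensity = 2 / 12.0

    if kernel_intensity < balance_point:
        print("memory-bound: the ALUs mostly wait on data, i.e. waste heat")
    else:
        print("compute-bound: the GPU can actually stay on rails")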


Without more details, it's hard for me to be particularly assertive on this, but my knee-jerk reaction is that you aren't using GPUs right. Even for tasks like artificial NNs, a GPU is by far more power-efficient*. You just have to make sure none of your CUDA cores ever spin idle or sit locked on memory access. That's an extra challenge in design, but I'm not aware of any branch of bioinformatics where GPU optimization isn't a solved problem. If you are positive that yours isn't, I would actually be very interested in hearing a bit more about it.

The problem with GPUs is that they've been optimized for problems that can be decomposed into a large number of small, mostly independent subproblems. Most bioinformatics problems involving sequences are exactly the opposite, as they are quite sequential in nature.

For example, people have been talking recently that reference genomes should be graphs instead of sequences, because a single sequence is always too biased towards some direction. Switching to reference graphs would involve redesigning pretty much everything that deals with reference genomes. Unfortunately it turns out that indexing graphs is much, much harder than indexing sequences. There are some ideas that almost work, but they require random access in hundreds of gigabytes of memory, and nobody has been able to parallelize them to use more than a few cores.
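(For non-bioinformatics folks, here is what "indexing a sequence" even means, in its most naive textbook form: a plain suffix array with binary search. Real tools use compressed indexes like the FM-index, and the point is that the graph case has no comparably simple counterpart.)

    def build_suffix_array(text):
        # Sorted starting positions of all suffixes (an O(n^2 log n) toy build;
        # real tools construct this far more cleverly and then compress it).
        return sorted(range(len(text)), key=lambda i: text[i:])

    def find(text, sa, pattern):
        # Binary search over the suffix array; returns one match position or -1.
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
                lo = mid + 1
            else:
                hi = mid
        if lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern:
            return sa[lo]
        return -1

    genome = "ACGTACGTGACG"
    sa = build_suffix_array(genome)
    print(find(genome, sa, "GTGA"))   # prints 6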

Another idea is to avoid reference bias entirely by basing the analysis on unassembled reads. Well, Illumina has been pretty successful in selling its sequencing machines around the world, and now everyone is drowning in terabases of sequence data. In order to analyze the data efficiently, you have to keep it in memory. But how can you even build in-memory data structures for anything that big?

Then there's the most fundamental problem in sequencing: de novo assembly. Nobody knows how to do it well yet.

Nvidia has been targeting us for a while, but even their people haven't been able to use GPUs efficiently enough in the tasks everyone is already doing. (In the tasks I mentioned above, there are exactly 0 people in the world with a good idea about how to do them on any hardware.)


P.S. I don't see a point in purchasing a high-performance CPU for anything other than gaming and maybe some AI tasks. For pretty much anything else, you want raw FLOPS, and you can't beat a good GPU for that.

The current problem with GPUs is that they are much pickier about the how, where, and when. If you do distributed computing on multiple CPUs, you just shove your data in there and watch the magic happen. When it comes to GPUs, you often have to deal with matching hardware, drivers, and versions, and a specific hardware group to boot.

As it stands, CPUs are much more flexible than GPUs.

