Stupid calculators


Shpaget

We've come to rely on our pocket calculators; however, I've come across this little gem. My Citizen SR-270X is a bit of a special case.

When asking for the result of 2^255, one would expect to be presented with some huge number.

This Citizen, on the other hand, is quite firm in stating the result is 0.5.

Interestingly, 255 is 11111111 in binary, which happens to be -1 in 8-bit two's complement, which would mean that the calculator actually calculates 2^(-1).

However, it correctly calculates 2^254, 2^256, and everything all the way up to 2^332. Weird.

I'm almost certain that you're in a fixed decimal places mode. In other words, the Citizen calculator gets the same answer as the Casio, but it's truncating the answer to only show one digit after the decimal.

That, or they chose a really really odd place to save a few transistors.

Of course, I'm sure my TI-89 titanium would spit out the right number, but then again it's hardly fair to compare a graphing calculator to a scientific calculator.

Edited by SciMan
I'm almost certain that you're in a fixed decimal places mode. In other words, the Citizen calculator gets the same answer as the Casio, but it's truncating the answer to only show one digit after the decimal.

You might be right there; it makes a lot of sense. Only if similar queries yielded very different results might something else be going on.

Edit: the bit about similar queries is in the OP. I guess that is what you get for posting while doing other things.

Edited by Camacha
That's because it uses IEEE floating point. 255 + 127 = 382 = 0x17E. But it only has room for 2 hex digits. So the 1 is lost. It becomes just 0x7E. As an actual floating point number, it's 0x3F000000 = 0.5. Easy.
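If that's the mechanism, it's easy to reproduce on a PC. A minimal sketch of this hypothetical failure mode (assuming the calculator really does drop the carry out of an 8-bit biased exponent):

```python
import struct

# Hypothetical failure mode: the biased exponent field is 8 bits wide,
# so 127 + 255 = 382 = 0x17E loses its top bit and becomes 0x7E = 126.
biased = (127 + 255) & 0xFF   # 8-bit wrap -> 0x7E
bits = biased << 23           # sign = 0, mantissa = 0 -> 0x3F000000
value = struct.unpack('>f', struct.pack('>I', bits))[0]
print(hex(bits), value)       # 0x3f000000 0.5
```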

That's because it uses IEEE floating point. 255 + 127 = 382 = 0x17E. But it only has room for 2 hex digits. So the 1 is lost. It becomes just 0x7E. As an actual floating point number, it's 0x3F000000 = 0.5. Easy.

How does the 255 + 127 in your explanation correspond to 2^255? How is 255 as an exponent different from 254 and 256 from the perspective of the calculator?

How does the 255 + 127 in your explanation correspond to 2^255? How is 255 as an exponent different from 254 and 256 from the perspective of the calculator?

The format stores a number as sign * 2^exponent * (1 + mantissa). The sign is stored as a single bit. Exponent is stored as a number between -127 and +127. To achieve that, it adds 127 to the actual exponent and stores that. So number 1.0 is 2^0 * (1 + 0). And so the exponent it's actually going to store is 0 + 127. Since it's positive, sign bit is zero, hence 0x3F800000. Likewise, 0x3F000000 = 0.5, because now the exponent stored is just 126. 126 - 127 = -1, and 2^(-1) = 0.5.

I don't see why it would work differently with 2^254. You should get 0.25 as an answer. I can imagine 2^256 not working because of some other limitation. But it's hard to tell. If it works the same, the answer should be 1.0.

In general, with this simple exponentiation algorithm, instead of 2^n, you should get 2^(((n + 127)%256) - 127), where % denotes modulus operator.
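That formula is easy to check numerically. A quick sketch of this hypothetical wraparound model (not the calculator's actual firmware, obviously):

```python
def wrapped_pow2(n):
    # Model of the suspected bug: the biased exponent (n + 127)
    # is kept modulo 256, so the effective exponent wraps around.
    return 2.0 ** (((n + 127) % 256) - 127)

print(wrapped_pow2(255))  # 0.5, matching the Citizen's answer
print(wrapped_pow2(254))  # 0.25 under this model
print(wrapped_pow2(256))  # 1.0 under this model
```

Note that the model predicts wrong answers for 2^254 and 2^256 as well, which the calculator reportedly gets right, so it can't be the whole story.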

Edit: Many systems use double-precision instead. So the limit is 2^1023 instead of 2^127. But I haven't been able to find one that would give the same mistake on 2^2047. Everything I've tried so far has correctly identified an overflow and displayed result as +Inf. (Impressively enough, Ti-89 managed to actually compute a value even for 2^2048)

Edited by K^2
You might be right there; it makes a lot of sense. Only if similar queries yielded very different results might something else be going on.

Edit: the bit about similar queries is in the OP. I guess that is what you get for posting while doing other things.

Well, I guess that settles it. They chose a very odd place to save a few transistors.

Now that I know it's not fixed decimal places, this looks a lot like a buffer truncation error. In other words, the last N bits of data just get chopped off the end and thrown away.

This is different from a buffer overflow error, where a buffer wraps around to 0 if it gets incremented by one while at a value of 2^N-1 where N = buffer width in bits.

E.g. 0xFF + 0x01 = 0x00 is a buffer overflow for an 8-bit buffer, because 0x100 doesn't fit.
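A two-line illustration of that wraparound (Python, masking to 8 bits by hand, since Python integers don't overflow on their own):

```python
# 8-bit overflow: incrementing 0xFF drops the carry bit.
wrapped = (0xFF + 0x01) & 0xFF
print(hex(wrapped))  # 0x0
```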

Whatever it is, something very strange is going on here.

This is different from a buffer overflow error, where a buffer wraps around to 0 if it gets incremented by one while at a value of 2^N-1 where N = buffer width in bits.

E.g. 0xFF + 0x01 = 0x00 is a buffer overflow for an 8-bit buffer, because 0x100 doesn't fit.

No, this is precisely an overflow error, as I have explained.

Well, I guess that settles it. They chose a very odd place to save a few transistors.

Now that I know it's not fixed decimal places, this looks a lot like a buffer truncation error. In other words, the last N bits of data just get chopped off the end and thrown away.

This is different from a buffer overflow error, where a buffer wraps around to 0 if it gets incremented by one while at a value of 2^N-1 where N = buffer width in bits.

E.g. 0xFF + 0x01 = 0x00 is a buffer overflow for an 8-bit buffer, because 0x100 doesn't fit.

Whatever it is, something very strange is going on here.

The normal standard is to give an error on overrun. The other weird thing is that it did not switch to negative numbers, but I guess the sign is not the first digit as it is in computers.

I can top this, however: when I was a kid I had a Commodore 64. Programming it in BASIC, it insisted that 7*7 = 49.0001. I'm not sure about the number of zeros, but it was three or more.

The format stores a number as sign * 2^exponent * (1 + mantissa). The sign is stored as a single bit. Exponent is stored as a number between -127 and +127. To achieve that, it adds 127 to the actual exponent and stores that. So number 1.0 is 2^0 * (1 + 0). And so the exponent it's actually going to store is 0 + 127. Since it's positive, sign bit is zero, hence 0x3F800000. Likewise, 0x3F000000 = 0.5, because now the exponent stored is just 126. 126 - 127 = -1, and 2^(-1) = 0.5.

I don't see why it would work differently with 2^254. You should get 0.25 as an answer. I can imagine 2^256 not working because of some other limitation. But it's hard to tell. If it works the same, the answer should be 1.0.

Yeah, that's two's complement.

It gives correct answers for 2^254, 2^256 and everything I tested up until 2^332, after which it gives a Math Error (I suppose 9.999 x 10^99 is as high as it can go), so if it were an overflow, that shouldn't happen. It seems that was intended to be the spot where it switches from working with 8 bits to something else, but the transition was poorly done.

The normal standard is to give an error on overrun. The other weird thing is that it did not switch to negative numbers, but I guess the sign is not the first digit as it is in computers.

Please, read the thread. I have explained exactly what happens. Sign IS the first bit, but it's the exponent that's overflowing. It does not overflow the same way as it does in integer math.

And by the way, all errors on over/underflows are always optional, even on PC. By default, neither integer nor floating point over/underflows generate an error. You have to set some flags to get them to trigger an exception.

Yeah, that's two's complement.

No, it is not. In two's complement, zero is zero. In IEEE exponent, zero is 127. It has the same modulus property as two's complement, but the number is shifted by a fixed offset.
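The difference is easy to see side by side. A small sketch (the helper names are mine, purely for illustration):

```python
def twos_complement_8bit(x):
    # Two's complement: 0 encodes as 0x00, -1 as 0xFF.
    return x & 0xFF

def biased_exponent_8bit(x):
    # IEEE 754 single precision stores exponent x as x + 127,
    # so 0 encodes as 127 and -1 as 126.
    return (x + 127) & 0xFF

print(twos_complement_8bit(0), twos_complement_8bit(-1))  # 0 255
print(biased_exponent_8bit(0), biased_exponent_8bit(-1))  # 127 126
```

Both encodings wrap modulo 256; only the offset differs.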

So, any guesses why this one particular calculation fails while the rest work fine?

Several. For starters, as I've indicated, a bunch of calculators use double precision, which has a larger range of exponents, -1023 to +1023. Next, there could be a hardware difference; a particular chip might respond to over/underflow differently. Finally, there are usually flags that specify what should happen on a floating-point error. Typical responses are to do nothing, to replace the result with Inf/NaN as appropriate, or to throw an exception. It's possible that this is normal "do nothing" behavior for this calculator, while others tend to treat it as an error.

So, any guesses why this one particular calculation fails while the rest work fine?
Several. For starters, as I've indicated, a bunch of calculators use double precision, which has a larger range of exponents, -1023 to +1023. Next, there could be a hardware difference; a particular chip might respond to over/underflow differently. Finally, there are usually flags that specify what should happen on a floating-point error. Typical responses are to do nothing, to replace the result with Inf/NaN as appropriate, or to throw an exception. It's possible that this is normal "do nothing" behavior for this calculator, while others tend to treat it as an error.

Please read the question from Shpaget carefully.

He asked about the rest of the calculations, not the rest of the calculators.

Your answer still doesn't explain why 2^254 and 2^256 seem to work correctly.

For these exponents, the overrun issue (yes, FP overrun, not integer) should also occur.

That is what makes the behaviour so weird.

Your answer still doesn't explain why 2^254 and 2^256 seem to work correctly.

For these exponents, the overrun issue (yes, FP overrun, not integer) should also occur.

That is what makes the behaviour so weird.

Ah, I missed the part where 254/256 work. That is, indeed, extra weird. But it's hard to say why these two are special without knowing some details on the algorithm used.

I was picturing this as a^b = 2^(b log2(a)). This makes the exponent really fast to compute using FPU arithmetic. The log2 can also be computed very efficiently by using the FPU to get the integer part and iterating from there. But if that were the case, log2(2) would return exactly 1.0, which would allow the rest of the algorithm to proceed exactly the same for 2^254 and 2^256. I suppose there could be some sort of check in there that fails for some odd reason, but it's also possible that it's actually a completely different algorithm.
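The identity is straightforward to sketch (using Python's math.log2 as a stand-in for whatever the firmware actually does):

```python
import math

# a^b computed as 2^(b * log2(a)); for a = 2, log2(a) is exactly 1.0,
# so 2^254, 2^255 and 2^256 would all take the same code path.
def pow_via_log2(a, b):
    return 2.0 ** (b * math.log2(a))

print(math.log2(2))         # 1.0 exactly
print(pow_via_log2(2, 10))  # 1024.0
```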

Edit: What would the result of 2^253 be? Does it still give you correct value?

Edited by K^2