
What's the difference between 0 and -0?


Sun


For all intents and purposes, they're the same number, except that one is positive and one is negative. How is that so? Technically they have the same value, but I feel like there's a difference. Is there a difference? If so, what is it?


The only thing I can think of is a negative number rounded to zero, or a negative zero that might be needed when calculating voltages in an electrical circuit?

But that's just my guess.

Or could it be used to represent a waveform value that is at zero, but about to pass into negative numbers?

Edited by Tommygun

For all intents and purposes, they're the same number, except that one is positive and one is negative. How is that so? Technically they have the same value, but I feel like there's a difference. Is there a difference? If so, what is it?

The only thing I can think of is function limits from calculus: the limit of a function as its argument approaches 0 from the negative side may be different from the limit as you approach from the positive side. One such example is f(x) = 1/x: it approaches negative infinity as you approach from the negative side, and positive infinity as you approach from the positive side.
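Written out in standard notation (nothing here beyond what the post above already says), the two one-sided limits are:

```latex
\lim_{x \to 0^-} \frac{1}{x} = -\infty,
\qquad
\lim_{x \to 0^+} \frac{1}{x} = +\infty
```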


There's no meaningful difference at all. Adding a negative sign in front of a number like that signifies its additive inverse. Basically, any number x has a unique number that we call -x such that x + (-x) = 0. (This is analogous to the idea of a multiplicative inverse, where, say, 1/3 * 3 = 1.) 0 is special because its additive inverse happens to be itself: it's the only number you can add to 0 to make 0.
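To spell out the step that makes 0 its own additive inverse, using only the properties mentioned above (identity, commutativity, and the defining equation of the inverse):

```latex
-0 \;=\; (-0) + 0   % 0 is the additive identity
   \;=\; 0 + (-0)   % commutativity of addition
   \;=\; 0          % defining property x + (-x) = 0, applied to x = 0
```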


In general, whenever you see something like -0, it means the number is a negative number that has been rounded to 0.

Specific situations and context can change that, but in the absence of context that is what it should be assumed to mean.


-0 is a quirk of floating point units. It's simpler, logic-wise, to use a sign bit rather than a two's complement mantissa: a multiplication, for example, just needs to XOR the sign bits independently of the other parts of the numbers. This means negative zero is a thing, but I think if you do something like 'if (anegativezero == apositivezero)' it will evaluate to true.
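A minimal C sketch of that comparison, assuming IEEE-754 doubles and a C99 compiler; the variable names mirror the ones in the post and are purely illustrative:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double apositivezero = 0.0;
    double anegativezero = -0.0;

    /* The FPU compares values, not bit patterns, so this branch is taken. */
    if (anegativezero == apositivezero)
        printf("equal\n");

    /* The sign bit is still there, though: signbit() can see it. */
    printf("signbit(+0.0) = %d\n", signbit(apositivezero) != 0);  /* 0 */
    printf("signbit(-0.0) = %d\n", signbit(anegativezero) != 0);  /* 1 */

    return 0;
}
```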


It sort of depends on the algebra. One could conceive an algebra where there is a meaningful difference between 0 and -0. But for the fields we usually deal with, qemist's reply covers it: -0 is by definition the number such that 0 + (-0) = 0, and that trivially gives you 0 = -0.

In practical situations, -0 arises in two contexts. Leszek covered one: if you are dealing with a tiny negative number, it can be rounded to -0. Though more typical notations for slightly positive and slightly negative numbers are 0+ and 0-.

The other situation, where -0 arises quite frequently, is computer science (Ninja'd on that by Nuke). The way real numbers are represented on modern computers is as floating point numbers. Unlike integers*, floating point numbers have one bit reserved for the sign, and that is computed separately from the rest of the operation. So if you, for example, multiply 0 * (-1), you get -0 as the result. It doesn't hurt anything, and typically means nothing. I have, however, seen some hacks that use that extra bit of storage for interesting things.

It's worth keeping in mind, however, that one of the reasons floating point is set up this way is because 0 is often treated as "a very small number rounded to zero."

* Integer math is done completely differently. While the high bit can be considered a sign bit, and for the purposes of comparison it is, it's not used in the arithmetic any differently from the rest of the bits. Instead, the entire arithmetic is done modulo 2^n, so there is no difference between a negative number and a very large positive number. As a consequence, zero is entirely unique: there is no such thing as an integer -0.
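A small C sketch of the integer side of this, under the usual two's complement assumption (the unsigned wraparound is the part the C standard actually guarantees):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Negating integer zero gives back the exact same bit pattern:
       there is no integer -0. */
    int32_t zero = 0;
    int32_t negated = -zero;
    printf("same bits: %d\n", memcmp(&zero, &negated, sizeof zero) == 0);  /* 1 */

    /* Arithmetic is modulo 2^n, so pushing past the top wraps around
       (shown with unsigned, where wraparound is well defined). */
    uint32_t big = 0xFFFFFFFFu;                       /* 2^32 - 1 */
    printf("big + 1 = %u\n", (unsigned)(big + 1u));   /* wraps to 0 */

    return 0;
}
```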

Edit: Nuke, whether == passes or fails depends on how you compare them. In most situations, the compiler will know that these are two floats and will use a floating point comparison. The FPU will indeed report 0 == -0 as true. But if you compare them in some odd context, you might end up with a bitwise comparison of memory that will evaluate to false. That usually means you either have an error in the code or you are doing something intentionally creative, though.
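A sketch of the two kinds of comparison described above, again assuming IEEE-754 doubles; the raw memcmp stands in for the "odd context" bitwise comparison:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    double pos = 0.0;
    double neg = -0.0;

    /* Floating point comparison: the FPU says they are equal. */
    printf("pos == neg      : %d\n", pos == neg);  /* 1 */

    /* Bitwise comparison of the underlying memory: the sign bit differs,
       so the raw bytes are not identical. */
    printf("bytes identical : %d\n", memcmp(&pos, &neg, sizeof pos) == 0);  /* 0 */

    return 0;
}
```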

Edited by K^2


Thanks, floating point covers it. I guess you will also get it in common settings like -3*0 = -0, and yes, the main problem is visual.

Integers can get rollover: if you push the values too high, they wrap around and become large negative numbers. Around 2 billion (2^31) is the most common limit, but around 32,000 (2^15) will roll over for a 16-bit integer. You might have seen this in games, usually if you use exploits.

Healing somebody for 2.1 billion health is a bad idea :)
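A tiny C sketch of that rollover. Note that converting the out-of-range result back to a signed 16-bit integer is technically implementation-defined, but on ordinary two's complement machines it wraps as shown:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t health = INT16_MAX;                  /* 32767 */
    int16_t healed = (int16_t)(health + 1);      /* wraps on typical platforms */
    printf("%d + 1 -> %d\n", health, healed);    /* 32767 + 1 -> -32768 */
    return 0;
}
```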


It sort of depends on the algebra. One could conceive an algebra where there is a meaningful difference between 0 and -0.

No, not really. The symbol 0 is defined to denote the neutral element of addition, and the reasoning you gave shows that it is its own inverse (or at least _an_ inverse; there might be more than one if we don't have associativity). Thus you would need to break that very definition to make 0 different from -0.


No, not really. The symbol 0 is defined to denote the neutral element of addition, and the reasoning you gave shows that it is its own inverse (or at least _an_ inverse; there might be more than one if we don't have associativity). Thus you would need to break that very definition to make 0 different from -0.

Addition doesn't have to be a group. ;)


No, not really. The symbol 0 is defined to denote the neutral element of addition, and the reasoning you gave shows that it is its own inverse (or at least _an_ inverse; there might be more than one if we don't have associativity). Thus you would need to break that very definition to make 0 different from -0.

When conceiving a new algebra, breaking the definition of 0 might or might not be necessary; I don't see a fallacy in K^2's post.


Just to give an example, consider non-commuting addition. In that case, we might want to introduce a left identity and a right identity. Specifically: 0L + a = a = a + 0R for all a that are not identities. If that is the case, and we want to preserve at least associativity, then we define inverses as follows: a + (-a) = 0R, (-a) + a = 0L. That way, a + b + (-b) = a. Note that -(-a) is not a.

This brings up the question: what is 0R + 0L? I can't find any reason not to define it to be 0R. Consequently, 0L = (-0R). This leads to an algebra where -(a + b) = (-b) + (-a), which may look odd, but is entirely self-consistent.

Finally, as a means of simplifying notation, I call 0R simply 0, and 0L = -0.


Edit: Nuke, whether == passes or fails depends on how you compare them. In most situations, the compiler will know that these are two floats and will use a floating point comparison. The FPU will indeed report 0 == -0 as true. But if you compare them in some odd context, you might end up with a bitwise comparison of memory that will evaluate to false. That usually means you either have an error in the code or you are doing something intentionally creative, though.

It does depend on the language, implementation, FPU architecture, compiler, etc. I was specifically referring to the FPU compare, though there are other ways of doing the same operation, and they might improve performance in some situations. But then again, you can do all kinds of geeky, cool, and sometimes useful things by casting a float to int and operating on it as if it were an integer.
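A sketch of the float-bits-as-integer trick being alluded to. This version uses memcpy rather than a pointer cast (the well-defined way to reinterpret the bits) and flips the sign by XOR-ing the sign bit, matching the "XOR the sign bit" description earlier in the thread; it assumes 32-bit IEEE-754 floats:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Negate a float by flipping its sign bit, operating on the raw bits. */
static float flip_sign(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);   /* reinterpret the float's bits */
    bits ^= 0x80000000u;              /* bit 31 is the sign bit */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    printf("%g -> %g\n", 1.5, (double)flip_sign(1.5f));   /* 1.5 -> -1.5 */
    printf("%g -> %g\n", 0.0, (double)flip_sign(0.0f));   /* 0 -> -0 */
    return 0;
}
```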


The number 0 is weird. It represents "nothing", but it can still be expressed as the sum of all other numbers: 1+(-1)+2+(-2)+pi+(-pi)+...+n+(-n)

Yes. In a very real way, the concepts of zero and infinity are linked.

Also, arguably, zero can be used as the starting point for deriving everything in math. We start with zero as a concept. But what about the sets that zero represents? That is, how many sets have zero elements? Only one: the empty set. But now that you have zero and one as concepts, you can derive everything.
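For reference, the standard set-theoretic construction being hinted at is the von Neumann encoding of the natural numbers, built up from nothing but the empty set (textbook material, not something specific to this thread):

```latex
0 := \emptyset,\qquad
1 := \{\emptyset\} = \{0\},\qquad
2 := \{0, 1\},\qquad
n + 1 := n \cup \{n\}
```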


Addition doesn't have to be a group. ;)

I never said it does. Why would I even mention non-unique inverses otherwise?

Just to give an example, consider non-commuting addition. In that case, we might want to introduce a left identity and a right identity. Specifically: 0L + a = a = a + 0R for all a that are not identities. If that is the case, and we want to preserve at least associativity, then we define inverses as follows: a + (-a) = 0R, (-a) + a = 0L. That way, a + b + (-b) = a. Note that -(-a) is not a.

This brings up the question: what is 0R + 0L? I can't find any reason not to define it to be 0R. Consequently, 0L = (-0R). This leads to an algebra where -(a + b) = (-b) + (-a), which may look odd, but is entirely self-consistent.

Finally, as a means of simplifying notation, I call 0R simply 0, and 0L = -0.

Unless you just want to have only the two zeroes, you need to define how the other elements are added. And what is 0_L + 0_R? If they actually are left and right identities, then it would be both 0_L and 0_R, making them equal.

If you use "+" for the structure of a group/monoid/whatever, then it is a pretty well-established convention that this implicitly means it is abelian/commutative. If your structure lacks that property, you should use other symbols like ·, * or just nothing. As a result, 0 will always be inverse (on both sides) to itself.

Also note that "algebra" is already in use (essentially: a morphism of rings with a fixed source); "algebraic structure" is the better term for this.

When conceiving a new algebra, breaking the definition of 0 might or might not be necessary; I don't see a fallacy in K^2's post.

Then don't call it 0. Calling it 0 without it being the neutral element of "addition" is like calling your hamster "Zero": you can, but what is the point?

But now that you have zero and one as concepts, you can derive everything.

Actually you can't: set theory needs several more axioms than just "the empty set exists" and the few axioms that construct new sets from old ones. The axiom of choice comes to mind, but even the simpler existence of the set omega must be explicitly required.


Just to give an example, consider non-commuting addition. In that case, we might want to introduce a left identity and a right identity. Specifically: 0L + a = a = a + 0R for all a that are not identities. If that is the case, and we want to preserve at least associativity, then we define inverses as follows: a + (-a) = 0R, (-a) + a = 0L. That way, a + b + (-b) = a. Note that -(-a) is not a.

This brings up the question: what is 0R + 0L? I can't find any reason not to define it to be 0R. Consequently, 0L = (-0R). This leads to an algebra where -(a + b) = (-b) + (-a), which may look odd, but is entirely self-consistent.

Finally, as a means of simplifying notation, I call 0R simply 0, and 0L = -0.

That's all well and good. But calling a duck a goose does not make a duck a goose. ;)

You're entirely correct: we could define a new system where -0 and 0 are different. But by definition, are we then no longer talking about 0 and -0 as defined by the OP? Otherwise we would have to prove that your version of the definitions can be mapped/translated onto the normal definitions and that the same meanings and principles still apply.

PS: a good proof of this (for me, as I really only get visual proofs) is the number line. It only maps "zero" as "0". There is no -0 on the number line, though all other numbers can be represented as both themselves and their inverses.

Even if we add new dimensions to the number line, they all pass through zero in the same place, and we never get -0. We can, however, get other forms of "zero" such as (0, 2) or (-5, 0):

[Image: Cartesian coordinate system diagram]

In your system, "handedness" (right or left) is not a property of the number itself, but another defining feature of it. I could, for instance, have a "blue" zero, a "green" zero and a "red" zero. However, no amount of addition or subtraction of those colours results in one becoming "negative blue zero" or translating to "negative zero".

Though I am no mathematician, so I may be wrong. A quick search (Google, gah!) suggests -0 is purely a notation showing that a number is approaching zero from previous calculations, since by definition 0 and -0 are equal. So even if we do map the example of handedness, we only get two equal values of zero, plus an "L" or "R" marking the direction of our calculations.

Edited by Technical Ben

That graphical "proof" only applies to things that already contain the reals (you are probably talking about a real vector space here). The concept of "0" exists in every abelian group, in particular in every ring, e.g. F_2 = Z/2 = {0,1}, the field with two elements.
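For concreteness, here is the addition table of that two-element field. Every element is its own negative, so in particular -0 = 0 and -1 = 1:

```latex
\begin{array}{c|cc}
+ & 0 & 1 \\
\hline
0 & 0 & 1 \\
1 & 1 & 0
\end{array}
\qquad\Rightarrow\qquad -0 = 0,\quad -1 = 1
```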


Then don't call it 0. Calling it 0 without it being the neutral element of "addition" is like calling your hamster "Zero": you can, but what is the point?

Why shouldn't someone name his hamster "Zero"? It's actually a good name for a hamster.

Also, when conceiving a new algebra, 0 could be a good name for nothing, or something, or whatever it is supposed to stand for.

Still no break of logic in K^2's statement here.


Those kinds of calculations and definitions are way above my pay grade and qualifications (peanuts, I have none!). But everything I can find on the subject talks about zero being zero. We pass it as a big nothing, though, as has been said many times already, we can pass it from multiple directions.

(edit)

E.g.: http://en.wikipedia.org/wiki/Imaginary_unit#i_and_.E2.88.92i

Edited by Technical Ben

After doing a bit of reading, it seems like the closest thing to having more than one zero is in some semigroups, where there can be multiple left neutral elements or multiple right neutral elements (I feel like there should be a more general word for these, maybe "same-sided neutral elements"?). But if there is at least one of each (like in K^2's "proof", with 0L and 0R), then they all end up being equal and the neutral element is unique.
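The usual one-line argument behind that uniqueness claim, for any operation with a left identity e_L and a right identity e_R (no associativity needed):

```latex
e_L \;=\; e_L + e_R \;=\; e_R
% first equality:  e_R is a right identity, so a + e_R = a
% second equality: e_L is a left identity,  so e_L + a = a
```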


Why shouldn't someone name his hamster "Zero"? It's actually a good name for a hamster.

Also, when conceiving a new algebra, 0 could be a good name for nothing, or something, or whatever it is supposed to stand for.

Still no break of logic in K^2's statement here.

What logic? He just said that one "could conceive" such a thing, and unlike your example, he still tried to adhere to the definition. The mathematical definition of 0 is to be the neutral element of addition, and as such it is automatically its own inverse. If you conceive a new object, then you give it a new name; if it properly generalises the old one, then you can think about reusing the same name, but otherwise it would just be completely against convention and common sense.

The argument sounds like someone answering my claim that the moon is not made of cheese with "but I named the cheese in my hand 'Moon', so you are wrong"; no, I am not, that's just a plain equivocation fallacy.


Unless you just want to have only the two zeroes, you need to define how the other elements are added. And what is 0_L + 0_R? If they actually are left and right identities, then it would be both 0_L and 0_R, making them equal.

0L + 0R = 0L, naturally. It follows from all the other definitions. And no, identities don't have to act as identities on other identities, not in algebras that lack the commutative property. Having only a left or right identity (or both), with special rules for these, is pretty common in math. Consult this table.

That's all well and good. But calling a duck a goose does not make a duck a goose.

You should read some texts on abstract algebra and group theory.

But then again, you can do all kinds of geeky, cool, and sometimes useful things by casting a float to int and operating on it as if it were an integer.

Indeed.


0L + 0R = 0L, naturally. It follows from all the other definitions. And no, identities don't have to act as identities on other identities, not in algebras that lack the commutative property. Having only a left or right identity (or both), with special rules for these, is pretty common in math. Consult this table.

0L + 0R = 0R is just as natural, which implies that 0L = 0R. Therefore the identity element 0 = 0L = 0R is two-sided and unique. This is true in all the algebraic structures I've ever studied.

The definition of an identity is an element e of a set with an operation * such that either e*a = a or a*e = a for all elements a belonging to the set. This means that an identity would have to act as an identity on any other identities in the set. If it didn't, then it couldn't be a proper identity in the first place.


0L + 0R = 0R is just as natural, which implies that 0L = 0R.

It might be just as natural, but it's not what you want if you want two distinct identities. 1+1 = 0 is quite natural, but not what you are going to go with if you are constructing a number line. The other definition is consistent with two distinct identities and with every definition up to that point. It is a valid algebraic structure.

Again, you are thinking in the context of a group. We are not building a group. We are building a group-like algebraic structure with distinct left and right identities. The argument that this fails because the left identity doesn't act as an identity on the right identity is about as valid as saying that the reals aren't a field because zero has no multiplicative inverse. Now, if you can find a self-contradiction, that's another matter.

