I'm working on the review for my midterm in my CEG320 class. I ran into a little bump. Allow me to explain:
It asks: the integer equivalent of the 2's complement value stored at memory address x0 (which holds xFFFD) is:
(a) -65,533
(b) -13
(c) -3
(d) 3
(e) 13
(f) 65,533
(g) None of the above.
Okay, so I started converting it:
xFFFD = 1111 1111 1111 1101
Great. Now it's negative because the first binary digit is 1.
Now here's the tricky part for me. The next 8 digits are all 1's (1111 1111), which is supposed to represent the exponent (after you subtract 127), which makes it 128. And 2^128 is a ginormous number.
Would you say that it has to be (g) then? I mean... I don't know if I'm doing this right or not...
This shortcut only works because the value sits just a few steps from a wraparound point (either the 0/-1 boundary or the 32767/-32768 boundary), so you can count the distance on your fingers easily.
Remember that decimal zero is all zeros in binary. If you subtract one, you wrap around to all 1's. So all 1's in binary is -1 in decimal.
Start adding 1 to the bit string you have until you wrap around to a string of all 0's. The number of 1's you had to add is the magnitude of the negative number.
So 1101 + 1 = 1110 + 1 = 1111 + 1 = 0000 -- I added three 1's to wrap around, so this was negative three.
Also remember that a zero followed by all 1's, or a 1 followed by all zeros, is as far from decimal zero as possible, so that's your overflow boundary.
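If you want to see that counting trick run on a real machine, here's a little C sketch (not from the thread, just an illustration): start with xFFFD in a 16-bit unsigned variable and count how many +1's it takes to wrap back to zero.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t x = 0xFFFD;    /* the bit pattern from the problem */
    unsigned steps = 0;

    /* keep adding 1 until the 16-bit value wraps around to all zeros
       (only meaningful when the top bit is 1, i.e. the value is negative) */
    while (x != 0) {
        x++;                /* uint16_t arithmetic wraps modulo 2^16 */
        steps++;
    }

    printf("wrapped after %u steps, so xFFFD is -%u\n", steps, steps);
    return 0;
}

It wraps after 3 steps, so the value is -3.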
Is that because 0001 flipped is 1110, and 1110 + 1 = 1111? Sorry, still trying to grasp everything.
So basically you did this:
xFFFD => 1111 1111 1111 1101 + 1 = 1111 1111 1111 1110 + 1 = 1111 1111 1111 1111 + 1 = 0000 0000 0000 0000?
Which means I needed 3 +'s so that ends up being -3?
Is there another way to do this that you could explain easily without adding?
e: So for 1111 1111 1111 1101 you could just invert the bits and add 1, right?
1111 1111 1111 1101 => 0000 0000 0000 0010, + 1 => 0000 0000 0000 0011 = 3, and since the original's first bit was 1, it's negative: -3.
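On the "is there another way without all the adding" question, one standard trick (not posted in the thread, just offered as an illustration) is to read the bit pattern as an ordinary unsigned number and subtract 2^16 = 65536 whenever the top bit is set: xFFFD is 65533 unsigned, and 65533 - 65536 = -3. A C sketch, with a made-up helper name:

#include <stdio.h>
#include <stdint.h>

/* interpret a 16-bit pattern as a two's complement signed value:
   if the top bit is set, the pattern stands for (unsigned value - 65536) */
static int signed16(uint16_t bits)
{
    return (bits & 0x8000) ? (int)bits - 65536 : (int)bits;
}

int main(void)
{
    printf("xFFFD -> %d\n", signed16(0xFFFD)); /* prints -3 */
    printf("x0003 -> %d\n", signed16(0x0003)); /* prints  3 */
    return 0;
}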
Suppose we have four bits instead of 16. We have sixteen possible values:
0000
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111
What happens if you only have four bits, your current value is 1111, but you add one? You overflow and wrap around to 0000 again. If you have 0000 but you subtract one, you end up at 1111.
Think about what happens when you're using signed integers in C and you overflow or underflow. A sixteen-bit int has values going from -32768 to 32767. Near zero this makes sense: -1 + 1 = 0, 0 + 1 = 1. Near the extremes the overflow behavior is a bit weird. 32766 + 1 = 32767. 32767 + 1 = -32768. -32768 + 1 = -32767.
These two things (overflow behavior of ints in C and two's complement format for signed integers) are related, but in decimal integers the strange-looking overflow condition happens between the numbers -32768 and 32767; in binary the strange-looking overflow condition happens between the numbers 0 and -1.
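If you want to watch that 16-bit wraparound on a real machine, here's a small C sketch (not from the thread). Strictly speaking, stuffing an out-of-range result back into a 16-bit signed variable is implementation-defined in C, but every mainstream two's complement compiler wraps it exactly as described above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t a = 32766;

    a = a + 1;              /* 32767, the largest 16-bit signed value */
    printf("%d\n", a);

    a = a + 1;              /* wraps to -32768 on typical implementations */
    printf("%d\n", a);

    a = a + 1;              /* -32767 */
    printf("%d\n", a);

    return 0;
}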
So let's take it back down from 16 bits to 4 bits, for simplicity. I'm going to draw my number line again but I'll repeat myself a couple times, so the points where we wrap around become clearer.
1100  -4
1101  -3
1110  -2
1111  -1
0000   0   <- binary wraparound happens here
0001   1
0010   2
0011   3
0100   4
0101   5
0110   6
0111   7
1000  -8   <- decimal wraparound happens here
1001  -7
1010  -6
1011  -5
1100  -4
1101  -3
1110  -2
1111  -1
0000   0   <- binary wraparound happens here
0001   1
0010   2
0011   3
0100   4
0101   5
0110   6
0111   7
1000  -8   <- decimal wraparound happens here
1001  -7
1010  -6
The thing to remember is that -1 is the final step before 0 -- if you add 1 to -1 you get 0, and if you add 1 to 1111 1111 1111 1111 you get 0 (in 16 bits). Going from -1 to 0 is conceptually easy in decimal, as the numbers are right next to each other on the number line, but in binary it looks like an overflow happened because you go from all 1's to all 0's.
Similarly going from the highest possible to the lowest possible integers looks weird in decimal: if you have seven apples and someone gives you one more, you don't have eight negative apples. But in binary it looks like the most natural thing in the world: you have 0111, you add 1 to it, and you get 1000.
I'm not so much explaining how two's complement works here, like your professor would; I'm trying to form a conceptual link in your mind between the way two's complement binary numbers LOOK and the well-known integer overflow behavior in C we've already had to learn to anticipate.
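Here's a throwaway C loop (not from the thread) that prints the same 4-bit table, each pattern next to its two's complement value. The only rule in it is that a leading 1 means you subtract 2^4 = 16 from the unsigned value.

#include <stdio.h>

int main(void)
{
    /* walk through all sixteen 4-bit patterns */
    for (int v = 0; v < 16; v++) {
        /* two's complement with 4 bits: a leading 1 means subtract 16 */
        int signed_value = (v >= 8) ? v - 16 : v;

        printf("%d%d%d%d  %3d\n",
               (v >> 3) & 1, (v >> 2) & 1, (v >> 1) & 1, v & 1,
               signed_value);
    }
    return 0;
}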
Yeah, it's not a float. Just think of two's complement as 2 steps. You've got the right binary number:
1111 1111 1111 1101
So invert all the bits (also known as 1's complement):
0000 0000 0000 0010
And then add 1:
0000 0000 0000 0011
Bam, 3.
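That invert-and-add-1 recipe is easy to mechanize, too. Here's a C sketch of it (illustration only, and the function name is made up). It returns the magnitude of a negative 16-bit pattern, so you still put the minus sign on yourself:

#include <stdio.h>
#include <stdint.h>

/* magnitude of a negative two's complement 16-bit pattern:
   flip every bit (one's complement), then add 1 */
static unsigned negative_magnitude(uint16_t bits)
{
    uint16_t flipped = (uint16_t)~bits;      /* xFFFD -> x0002 */
    return (uint16_t)(flipped + 1);          /* x0002 -> x0003 */
}

int main(void)
{
    printf("xFFFD is -%u\n", negative_magnitude(0xFFFD));  /* prints -3 */
    return 0;
}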
Ah hah! I see it now. Seriously, thanks for that explanation, mspencer! I don't know why it took me so long to see that. I'm not nearly as good at binary as I'd like to be, I suppose.
ASimPerson, thanks. Yeah... I was definitely thinking floats. I was looking at my page of notes and I just kept putting the two together for some reason. *smack*
Do either of you (or anyone else) know where I can get some additional information on converting hex to ASCII? I know how to take something like 4C and convert it (which is 'L'), but what about something like xC024? I seem to vaguely remember something about the first two digits being the least important, but I don't think that's true. I tried converting it to binary (1100 0000 0010 0100), but that didn't seem to help me.
Hex works just like decimal positional notation, only the base is 16 instead of 10. For example:
Decimal: 4 * 10^4 + 9 * 10^3 + 1 * 10^2 + 8 * 10^1 + 8 * 10^0 = 49188
Hex: C(12) * 16^3 + 0 * 16^2 + 2 * 16^1 + 4 * 16^0 = 49188
So xC024 and 49188 are the same number written in two different bases.
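Same positional idea in code, if it helps (illustration only): take the hex digits of xC024, most significant first, and fold in one power of 16 at a time.

#include <stdio.h>

int main(void)
{
    /* the hex digits of xC024, most significant first */
    int digits[] = { 0xC, 0x0, 0x2, 0x4 };
    int value = 0;

    for (int i = 0; i < 4; i++)
        value = value * 16 + digits[i];   /* same as C*16^3 + 0*16^2 + 2*16^1 + 4*16^0 */

    printf("%d\n", value);                /* prints 49188 */
    return 0;
}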
Hmm. I can see that... I think.
But how would you get the ASCII Character out of 49188?
C0 doesn't exist in the ASCII table, and 24 is '$'.
Doesn't seem right to me... Hmmm. That's what I thought of first.
ASCII characters take up a byte each, which is two hexadecimal digits. To know which values map to which characters you need to look at an ASCII table, and there are plenty that are easy to find online. (Strictly speaking, standard ASCII only defines x00 through x7F, which is why xC0 doesn't show up in the table.)
With ASCII characters the order of the bytes and the order of the characters should be the same, so you shouldn't have to worry about that. So xC024 would split into two bytes, 'C0' followed by '24'.
Endianness comes up with numeric values, where the position of the least and most significant bytes of a number can differ between platforms. For example, the raw bytes of the 32-bit integer 1 on an Intel x86 machine are "01 00 00 00", whereas the other order, used by some other platforms and as the network byte order, is "00 00 00 01". Within a byte, or if you have a raw array of bytes rather than a multi-byte numeric value, the order is the same either way at the software level.
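To tie those last answers together, here's a short C sketch (not from the thread): it splits xC024 into its two bytes the way you read the hex left to right, and then looks at how the 32-bit integer 1 actually sits in memory on whatever machine runs it. On a little-endian x86 box the second line comes out as 01 00 00 00.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t word = 0xC024;

    /* the two bytes as you read the hex left to right */
    unsigned high = word >> 8;     /* 0xC0 */
    unsigned low  = word & 0xFF;   /* 0x24, which is '$' in ASCII */
    printf("bytes: %02X %02X\n", high, low);

    /* byte order of a multi-byte integer in memory depends on the platform */
    uint32_t one = 1;
    const unsigned char *p = (const unsigned char *)&one;
    printf("the integer 1 in memory: %02X %02X %02X %02X\n",
           p[0], p[1], p[2], p[3]);

    return 0;
}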