A newer version of IEEE 754 defines a half-precision floating-point format that is only 16 bits wide. The leftmost bit is still the sign bit, the exponent is 5 bits wide with a bias of 15, and the fraction is 10 bits long. A hidden 1 is assumed, as in the single- and double-precision formats. What is the bit pattern that represents -0.5 in this format?

1011100000000000
Explanation:
-------------
-0.5
Converting 0.5 to binary
Convert the integer part first, then the fractional part
> First, convert the integer part (0) to binary
Dividing 0 successively by 2 until the quotient is 0 produces no remainders to read,
so the integer part 0 in decimal is 0 in binary
> Now convert the fractional part, 0.50000000, to binary
> Multiply 0.50000000 by 2 to get 1.00000000. Since 1.00000000 >= 1, the next fraction bit is 1 (subtract the 1 and continue)
> The remaining fractional part is 0, so stop calculating
0.5 in decimal is .1 in binary,
so 0.5 in binary is 0.1
-0.5 in binary is therefore -0.1
Normalizing: -0.1 in binary = -1.0 * 2^-1, so the unbiased exponent is -1
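
A minimal Python sketch of this multiply-by-2 conversion (the helper name frac_to_binary is just an illustrative choice, not part of the question):

    # Convert the fractional part of a number to binary by repeatedly
    # multiplying by 2 and taking the integer part as the next bit.
    def frac_to_binary(frac, max_bits=16):
        bits = []
        while frac != 0 and len(bits) < max_bits:
            frac *= 2
            if frac >= 1:            # product reached 1: next bit is 1
                bits.append('1')
                frac -= 1
            else:                    # otherwise next bit is 0
                bits.append('0')
        return ''.join(bits)

    print(frac_to_binary(0.5))       # '1'  -> 0.5 decimal is 0.1 binary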
16-bit format:
--------------------
Sign bit: 1 (negative)
Exponent field: bias + exponent = 15 + (-1) = 14 => 01110
To see this, divide 14 successively by 2 until the quotient is 0 (a code sketch follows these steps):
> 14/2 = 7, remainder 0
> 7/2 = 3, remainder 1
> 3/2 = 1, remainder 1
> 1/2 = 0, remainder 1
Reading the remainders from bottom to top gives 1110,
so 14 in decimal is 1110 in binary, which is 01110 when padded to the 5-bit exponent field
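
A minimal Python sketch of this repeated-division conversion (int_to_binary is just an illustrative name):

    # Convert a non-negative integer to binary by dividing by 2 repeatedly,
    # collecting the remainders, and reading them from bottom to top.
    def int_to_binary(n):
        if n == 0:
            return '0'
        remainders = []
        while n > 0:
            remainders.append(str(n % 2))   # remainder of this division step
            n //= 2
        return ''.join(reversed(remainders))

    print(int_to_binary(14))            # '1110'
    print(int_to_binary(14).zfill(5))   # '01110', padded to the 5-bit exponent field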
Fraction/significand bits: 0000000000 (the leading 1 of 1.0 * 2^-1 is the hidden bit and is not stored)
So -0.5 in the 16-bit format is 1 01110 0000000000 = 1011100000000000
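
As a cross-check, here is a minimal Python sketch (assuming Python 3.6+, which supports the half-precision 'e' format code in the standard struct module) that packs -0.5 as a binary16 value and also assembles the three fields by hand; both print the same pattern:

    import struct

    # Pack -0.5 as an IEEE 754 half-precision (binary16) float and
    # reinterpret the two bytes as an unsigned 16-bit integer.
    bits, = struct.unpack('>H', struct.pack('>e', -0.5))
    print(format(bits, '016b'))   # 1011100000000000

    # Assemble the same pattern from the sign, exponent, and fraction fields.
    sign, exponent, fraction = 1, 0b01110, 0b0000000000
    print(format((sign << 15) | (exponent << 10) | fraction, '016b'))   # 1011100000000000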