I'm confused.
Are the original two 16-bit 'integers' already in IEEE floating-point (FP) bit format (sign, exponent, mantissa), so that combining them into one 32-bit word yields an IEEE single-precision value?
Or are the original two 16-bit 'integers' really one 'long integer' with an assumed (implied) decimal point?
I'm assuming it's the former, because no decimal point was considered in the conversion.
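To make the two interpretations concrete, here is a small sketch. The register values and the word order (high word first) are assumptions for illustration, not taken from the original conversion:

```python
import struct

# Two hypothetical 16-bit register values; high word first is assumed.
hi, lo = 0x447A, 0x0000

# Interpretation 1: the two words are the halves of an IEEE 754
# single-precision float (sign, exponent, mantissa).
as_float = struct.unpack('>f', struct.pack('>HH', hi, lo))[0]

# Interpretation 2: the two words form one 32-bit integer with an
# assumed decimal point (here, two implied decimal places).
as_scaled = ((hi << 16) | lo) / 100.0

print(as_float)   # 0x447A0000 decoded as IEEE 754 gives 1000.0
print(as_scaled)  # 1148846080 / 100 gives 11488460.8
```

The same 32 bits produce wildly different numbers under the two readings, which is why knowing the intended format matters before doing the conversion.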