Rob...
Lifetime Supporting Member
Ok, curiosity has gotten the better of me.
I have some code I'm working on that was written by others, and I've spotted this. I'm not sure of the reasoning behind it, aside from an attempt to make the software as hard as possible to follow (which, whoever wrote it, they seem to have tried to do with the rest of it too).
So I've created a dummy OB35 to show you a picture of what I am curious about. It isn't the actual software but basically doing the same thing.
Input byte 2 is moved into a temporary byte at the beginning of OB35 (called every 10ms).
Later in OB35 the first bit of that byte is accessed via the L stack address L 20.0.
Is there any reason for doing this rather than simply using I2.0 and removing the move instruction? (No other bits in the byte, or in #temp_byte, are used anywhere else.)
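For anyone who hasn't seen this pattern, here is a minimal STL sketch of what I'm describing. The addresses and the output assignment are assumptions I've made up for illustration; they're not the actual software. #temp_byte is the first TEMP declaration in OB35, so it lands at L 20.0 (right after the 20-byte OB start info in the L stack):

```
// Network 1: copy the input byte into the OB35 temp area
      L     IB     2          // load input byte 2 into ACCU1
      T     #temp_byte        // transfer to temp byte (= L 20.0 in the L stack)

// ... other networks ...

// Network n: later, bit 0 of the temp copy is read by its absolute L address
      A     L     20.0        // AND bit 0 of #temp_byte
      =     Q     0.0         // hypothetical output, just for the example
```

The question is whether `A L 20.0` here buys anything over a plain `A I 2.0` with the MOVE deleted.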