Hi Guys...
Can anyone tell me the difference between Integers and Words?
quite confused because..
some PLC uses Integers some PLC uses word when it comes to analog input..
Thanks guys!
An integer is a whole number: positive, negative, or zero. The definition of an integer has absolutely nothing to do with a computer.
In a computer a number (whether it is an integer or a real number) is represented by a series of bits. The computer processor addresses these bits in groups, or words. Different computers use different size words. One of the computers I worked with early in my career was a DEC PDP8A. It used 12 bit words, and words is what they were called. Back then another popular processor was the Z80. It used 8 bit words. Once again, we referred to those 8 bit chunks as words, although at 8 bits the term bytes was just as appropriate. Both terms were used interchangeably when we were talking about Z80 architecture, but they were not interchangeable when talking about DEC PDP8 architecture.
Now this thread has seen lots of chest beating and parading of credentials, but one fact remains, back in those days we used the term "Word" to refer to the native data size for both the DEC PDP-8A and the Z80 processor, and each one was different, and neither one was 16 bit.
Now if you are talking to someone who uses the ControlLogix PLC and you start talking about words then you are talking about 32 bit long chunks of data. The CLX normalizes its data to 32 bit words, so whether you define a single boolean tag, a 16 bit integer tag, a DINT tag, or a real tag, the CLX is going to allocate a single 32 bit word for the data.
So you see, the exact meaning of "word" changes within the context of processor architecture.
Now back to the integer. An integer is a whole number. In a computer, if you have a 12 bit word then the max size of a single word integer is 4095 unsigned, or +2047 to -2048 signed. If it is a 16 bit word then the max size of a single word integer is 65535 unsigned, or +32767 to -32768 signed. You may wonder how that is determined. Since numbers are represented in binary in the computer, the max unsigned integer is 2^n - 1, where n is the number of bits used in the integer word. If the computer is using two's complement to represent negative numbers then the highest order bit is reserved for a sign bit, and the signed integer range is -2^(n-1) to +2^(n-1) - 1.
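If you want to check that arithmetic yourself, here is a small Python sketch (the function names are mine, just for illustration):

```python
def unsigned_max(n):
    """Largest unsigned value an n-bit word can hold: 2^n - 1."""
    return 2**n - 1

def signed_range(n):
    """Two's complement range for an n-bit word: -2^(n-1) to +2^(n-1) - 1."""
    return -(2**(n - 1)), 2**(n - 1) - 1

print(unsigned_max(12), signed_range(12))  # 4095 (-2048, 2047)
print(unsigned_max(16), signed_range(16))  # 65535 (-32768, 32767)
```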
Now I want to take a second and point out that I said 'max size of a single word integer' above. That is because it is possible to construct data so that a very large integer number can be represented by spanning across words. In that case the largest integer that you can represent is limited only by the memory available to your computer and your programmer's patience.
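A common example of spanning across words is packing two 16 bit words into one 32 bit value, which many PLCs do for large counters. A quick sketch (the word values here are made up):

```python
# Two hypothetical 16 bit words read from a PLC register pair.
low_word = 0x5678    # least significant 16 bits
high_word = 0x1234   # most significant 16 bits

# Shift the high word up 16 bits and merge in the low word.
combined = (high_word << 16) | low_word
print(hex(combined))  # 0x12345678
```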
Now for your final question. You mentioned analog inputs. The analog input is processed through a circuit called an analog to digital converter. An A/D converter has a resolution defined in bits, and that is independent of what a processor's word size might be. For example, a PLC might have a 12 bit A/D converter or a 16 bit A/D converter. The more bits an A/D converter has, the better its resolution. The A/D converter converts the raw analog input to a number in the range 0 to 2^n - 1, where n is the number of bits that the converter uses. The PLC starts with that value as a raw integer. You as the user may scale it to whatever engineering units you desire and convert it to a floating point number.
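That scaling is usually just a linear map from the raw count range to the engineering range. A minimal sketch, assuming a 12 bit converter and a made-up 0 to 100.0 PSI engineering range:

```python
def scale(raw, raw_min, raw_max, eu_min, eu_max):
    """Map a raw A/D count linearly onto an engineering-unit range."""
    return eu_min + (raw - raw_min) * (eu_max - eu_min) / (raw_max - raw_min)

# A 12 bit converter gives counts 0..4095; map full scale to 0..100.0 PSI.
print(scale(4095, 0, 4095, 0.0, 100.0))  # 100.0
print(scale(0, 0, 4095, 0.0, 100.0))     # 0.0
```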
If you want to know more about how floating point numbers are stored in your PLC do a search on this forum on IEEE 754. I and others have covered that topic in detail in other posts.
I hope that helps.