If you have a continuous task, CPU usage will be 100%. ...
^ Stated another way, this is why CPU usage cannot be measured directly: the CPU can always run the continuous task, so the CPU always has something to do, so CPU usage is 100%.
That said, looking at how long each scan of the continuous task does take to complete, and comparing that to the longest allowable/desirable time it should take, is an expression of PLC loading: as the amount of work the CPU does per scan increases, the [does take : should take] ratio increases.
So, if this new AOI ran at the start of every scan and subtracted the WallClockTime it captured via GSV on the previous scan from the current WallClockTime, the difference would be the overall per-scan duration. Dividing that duration by a "watchdog" timeout duration yields a ratio that is a form of CPU usage. The nice thing about this per-scan approach is that higher-priority tasks (communications, scheduled tasks, housekeeping, etc.) are all included in the calculated "usage", although they are likely to cause this "measured" value to bounce around a bit.
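The per-scan arithmetic can be sketched like so; this is Python standing in for the Logix AOI logic, and the watchdog value and function name are illustrative assumptions, not anything from a Rockwell API:

```python
WATCHDOG_US = 500_000   # process-specific "too long" limit per scan, in microseconds
_last_us = None         # WallClockTime captured at the start of the previous scan


def per_scan_usage(now_us):
    """Call once at the start of every scan with the current WallClockTime
    (microseconds). Returns the last scan's duration as a fraction of the
    watchdog limit -- the 'usage' figure described above."""
    global _last_us
    if _last_us is None:          # first scan: nothing to difference against yet
        _last_us = now_us
        return 0.0
    scan_us = now_us - _last_us   # wall-clock duration of the previous scan
    _last_us = now_us
    return scan_us / WATCHDOG_US  # e.g. 0.2 => scan used 20% of the allowed time
```

A scan that takes 100 ms against a 500 ms watchdog would report 0.2, i.e. 20% "usage" by this definition.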
The point here is that CPU "usage" is only an issue if the PLC takes too long to respond, on the next scan, to a process change. The exact definition of "too long" is specific to the OP's process, but it is essentially a form of watchdog timeout, so at least the approach above is consistent with that.
The details of how to calculate a difference between two WallClockTime DINT[2] or LINT[1] values, what the process' "too long"/watchdog divisor value should be, whether and how to filter this "measured" usage, etc., are left as an exercise for the OP.
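Without doing the whole exercise, two of those details can be sketched in Python: combining a DINT[2] WallClockTime into one 64-bit microsecond count, and a simple first-order filter to calm the bouncing measurement. Which array element holds the upper 32 bits is an assumption here and should be verified against the controller's GSV WallClockTime documentation:

```python
def dint2_to_us(hi_dint, lo_dint):
    """Combine two signed 32-bit DINTs into one unsigned 64-bit microsecond
    value. Word order (which DINT is the upper half) is an assumption."""
    return ((hi_dint & 0xFFFFFFFF) << 32) | (lo_dint & 0xFFFFFFFF)


def filtered(prev, sample, alpha=0.1):
    """Exponential moving average of the per-scan 'usage'; a smaller alpha
    smooths harder but responds more slowly to real load changes."""
    return prev + alpha * (sample - prev)
```

Subtracting two such 64-bit values gives the per-scan duration; feeding the resulting ratio through `filtered` each scan gives a steadier number to display or alarm on.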