In short, the analog 'standard' ranges are so defined because most equipment is designed to operate over those ranges.
The analog current ranges originally derived from the 'analog-i-zation' of typical digital (TTY) current-loop signals, hence 0-5mA, 0-10mA, 0-20mA, 0-50mA, and 0-60mA. Given the ranges of most analog equipment and the available drive capacities, the higher ranges were mostly dropped, leaving 0-5, 0-10, and 0-20. Then, since a current loop makes it possible (and desirable) to have a 'live zero', the actual ranges were shifted to 1-5mA, 2-10mA, and 4-20mA. Current loops are very useful: they are more noise-immune than voltage signals (operating at a much lower impedance), and their inherent 'live zero' makes broken signal paths easy to detect.
The 0-10V and (-10)-(+10)V ranges again come out of general industry usage, and are easily synthesized (D/A) and read (A/D) directly, without external scaling circuitry.
Each has its applications; most 'short-haul' (distance) signaling can be handled by a voltage signal, while longer distances (or instrumentation) are generally handled by a current loop. Current loops can go a long way, but have a very limited fan-out (how many devices can be driven). Voltage signals have high fan-out capability, but are more subject to induced noise.
In most receiver devices, the ultimate conversion is from voltage, so current loops almost always have a precision loading resistor (usually between 120 and 1000 ohms) across the loop, and the voltage is measured across the resistor.
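The loading-resistor conversion and the 'live zero' check can be sketched in a few lines. This is an illustrative example, not any particular device's firmware; the 250-ohm resistor value and the 3.8mA broken-wire threshold are typical assumptions, not from the text above:

```python
# Illustrative sketch: reading a 4-20 mA loop through a 250-ohm sense
# resistor across the loop (4 mA -> 1 V, 20 mA -> 5 V at the A/D input).

SENSE_OHMS = 250.0              # assumed precision loading resistor
I_MIN, I_MAX = 0.004, 0.020     # 4-20 mA live-zero range, in amps

def loop_to_percent(v_measured: float) -> float:
    """Convert the voltage across the sense resistor to 0-100% of span.

    Raises ValueError if the current is well below the live zero,
    which usually means a broken wire or a dead transmitter.
    """
    i_loop = v_measured / SENSE_OHMS
    if i_loop < 0.95 * I_MIN:   # under ~3.8 mA: assume an open loop
        raise ValueError("loop current below live zero - broken signal path?")
    return (i_loop - I_MIN) / (I_MAX - I_MIN) * 100.0

print(loop_to_percent(1.0))   # 4 mA  -> 0.0 (bottom of span)
print(loop_to_percent(3.0))   # 12 mA -> 50.0 (mid-span)
print(loop_to_percent(5.0))   # 20 mA -> 100.0 (full span)
```

Note how a plain 0-5mA range could not distinguish "measuring zero" from "wire cut"; the shifted 4-20mA range makes that distinction trivial.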
There have been other 'standards' of control voltages, (-5)-(+5)V, (-6)-(+6)V (anyone remember REFLEX?), (-15)-(+15)V, but most of those have fallen out of use.
------------------
For digital circuitry, again, the industry drove the standards to what we have today. It wasn't too long ago that almost all digital I/O was in the 120VAC to 240VAC range, but more and more things are coming to the 24VDC range. Why the change? Well, for one thing, 120VAC is much more destructive than 24VDC in the event of a circuit fault.
Also, control devices for AC have an inherent maximum switching frequency due to the 50/60 and even 400Hz standards. DC devices can switch much, much faster.
With the exception of large magnetic structures, as in large contactors, 24 VDC also saves power, and requires smaller overall supply capacity.
Then too, 24VDC is much safer to work around, if you must work live.
Another reason for the move to 24VDC controls is, again, simple switching devices (just transistors) and adequate sourcing capability (0.5A at 24VDC is 12 watts). Lower-level systems like 5VDC TTL interfaces would need to supply 2.4A to switch 12 watts, which means more heating, larger devices, and more power wasted as heat.
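The arithmetic above is just I = P / V, which is easy to check for any control voltage:

```python
# Current required to switch the same load power at different control
# voltages (I = P / V). The 12 W figure matches the example above.

def current_for(power_w: float, volts: float) -> float:
    """Amps needed to deliver power_w at the given supply voltage."""
    return power_w / volts

print(current_for(12, 24))   # 0.5 A at 24 VDC
print(current_for(12, 5))    # 2.4 A at 5 VDC (TTL level)
```

Nearly five times the current at TTL levels means proportionally heavier conductor and switch ratings for the same delivered power.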
Most PLCs out today support the above common ranges, and have relays to support odd-ball or isolation-required ranges.
-------------
Servers? I'm not sure what you mean here... DDE (becoming obsolete) and OPC servers are software modules that make communication and data requests/writes to a PLC more or less generic, through a standardized interface. It would truly be a pain in the rear if you needed to manually write the software to hook a PLC up to an HMI for example.
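The "more or less generic" interface idea can be sketched as follows. This is a toy illustration of the concept only; the class and tag names are made up and are not part of any real DDE or OPC API:

```python
# Toy sketch of an OPC-style abstraction: the HMI codes against a
# generic read_tag()/write_tag() interface and never needs to know
# which vendor's PLC driver sits underneath. All names are invented.

class TagServer:
    """Generic interface an HMI can be written against."""
    def read_tag(self, name: str):
        raise NotImplementedError
    def write_tag(self, name: str, value) -> None:
        raise NotImplementedError

class FakePLCDriver(TagServer):
    """Stands in for a vendor-specific PLC communication driver."""
    def __init__(self):
        self._tags = {"Tank1.Level": 42.5, "Pump1.Run": False}
    def read_tag(self, name):
        return self._tags[name]
    def write_tag(self, name, value):
        self._tags[name] = value

# HMI-side code: works unchanged with ANY TagServer implementation.
server = FakePLCDriver()
server.write_tag("Pump1.Run", True)
print(server.read_tag("Tank1.Level"))   # 42.5
print(server.read_tag("Pump1.Run"))     # True
```

Swap in a different driver class and the HMI code above is untouched, which is exactly the pain the server layer saves you from.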
On the larger scale, there are client/server networked HMI packages that can perform enterprise-wide monitoring, data acquisition, and control using a cluster of computers. Rockwell's RSView SE and Wonderware's Archestra fall into this category.