Recently I asked the product manager of the Z+ Series programmable power supplies a question: why did the design team use rotary encoders, rather than potentiometers, to adjust the output current and voltage?
The answer is relatively simple… it’s all about resolution. The key advantage of a rotary encoder over a potentiometer is that it can turn in the same direction indefinitely. A potentiometer, in contrast, typically turns through a single revolution; even the highest-resolution multi-turn types turn a maximum of ten times.
Let’s put this into context: if a ten-turn potentiometer were used to adjust the output voltage on a 100V model, one full turn would represent 10V. That resolution is relatively low, and therefore not precise enough for a programmable power supply.
Rotary encoders, on the other hand, offer very high effective resolution: each turn has 18 positions, each marked by a tactile click. Because digital logic reads each change of position, the same encoder can be operated in either a coarse or a fine adjustment mode.
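The "digital logic" that reads an incremental encoder is usually a quadrature decoder: the two encoder channels (commonly called A and B) step through a Gray-code sequence, and the order of transitions reveals the direction of rotation. The sketch below shows the idea in software; the class and pin names are illustrative and not taken from the Z+ design.

```python
# Gray-code sequence of channels (A, B) for clockwise rotation:
# (0,0) -> (0,1) -> (1,1) -> (1,0) -> (0,0) -> ...
_CW_NEXT = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

class QuadratureDecoder:
    """Minimal quadrature decoder: counts net clicks from (A, B) samples."""

    def __init__(self) -> None:
        self.state = (0, 0)
        self.position = 0  # net click count (signed)

    def update(self, a: int, b: int) -> None:
        new = (a, b)
        if new == self.state:
            return                  # no movement
        if _CW_NEXT[self.state] == new:
            self.position += 1      # clockwise step
        elif _CW_NEXT[new] == self.state:
            self.position -= 1      # counter-clockwise step
        # any other transition (both bits changed at once) is noise; ignore it
        self.state = new

dec = QuadratureDecoder()
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:  # one full clockwise cycle
    dec.update(a, b)
print(dec.position)  # -> 4
```

Because direction comes from transitions rather than an absolute position, the count can grow without bound in either direction, which is exactly why the encoder can "turn in the same direction indefinitely".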
In coarse mode, each click represents 1% of the rated voltage, so on the 100V model one click equals 1V and sweeping the full output range takes approximately six turns (100 clicks at 18 clicks per turn). For higher resolution, the encoder operates in fine mode, where each click represents 0.01% of rated voltage: 10mV on the 100V model.
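The step sizes above, and the contrast with a ten-turn potentiometer, reduce to a few lines of arithmetic. This is a worked-numbers sketch using the article's figures for the 100V model; the function names are ours.

```python
RATED_V = 100.0        # 100 V model from the article
CLICKS_PER_TURN = 18   # tactile detents per encoder revolution

def encoder_step_volts(rated_v: float, fine: bool) -> float:
    """Volts per click: 1% of rated voltage in coarse mode, 0.01% in fine."""
    return rated_v * (0.0001 if fine else 0.01)

coarse_step = encoder_step_volts(RATED_V, fine=False)   # 1.0 V per click
fine_step = encoder_step_volts(RATED_V, fine=True)      # ~0.01 V (10 mV) per click

# Turns to sweep the full range in coarse mode: 100 clicks / 18 per turn
coarse_turns = (RATED_V / coarse_step) / CLICKS_PER_TURN  # ~5.6, i.e. about six turns

# Compare with a ten-turn potentiometer on the same model:
pot_volts_per_turn = RATED_V / 10  # 10 V per full turn
```

The comparison makes the resolution gap concrete: one pot turn moves the output as far as ten coarse clicks, and a thousand fine clicks.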
To avoid inadvertent changes to the voltage setting during use, the front panel must be lockable; the encoder achieves this in software, whereas a potentiometer would require a mechanical locking mechanism. In addition, depending on how the potentiometer is wired, any change to its position while the power supply is shut down will affect the output at start-up. An encoder, by contrast, has no absolute position, so if its shaft is moved while the supply is shut down, the output at start-up remains unchanged.
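Both closing points follow from the encoder being a relative device: the setpoint lives in firmware, so the firmware can simply discard clicks while locked, and at power-up it restores the stored setpoint regardless of where the shaft sits. A minimal sketch of that behaviour, with all names illustrative rather than taken from the Z+ firmware:

```python
class FrontPanel:
    """Illustrative front-panel model: setpoint held in firmware, not in the knob."""

    def __init__(self, stored_setpoint_v: float, step_v: float = 1.0) -> None:
        # At start-up the setpoint comes from stored state, so shaft
        # movement while powered off has no effect on the output.
        self.setpoint_v = stored_setpoint_v
        self.step_v = step_v
        self.locked = False

    def on_encoder_click(self, direction: int) -> None:
        """direction is +1 (clockwise) or -1 (counter-clockwise)."""
        if self.locked:
            return  # software lock: clicks are discarded, no mechanism needed
        self.setpoint_v += direction * self.step_v

panel = FrontPanel(stored_setpoint_v=12.0)
panel.on_encoder_click(+1)   # setpoint becomes 13.0 V
panel.locked = True
panel.on_encoder_click(+1)   # ignored; setpoint stays 13.0 V
print(panel.setpoint_v)      # -> 13.0
```

A potentiometer offers neither property: its wiper position is the setting, so locking it and preserving it across power cycles are mechanical problems, not software ones.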