I do not know the chronology of the design of Gray Code, but in the good old
days, i.e. the sixties, I had a few weeks of training at the NASA Wallops
Island small rocket launching facility. One of the topics was telemetry.
It was explained to us that telemetry of data was usually done in gray code
rather than straight binary code because much of the telemetered data was
from binary up and down counters of various kinds (time, responses, hits,
decay processes, etc. as well as many different kinds of ADCs [analog to
digital converters] that relied on counters in their conversion processes).
The methods of telemetry were less advanced at that time (integrated
circuits were just starting to take over from discrete transistors), clock
rates were slower, and the counters would be sampled every so often and the
data sent to the ground. Let us assume that a given counter is sampled once
per second, and that the process of sampling takes one millisecond.
Furthermore, let us assume that the counter is incremented ten times a
second, on the average. That means that about one in every 100 samplings
will take place while the counter is in the act of advancing or decrementing
from one count to another. If more than one counter bit is changing at the
moment of interrogation, then there is uncertainty in the reporting of EVERY
bit that is changing. For example, suppose a counter changes from
000001111111 to 000010000000, or, in decimal, from 127 to 128, during the
one millisecond of sampling. All eight of the changing bits would be
indeterminate if they were sampled at the moment of change, so that any
number from 0 to 255 might show up as a result of the sampling. Clearly,
this is a situation to
be avoided. (I can tell, you are already having the "AHA!" reaction.) So,
by using the Gray Code instead of straight binary, there is a MAXIMUM error
of only ONE count, because only one binary digit changes when the count
increments or decrements. Thus the maximum error is cut down to 1 unit and
the average error to about half of that; moreover, the frequency of error
is reduced as well, because only one digit is susceptible to error, rather
than as many as the number of bits in the counter. In short, the benefits
are enormous
and the costs are minimal, a situation that every engineer loves.
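The hazard and the fix described above can be sketched in a few lines of
Python (the helper names and the random model of a mid-transition read are
mine, not from any actual telemetry system):

```python
import random

def binary_to_gray(n):
    """Standard reflected-binary Gray encoding: n XOR (n >> 1)."""
    return n ^ (n >> 1)

def sample_during_transition(old, new, bits=12):
    """Model an asynchronous read taken while the counter changes from
    `old` to `new`: every bit that differs is captured as 0 or 1 at
    random, while the unchanging bits are read correctly."""
    changing = old ^ new
    noise = random.getrandbits(bits) & changing
    return (old & ~changing) | noise

# Plain binary, 127 -> 128: eight bits change at once, so a read taken
# mid-transition can return anything from 0 to 255.
readings = {sample_during_transition(127, 128) for _ in range(10_000)}
assert max(readings) <= 255 and len(readings) > 100

# Gray code: consecutive codes always differ in exactly one bit, so the
# worst a mid-transition read can do is be off by one count.
for n in range(4095):
    diff = binary_to_gray(n) ^ binary_to_gray(n + 1)
    assert diff != 0 and diff & (diff - 1) == 0  # exactly one bit set
```

The single-bit-difference property is exactly why the off-by-one bound
holds: whichever way the one changing bit is captured, the result is one of
the two adjacent counts.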
I was interested by your explanation of the benefits that accrue from the
Gray Code if the counter is mechanical. My first hunch was that, with only
one digit changing at a time, every digit would in the long run change an
equal number of times, so that all digits of a counter should wear out at
about the same time. A closer look shows that is not quite so: in the
standard reflected Gray code the rightmost digit still changes on every
other count, though the total number of digit changes per count is cut in
half and the wear is spread more evenly than in plain binary.

With a conventional binary counter, of course, each counter bit is "used"
twice as often as the one to its left, so the rightmost will wear out first
and, in some counters, all must be replaced together. With solid-state
logic that is not a problem, but with a mechanical counter, of course, it
is critical. A mechanical counter might have a life of 100,000 cycles,
while a solid-state one might cycle 10,000,000 times per second for a few
years and still be indistinguishable from a new one!
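The hunch about wear is easy to check by brute force. A short Python
sketch (the helper names are mine) counts how often each bit position
changes over one full cycle of a 4-bit counter, wraparound included; it
shows that Gray coding halves the total number of flips but still works
the rightmost bit hardest:

```python
def binary_to_gray(n):
    """Standard reflected-binary Gray encoding: n XOR (n >> 1)."""
    return n ^ (n >> 1)

def flips_per_bit(encode, bits=4):
    """Count how many times each bit position changes over one full
    counting cycle, including the wrap back to zero.  Index 0 is the
    rightmost (least significant) bit."""
    counts = [0] * bits
    period = 1 << bits
    for n in range(period):
        diff = encode(n) ^ encode((n + 1) % period)
        for b in range(bits):
            if diff & (1 << b):
                counts[b] += 1
    return counts

print(flips_per_bit(lambda n: n))     # [16, 8, 4, 2]: plain binary, each
                                      # bit flips twice as often as the
                                      # one to its left
print(flips_per_bit(binary_to_gray))  # [8, 4, 2, 2]: Gray code, half the
                                      # total flips, but still not equal
```

So for a mechanical counter the Gray code halves the wear on the busiest
digit and halves the total motion per count, even though it does not
equalize wear across all the digits.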
Thanks to Ben Fairbank for his interesting and insightful information on the uses of Gray code.