In a paper I gave in Georgia, I picked up on a comment by Negroponte in Being Digital to the effect that error correction is one of the fundamental advantages of digital (versus analog) data: automatic error correction makes lossless copying and transmission possible. "Digital Revolution (III) – Error Correction Codes" is the third in a set of Feature Column essays on the "Digital Revolution." (The other two are on Barcodes and on Compression Codes and Technologies.)
To exaggerate, we can say that error correction makes computing possible. Without error correction we could not automate computing reliably enough to use it outside the lab. Even something as simple as moving data off a hard drive, across the bus, to the CPU can only happen at high speed, over and over again, if we can build systems that guarantee what-was-sent-is-what-was-got.
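To make that guarantee concrete, here is a minimal Python sketch of a Hamming(7,4) code, the classic single-error-correcting code. It is my own toy example, not anything from the column, but it shows the basic trick: four data bits are stretched to seven by adding parity bits, and any single flipped bit can then be located and repaired.

```python
# A minimal sketch of single-bit error correction with a Hamming(7,4) code.
# Four data bits become seven transmitted bits; any one flipped bit can be
# located from the three parity checks and corrected.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword, positions 1..7

def hamming74_decode(c):
    # Recompute the parity checks; the syndrome is the position of a
    # single-bit error (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c = list(c)
        c[syndrome - 1] ^= 1   # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]        # recover the four data bits

data = [1, 0, 1, 1]
sent = hamming74_encode(data)
received = list(sent)
received[5] ^= 1               # simulate one bit corrupted by noise
assert hamming74_decode(received) == data  # what-was-sent-is-what-was-got
```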
There are exceptions, and this is where it gets interesting. Certain types of data – media data such as images, audio, video and text – can still be useful when corrupted, while other kinds of data become useless once corrupted. Data meant to be output to a human for interpretation needs less error correction (and can be compressed with lossy compression) while still remaining usable. Could such media carry a surplus of information from which we can correct for loss, an analog equivalent of symbolic error correction?
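A toy illustration of the difference (mine, not from the essay): flip one low-order bit in a pixel value and the brightness barely changes, but flip the same bit in a symbolic value such as an array index and the result is simply wrong.

```python
# Media data tolerates small corruptions; symbolic data does not.
pixel = 0b10110100                      # a grayscale sample, value 180
corrupted_pixel = pixel ^ 0b00000001    # low-order bit flipped by noise
print(pixel, "->", corrupted_pixel)     # 180 -> 181, visually indistinguishable

index = 4                               # a symbolic value: which element we want
corrupted_index = index ^ 0b00000001    # the same single-bit flip
print(index, "->", corrupted_index)     # 4 -> 5, the wrong element entirely
```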
Another way to put this is that there is always noise. Data is susceptible to noise when transmitted, when stored, and when copied.
Compression would seem to be another principle of computing, in the sense that compression is about representation and coding. In the end there is no deep difference between the sender, the channel, and the receiver: just remembering something is a matter of maintaining it over time – a transceiver (sender/receiver) is a channel through time.
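As a small sketch of compression as re-coding (again my own example, a run-length encoder rather than anything from the column), note that the same encode/decode pair works whether the packed data is sent to a receiver now or stored and read back later: the channel through space and the channel through time are treated identically.

```python
# A minimal sketch of compression as re-coding: run-length encoding replaces
# a run of repeated symbols with (symbol, count) pairs.

def rle_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

message = "aaaabbbcca"
packed = rle_encode(message)            # [['a', 4], ['b', 3], ['c', 2], ['a', 1]]
assert rle_decode(packed) == message    # lossless: what was stored is what we get back
```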