What are the fundamental algorithms of text analysis? One candidate from computational linguistics and computer science might be Levenshtein distance. It is used in spell checking and speech recognition, and could be used in text analysis to compare strings.
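As a sketch of what such a "fundamental algorithm" looks like in practice, here is a minimal dynamic-programming implementation of Levenshtein distance in Python (the function name and example strings are my own illustration, not from the source):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the current prefix of a
    # and the first j characters of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("kitten", "sitting"))  # → 3
```

The classic example: "kitten" becomes "sitting" in three edits (substitute k→s, substitute e→i, insert g), which is why a spell checker would rank "sitting" as a close suggestion for either typo.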
But are there fundamental procedures for literary text analysis? Could the concordance be represented as such a procedure? Or is the idea of a fundamental algorithm alien to humanities computing?
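If the concordance were treated as such a procedure, it could be expressed as a keyword-in-context (KWIC) routine. The sketch below is one possible formulation, with an invented sample sentence and window width chosen purely for illustration:

```python
def concordance(text: str, keyword: str, width: int = 30) -> list[str]:
    """Return keyword-in-context lines: each occurrence of keyword
    shown with `width` characters of surrounding text on each side."""
    lines = []
    lower, key = text.lower(), keyword.lower()  # case-insensitive match
    start = lower.find(key)
    while start != -1:
        left = text[max(0, start - width):start]
        hit = text[start:start + len(key)]
        right = text[start + len(key):start + len(key) + width]
        lines.append(f"{left:>{width}}{hit}{right}")  # align on the keyword
        start = lower.find(key, start + 1)
    return lines

sample = "The whale, the whale! The white whale breaches."
for line in concordance(sample, "whale", width=12):
    print(line)
```

Aligning every hit on the keyword column is what makes a concordance readable at a glance, and that alignment step is arguably the "algorithmic" core of the procedure.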
See also the talk by John Nerbonne, who mentions Levenshtein distance – Nerbonne: Data Deluge.