fairly limited in scope without the aid of a computer. There have also been many other types of statistical analysis providing a more objective basis for understanding scribal habits and comparing variant units in manuscripts. Unfortunately, most of these efforts have had to be done by hand, using only a few select manuscripts over relatively small passages of Scripture as a sample, from which conclusions about the rest could then be extrapolated. E. C. Colwell and E. W. Tune foresaw the need for computers to get involved in textual criticism back in the 1960s: “We
are working in a period when the data for textual criticism will inevitably be translated into
mathematics. In fact it is doubtful that NT textual critics can really hope to relate all of the data
now available to them without the aid of computers.”
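As a simple illustration of the kind of mathematics Colwell and Tune anticipated, consider pairwise percentage agreement between witnesses across variant units. The following is a minimal sketch in Python, not any published tool; the witness sigla, readings, and data layout are invented for the example.

```python
# Minimal sketch of quantitative agreement analysis in the spirit of
# Colwell and Tune. All sigla and readings below are invented examples.
from itertools import combinations

# Each variant unit maps a witness to the reading it attests.
# A witness missing from a unit is treated as lacunose there.
variant_units = [
    {"P75": "a", "B": "a", "D": "b", "A": "b"},
    {"P75": "a", "B": "a", "D": "c", "A": "a"},
    {"P75": "b", "B": "a", "D": "b", "A": "b"},
    {"P75": "a", "B": "a", "A": "b"},  # D is lacunose in this unit
]

def percent_agreement(units, w1, w2):
    """Percentage of units, among those where both witnesses are extant,
    in which the two witnesses attest the same reading."""
    shared = [u for u in units if w1 in u and w2 in u]
    if not shared:
        return None  # no overlapping text to compare
    agree = sum(1 for u in shared if u[w1] == u[w2])
    return 100.0 * agree / len(shared)

witnesses = sorted({w for u in variant_units for w in u})
for w1, w2 in combinations(witnesses, 2):
    pct = percent_agreement(variant_units, w1, w2)
    print(f"{w1} ~ {w2}: {pct:.1f}% agreement")
```

Trivial as the computation is, performing it by hand across dozens of witnesses and hundreds of variant units is precisely the labor that forced earlier studies to rely on small samples.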
There have since been several examples of computer-assisted research over the decades in fulfillment of this sentiment, such as the Coherence-Based Genealogical Method (CBGM) developed by Gerd Mink and the cladistics approach used by Stephen Carlson for the book of Galatians.
But contrary to a popular misunderstanding, techniques like the CBGM do not “provide a means of automating the reconstruction of the initial text”; they are merely tools to aid in the subjective decision-making process.
Part of the reason for this is the significant amount of genealogical corruption in the data. Many of the earliest witnesses can clearly be seen doing their own textual criticism, copying from multiple witnesses already available to them. But despite its shortcomings, the CBGM was particularly valuable in that the work had to be done in order to know this was the case: it demonstrated that most of the earliest witnesses do not have direct genealogical relationships to each other.
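To see concretely why such mixture frustrates any mechanical reconstruction, consider a toy version of the directional comparison at the heart of the CBGM. This is an illustrative simplification, not Mink's implementation: it assumes a local stemma has already judged which reading is prior at each unit, and the witness names and data are invented.

```python
# Toy sketch of a directional (genealogical) comparison between two
# witnesses, loosely modeled on the CBGM's use of local stemmata.
# The data and the priority judgments are invented for illustration.

# Each unit records two witnesses' readings and a local-stemma judgment
# of which reading is prior (earlier in the history of the text).
units = [
    {"W1": "a", "W2": "b", "prior": "a"},  # W1 carries the earlier reading
    {"W1": "b", "W2": "a", "prior": "a"},  # W2 carries the earlier reading
    {"W1": "a", "W2": "a", "prior": "a"},  # agreement: no directional evidence
    {"W1": "c", "W2": "b", "prior": "b"},  # W2 earlier again
]

w1_prior = w2_prior = 0
for u in units:
    if u["W1"] == u["W2"]:
        continue  # agreements carry no directional information
    if u["W1"] == u["prior"]:
        w1_prior += 1
    elif u["W2"] == u["prior"]:
        w2_prior += 1

print(f"W1 prior in {w1_prior} units; W2 prior in {w2_prior} units")
if w1_prior and w2_prior:
    print("Priority runs both ways: no simple ancestor-descendant relationship.")
```

When the counts run in both directions, as they do for most pairs of the earliest witnesses, neither can be a simple ancestor of the other, which is why the CBGM reports potential ancestors and coherence values rather than producing a classical stemma.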
Even with these technological advances, the crux of the matter is that textual criticism has still been treated largely as an art, with scholars viewing scientific statistical analysis as merely a set of suggestions to help guide their subjective decisions. That is why some of our best modern critical texts, even those produced under similar philosophies from the same evidence, still disagree with each other in thousands of places.
Computer-Generated Text
The ultimate result of applying science to textual criticism was envisioned years ago: the automatic creation of a computer-generated text without any human subjectivity. Yet despite our best efforts we were “nowhere near having computer tools that can algorithmically produce
Bruce Metzger, The Text of the New Testament, 3rd ed. (Oxford: Oxford University Press, 1992).
E. C. Colwell and E. W. Tune, “Variant Readings: Classification and Use,” JBL 83 (1964).
Gerd Mink, “The Coherence-Based Genealogical Method—What Is It About?” (online paper, Münster: Institut für Neutestamentliche Textforschung), https://www.uni-muenster.de/INTF/Genealogical_method.html; Stephen C. Carlson, “The Text of Galatians and Its History” (PhD diss., Duke University Graduate Program in Religion, 2012).
Klaus Wachtel, “Towards a Redefinition of External Criteria: The Role of Coherence in Assessing the Origin of Variants,” in Textual Variation: Theological and Social Tendencies? Papers from the Fifth Birmingham Colloquium on the Textual Criticism of the New Testament, ed. David C. Parker (Piscataway, NJ: Gorgias, 2008).
Stephen C. Carlson, “A Bias at the Heart of the Coherence-Based Genealogical Method (CBGM),” JBL 139 (2020); Jarrett W. Knight, “Reading between the Lines: 2 Peter 3:10, MS 1175, and Some Methodological Blind Spots in the CBGM,” JBL 142 (2023).
For example, the results of the CBGM were not followed by the Nestle-Aland 28th edition editorial committee at 2 Pet 3:10 because the CBGM does not generate conjectures. Instead, the committee adopted its own conjectural reading, which is not supported by any Greek manuscript.