Complexity in language and in law

 

Geoffrey Sampson

University of South Africa

 

 

Draft of a paper to appear in the proceedings of the Poznań Linguistic Meeting on “Complexity invariance in linguistics”, Aug–Sep 2013.

 

 

 

Complexity invariance among natural languages was for many years treated as a linguistic axiom.  According to Benjamin Fortson (2010: 4), for instance, “A central finding of linguistics has been that all languages … are equally complex in their structure”.  This axiom has recently been denied or questioned.  A number of linguists have begun to suggest that languages may often differ in overall complexity (see e.g. Miestamo et al. 2008; Sampson 2008; Sampson, Gil, and Trudgill 2009). 

 

That suggestion is met in practice by two types of objection.  The purpose of this paper is to distinguish explicitly between these objections, and to argue that even the apparently weightier one is less serious than it first appears.

 

The two types of objection, briefly, are:  (i) as a matter of observation, languages are in fact about equal in complexity; or alternatively (ii) languages are incommensurable in complexity, so that it is meaningless to assert either that their overall complexity varies or that it does not vary.

 

The former of these seems to be the position which corresponds to what many twentieth-century linguists saw as axiomatic (cf. the Fortson quotation above).  But it is not easy to take seriously.  Fortson wrote as if there were some well-known body of “finding[s]” which establish point (i), but I am not aware of anything of that kind in the literature of linguistics (Fortson gives no reference).  In practice, if one were to ask a linguist who agreed with (i) what grounds we have for believing it, he would probably quote a well-known but quite brief passage in Charles Hockett’s 1958 Course in modern linguistics, a book which served to introduce the subject to my generation of students.  Hockett wrote (1958: 180–1) that the American Indian language Fox has a more complex morphology than English, and therefore it “ought to have” a simpler syntax, and indeed (Hockett told us) it has.

 

This idea that complexity-differences in various areas of language structure balance one another was far from new when Hockett wrote.  As long ago as 1899 Henry Sweet had written “If a language is very regular and simple in one department, we may expect it to be irregular and complex in another” (Sweet 1899: 68; I am grateful to Kees Hengeveld for drawing my attention to this passage).  But neither writer offers us much reason to accept the idea.  Hockett justified his use of the word “ought” by saying “all languages have about equally complex jobs to do”.  This remark seems to assume that the job done by a natural language is in some sense definable apart from the properties of the language, which is surely at least questionable – one might alternatively feel that a language defines the range of tasks it is capable of executing, and that many of those tasks could not be identified independently of linguistic structure.  (Cf. Sampson and Babarczy 2013.)  But, leaving that point aside, a one-line comparison between just two languages is clearly very thin evidence indeed on which to establish a profound axiom about relationships among all the world’s languages.  It is vulnerable to any pair of languages in which morphological and syntactic complexity differences pull in the same direction rather than balancing each other.  At the Poznań conference, Ilona Koutny commented that her native language, Hungarian, has more complex morphology than English and a syntax which is at least no simpler than that of English.  That brief observation is already enough to counterbalance Hockett’s claim, and I believe similar remarks could be made about very many language-pairs.  John McWhorter (2001) has argued that creole languages as a class possess “the world’s simplest grammars”.  One can debate that, as a number of linguists have done, but McWhorter’s generalization is at least backed up with a great deal more concrete evidence than Hockett gave in his comparison of Fox and English.  Whether it is ultimately true or false, point (i) is certainly not axiomatic.

 

If the linguistic community was for many years inclined to accept an axiom of complexity invariance, this was not, I believe, because of the force of Sweet’s, Hockett’s, or any other linguist’s arguments; it has to be explained in ideological terms.  Much of the motivation for the study of synchronic linguistics sprang from the desire to show that there are no such things as “primitive languages” – as Edward Sapir put it ([1921] 1963: 22), “the delicate provision for the formal expression of all manner of relations … meets us rigidly perfected and systematized in every language known to us”.  In linguists’ minds it was often a short step from the political principle that all human societies are entitled to equal respect to the scientific idea that all human languages are equal in complexity.  But scientific assertions cannot be demonstrated or refuted by reference to political or ethical principles.

 

(Since Hockett’s book, one idea has emerged within the discipline which, if correct, would provide a separate reason for believing in complexity invariance:  namely the idea that most of the detailed grammatical structure of human language is determined by our biological inheritance and hence cannot vary between languages.  But, although this idea has won many converts, there seems to be no real evidence supporting it and plenty of evidence against it – Johansson 2005; Sampson 2005.  I shall not consider it further here.)

 

Turning to objection (ii), that it is meaningless to compare languages in terms of their relative complexity:  this was powerfully argued for instance by Guy Deutscher (2009).  Although it seems to have been voiced much less often down the years than objection (i), I see it as in principle a more weighty objection than (i).  To claim that languages may vary in complexity seems to imply that we have some kind of unit or metric in terms of which we can assign a numerical value to the overall complexity of a language, so that we might be able to say “language A is 1.07 times (or 5.8 times, or …) as complex as language B”.  Few linguists have claimed to be able to define any such metric even in crude, provisional terms.

 

I certainly have not done that, and shall not do so here.  For the sake of argument I am happy to concede that no such metric may be available, even in principle.  (There are linguists who have been trying to develop metrics of that sort; I do not know whether their aim is a reasonable one, but for present purposes I am willing to accept that it might not be.)  Nevertheless, it does not follow that natural languages cannot be said to differ in overall complexity.

 

In order to show that this does not follow, I shall draw an analogy between language and another aspect of human culture with which language has much in common (though one which is not often discussed by linguists), namely law.  The evolution of law offers a clear example of a process which yields a system that is indisputably much more complex at later periods than it was when it was young, but where nobody claims to be able to put figures on that difference and it is quite unlikely that anyone ever could.

 

As is well known, the legal systems of various Western countries fall into two families:  many Continental European nations have so-called Civil Law systems, which trace their ancestry to Roman law, whereas England, the USA, and many other English-speaking countries have Common Law systems, which have evolved ultimately out of the customs of the Germanic tribes who immigrated into southern Britain over the centuries immediately preceding and following the collapse of the Roman Empire.  The highly codified nature of Civil Law makes it less relevant for our present purposes; I shall develop my analogy by reference to English Common Law (for a layman’s introduction to which, see e.g. Elliott and Quinn 2008). 

 

The “Common Law” as such was established over the period between the Norman Conquest of 1066 and about 1250, during which time the sometimes divergent legal customs of the tribes who inhabited different parts of England were merged into a single nationwide “Common” system, and the fundamental principle of stare decisis (follow precedents) was adopted.  From the thirteenth century onwards, the body of English law grew through the accumulation of precedents set in individual cases.  Human life is so complex that no finite set of rules will ever explicitly cover all the issues which fall to be resolved; when a judge confronts an open issue, he makes a decision that harmonizes as well as possible with the existing body of law, and that precedent then becomes a fixed rule binding on future cases.  (This considerably simplifies the reality, of course; for instance, higher courts are entitled to overrule precedents set by lower courts.  But my purpose here is to describe the general complexion of the system rather than to go into detail.)

 

Judges are not supposed to make new law on their own initiative.  The traditional theory was that judges respond to novel questions by “discovering” rules which were somehow implicit in the existing body of law but had not previously been spelled out.  However, this was a fiction.  In reality, precedents might often have been settled differently, but, whichever way they were settled, those decisions then became part of English law.

 

From the thirteenth to the nineteenth century this was close to a complete description of the process of English legal development.  The picture has changed over the past hundred years or so, because of an explosion of statute law.  Parliament can enact a statute on any topic it pleases, which will override any common-law rules with which it conflicts; by the present day probably most issues that arise are governed by some statute (though, since no form of words is ever unambiguous, the mechanism of precedent-setting comes into play again in determining what a given statute means in practice).  But the large current role of statute law is a new thing (and the impact of the rather different type of law emanating from the European Union is of course much newer still).  Until the late nineteenth century, Acts of Parliament were relatively few, and tended to relate to specialized purposes that did not affect the population as a whole.  (In the eighteenth century, for instance, divorces were individual Acts of Parliament.)  To keep things simple, let us consider just the 600-year development of the legal system from the thirteenth to the nineteenth century.

 

My point is that, on the one hand, English law at the end of that period was a far more complex system than at the beginning.  Anyone knowledgeable about legal history would, I believe, see that as too obviously true to be worth discussing.  But on the other hand, there is no way of putting figures on that complexity difference.

 

One might imagine that, if nobody has attempted to define a metric in this area, that is merely because doing so is not something that would interest legal scholars.  But there is more to it than that:  quantification is not possible.  One could not merely count precedents, for instance.  Some precedents are simple, others are individually complex; but also, most legal decisions do not constitute precedents (established law is adequate to resolve the cases), and there is no algorithm for deciding which particular cases extend the law and hence count as precedents.  (One might say that a case counts as a precedent if it is reported in one of the standard series of law reports, but it takes human judgement to decide whether a case merits reporting.)  Furthermore, new precedents do not invariably add complexity to the existing body of law; on occasion they simplify it, for instance by treating a series of previous precedents as special cases of a more general rule, or by deciding that two apparently-relevant precedents cannot be reconciled with one another in the circumstances of a new case and must be replaced by a more straightforward rule.  (For how precedent-based legal evolution works in practice, see Manchester and Salter 2000.)

 

I believe that knowledgeable people would not see it as controversial to maintain both that English law in the nineteenth century was (much) more complex than English law in the thirteenth century, and also that this difference is not open to formal quantification.  Thus, it can make sense to say that alternative systems in some area of human culture may vary in complexity although their respective degrees of complexity are incommensurable.  In other words, objection (ii) is not unanswerable.

 

Some might accept that this is true in principle, yet argue that languages are so different in kind from legal systems that objection (ii) remains valid in practice.  Much twentieth-century linguistics was wedded to an assumption that human languages are definable in terms of clearcut sets of formal rules, akin to the Backus–Naur notation which defines the well-formed code sequences of a computer programming language.  (For Backus–Naur notation see <foldoc.org/Backus-Naur>, downloaded 14 Nov 2013, or the Wikipedia article “Backus–Naur Form”.)  If so, this would make languages a very different kind of animal from legal systems.  However, that assumption looks very threadbare, now that it is widely recognized after decades of effort that “No-one has ever successfully produced a comprehensive and accurate grammar of any language” (Graddol 2004).  I do not believe that human languages are formally-definable systems (Sampson and Babarczy 2012, 2013).  Languages, like the Common Law, are systems which evolve through usage rather than in accordance with clearcut plans.  Describing the current state of a human language is a task much more like specifying the current state of some area of Common Law than like specifying what counts as well-formed code in a programming language.  The former is generally acknowledged to be a task that cannot be done perfectly, because the unavoidable vagueness and messiness of law will always escape precise definition; the latter is routinely done by a compiler – if code compiles, it is well-formed, and if not, not.
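
 

To make the contrast concrete, the following is a minimal sketch – my own, not drawn from any of the works cited here, and written in Python purely for illustration – of the kind of clearcut formal definition that Backus–Naur notation provides: a toy grammar for arithmetic expressions, together with a recogniser that gives a categorical yes/no answer to the question “is this string well-formed?”, just as a compiler does for program code.  Nothing comparable exists, or on the view argued here could exist, for the Common Law or for a natural language.

# A toy illustration, not taken from this paper or the works it cites:
# a Backus-Naur-style grammar for simple arithmetic expressions, and a
# recogniser that answers yes/no to "is this string well-formed?",
# as a compiler does for program code.
#
#   <expr>   ::= <term>   { ("+" | "-") <term> }
#   <term>   ::= <factor> { ("*" | "/") <factor> }
#   <factor> ::= <digit>  | "(" <expr> ")"

def is_well_formed(s: str) -> bool:
    pos = 0                      # current position in the string

    def peek():
        return s[pos] if pos < len(s) else None

    def eat(ch):                 # consume ch if it is the next character
        nonlocal pos
        if peek() == ch:
            pos += 1
            return True
        return False

    def factor():                # <factor> ::= <digit> | "(" <expr> ")"
        nonlocal pos
        if peek() is not None and peek().isdigit():
            pos += 1
            return True
        return eat("(") and expr() and eat(")")

    def term():                  # <term> ::= <factor> { ("*" | "/") <factor> }
        if not factor():
            return False
        while peek() in ("*", "/"):
            eat(peek())
            if not factor():
                return False
        return True

    def expr():                  # <expr> ::= <term> { ("+" | "-") <term> }
        if not term():
            return False
        while peek() in ("+", "-"):
            eat(peek())
            if not term():
                return False
        return True

    return expr() and pos == len(s)

print(is_well_formed("(1+2)*3"))   # True  -- categorically well-formed
print(is_well_formed("(1+"))       # False -- categorically ill-formed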

 

In sum:  of the two standard objections to the idea that human languages vary in overall complexity, objection (i) has never been seriously justified and seems intuitively implausible, while objection (ii) may look plausible at first sight but embodies a fallacy.

 

 

References

 

Deutscher, Guy.  2009.  “Overall complexity”: a wild goose chase?  In Sampson, Gil, and Trudgill (2009), pp. 243–51.

Elliott, Catherine and Frances Quinn.  2008.  English legal system (9th edn).  Pearson Longman (Harlow, Essex).

Fortson, B.W.  2010.  Indo-European language and culture: an introduction (2nd edn).  Wiley-Blackwell (Oxford).

Graddol, David.  2004.  The future of language.  Science 303.1329–31.

Hockett, C.F.  1958.  A course in modern linguistics.  Macmillan (New York).

Johansson, Sverker.  2005.  Origins of language: constraints on hypotheses.  John Benjamins (Amsterdam).

McWhorter, John.  2001.  The world’s simplest grammars are creole grammars.  Linguistic Typology 5.125–66.  Reprinted in McWhorter, Defining creole, Oxford University Press, 2005.

Manchester, C. and D. Salter.  2000.  Exploring the law: the dynamics of precedent and statutory interpretation (3rd edn).  Sweet & Maxwell (London).

Miestamo, M., K. Sinnemäki, and F. Karlsson, eds.  2008.  Language complexity: typology, contact, change.  John Benjamins (Amsterdam).

Sampson, G.R.  2005.  The “language instinct” debate (2nd edn).  Continuum (London and New York).

Sampson, G.R.  2008.  A linguistic axiom challenged.  <www.grsampson.net/ALac.html>, uploaded 30 Mar 2008.

Sampson, G.R. and Anna Babarczy.  2012.  Introduction to Sampson and Babarczy 2013.  <www.grsampson.net/BGwgCh1.htm>, uploaded 18 Dec 2012.

Sampson, G.R. and Anna Babarczy.  2013.  Grammar without grammaticality.  De Gruyter (Berlin).

Sampson, G.R., D. Gil, and P. Trudgill, eds.  2009.  Language complexity as an evolving variable.  Oxford University Press.

Sapir, Edward.  1921.  Language: an introduction to the study of speech.  Reprinted by Hart-Davis (London), 1963.

Sweet, Henry.  1899.  The practical study of languages: a guide for teachers and learners. Dent (London).