Construction Of Languages


Why study artificial languages when there are about seven thousand living languages in the world, not all of them adequately described? And if we do study artificial ones, how should we go about it? These questions are addressed in Alexander Piperski's book "The Construction of Languages: From Esperanto to Dothraki" (Alpina Non-Fiction), which was shortlisted for the 2017 "Enlightener" prize for popular science literature. We publish an excerpt from the book's Introduction.



Constructed languages and linguistics

There are about 7,000 languages in the modern world. That is so many that no single person can fully master even a small share of this diversity. The famous polyglot Cardinal Mezzofanti (1774-1849) knew three dozen European and Middle Eastern languages and a dozen more exotic ones (although how well he really knew those is harder to verify), and even that is less than 1% of the world's linguistic diversity. Moreover, most of the languages Mezzofanti knew belonged to a single language family, Indo-European: Italian, Latin, Portuguese, Spanish, Albanian, Greek, English, Russian, and so on.

But although no one can command the full variety of the world's languages, that variety is not limited to natural languages: many people engage in linguistic construction, inventing new languages of their own - conlangs (from English constructed languages). Some do it for fun, others for the benefit of all; some invent a language and forget about it within a week, others go on refining their invention all their lives; some leave the description of their new language in a desk drawer, while others publish books in the hope of wide circulation. Many of these languages have become firmly embedded in our culture: everyone has heard something about Esperanto or Klingon.

But it must be said right away that linguistic science does not usually consider artificial languages a worthy object of research. Moreover, many linguists regard even the knowledge of artificial languages with skepticism. If it becomes known that someone speaks Esperanto, such linguists begin to shake their heads condescendingly - well, everyone has their little weaknesses (and if they dislike the person, they say: "Well, that explains everything"). The traditional attitude of science toward artificial languages is clearly illustrated by a fragment of the article "Artificial Language" in the encyclopedic dictionary "Language and Linguistics: The Key Concepts", written by the famous British-American linguist Larry Trask in collaboration with Peter Stockwell:

Many of the early artificial languages were created by philosophers and were a priori in nature; that is, they did not draw on existing languages but were constructed according to arbitrary principles that pleased their inventors. Many were conceived as "universal" or "logical" languages and rested on grandiose schemes for classifying all human knowledge. All these projects were completely unworkable. Among the most successful attempts are the inventions of the Frenchman Descartes, the Scotsman Dalgarno, and the Englishman Wilkins.

Since the 19th century, artificial languages have usually been a posteriori, that is, based in one way or another on existing ones. They have been composed by linguists, logicians, priests, politicians, oculists, and businessmen … In 1880 the German priest Schleyer published a draft of the language Volapük, a clumsy and complex mixture of several European languages with cumbersome endings of his own invention; it turned out to be something like Swedish spiced with a dash of madness, yet over the years this language attracted hundreds of thousands of followers. In 1887 the Polish oculist Zamenhof presented to the public a draft of Esperanto, a simpler language likewise cut from pieces of several European languages, and it became the most studied and most widely used artificial language in the world.

Esperanto nevertheless has a number of quirks, and so simplified versions of the language began to be created: Ido, Esperantido, Esperantuisho, and Modern Esperanto. Their success was minimal. The Danish linguist Jespersen created a heavily modified version of Esperanto called Novial, which did not arouse much interest … In the 20th century dozens of other projects were proposed, only to disappear without a trace … Finally, loglangs (logical languages), and in particular programming languages, may have practical value, but most of them (for example, Loglan and its descendant Lojban, which is maintained by the Logical Language Group) demonstrate a fundamental misunderstanding of what a language is and what it is for.

The reader is probably tired of this long quotation by now, so I will not cite other articles from the dictionary for comparison. I will only note that they are entirely free of such mockery and schadenfreude. It is telling that this article is almost the only one in which the characters are contemptuously referred to by surname alone: even in the article "Linguistic Myths", where Jespersen is debunked, we are told that his first name was Otto.

But if you look closely, it turns out that artificial languages can be useful for linguistic theory - and even Trask and Stockwell, once their text is cleansed of its layers of bile, hint at what this benefit might be. Granted, the early philosophical languages were unsuccessful, Volapük is clumsy, and the creator of Loglan fundamentally misunderstood what language is. But who are the people who do understand it correctly? And can they understand it if they work only with natural languages - knowing what languages are, but unable to step outside them and imagine what languages are not?

The Russian linguist Lev Shcherba wrote:

Without waiting for some writer to use this or that phrase or combination, one can combine words arbitrarily and, systematically replacing one with another, changing their order, intonation, and so on, observe the resulting differences in meaning - which is what we constantly do when we write … After all, it must be borne in mind that the "texts" of linguists usually contain no unsuccessful utterances, whereas a very important component of linguistic material consists precisely of unsuccessful utterances marked "one does not say that", which I will call "negative language material". The role of this negative material is enormous and, as far as I know, has not yet been fully appreciated in linguistics.

Shcherba has in mind a very simple idea, one that in fact underlies all of modern syntax: we should investigate not only sentences that strike us as correct, but also slightly altered versions of them - "broken", incorrect ones. A sentence that native speakers find unacceptable makes the rule it violates far more prominent and noticeable. For example, if we study agreement in Russian, we can look at sentences (1-3) and see that the first two are natural, while the third is clearly incorrect (incorrectness is marked with an asterisk; the examples are taken from an article by Olga Pekelis):

  • (1) Serezha's tone and expression instantly changed (with the verb in the masculine singular, agreeing with 'tone').
  • (2) Serezha's tone and expression instantly changed (with the verb in the plural).
  • (3) * Serezha's tone and expression instantly changed (with the verb in the neuter singular, agreeing with 'expression').

From this we can conclude that agreement in such sentences can follow either the plural or the features of the first of the conjuncts joined by and, but not the features of the second: the masculine gender can be taken from tone, but the neuter gender cannot be taken from expression. Having observed this, we can study agreement further and ask what else it depends on: for example, would anything change if the predicate were placed after the subjects? We can check in a large collection of texts how often each option is used. We can try to give the phenomenon some theoretical interpretation. But, be that as it may, the first step was to construct something that does not occur - and then to ask: why not?

Artificial languages play the same role against the background of natural ones. Sometimes, looking at an artificial language, a linguist exclaims: "This is nonsense! This doesn't happen!" But at that very moment it is worth stopping to think: why doesn't it happen?

In 2000 the well-known Dutch linguist Marc van Oostendorp wrote an article, "Artificial languages and linguistic theory", in which he proposed distinguishing not only natural and artificial languages but also real, potential, and impossible ones. Any natural language is real by definition, but with artificial languages things are not so simple: they can belong to any of the three categories. We are not yet able to answer fully the question of what is possible and what is not. It is unclear what features an impossible language would have, and it is equally unclear what the impossibility of a language even means: that Homo sapiens could not learn it? Could not speak it? Could learn and speak it, but would not pass it on to children? Despite all these difficulties and ambiguities, the question is at least worth thinking about.

Van Oostendorp mentions the Spocanian language (Spokaans), invented by his compatriot Rolandt Tweehuysen. "Spocanian has features that are not attested at all in other (natural) languages," van Oostendorp writes. As an example of an impossible trait, he cites the tight link between verb tense and word order: in Spocanian, in the present tense (which Tweehuysen calls "neutral") the subject comes first, then the predicate, then the object; in the past ("definite") tense, first the subject, then the object, then the predicate; and in the future, first the predicate, then the subject, and then the object:

  • (4) Miko trempe ef român - 'Miko is reading a novel'.
  • (5) Miko ef român trempe - 'Miko read the novel'.
  • (6) Trempe Miko ef român - 'Miko will read the novel'.
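Tense marking by constituent order alone is easy to model. The toy sketch below (an illustration of the principle only, not a real grammar of the language; the vocabulary items are taken from examples (4)-(6) above) arranges the same three constituents according to the chosen tense:

```python
# Toy model of Spocanian-style word order, where constituent order,
# not the verb form, signals tense (illustration only).
ORDERS = {
    "present": ("S", "V", "O"),  # "neutral" tense: Subject-Verb-Object
    "past":    ("S", "O", "V"),  # "definite" tense: Subject-Object-Verb
    "future":  ("V", "S", "O"),  # Verb-Subject-Object
}

def render(subject: str, verb: str, obj: str, tense: str) -> str:
    """Arrange the three constituents according to the tense."""
    parts = {"S": subject, "V": verb, "O": obj}
    return " ".join(parts[slot] for slot in ORDERS[tense])

print(render("Miko", "trempe", "ef român", "present"))  # Miko trempe ef român
print(render("Miko", "trempe", "ef român", "past"))     # Miko ef român trempe
print(render("Miko", "trempe", "ef român", "future"))   # trempe Miko ef român
```

The point of the sketch is that the lexical material never changes; the only thing that distinguishes the three sentences is the permutation itself.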

From the standpoint of the natural languages we are used to, this is very strange. It seems that this just does not happen. But why is it any stranger than a situation in which the verb agrees in person in the present tense but in gender in the past, as in Russian (ja čitaju 'I read', ty čitaeš' 'you read' ~ on čital 'he read', ona čitala 'she read')? And in Georgian, say, the cases used to mark the participants in a situation depend on the tense - which is no longer so strikingly different from the situation in Spocanian.

Moreover, if we broaden our horizons, it turns out that this is not an impossible trait at all. For example, this is exactly how tense works in the Attié language (Côte d'Ivoire). Here are a few sentences in this language (in simplified transcription), which show that in the present tense the word order is Subject - Object - Predicate, while in the past tense it is Subject - Predicate - Object:

(7) aduu fje
Adu fish smoke
'Adu smokes fish.'
(8) aduu fi
Adu smoke fish
'Adu smoked the fish.'
(9) aduu japi fe
Adu Yapi tire
'Adu tires Yapi.'
(10) aduu fe japi
Adu tire Yapi
'Adu tired Yapi.'

So it turns out that even the impossible sometimes proves possible in fact - which means we should not be so skeptical of artificial languages. At the most unexpected points of grammar they sometimes correspond fully to reality, even if that reality was unknown both to their creators and to their critics.

That is why in this book I will take the position of an interested observer rather than a caustic critic: if an artificial language is in some way unlike natural ones, that is not a reason to throw stones at its author but, on the contrary, interesting food for thought. True, van Oostendorp stresses that linguists, striving for the natural-science ideal - to study only spontaneously developing objects that obey immutable laws - see no point in studying manifestations of an individual's free will, to which artificial languages belong. On the other hand, manifestations of free human will are actively studied by literary criticism, art history, and musicology. Perhaps, so as not to trouble linguists and not to force researchers and creators of artificial languages to prove anything to them, it is worth recognizing the study of artificial languages as simply another, separate field of knowledge. But, be that as it may, it seems that linguistics ought to lead this field by the arm, like a younger sister, rather than contemptuously turn its back on her.

What kind of artificial languages are there?

The world of artificial languages is very diverse, and so one immediately wants to bring order to it and make it surveyable. Obviously, this requires building a classification of such languages. Such classifications usually involve two parameters: the purpose for which a language was created, and whether it was created from scratch or on the basis of existing languages.

Let us begin with the goals, which can be quite varied. One of the most popular motivations for creating artificial languages is to improve human thinking by devising a new, coherent, and logical language. This goal is closely related to the so-called hypothesis of linguistic relativity, or Sapir-Whorf hypothesis: language affects the thinking of the people who speak it. It was formulated in the 20th century, but linguists and people interested in languages had pondered the problem in one way or another long before. If the hypothesis of linguistic relativity is valid and language does affect thinking, does it not follow that the shortcomings of natural languages hold back our intellectual development and hamper our thought processes? And if so, why not invent a language that is arranged strictly logically and has no such flaws? It is these ideas - the striving to improve language and, through it, ultimately thinking - that usually guide the creators of the languages called philosophical or logical. One also sometimes encounters the term engelangs, from English engineered languages. Philosophical and logical languages are, as a rule, known only in rather narrow circles; of the recent inventions of this kind, Loglan and Lojban are mentioned most often.

But linguistic construction need not aspire to create an ideal; it may pursue a much more practical goal: ensuring mutual understanding between people. Clearly, with so many languages in the world, situations constantly arise in which people need to interact with those who do not speak their native language. Sometimes an existing language takes on the role of intermediary: for example, more than a hundred and fifty languages are spoken on the territory of Russia, but their speakers usually communicate with one another in Russian; at international conferences people usually speak English, whatever their native languages. However, one may feel that it is not entirely fair to single out one language from the many and give it special status. This consideration drives another direction of linguistic construction - the creation of international auxiliary languages, or auxlangs (from English auxiliary language). The most famous and popular representative of such languages is, of course, Esperanto. The science that studies them is called interlinguistics.

In addition, artificial languages can be created simply for fun or for artistic needs. If you are writing a science-fiction novel about the inhabitants of a distant planet, it is, strictly speaking, rather odd for them to speak Russian, English, or some other terrestrial language. Of course, few readers will think about this, but if the author does, he may want to create a special language for his characters, or at least sketch a few strokes to show that one exists. (An objection may arise here: if we are talking about the inhabitants of a distant planet, why should they use a communication system resembling human language at all, rather than communicate by electrical signals or by some sixth sense inaccessible to us? But that question would probably lead us too far afield.) A great many such languages, spoken by the inhabitants of fictional worlds, have been invented. They are called artistic languages, or artlangs (from English artistic language). Some of them are well developed and have a detailed grammar, such as Klingon in Star Trek or Quenya and Sindarin in Tolkien's works; others are worse off - sometimes the authors limit themselves to a couple of odd-sounding words.

Of course, the boundaries between the types are rather blurred. The "star language" of Velimir Khlebnikov, for example, can be considered philosophical, since it claims to penetrate the deepest secrets of the universe, or artistic, since it became known first of all as a constituent element of the poet's works of art.

The second parameter by which artificial languages can be classified, besides the purpose of their creation, is where their inventors take their lexical and grammatical material from. There are two main routes: one can take one or more existing languages as a basis, or one can invent everything from scratch. Languages based on others are called a posteriori (after all, when we judge something a posteriori, we rely on already known facts and experience). Conversely, languages invented out of nothing are called a priori (when we judge a priori, we rely on nothing given in advance). Esperanto, whose vocabulary is based on European languages, is an example of an a posteriori language, while logical and philosophical languages are most often a priori. Of course, there are intermediate cases here too - for example, when some of the words of a language are taken from existing sources and some are created from scratch.

All artificial languages can be located in a space with two dimensions: the goal of creation and the source of linguistic material. But it is perhaps worth mentioning some languages that lie not far from them - the objects of study of quite traditional linguistics. First, there are artificial literary standards, created by the deliberate decision of standardizers on the basis of several dialects - such as, for example, the modern German literary language. These are essentially a posteriori auxiliary languages, with the sole difference that they are intended not for communication between different peoples but for communication among members of one people who speak different dialects. Second, there are reconstructed ancient languages (it is no coincidence that the words reconstruction and linguistic construction share the same Latin root), such as Proto-Indo-European. It would not be a great exaggeration to say that such reconstructions usually have far more regular and transparent grammar than observed languages, since the very procedure of comparative-historical reconstruction involves the step-by-step removal of inconsistencies. Third, there are languages devised specifically for scientific needs - for describing semantics or for linguistic experiments. We will talk about all of this too, although such topics are not usually raised in books about artificial languages.
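The two-dimensional space of purposes and sources can be pictured as a small lookup table. The placements below follow the text (Esperanto and Volapük as a posteriori auxlangs, Loglan and Lojban as a priori logical languages); Klingon's placement on the source axis and the `select` helper are illustrative assumptions, not part of any established taxonomy:

```python
# A sketch of the two-axis classification of conlangs:
# axis 1 = purpose, axis 2 = source of linguistic material.
CONLANGS = {
    "Esperanto": {"purpose": "auxlang", "source": "a posteriori"},
    "Volapük":   {"purpose": "auxlang", "source": "a posteriori"},
    "Loglan":    {"purpose": "logical", "source": "a priori"},
    "Lojban":    {"purpose": "logical", "source": "a priori"},
    "Klingon":   {"purpose": "artlang", "source": "a priori"},  # assumed placement
}

def select(purpose=None, source=None):
    """Return the names of languages matching the given axis values."""
    return sorted(
        name
        for name, axes in CONLANGS.items()
        if (purpose is None or axes["purpose"] == purpose)
        and (source is None or axes["source"] == source)
    )

print(select(purpose="auxlang"))  # ['Esperanto', 'Volapük']
print(select(source="a priori"))  # ['Klingon', 'Loglan', 'Lojban']
```

Intermediate cases, which the text mentions, would of course need something richer than a single label per axis; the table only illustrates how the two parameters cross-classify.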

Classification by function is closely related to the chronology of the development of artificial languages, but does not correspond to it exactly. It cannot be strictly asserted that in the history of linguistic construction one type successively replaced another, but it can be said that the European Middle Ages, and especially the early modern period, was a time of interest in philosophical languages. The time of standardization and the creation of literary norms falls roughly in the second half of the second millennium: in some places these processes occurred earlier, in others later. The end of the 19th century and the first half of the 20th became the heyday of international auxiliary languages, which is easily explained: it was precisely then that technological progress allowed people from different countries to communicate with one another far more actively. Around the middle of the 20th century came a peak of interest in universal pictographic languages. The languages of works of art have existed in one form or another for a long time, but they became especially numerous from the middle of the 20th century onward - thanks, first, to the popularity of J. R. R. Tolkien and, second, to the broad enthusiasm for science fiction, which demanded invented languages for the inhabitants of other worlds. Finally, the engineered languages used in linguistics have been around for the past two hundred years.

Of course, since I allow myself to broaden the concept of an artificial language, one could also declare artificial the special-purpose languages developed in other sciences and areas of human activity: programming languages, musical notation, and so on. But since we must draw the line somewhere, I will not go beyond the limits of linguistics proper - although musical notation will come up in Chapter 3, not in its own right but in connection with the artificial language Solresol.
