In layman’s terms, if a language has no word for a given concept, then its speakers will not be able to conceive of the concept. Such a condition would render language the ultimate straitjacket; like the state-engineered Newspeak in Orwell’s 1984, the language we speak as our native tongue would hold our thoughts hostage.
The Sapir-Whorf hypothesis was eventually discredited on the strength of obvious contradictory evidence — the fact that, e.g., Mandarin Chinese has no formal grammatical categories for the present, past, and future tenses certainly does not prevent Chinese speakers from conceptualizing the present, past, and future. In like manner, although English, unlike most modern European languages, including French, German, Spanish, and Russian, does not assign grammatical gender to nouns (classing an inanimate object like a bridge as masculine or feminine, for example), most English speakers have no trouble grasping the notion of inherent gender when it is explained to them.
Now, however, some linguists (including Deutscher) are reconsidering whether, at least in some conceptual domains, the grammars and lexicons of languages may indeed constrain the conceptual universes of their speaker communities. There is, for example, the fact that, as the eminent linguist Roman Jakobson pithily put it, “languages differ essentially in what they must convey and not in what they may convey.” That is to say, whereas in English the word “neighbor” is gender-neutral, it is not in French, German, or Spanish. Thus, if an English speaker talks about spending time with his neighbor, he is not compelled by the grammar to reveal whether the neighbor was male or female. But in German, French, or Spanish, he will unavoidably supply this information, since the pairs Nachbar/Nachbarin, voisin/voisine, and vecino/vecina mark the neighbor as male or female in each of those languages.
But can the limitations of grammar also generate limitations on the ability to conceptualize? Deutscher cites examples of languages whose speakers allegedly cannot conceive of “egocentric” directional systems like right/left/front/behind, because their languages use only “geographical” directions like north and south. Speakers of such languages have a difficult time grasping notions like “to the right of my foot,” because their perception of space has always been from a general and invariant point of view rather than a personal, privileged one.
Alluring as such notions may be, they avoid the crux of the matter, a subtle but crucial point about language that most modern linguists are loath to acknowledge (although Roman Jakobson, to his everlasting credit, always insisted upon it): that human thought is not at root linguistic but semiotic.
Semiotics, the study of signs, of things that convey meaning, has always provoked faint embarrassment among linguists eager to portray their discipline as a science as rigorous as physics or logic. Semiotics, at least in recognizable form, originated with Aristotle, was investigated minutely by certain of the medieval schoolmen (Duns Scotus foremost among them), and was nurtured to maturity by magisterial 19th-century thinkers like Charles Sanders Peirce and Ferdinand de Saussure.
Although many things lie in the domain of the semiotic (since most things in the perceptual and conceptual universe convey meaning), language is the archetypal sign system. It is certainly one element of human thought, but by no means the only one. No one, not even the most rational among us, thinks in sentences or even sentence fragments. Our thoughts are instead concatenations of pictures, words, impulses, feelings, and the like, all of which have long since been accounted for by Peirce’s wonderful semiotic theory. There are, it seems, many different categories of signs, from familiar terms like icons and symbols to more obscure categories like rhemes and legisigns. The words, grammar, and other features of language are all semiotic, but thoughts and concepts themselves are, at least according to Peirce, pre-linguistic.
This is why human languages, even superficially dissimilar and unrelated ones, display many universal grammatical categories. Some are absolute — no language, for example, lacks the noun-verb opposition or fails to show predication. Others are implicational — languages whose canonical word order is subject-object-verb (SOV), for example, tend overwhelmingly to use postpositions rather than prepositions. Such phenomena suggest strongly that there are inherent truths stemming from timeless logic that precede, rather than flow from, language.
While languages differ in the lexical items and grammatical categories they select, all languages are also analytical — that is, they can use combinations of other words to describe a concept that does not happen to be a discrete lexical item. Thus, for example, Tamil possesses the word kavadi, for which English has no single-word equivalent. But an English speaker can resort to descriptive language — “a bower-shaped ritual object used in the worship of the South Indian deity Murugan” — and the dilemma is resolved.
Finally, the fact that languages constantly modify their grammar and invent new words while discarding old ones is proof enough that human thought is in no significant degree limited by language. Die Gedanken sind frei (“thoughts are free”) goes the old German song, and the evidence, now as always, suggests that the human mind will always be a free agent, and language its foot servant.