Cognitive Grammar and Symbols
In modern linguistic literature, attempts have often been made to represent meaning in terms of various artificial symbols, features, markers, and the like. But any artificial symbols have to be explained, and to explain them we need some other symbols, and so on, until we reach the level of symbols which are self-explanatory.
The term symbol is used in a wide range of works on early language as a key notion to describe or account for the specific nature of human language, but it has been defined in a number of different ways. Symbols are usually distinguished as:
(a) signs that, unlike indexical and iconic signs, exhibit an arbitrary relation between a meaning and a form used for its expression;
(b) objects whose reference is context-independent, including objects displaced in space and/or time;
(c) a convention, or shared cultural understanding, whereby different symbol users interpret the symbol the same way;
(d) signs that are intentional, or at least "functionally referential"; and
(e) signs that are connected with other signs of the same kind in a network of internal relations, that is, not through relations between respective referents.
Language, too, is represented through symbols. The traditional text and symbols used to represent language can themselves, however, be a barrier to effective communication. Most notably, symbols are used in several models as a way to represent "knowledge".
Some of those models are based on the idea that knowledge is represented in a semantic network of symbolic nodes. Each node is symbolic in that it is assumed to represent a certain stimulus or concept. The properties and meaning of a concept are reflected in the associations in which the corresponding node is involved. As a typical example, the fact that birds typically have wings can be represented by the presence of an association between the node that represents the concept "bird" and a node that represents the concept "wings".
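The bird–wings example can be sketched in code. The following is a minimal illustration of a symbolic semantic network, not a model from the literature: the class name, the symmetric associations, and the concept labels are all assumptions made for the sake of the example.

```python
# A minimal sketch of a symbolic semantic network: each node stands
# for a concept, and knowledge is carried by associations between nodes.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # adjacency map: concept -> set of associated concepts
        self.associations = defaultdict(set)

    def associate(self, concept, other):
        # associations are treated as symmetric here, for simplicity
        self.associations[concept].add(other)
        self.associations[other].add(concept)

    def related(self, concept, other):
        return other in self.associations[concept]

net = SemanticNetwork()
net.associate("bird", "wings")
net.associate("bird", "feathers")

print(net.related("bird", "wings"))  # True: the association encodes the fact
```

The point of the sketch is that the fact "birds have wings" is stored nowhere except in the link between the two dedicated nodes; remove the link and the knowledge is gone.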
But there are other cognitive models that postulate the existence of subsymbolic networks (e.g., McClelland & Rumelhart, 1986). Like symbolic network models, subsymbolic network models postulate that knowledge is represented in a network of interconnected nodes. The crucial difference is that the nodes in a subsymbolic network do not symbolise stimuli, concepts, or events. Instead, knowledge is represented as patterns of activation across a large number of nodes.
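The contrast can also be made concrete. In the illustrative sketch below, no single node stands for a concept; each concept is a pattern of activation across the same shared units, and relatedness falls out of pattern overlap. The specific activation values and the use of cosine similarity are assumptions chosen for illustration, not details of the McClelland and Rumelhart models.

```python
# A sketch of a distributed (subsymbolic) representation: concepts are
# patterns of activation over shared units, not dedicated nodes.
import math

def similarity(a, b):
    # cosine similarity between two activation patterns
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical activation patterns over six shared units.
bird  = [0.9, 0.1, 0.8, 0.0, 0.3, 0.7]
robin = [0.8, 0.2, 0.9, 0.1, 0.2, 0.6]
stone = [0.0, 0.9, 0.1, 0.8, 0.7, 0.0]

# Related concepts share much of their activation pattern.
print(similarity(bird, robin) > similarity(bird, stone))  # True
```

Here no unit "means" bird; semantic similarity is an emergent property of overlapping activation, which is exactly what distinguishes the subsymbolic from the symbolic picture.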
The study of symbols is therefore the exploration of the sociocultural mechanisms by which language contextually indexes power relations, authorities, and identities, an indexing activity that reduces to knowledge encoding.
It is in Sapir’s cosmographic theory that we find the idea that the psychological reality has other important implications for the cultural nature of the human mind. That is, unconscious, historical, and communal symbolic forms are ideologically projected onto the chaotic reality of discursive practices, precipitating the chaotic, rhapsodic, and romantic "flux of things into tangible forms, beautiful and sufficient to themselves" (Sapir, 1949), as they are found in poetically structured discursive texts and rituals.
However, no symbolic–demonstrative field exists in a social vacuum and however strong the impulse to generalize by way of rules, invariant structures, or procedures, contexts vary more radically than so far suggested, and on parameters not yet fully understood. This comes as no surprise to ethnographers, but it poses a real challenge to linguists because linguistic systems and practices articulate precisely and in detail with social phenomena beyond the reach of even the most sophisticated semiotics or "symbology".
As for non-human communication, there is evidence that animals might have a concept of an object that is not present in a given situation of interaction. This raises the question of the extent to which animal behavior involves conceptual as opposed to specific learning, or abstract symbols at a representational level as opposed to contextually appropriate uses. Connected to this is the question of whether categorization processes, such as the ability to form hierarchies of inclusion relations, are entirely dependent on language, or whether such skills could have appeared prior to the emergence of language (see Tomasello and Call for a discussion on this).
But if, as stated above, any artificial symbols have to be explained, and to explain them we need some other symbols, and so on, then we first need to explain the emergence of this recursion mechanism. We need to explain where recursion comes from. A number of answers have been proposed to this question. One line of answers invokes symbolic reference as a prerequisite for the rise of recursion: rather than being a necessary ingredient for the emergence of language, recursion (or "non-degrading recursivity") is suggested to be a consequence of symbolic reference and/or symbolic verbal language (see Deacon 2003). Hauser, Chomsky, and Fitch rightly observe that proponents of the idea that recursion is an adaptation would need to supply additional data or arguments to support this viewpoint, and that existing hypotheses are hard-pressed to explain how "the capacity of language for infinite generativity" would have resulted from a series of gradual modifications. Of course, the capacity of language for infinite generativity is not yet proven, and if we take current computer models into consideration it seems more reasonable to accept that this capacity, though remarkable, is finite.
True, grammatical structures are shown to be symbolic composites of phonological units with units of meaning. Such constructions are typically multi-level, in that they are built up by successively combining smaller symbols into larger ones. At the level of the clause, cognitive linguistics posits that nominals are assigned varying degrees of prominence, either by their position in the clause or by some kind of conventional marking. The key word here is "conventional". Save for this question, a cognitive grammar account highlights the importance, among other things, of semantic correspondences between component structures in building composite structures, a view in which both lexicon and grammar form a continuum of symbolic structures from which to build these composite structures. Granted this continuum (much contested by the timeless fragmented grammar theory), cognitive grammar seems to offer a good theoretical foundation to account for the categorization stemming from polysemy and the quantifying role of classifiers.
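The idea that larger constructions are built by successively combining smaller symbolic units can be sketched as follows. This is a toy illustration only: the pairing of a form pole with a meaning pole is the cognitive-grammar notion, but the `Symbol` class, the `compose` function, and the bracketed meaning notation are assumptions invented for the example.

```python
# A sketch of symbolic composition: each unit pairs a phonological
# form with a meaning, and composite structures are built by
# combining smaller symbolic units into larger ones.
from dataclasses import dataclass

@dataclass(frozen=True)
class Symbol:
    form: str      # phonological pole
    meaning: str   # semantic pole

def compose(head: Symbol, dependent: Symbol) -> Symbol:
    # Composite structure: the forms are concatenated, and the
    # meanings are put in correspondence (dependent modifies head).
    return Symbol(
        form=f"{dependent.form} {head.form}",
        meaning=f"{head.meaning}({dependent.meaning})",
    )

black = Symbol("black", "BLACK")
bird = Symbol("bird", "BIRD")

phrase = compose(bird, black)
print(phrase.form)     # black bird
print(phrase.meaning)  # BIRD(BLACK)
```

Because the output of `compose` is itself a `Symbol`, it can feed further composition, which mirrors the multi-level character of constructions noted above.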
Recent focus on the dissolution of traditional ties to space on the one hand, and on new ways of symbolizing belonging in spatial terms (cf. place-making activities) on the other, also reveals some serious deficiencies in the cognitive grammar approach. Spatially differentiated speech is now known not just to provide the base medium for the interactive constitution of social and cultural systems, themselves perceived in relation to space. Rather, linguistic categorizations and evaluations are an integral part of these systems, and language differences an indexical (socially symbolic) expression of them.
In the end, we still need to answer two questions posed by Geeraerts:
First, what could be the notional, conceptual characterization of abstract entities like word classes?
Second, if you have a grammar with no rules but only symbolic units, how do you achieve compositionality, i.e. how do you ensure that different symbolic units may be combined to build larger units, like phrases or sentences?