Monday, 14 June 2010

A. The Study of Meaning
Although any language is a system for expressing meanings through sounds, the study of meaning itself, that is, semantics, has been one of the most neglected areas in linguistics, for only recently has serious interest been taken in its various problems. Such neglect is understandable, because many serious difficulties arise in discussing meaning.

MEANING: SOME DIFFICULTIES
Philosophers have long puzzled over what words mean, or what they represent, or how they relate to reality, whatever reality is. In what ways do words refer to the things they name? The men of Swift’s grand Academy of Lagado in Gulliver’s Travels preferred to carry around objects on their bodies rather than words in their heads so that they could make direct reference. Are words themselves names of some kind? Noun comes from the Latin nomen, “name”.
Dictionary makers face considerable problems in dealing with meaning. A simple word like table can create all kinds of problems. One may ask whether words have essential meanings, so that table, for example, can be said to have some essential meaning basic to the accidental meanings associated with it in particular contexts. Alternatively, table may have no essential meaning, but rather multiple distinct meanings.
For example, table appears to have distinctly different meanings in water table, dining table, and table an amendment. May these then not be homophonous words table with distinctly different meanings, and therefore three homonyms? Are oculist, eye doctor, and ophthalmologist synonyms? Sphere and globe? Bachelor and unmarried male? How does one define synonyms? Antonyms are words of opposite meaning, for example large and small. But what does opposite itself mean? It means something different in pairs such as dead-alive (mutually exclusive) and hot-cold (ends of a continuum). Buy-sell is also an antonym pair, but in this case the words show a reciprocal relationship. Furthermore, to return to large and small, a large insect is still small, but a small whale is very large. So antonyms require some kind of reference to use and context too.
A word like good creates even more problems. What shared meaning does good have in good beating, good book, good night, good idea, and good woman? What does good mean, and how does a speaker of English ever learn to use the word correctly? Even a word such as left in the expressions the left arm of a chair and the left drawer of a bureau may be said to have different meanings, because left is used in relation to a speaker’s position as he uses the object. The left arm of the chair requires left to refer to the left side of a person as he sits in the chair, but the left drawer of a bureau requires left to refer to the left side of the bureau as a person faces it.
The connotative uses of words add further complications to any theorizing about meaning, particularly their uses in metaphor and poetic language. Any understanding of connotation, metaphor, and poetic language must be based on an understanding of what may be called the “normal use of language”. A serious problem, of course, arises in determining where the norm is located.
Psychologists have tried to assess the availability of certain kinds of responses to objects, to experiences, and to words themselves, particularly in laboratory experiments using verbal stimuli. Philosophers have proposed a variety of systems and theories to account for the data that interest them. Communication experts have developed information theory so that they can use mathematical models to explain exactly what is predictable and what is not predictable when messages are channeled through various kinds of communication networks. From these varying approaches a bewildering array of conceptions of meaning emerges; however, few of these conceptions are relevant to linguistic concerns.

LINGUISTS AND MEANING
Most linguists make no attempt to understand how ideas or words arise in the mind. They are much more concerned with how ideas are expressed in words and combinations of words once they do arise. Nor are they really interested in the quantity and quality of responses to various kinds of verbal and nonverbal stimuli, or in the mathematics of information theory. However, certain problems of meaning are of interest: for example, the role of syntax in meaning, the nature of synonymy (how utterances can be said to have the same meaning), and the question of semantic universals (what characteristics of meaning are common to all languages).
Additional reasons exist for this noticeable reluctance to deal with certain kinds of problems. For a long time, students of language intermingled statements about linguistic forms with statements about meanings: nouns were said to be naming words, sentences to be groups of words that made sense, and interrogatives to be groups of words that asked questions.
Many structural linguists decided to cut a way through the resulting jungle of confusion by removing considerations of meaning as far as possible from their work with linguistic forms and systems. They argued with considerable conviction that since a language is a system of forms used to convey meaning, an investigator who uses meaning to describe the properties of the system cannot hope to come to an adequate understanding of either the formal system of a language or meaning itself, nor to escape circularity and tautology.
Consequently, the prevailing methodological approach to linguistic analysis dictated that the phonological system of a language be described first, and that this description be done without appeal to syntactic or semantic information. Any syntactic description would follow the phonological one, and finally some attempt could be made at observations about meaning.
In such circumstances we should not be surprised to find that very little progress was made in coming to any understanding of meaning. Meaning was important only insofar as knowledge of whether or not two utterances had the same meaning helped linguists decide questions about morphemic cutting and grouping; they ignored most other aspects of meaning in their work. Much linguistic work in the 1940s and 1950s was of this kind, and even the Chomsky revolution of the late 1950s did little initially to change the emphasis. However, a dramatic shift did occur in the mid-1960s, and in the last decade or so questions of meaning have come to the forefront of linguistic investigation.



B. The Types of Meaning
1. Homonymy
Homonyms are unrelated senses of the same phonological word. Some authors distinguish between homographs, senses of the same written word, and homophones, senses of the same spoken word. Here we will generally just use the term homonym. We can distinguish different types depending on their syntactic behavior and spelling, for example:
a. Lexemes of the same syntactic category, and with the same spelling: e.g. lap ‘circuit of a course’ and lap ‘part of body when sitting down’.
b. Of the same category, but with different spelling: e.g. the verbs ring and wring.
c. Of different categories, but with the same spelling: e.g. the verb keep and the noun keep.
d. Of different categories, and with different spelling: e.g. not and knot.

Of course, variations in pronunciation mean that not all speakers have the same set of homonyms. Some English speakers, for example, pronounce the pairs click and clique, or talk and torque, in the same way, making these homonyms which are spelled differently.

2. Polysemy
There is a traditional distinction made in lexicology between homonymy and polysemy. Both deal with multiple senses of the same phonological word, but polysemy is involved if the senses are judged to be related. This is an important distinction for lexicographers in the design of their dictionaries, because polysemous senses are listed under the same lexical entry, while homonymous senses are given separate entries.
Lexicographers tend to use criteria of ‘relatedness’ to identify polysemy. These criteria include speakers’ intuitions and what is known about the historical development of the items. We can take an example of the distinction from Collins Dictionary of the English Language (Hanks 1986: 736) where, as 3.25 below shows, various senses of hook are treated as polysemous and therefore listed under one lexical entry:

3.25 hook (huk) n. 1. a piece of material, usually metal, curved or bent and used to suspend, catch, hold, or pull something. 2. short for fish-hook. 3. a trap or snare. 4. Chiefly U.S. something that attracts or is intended to be an attraction. 5. something resembling a hook in design or use. 6.a. a sharp bend or angle in a geological formation. b. a sharply curved spit of land. 7. Boxing. a short swinging blow delivered from the side with the elbow bent. 8. Cricket. a shot in which the ball is hit square on the leg side with the bat held horizontally. 9. Golf. a shot that causes the ball to go to the player’s left. 10. Surfing. the top of a breaking wave, etc.

Two groups of senses of hooker, on the other hand, as 3.26 below shows, are treated as unrelated, and therefore as a case of homonymy, and given two separate entries:

3.26 Hooker1 (‘hukə) n. 1. a commercial fishing boat using hooks and line instead of nets. 2. a sailing boat of the west of Ireland formerly used for cargo and now for pleasure sailing and racing.

Hooker2 (‘hukə) n. 1. a person or thing that hooks. 2. U.S. and Canadian slang. a. a draught of alcoholic drink. b. a prostitute. 3. Rugby. the central forward in the front row of a scrum whose main job is to hook the ball.

Such decisions are not always clear-cut. Speakers may differ in their intuitions, and, worse, historical facts and speaker intuitions may contradict each other. For example, most English speakers seem to feel that the two words sole ‘bottom of the foot’ and sole ‘flatfish’ are unrelated and should be given separate lexical entries as a case of homonymy. They are, however, historically derived via French from the same Latin word solea ‘sandal’.
So an argument could be made for polysemy. Since in this case, however, the relationship is really in Latin, and the words entered English from French at different times, dictionaries side with the speakers’ intuitions and list them separately. A more recent example is the adjective gay, with the two meanings ‘lively, light-hearted, bright’ and ‘homosexual’. Although the latter meaning was derived from the former in recent history, for many speakers the two senses are quite distinct, and they may seem like homonyms to some, especially younger, English speakers.

3. Synonymy
Synonyms are different phonological words which have the same or very similar meanings. Some examples might be the pairs below:

3.27 couch/sofa boy/lad lawyer/attorney toilet/lavatory large/big

Even these few examples show that true or exact synonyms are very rare. As Palmer (1981) notes, synonyms often have different distributions along a number of parameters. They may have belonged to different dialects and then become synonyms for speakers familiar with both dialects, like Irish English press and British English cupboard. Or the words may belong to different registers, those styles of language (colloquial, formal, literary, etc.) that belong to different situations.
Thus wife or spouse is more formal than old lady or missus. The synonyms may portray positive or negative attitudes of the speaker: for example, naïve or gullible seem more critical than ingenuous. Finally, as mentioned earlier, one or other of the synonyms may be collocationally restricted. For example, the sentences below might mean roughly the same thing in some contexts:

3.28 She called out to the young lad.
3.29 She called out to the young boy.

In other contexts, however, the words lad and boy have different connotations; compare:
3.30 He always was a bit of a lad.
3.31 He always was a bit of a boy.
Or we might compare the synonymous pair in 3.32 with the very different pair in 3.33:
3.32 a big house: a large house
3.33 my big sister: my large sister

As an example of such distributional effects on synonyms, we might take the various words used for the police around the English-speaking world: police officer, cop, copper, etc. Some distributional constraints on these words are regional, like Irish English the guards (from the Irish garda), British English the old bill, or American English the heat. Formality is another factor: many of these words are of course slang terms used in colloquial contexts instead of more formal terms like police officer. Speaker attitude is a further distinguishing factor: some words, like fuzz, flatfoot, pigs, or the slime, reveal negative speaker attitudes, while others like cop seem neutral. Finally, as an example of collocational effects, one can find speakers saying a police car or a cop car, but hardly a guard car or an old bill car.

4. Opposites (Antonymy)
In traditional terminology, antonyms are words which are opposite in meaning. It is useful, however, to identify several different types of relationship under a more general label of opposition. There are a number of relations which seem to involve words which are at the same time related in meaning yet incompatible or contrasting; we list some of them below.
Simple antonyms. This is a relation between words such that the positive of one implies the negative of the other. The pairs are also sometimes called complementary pairs or binary pairs. In effect, the words form a two-term classification. Examples would include:

3.27 dead/alive (of e.g. animals)
pass/fail (a test)
hit/miss (a target)
So, using these words literally, dead implies not alive, etc., which explains the semantic oddness of sentences like:

3.28 My pet python is dead but luckily it’s still alive.

Of course, speakers can creatively alter these two-term classifications for special effects: we can speak of someone being half dead; or we know that in horror films the undead are not alive in the normal sense.
Gradable antonyms. This is a relationship between opposites where the positive of one term does not necessarily imply the negative of the other, e.g. rich/poor, fast/slow, young/old, beautiful/ugly. This relation is typically associated with adjectives and has two major identifying characteristics: firstly, there are usually intermediate terms, so that between the gradable antonyms hot and cold we can find:

3.29 hot (warm tepid cool) cold
This means, of course, that something may be neither hot nor cold. Secondly, the terms are usually relative, so a thick pencil is likely to be thinner than a thin girl, and a late dinosaur fossil is earlier than an early Elvis record. A third characteristic is that in some pairs one term is more basic and common: for example, of the pair long/short, it is more natural to ask of something how long is it? than how short is it? For other pairs there is no such pattern: how hot is it? and how cold is it? are equally natural, depending on context. Other examples of gradable antonyms are tall/short, clever/stupid, near/far, and interesting/boring.
Reverses. The characteristic reverse relation holds between a term describing movement in one direction and another term describing the same movement in the opposite direction; for example, the terms push and pull on a swing door, which tell you in which direction to apply force. Other such pairs are come/go, go/return, and ascend/descend. When describing motion, the following can be called reverses: (go) up/down, (go) in/out, (turn) right/left.
By extension, the term is also applied to any process which can be reversed: so other reverses are inflate/deflate, expand/contract, fill/empty, and knit/unravel.
Converses. These are terms which describe a relation between two entities from alternate viewpoints, as in the pairs:

3.30 own/belong to
above/below
employer/employee

Thus if we are told Alan owns this book, then we know automatically that this book belongs to Alan. Or from Helen is David’s employer we know that David is Helen’s employee. Again, these relations are part of a speaker’s semantic knowledge and explain why the two sentences below are paraphrases, i.e. can be used to describe the same situation:
3.31 My office is above the library.
3.32 The library is below my office.
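The converse relation above can be sketched in a few lines of code: each converse pair is an argument-swapping relation, so the paraphrase is obtained by swapping the two entities and replacing the term with its converse. The pairings here are just the examples from the text; this is an illustrative toy, not a linguistic resource.

```python
# Toy sketch: converse pairs as argument-swapping relations.
CONVERSES = {
    "own": "belong to",
    "above": "below",
    "employer of": "employee of",
}

def converse(pred, a, b):
    """Given the relation 'a pred b', return the equivalent
    statement of the form 'b CONVERSE(pred) a'."""
    return (CONVERSES[pred], b, a)

# 'My office is above the library' paraphrases as
# 'The library is below my office':
print(converse("above", "my office", "the library"))
```

This captures why the two sentences in 3.31 and 3.32 describe the same situation: one is derived from the other by a systematic swap.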

Taxonomic sisters. The term antonymy is sometimes used to describe words which are at the same level in a taxonomy. Taxonomies are classification systems; we take as an example the color adjectives in English, and give a selection below:
3.33 red orange yellow green blue purple brown

We can say that the words red and blue are sister-members of the same taxonomy and therefore incompatible with each other. Hence one can say:
3.33 His car isn’t red, it’s blue.

Other taxonomies might include the days of the week: Sunday, Monday, Tuesday, etc., or any of the taxonomies we use to describe the natural world, like types of dog: poodle, setter, bulldog, etc. Some taxonomies are closed, like the days of the week: we can’t easily add another day without changing the whole system. Others are open, like the flavors of ice cream sold in an ice cream parlor: someone can always come up with a new flavor and extend the taxonomy.
In the next section we see that taxonomies typically have a hierarchical structure, and thus we will need terms to describe vertical relations, as well as the horizontal ‘sisterhood’ relation we have described here.


5. Hyponymy
Hyponymy is a relation of inclusion. A hyponym includes the meaning of a more general word, e.g.

3.34 dog and cat are hyponyms of animal.
sister and mother are hyponyms of woman.

The more general term is called the superordinate or hypernym. Much of the vocabulary is linked by such systems of inclusion, and the resulting semantic networks form the hierarchical taxonomies mentioned above. Some taxonomies reflect the natural world, like 3.31 below, where we only expand a single line of the network:


3.31
                    bird

    crow            hawk            duck            etc.

            kestrel     sparrowhawk     etc.



Here kestrel is a hyponym of hawk, and hawk a hyponym of bird. We assume the relationship is transitive, so that kestrel is a hyponym of bird.
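The transitivity just described can be made concrete in a short sketch. The taxonomy below is a hypothetical fragment based only on the example above, stored as a hyponym-to-hypernym mapping; checking an indirect hyponym is simply a walk up the chain of hypernyms.

```python
# Hypothetical fragment of the bird taxonomy: child -> parent
# (i.e. hyponym -> immediate hypernym).
HYPERNYM = {
    "hawk": "bird",
    "duck": "bird",
    "kestrel": "hawk",
    "sparrowhawk": "hawk",
}

def is_hyponym(word, general):
    """True if `word` is a direct or indirect hyponym of `general`.
    Because hyponymy is transitive, walking up the parent chain
    suffices."""
    while word in HYPERNYM:
        word = HYPERNYM[word]
        if word == general:
            return True
    return False

print(is_hyponym("kestrel", "bird"))   # True: kestrel -> hawk -> bird
print(is_hyponym("duck", "hawk"))      # False: duck is a sister of hawk
```

The same parent-chain walk would work for any of the taxonomies mentioned above, open or closed.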
Another lexical relation that seems like a special sub-case of taxonomy is the ADULT-YOUNG relation, as shown in the following examples:

3.35 Dog Puppy
Cat Kitten
Cow Calf
Pig Piglet
Duck Duckling
Swan Cygnet

A similar relation holds between MALE-FEMALE pairs:
3.32 Dog Bitch
Tom Queen
Bull Cow
Hog Sow
Drake Duck
Cob Pen

As we can see, there are some asymmetries in this relation. Firstly, the relationship between the MALE-FEMALE terms and the general term for the animal varies: sometimes there is a distinct term, as in pig-hog-sow and swan-cob-pen; in other examples the male name serves as the general term, as in dog, while in others it is the female name, e.g. cow and duck. There may also be gaps: while tom or tomcat is commonly used for male cats, for some English speakers there doesn’t seem to be an equivalent colloquial name for female cats (though others use queen, as above).

C. Word, Sentence, and Utterance Meaning
c.1. Word meaning and sentence meaning

If an independent component of semantics is identified, one central issue is the relationship between word meaning and sentence meaning. Knowing a language, especially one’s native language, involves knowing thousands of words. As mentioned earlier, we can call the mental store of these words a lexicon, making an overt parallel with the lists of words and meanings published as dictionaries. We can imagine the mental lexicon as a large but finite body of knowledge, part of which must be semantic. This lexicon is not completely static, because we are continually learning and forgetting words. It is clear, though, that at any one time we hold a large amount of semantic knowledge in memory.
Phrases and sentences also have meaning, of course, but an important difference between word meaning, on the one hand, and phrase and sentence meaning, on the other, concerns productivity. It is always possible to create new words, but this is a relatively infrequent occurrence. On the other hand, speakers regularly create sentences that they have never used or heard before, confident that their audience will understand them. Noam Chomsky in particular has commented on the creativity of sentence formation (for example Chomsky 1965: 7-9). It is one of generative grammar’s most important insights that a relatively small number of combinatory rules may allow speakers to use a finite set of words to create a very large, perhaps infinite, number of sentences. To allow this, the rules for sentence formation must be recursive, allowing repetitive embedding or coordination of syntactic categories.
See Lyons (1968: 221-2) for discussion of such recursive rules in syntax. This insight has implications for semantic description. Clearly, if speakers can make up novel sentences and these sentences are understood, then they obey the semantic rules of the language. So the meanings of sentences cannot be listed in a lexicon like the meanings of words: they must be created by rules of combination too. Semanticists often describe this by saying that sentence meaning is compositional. This term means that the meaning of an expression is determined by the meaning of its component parts and the way in which they are combined.
This brings us back to our question of levels. We see that meaning is in two places, so to speak, in a model of grammar: a more stable body of word meaning in the lexicon, and the limitless composed meanings of sentences. How can we connect semantic information in the lexicon with the compositional meaning of sentences? One approach is to let the compositional work be done by the syntactic rules, as in Chomsky’s (1965) model of grammar, shown schematically in figure 1.2. In this model the phrase structure rules build sentences and thus provide the link between the individual words in the lexicon and the semantic component, which then combines the meanings of the individual words into overall sentence meanings. This was in essence the approach of generative grammar following Katz and Fodor (1963) and Chomsky (1965), though there are some differences between these works. We look at such an approach in more detail when we discuss meaning components.
We can see that it is the syntactic rules which are the compositional engine in this system, providing the bridge between word meanings in the lexicon and sentence meaning.
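The idea of compositionality can be illustrated with a deliberately simple sketch. All the denotations below are invented for illustration: a finite lexicon stores word meanings (here, sets of individuals), while phrase meanings are not stored anywhere, being computed on demand by a rule of combination (here, set intersection for adjective-noun phrases).

```python
# Toy lexicon: word meanings as sets of individuals (all invented).
LEXICON = {
    "dog":   {"Fido", "Rex"},     # nouns denote sets of individuals
    "cat":   {"Tom"},
    "small": {"Fido", "Tom"},     # adjectives denote sets too
}

def adj_noun(adj, noun):
    """Meaning of [Adj Noun] = the individuals in both sets.
    The phrase meaning is computed by rule, not listed in the lexicon."""
    return LEXICON[adj] & LEXICON[noun]

print(adj_noun("small", "dog"))   # {'Fido'}
```

However crude, this shows the division of labor: a finite stock of stored word meanings plus rules of combination yields meanings for phrases that were never listed anywhere.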

c.2. Utterance, sentence, and proposition

The three terms utterance, sentence, and proposition are used to describe different levels of language. The most concrete is the utterance: an utterance is created by speaking (or writing) a piece of language. If I say Ontogeny recapitulates phylogeny, this is one utterance. If another person in the same room also says Ontogeny recapitulates phylogeny, then we would be dealing with two utterances.
Sentences, on the other hand, are abstract grammatical elements obtained from utterances. Sentences are abstract because if a third and a fourth person in the room also say Ontogeny recapitulates phylogeny with the same intonation, we will want to say that we have met four utterances of the same sentence. In other words, sentences are abstracted, or generalized, from actual language use. One example of this abstraction is direct quotation. If someone reports He said ‘Ontogeny recapitulates phylogeny’, she is unlikely to mimic the original speaker exactly.
Usually, the reporter will use her normal voice and thus filter out certain types of information: the differences in pitch levels between men, women, and children; perhaps some accent differences due to regional or social variation; and certainly those phonetic details which identify individual speakers. Speakers seem to recognize that at the level of the sentence these kinds of information are not important, and so discard them. So we can look at sentences from the point of view of the speaker, where they are abstract elements to be made real by uttering them; or from the hearer’s point of view, where they are abstract elements reached by filtering out certain kinds of information from utterances.
One further step of abstraction is possible for special purposes: to identify propositions. In trying to establish rules of valid deduction, logicians discovered that certain elements of grammatical information in sentences were irrelevant; for example, the differences between active and passive sentences:

3.36 Caesar invaded Gaul.
3.37 Gaul was invaded by Caesar.
From a logician’s perspective, these sentences are equivalent, for whenever 3.36 is true, so is 3.37. Thus the grammatical differences between them will never be significant in a chain of reasoning and can be ignored. Other irrelevant information (for these purposes) includes what we will later call information structure, i.e. the difference between the following sentences:

3.38 It was Gaul that Caesar invaded.
3.39 It was Caesar that invaded Gaul.
3.40 What Caesar invaded was Gaul.
3.41 The one who invaded Gaul was Caesar.
These sentences seem to share a description of the same state of affairs. Once again, if one is true all are true; if one is false all are false. To capture this fact, logicians identify a common proposition. Such a proposition can be represented in various special ways to avoid confusion with the various sentences which represent it, e.g. by using capitals:

3.42 CAESAR INVADED GAUL.
Thus the proposition underlying the sentence the war ended might be written:

3.43 THE WAR ENDED.
Logicians commonly use formulae for propositions in which the verb is viewed as a function and its subject and any objects as arguments of the function. Such formulae often delete verb endings, articles, and other grammatical elements, so that corresponding to 3.42 and 3.43 we would get 3.44 and 3.45 below:

3.44 Invade (Caesar, Gaul)
3.45 End (war)
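The predicate-argument format in 3.44 and 3.45 can be mimicked in code. The sketch below is deliberately naive: it only handles the two sentence patterns from the Caesar examples and uses a crude de-inflection, but it shows the key point that active and passive sentences map to one and the same proposition.

```python
# Toy sketch: map a sentence to a (PREDICATE, agent, patient) tuple.
# Handles only "X verbed Y." and "Y was verbed by X." patterns.
def proposition(sentence):
    words = sentence.rstrip(".").split()
    if "was" in words and "by" in words:      # passive: "Y was verbed by X"
        patient, verb, agent = words[0], words[2], words[4]
    else:                                     # active: "X verbed Y"
        agent, verb, patient = words[0], words[1], words[2]
    verb = verb.removesuffix("d")             # crude: invaded -> invade
    return (verb.upper(), agent, patient)

p1 = proposition("Caesar invaded Gaul.")
p2 = proposition("Gaul was invaded by Caesar.")
print(p1)        # ('INVADE', 'Caesar', 'Gaul')
print(p1 == p2)  # True: both sentences express the same proposition
```

Real semantic analysis requires genuine parsing and morphology, of course; the point here is only the shape of the representation, with the verb as function and its subject and object as arguments.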

Some semanticists have borrowed from logicians both this notion of the proposition and the use of logical formulae. We will see various applications of such formulae. As we shall see, some linguists employ this notion of proposition in their semantic analyses, often to identify a description of an event or situation which might be a shared element in different sentences.
So, for example, the statement Joan made the sorbet, the question Did Joan make the sorbet?, and the command Joan, make the sorbet! might be seen to share a propositional element: JOAN MAKE THE SORBET. In this view, these different sentences allow the speaker to do different things with the same proposition: to assert it as a past event; to question it; or to request someone to bring it about.
Propositions, then, can be a way of capturing part of the meaning of sentences. They are more abstract than sentences because, as we saw in examples 3.36 and 3.37 above, the same proposition can be represented by several different sentences. Moreover, in non-statements like questions, orders, etc., the proposition cannot be the complete meaning, since such sentences also include an indication of the speaker’s attitude to the proposition.
To sum up: utterances are real pieces of speech. By filtering out certain types of (especially phonetic) information we can get to abstract grammatical elements, sentences. From sentences we can get to propositions, which are descriptions of states of affairs and which some writers see as a basic element of sentence meaning.





Chapter III
Summary

• We have taken a brief look at the task of establishing semantics as a branch of linguistics. We identified three challenges to doing this: circularity, context, and the status of linguistic knowledge. We will see examples of these problems and proposed solutions as we proceed through this paper. We noted that establishing a semantic component in linguistic theory involves deciding how to relate word meaning and sentence meaning.
• We have looked at some important features of word meaning. We have discussed the difficulties linguists have had in coming up with an airtight definition of the unit word, although speakers happily talk about words and consider themselves to be talking in them. We have seen the problems involved in divorcing word meaning from contextual effects, and we discussed lexical ambiguity and vagueness. We have also looked at several types of lexical relation: homonymy, polysemy, synonymy, opposites, hyponymy, etc.
• Utterances are real pieces of speech. By filtering out certain types of (especially phonetic) information we can get to abstract grammatical elements, sentences. We can get to propositions, which are descriptions of states of affairs and which some writers see as a basic element of sentence meaning.




Bibliography

• Saeed, John I. 1997. Semantics. Cambridge, MA: Blackwell Publishers Ltd.
• Wardhaugh, Ronald. 1977. Introduction to Linguistics. New York: McGraw-Hill.
