can better deal with the fact that subtle differences in word meaning correlate with other differences in the syntactic structure that the word appears in (Levin et al., 1991). This is done by examining the internal structure of words. The small parts that make up the internal structure of words are termed semantic primitives (Jackendoff, 1990).
Another linguistic theory investigating word meaning holds that the meaning of a word is fully reflected by its context: the meaning of a word is constituted by its contextual relations. Distinctions are therefore made between degrees of participation as well as modes of participation. To accomplish this, any part of a sentence that bears a meaning and combines with the meanings of other constituents is labeled a semantic constituent. Semantic constituents that cannot be broken down into more elementary constituents are labeled minimal semantic constituents (Cruse, 1998).
Computational semantics is focused on the processing of linguistic meaning. To do this, concrete algorithms and architectures are described. Within this framework the algorithms and architectures are also analyzed in terms of decidability, time/space complexity, the data structures they require, and communication protocols (Nerbonne, 1996).
Terms such as semantic network and semantic data model are used to describe particular types of data models characterized by the use of directed graphs in which the vertices denote concepts or entities in the world, and the arcs denote relationships between them.
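As a minimal illustration of this idea, a semantic network can be sketched as a directed labeled graph in which vertices are concepts and arcs are relationships. The concept and relation names below are invented for the example, not drawn from any standard ontology.

```python
# Minimal sketch of a semantic network as a directed labeled graph.
# Vertices denote concepts; labeled arcs denote relationships between them.
# All concept and relation names here are illustrative.

class SemanticNetwork:
    def __init__(self):
        self.edges = []  # list of (source, relation, target) triples

    def add(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, concept):
        """Return every (relation, target) arc leaving a concept."""
        return [(rel, tgt) for src, rel, tgt in self.edges if src == concept]

net = SemanticNetwork()
net.add("canary", "is-a", "bird")
net.add("bird", "is-a", "animal")
net.add("bird", "has-part", "wing")

print(net.related("bird"))  # [('is-a', 'animal'), ('has-part', 'wing')]
```

Storing the arcs as explicit triples keeps the relation labels first-class, which is what distinguishes a semantic network from a plain directed graph.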
The Semantic Web refers to the extension of the World Wide Web via embedding added semantic metadata, using semantic data modeling techniques such as Resource Description Framework (RDF) and Web Ontology Language (OWL).
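At its core, RDF models such metadata as subject–predicate–object triples. The sketch below shows only that underlying triple model in plain Python; in practice RDF is serialized in formats such as Turtle or RDF/XML and queried with SPARQL. The example.org URIs are placeholders, and the Dublin Core property is used purely illustratively.

```python
# RDF describes resources with (subject, predicate, object) triples.
# The example.org URIs are placeholders invented for this sketch.

triples = {
    ("http://example.org/page", "http://purl.org/dc/terms/title", "Semantics"),
    ("http://example.org/page", "http://purl.org/dc/terms/creator",
     "http://example.org/alice"),
}

def objects(subject, predicate):
    """Query the graph: all objects for a given subject and predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("http://example.org/page", "http://purl.org/dc/terms/title"))
```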
In psychology, semantic memory is memory for meaning – in other words, the aspect of memory that preserves only the gist, the general significance, of remembered experience – while episodic memory is memory for the ephemeral details – the individual features, or the unique particulars of experience. A word's meaning is measured by the company it keeps, i.e. the relationships among words themselves in a semantic network. Memories may be transferred intergenerationally or isolated in one generation due to a cultural disruption. Different generations may have different experiences at similar points in their own time-lines. This may then create a vertically heterogeneous semantic net for certain words in an otherwise homogeneous culture (Giannini, 1975). In a network created by people analyzing their understanding of the word (such as WordNet) the links and decomposition structures of the network are few in number and kind, and include part of, kind of, and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines, as well as natural language processing, neural networks, and predicate calculus techniques.
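The contrast drawn above – a handful of explicit link types in a hand-built network versus computed vectors in an automated one – can be sketched as follows. The link triples and the vector numbers are both made up for the example; real systems derive such vectors from corpora (e.g., via latent semantic indexing).

```python
# Two ways of relating words, per the passage above.
import math

# (1) Hand-built links of a few fixed kinds, WordNet-style (illustrative data).
links = {
    ("wheel", "part-of", "car"),
    ("car", "kind-of", "vehicle"),
}

# (2) Computed vectors without explicit meaning (toy numbers, not real data).
vectors = {
    "car":   [0.9, 0.1, 0.3],
    "truck": [0.8, 0.2, 0.4],
    "poem":  [0.0, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: the standard closeness measure in vector spaces."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# In the toy space, "car" sits nearer "truck" than "poem" -
# relatedness emerges from the numbers, not from a labeled link.
print(cosine(vectors["car"], vectors["truck"]) > cosine(vectors["car"], vectors["poem"]))
```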
Ideasthesia is a rare psychological phenomenon that, in certain individuals, associates semantic and sensory representations: activation of a concept (e.g., that of the letter A) evokes sensory-like experiences (e.g., of the color red).
Types of meaning
Structural, or grammatical, meaning
First, one must recognize that the meaning of any sentence comprises two parts, the meanings of the words it contains and the structural or grammatical meaning carried by the sentence itself. In English the dog chased the cat and the boy chased the cat differ in meaning because dog and boy are different words with different word meanings; the same applies to equivalent sentences in other languages. The two sentences the dog chased the cat and the cat chased the dog, though containing exactly the same words, are different in meaning because the different word orders distinguish what are conventionally called subject and object. In Latin the two corresponding sentences would be distinguished not by word order, which is grammatically indifferent and largely a matter of style, but by different case endings on the lexical equivalents of dog and cat. In Japanese the grammatical distinction of subject and object, normally marked by the word order subject-object-verb (SOV), can be reinforced by a subject particle after the first word and an object particle after the second.
The formal resources of any language for making distinctions in the structural meanings of sentences are limited by two things: the linear (time) dimension of speaking and the limited memory span of the human brain. Writing copies the time stream of speech with the linear flow of scripts. Diagrams and pictures employ two dimensions, and models employ three; but writing is partially relieved of memory-span restrictions by the permanence of visual marks. Because written texts are almost entirely divorced from oral pronunciation, sentence length and sentence complexity can be carried to extremes, as may be observed in some legal and legislative documents that are virtually unintelligible if read aloud.
Within these linear restrictions, distinctions corresponding to the main uses of language can be made. All languages can employ different sentence structures to state facts (declarative), to ask questions (interrogative), and to enjoin or forbid some course of action (imperative). More delicate means exist to soften or modify these basic distinctions: e.g., it’s cold today, isn’t it? Isn’t it still raining? Shut the door, if you don’t mind; don’t be long, will you? Languages use their resources differently for these purposes, but, generally speaking, each seems to be equally flexible structurally. The principal resources are word order, word form, and, in speech, pitch and stress placement. In English, as an example, a word or phrase can be highlighted by being placed first in the sentence when it would not normally occur there: compare he can’t bear loud noises with loud noises he can’t bear or loud noises, he can’t bear them. The object noun or noun phrase can also be put first by making the sentence passive; this allows the original subject to be omitted if one does not know or does not want to refer to an agent: the town was destroyed (by the revolutionaries). Within and together with all these possibilities, almost any word can be made contrastively prominent by being stressed (spoken more loudly) or by being uttered on a higher pitch, and very often these two are combined: I asked you for RED roses (not yellow); I meant it for YOU (not her); HE knows nothing about it (someone else may). Prominence is especially associated with intonation, itself an important carrier of structural meaning in speech. One may state facts, ask questions, and give instructions with a variety of intonations indicating, along with visible gestures, different attitudes, feelings, and social and personal relations between speaker and hearer.
The possibilities of expressing structural meanings are a highly important part of any language. They are acquired along with the rest of one’s first language in childhood and are learned more slowly and with more difficulty in mastering a second or later language. Scholars are still only at the beginning of a full formal analysis of these resources, as far as most languages are concerned, and are still farther from an adequate understanding of all the semantic functions performed by means of these resources.
Semantic relations (meaning relations)
In the narrow sense, semantic relations are relations between concepts or meanings. The concept [school] should be distinguished from the word ‘school’. [School] is a kind of [educational institution]. This indicates a hierarchical (or generic) relationship between two concepts or meanings, which is one kind among a long range of kinds of semantic relations.
The concept [school] may, for example, be expressed by the terms or expressions ‘school’, ‘schoolhouse’ and ‘place for teaching’. The relation between ‘school’ and ‘schoolhouse’ is a (synonym) relation between two words, while the relation between ‘school’ and ‘place for teaching’ is a relation between a word and an expression or phrase. The relations between words are termed lexical relations. ‘School’ also means [a group of people who share common characteristics of outlook, a school of thought]. This is a homonym relation: two senses share the same word or expression, ‘school’. Synonyms and homonyms are thus not relations between concepts, but concern how concepts are expressed with identical or with different signs.
Relations between concepts, senses or meanings should not be confused with relations between the terms, words, expressions or signs that are used to express the concepts. It is, however, common to treat both kinds of relations under the heading “semantic relations” (e.g., Cruse, 1986; Lyons, 1977; Malmkjær, 1995; Murphy, 2003), which is why synonyms, homonyms, etc. are considered under the label “semantic relations” in a broader sense of the term.
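The term/concept distinction drawn above can be made concrete with a small sketch, reusing the school example. The concept identifiers and the mapping are invented for illustration: synonymy appears as one concept expressed by several terms, homonymy as one term expressing several concepts.

```python
# Sketch of terms (words) vs concepts (senses), using the school example.
# Concept names are hypothetical labels, not from any standard resource.

term_to_concepts = {
    "school":      {"EDUCATIONAL_INSTITUTION", "SCHOOL_OF_THOUGHT"},
    "schoolhouse": {"EDUCATIONAL_INSTITUTION"},
}

def synonyms(term):
    """Other terms sharing at least one concept with this term (a lexical relation)."""
    senses = term_to_concepts[term]
    return {t for t, cs in term_to_concepts.items() if cs & senses and t != term}

def is_homonym(term):
    """A term is a homonym when it expresses more than one concept."""
    return len(term_to_concepts[term]) > 1

print(synonyms("schoolhouse"))  # {'school'}
print(is_homonym("school"))     # True
```

Keeping the mapping from terms to concepts explicit is exactly what prevents mixing the two kinds of relations that the paragraph above warns about.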
Some important kinds of semantic relations are:
• Active relation: A semantic relation between two concepts, one of which expresses the performance of an operation or process affecting the other.
• Antonym (A is the opposite of B; e.g. cold is the opposite of warm)
• Associative relation: A relation which is defined psychologically: (some) people associate the concepts (A is mentally associated with B by somebody). Often, associative relations are simply unspecified relations.
• Causal relation: A is the cause of B. For example: Scurvy is caused by lack of vitamin C.
• Homonym: Two concepts, A and B, are expressed by the same symbol. Example: both a financial institution and the edge of a river are expressed by the word bank (the word has two senses).
• Hyponymous relationships (“is a” relation or hyponym-hypernym), generic relation, genus-species relation: a hierarchical subordinate relation. (A is a kind of B; A is subordinate to B; A is narrower than B; B is broader than A). The “is a” relation denotes what class an object is a member of. For example, “CAR – is a – VEHICLE” and “CHICKEN – is a – BIRD”. It can be thought of as being a shorthand for “is a type of”. When all the relationships in a system are “is a”, the system is a taxonomy. The “generic of” option allows you to indicate all the particular types (species, hyponyms) of a concept. The “specific of” option allows you to indicate the common genus (hypernym) of all the particular types.
• Instance-of relation (“instance” or example relation): designates the semantic relation between a general concept and individual instances of that concept. A is an example of B. Example: Copenhagen is an instance of the general concept ‘capital’.
• Locative relation: A semantic relation in which a concept indicates a location of a thing designated by another concept. A is located in B; example: Minorities in Denmark.
• Meronymy, partitive relation (part-whole relation): a relationship between the whole and its parts (A is part of B). A meronym is the name of a constituent part of, the substance of, or a member of something. Meronymy is the opposite of holonymy (B has A as part of itself). (A is narrower than B; B is broader than A).
• Passive relation: A semantic relation between two concepts, one of which is affected by or subjected to an operation or process expressed by the other.
• Paradigmatic relation. Wellisch (2000, p. 50): “A semantic relation between two concepts that is considered to be either fixed by nature, self-evident, or established by convention. Examples: mother / child; fat /obesity; a state /its capital city”.
• Polysemy: A polysemous (or polysemantic) word is a word that has several sub-senses which are related to one another. (A1, A2 and A3 share