An interview with Professor James D. McCawley
by F.G.A.M. Aarts
Summary. This interview begins with a look at Dutch linguistics, of which McCawley turns out to be quite well informed and of which he speaks with appreciation.
McCawley originally studied mathematics, but switched to linguistics in 1961. His development as a linguist followed a path familiar from others as well: after a dissertation on Japanese phonology (supervised by Chomsky), he became interested in syntax only after his studies at M.I.T., and from there in semantics. The core of the interview is a discussion of a number of important topics in present-day linguistics, including generative semantics, the status of semantic deep structure and of presuppositions, and the work of Labov and, in connection with it, the relation between language and language use and between speaker and speech community.
At the end there is room for some more personal matters: to what extent is humour an exclusive property of transformational-generative grammarians, what hobbies does McCawley have, and what are his political convictions. (Ed.)
Nijmegen, March 17, 1977
The Dutch have a certain reputation for being good linguists in the old-fashioned sense of the word, for having a good command of foreign languages. But what is their reputation as modern linguists? Are you familiar with what is happening in linguistics in this country at the moment?
I've read a fair number of things by contemporary Dutch linguists. I think the Dutch have a reputation for being good linguists in all senses of the word, the new as well as the old. I've seen stuff of quite recent vintage done by transformational grammarians of several varieties, and I find much of it very impressive, particularly work on Dutch syntax by persons like Klooster, Verkuyl, Evers, Henk van Riemsdijk, Jan Koster, and Simon Dik. I actually taught an informal course on Dutch syntax recently and went through some of this literature. Certainly Dutch is in
a very good state as compared with most other languages, with regard to the quantity of thorough, explicit, and penetrating analysis that is being done on it. I'm also familiar with a lot of other work by Dutch linguists. In some work that I've done on the phonology of Bantu languages, I've derived a lot of benefit from reading the extremely insightful studies that have been published by several Dutch and Belgian linguists, particularly Professor Meeussen at Leiden (actually, I believe he's a Belgian, but at least he teaches in the Netherlands). I have also been very stimulated by Pieter Seuren's work on logic and syntax. And I of course know Professor Uhlenbeck at Leiden quite well and have read a number of his works, both on general linguistics (such as his Critical comments on Transformational-generative grammar) and on the Javanese language, on which he has contributed extremely important morphological and sociolinguistic studies. Linguistics in the Netherlands is really thriving, and I think its general level of excellence is widely recognized.
The Dutch contribution to what Gleason has called ‘scholarly traditional grammar’ is, of course, considerable. Do you know the grammars of Poutsma, Kruisinga, and Zandvoort?
Certainly. In my work on English syntax, I've often had occasion to use the grammars of Poutsma and Kruisinga. In my course on Dutch syntax I made some use of Kruisinga's Dutch grammar and also of Zandvoort's English grammar, which provides Dutch translations of all the English examples and is thus very useful for finding additional data about Dutch, particularly on matters where I wanted to contrast it with English.
How do they compare with another famous continental scholar, Jespersen?
It's an interesting coincidence that you should ask about Jespersen, because my offering the course on Dutch syntax is to a large extent due to the fact that no one registered for a course on Jespersen that I had been scheduled to teach, and so I had to cancel it and offer something else instead, and I was able to find a couple of students interested in a course on Dutch syntax. Of course, Jespersen is in a class by himself. He was a fantastically original, broad, and deep thinker, and one of the reasons I was disappointed when the course was cancelled was that the course would have given me a chance to get into a lot of the parts of the Jespersen corpus that are not particularly widely read, where he
did very interesting work, as in phonetics, for example, where his terminology and analysis is often remarkably like things of Morris Halle, and also his quite voluminous work on international languages, which nobody looks at much these days, largely because many of the international languages that he's talking about have pretty much died out; however, his observations about them are of a great deal of importance for questions of language planning that are quite alive today.
What made you decide to study linguistics and what sort of linguistic tradition were you brought up in?
I was originally in mathematics and I only got into linguistics after doing quite a lot of study of mathematics. I have a master's degree in mathematics and spent a couple of years ostensibly working towards a Ph. D., but not really getting very far with it. Actually the reasons I had for being in mathematics to begin with to some extent would have been equally good reasons for being in linguistics, except that for a long time I did not know there was such a thing as linguistics. I'm leaving aside here a rather trivial reason I had for being in mathematics to begin with, namely that I found it sort of fun and sort of easy, so there was a gravitational pull towards it. The one positive reason that I had for being in mathematics was that in mathematics people were taking seriously the question of the relationship between symbols and what they stood for and saying what at least struck me when I was in High School as being sensible things, as compared with what I had seen social scientists writing, which I generally regarded as stupid, at least as far as their treatment of language was concerned. Somehow the attitudes of mathematicians towards language and its relationship to the world appealed to me then. As to my training in linguistics, during the latter part of my studies at the University of Chicago, where I was a student from 1954 to 1961, I sat in on some courses taught by Eric Hamp, particularly a historical linguistics course, and by Raven McDavid on English linguistics, and did a lot of independent reading. I finally switched to linguistics in 1961, when they started the linguistics Ph. D. programme at M.I.T. and went there to study linguistics. My thesis adviser was Chomsky. This was, however, a phonology dissertation, indeed a dissertation on Japanese phonology. That was back in the days when Chomsky would take my word for the facts of Japanese, which I don't think he does anymore. When I was at M.I.T., I was really more of a phonologist than anything
else. I became a
syntactician in a serious way only after I'd finished my studies at M.I.T. and started teaching at the University of Chicago in 1964. I took the usual syntax courses at M.I.T., but wasn't really doing it in a big way, but of course I had to teach syntax at the University of Chicago. That forced me to really get into syntax and think through stuff that I really had not thought that seriously about while I was a student. That of course led me into semantics, which I attempted to do right, having rapidly come to the conclusion that Katz and Fodor had not done it right. I had what amounts to a post-doctoral course in syntax that took place via many very long telephone-conversations, particularly with George Lakoff and Haj Ross, who, at least at that time were more advanced in syntax than I was. Talking with them for many long hours over the telephone, which, I'm sure, perceptibly increased the profits of the American Telephone and Telegraph Company, greatly increased my understanding and my abilities in syntax and semantics.
There has been a lot of criticism of Noam Chomsky lately and it has almost become fashionable to disagree with him. How do you assess his contribution to linguistics? In 1969 The Sunday Times described Chomsky as one of the thousand ‘Makers of the Twentieth Century’. Would you agree with this?
I think that's an accurate estimate. He certainly has had a tremendous impact on the field. I think it is worth while to separate out, on the one hand the question of which of his ideas are correct and, on the other hand, the question of what sort of influence he's had on the field. Of course, the influence need not be through acceptance of his ideas, but simply by raising questions that had not otherwise been raised. While I disagree strongly with a lot of his ideas, I think his influence on the field generally has been very favourable. He has rehabilitated the human mind as an object of study. To a large extent my criticisms of Chomsky are simply that he hasn't gone far enough in that direction; I've said in public on a number of occasions that Chomsky is not mentalistic enough to suit me. There's a number of more technical points regarding the doing of linguistics that I think also were very valuable. One thing that has not been discussed much that, I think, is tremendously important, is his wiping out the asymmetry that there previously had been between what I'll call positive data and negative data. There's been a strong tradition among linguists of talking explicitly in terms of positive data: giving examples of things that are possible and only
alluding very indirectly to what is not possible, by using words like only: ‘You can only say this by doing so-and-so’. I think getting into the explicit details of what it is that is not possible made it a lot easier to raise questions that otherwise could have easily gotten avoided. I reject the specific conception of generative grammar that Chomsky is associated with, that is, a grammar as a device that will specify what are and what are not the sentences, in the sense of ‘grammatical’ strings of words, of a language. I accept instead a notion of a generative grammar in a different sense, a grammar that distinguishes explicitly between what is possible and what is not possible, where, instead of talking about sentences, you're talking about more complex objects, like complexes of sentence plus a description of a use of the sentence, plus a description of the context in which the thing can be used.
In modern linguistics we have theorizing on the one hand and careful descriptive work on the other. Would you agree that there is still a great need for the latter? I'm thinking, for example, of the work of grammarians like Dwight Bolinger and Randolph Quirk's Grammar of Contemporary English, which does not subscribe to a particular linguistic theory, but adopts a compromise position.
I completely agree. I reject entirely the position taken by a lot of transformational grammarians that you should work only with what you can handle with your theory or what isn't very far from your theory. I think to have the necessary perspective to make decisions about what is within the grasp of your theory, you really need to have a good idea of what there is in general, what is there to talk about that you may not have much of a way of talking about. I'm all in favour of people doing descriptive work on things that they do not have any way of integrating at present into what they would consider a satisfactory theory, simply because you can achieve an appreciation of what is there only by beating a path into this untrodden, overgrown territory. It's greatly to one's benefit just to be aware of what there is, through whatever means of awareness happen to be available to him.
In your article ‘Prelexical Syntax’ (1971) you wrote:
‘Finding a system of transformations that yield combinations of semantic material which are exactly the possible lexical items of English is a programme of research of which so far only fragments have been carried out. However, the hypothesis that an adequate grammar of any language will involve such a system of transformations currently appears to me to be the only approach to linguistic structure that has much chance of providing an explanation of the contribution of individual lexical items to the content of sentences in which they appear’.
What sort of progress has been made in this programme of research? One sometimes gets the impression that people keep concentrating on a number of stock examples (kill, persuade, apologize, forgive, remind, etc.) without seeming to be able to make up their minds as to whether this approach will work for the language as a whole or not. Is this an unfair sort of criticism?
No, I think it is a pretty reasonable criticism. There are an awful lot of words that have been grossly overrepresented in the literature and I would be very happy to see that particular situation changing. That sort of study of the lexicon is still in a pretty fragmentary state. I've been trying myself to ‘fill gaps’: the whole field is, to a large extent, one big gap that's got various cases in it that have been filled in; but we've got lots more fragments than there were when I wrote that and I've done studies in which I've attempted to make sense out of phenomena in various restricted areas. I haven't gotten much nearer the sort of fuller picture of the lexicon that I was suggesting there. As far as specific things are concerned, I think there has been progress. There at least is now awareness of the importance of certain factors that I at least, and some others, were at best peripherally aware of in 1971, when I wrote that. Take, for example, what I was talking about in the talk that I gave here in Nijmegen two days ago, ‘Conversational Implicature and the Lexicon’. I am now acutely aware of a tremendously important thing that I just ignored totally back in 1971, when I wrote that paper, namely that what sentences will convey depends not only on the meaning per se of the sentence, but on the fact that the speaker, in saying the sentence, is choosing among an infinite number of things that he could have said. He's making this choice relative to the context that he happens to be in and is cooperating with the people that he's talking with and assumes cooperation on their part and so, of course, what is conveyed by what he said depends not only on the meaning of what he says, but on the fact that he chose to say that and not one of the other things that he could have said. What he conveys depends just as much on what he didn't say as on what he did say. That fact lets you explain away lots of otherwise weird and complicated-looking
particular lexical items; I discussed a number of such examples the other day, for example, the fact that pale red is a much less common expression than pale green, pale blue or pale yellow and that it refers to a much less pale color than those expressions do. There are a number of as yet unpublished dissertations that are important contributions to the research programme about which you asked. There is a particularly outstanding one by a student of mine, Judith Levi, on adjectives that can occur in attributive but not predicative positions, for example, the adjectives in electrical engineer and former husband. Levi did a very nice job of making sense of the kinds of relationships between the content of the adjective and the content of the noun in such expressions and of relating the special cases to general syntactic rules. She argues that these expressions reflect an interaction between productive rules of compounding and ellipsis and morphological details of individual words. For example, she argues that it is solely because of morphological idiosyncrasy that we say musical clock and not *music clock, and she argues that noun compounding is basically the reduction of relative clauses into participial phrases, with the participle being deletable if it is one of a limited number of semantically basic predicates; thus, her derivation of musical clock is clock which makes music → music-making clock → music clock → musical clock, the last step by a morphological rule whose applicability is a matter of lexical idiosyncrasy. I was delighted to see Levi discovering a lot of regularity in an area where most people, myself included, had not really believed there was that much regularity. Another domain of investigation that I and some others have gotten into recently is contrastive crosslinguistic studies of the lexicon.
There's a course called ‘English-Japanese comparative vocabulary’ that I taught for the first time a year ago at the University of Chicago and will do again this coming summer at the Linguistic Institute at the University of Hawaii. The topics that I covered included various semantic domains that one language splits up in a different way than the other, and various cases where the one language has something in word-formation or derivation that the other one does not, and where it is of interest to work out the details of what the other language does instead. In teaching this course, I think I greatly improved my understanding of what is involved in a number of English constructions, for example, verb-plus-particle combinations.
Japanese has got something that English really doesn't have at all, namely verbs compounded from two verbs. English, however, in a large proportion of the cases where Japanese has that kind of compound, has combinations of verb plus particle or verb plus preposition, and the particle or the preposition corresponds semantically to the second verb of the Japanese compound. But the correspondence is influenced by the constraints that the morphology of each language imposes. For example, there are about half a dozen compound verbs in Japanese that can all be translated into English as ‘pull down’ but are all distinct in meaning: there is a verb hiki-orosu, a compound of hiku ‘pull’ and orosu ‘set down off of a vehicle or a support’, which is used for pulling, say, a boat down off a rock that it was grounded on; there's hiki-sageru, which is ‘pull’ compounded with a verb that simply means ‘lower’, which means ‘move to a lower place by pulling downwards’; there's hiki-taosu, which is ‘pull’ plus a verb that means ‘cause to cease to be upright’, used for, say, toppling over a statue by pulling on a rope that has been thrown around it. The fact that these are all expressed by ‘pull down’ in English reflects the fact that English particles are a fairly small closed class whereas pretty much any verb can be the second element of a Japanese compound. I claim that both languages are compressing two clauses into one in basically the same way, but the results of this compression are tied to morphology in different ways in the two languages, and distinctions that are maintained in Japanese must be neutralized in English because the content of the second verb must be expressed in English by a particle. I should mention in this connection an extremely
interesting dissertation that has just been completed at the University of Southern California, Lexical Structures: a Semantico-syntactic study of Japanese and English vocabularies, by Taro Kageyama.
Since the publication of Chomsky's Aspects in 1965 there have been two important developments in transformational grammar. The first is associated with Chomsky himself (e.g. ‘Deep Structure, Surface Structure and Semantic Interpretation’, 1970) and with Jackendoff (Semantic Interpretation in Generative Grammar, 1972), both of whom preserve the syntax/semantics dichotomy. The second is associated with linguists like Lakoff, Ross, Postal and yourself, who claim that there is no dividing line between syntax and semantics. This has led to what is now known as ‘generative semantics’. May I ask you ‘what is the
future of generative semantics?’ I ask this because in an interview with Herman Parret George Lakoff observes that ‘we are trying to provide a theory of a subpart of linguistics, the relationship between sentences and what they mean in limited sets of contexts. We are operating under the gratuitous assumption that such a theory can be constructed without taking into account the actual processes of speech production and perception, among other things. I have a creepy feeling that such an assumption will turn out to be wrong, just as the generative grammarians were wrong in assuming that a coherent theory was possible without taking meaning and use into account. But for the present I think it is wise, at least for me, to stick to generative semantics, since it has turned out over the past six years to be very fruitful as a mode of inquiry and has not yet outlived its usefulness’. This is a pretty pessimistic attitude to generative semantics. I wonder if you share this creepy feeling with Lakoff.
Let me separate two questions: whether that is a pessimistic statement about generative semantics, and whether it is a pessimistic statement. Whether those two questions have the same answer will depend, among other things, on how broadly or narrowly you interpret the term ‘generative semantics’. In talking about the future of generative semantics or the future of current molecular biology or the future of what have you, you're talking about something that isn't going to stand still, but can develop in lots of different ways. It is arbitrary where you stop applying a term like ‘generative semantics’ or a term like ‘quantum mechanics’. The issue that George was talking about there was that of how long it will be profitable to make the separation that we have been making between, on the one hand, linguistic competence and, on the other hand, devices for processing language both in production and in comprehension. That particular question is something that will arise in connection with any approach to grammar that you could conceive of, whether or not you accept the division between syntax and semantics, or any other controversial point that you could talk about. You can still raise the question ‘Are we deluding ourselves by acting as if this domain that we have taken for ourselves is a coherent domain that will provide the source for answers to the questions that we are asking?’. Suppose that a practitioner of some variety of generative semantics or some variety of Chomskyan interpretive semantics or some variety
of stratificational grammar develops his thinking in such a way as to conclude: ‘to make sense out of the problems that I've been setting myself, I have to talk more directly in terms of what people are doing in processing language’. In each case you can ask ‘Is what he will be left with afterwards appropriately called “generative semantics”, “interpretive semantics”, “stratificational grammar”, .....?’ Each of these terms refers not to a specific theory or approach but to a scientific community that encompasses a variety of related theories and approaches and of judgements as to what questions are most important and which claims one should be least ready to give up (that is, judgements as to what parts of a theory are ‘baby’ and what parts are ‘bathwater’). Just as the vast majority of biological species that exist at any one time will eventually become extinct and those that will survive will do so in forms that their ancestors might well not recognize, so with scientific communities: many will eventually die out, and those that survive will have along the way rejected, superseded, or changed the status of the claims and approaches that now typify them. One could call Lakoff's statement a pessimistic view of generative semantics only if one held (or attributed to Lakoff) the belief that the viability of ‘generative semantic’ approaches (‘viability’ in the sense of the capacity to change so as to accommodate new problems, while maintaining continuity as a research community) depended crucially on the strict separation of linguistic knowledge from linguistic processing mechanisms. I in fact am fairly convinced that you can't make a strict separation: that to make sense out of what a person's knowledge of a language consists in, you will have to talk about language processing: what goes on in the production and comprehension of
language. There are a number of cases where otherwise recalcitrant linguistic facts can be made sense of in terms of interactions between linguistic rules per se and processing difficulties that prevent a speaker from producing a sentence that is otherwise ‘possible’ or prevent the hearer from comprehending such a sentence. However, it is not clear that when you separate out processes of production and comprehension, the remaining ‘linguistic rules per se’ will add up to a complete grammar rather than just to a fragment of one.
Talking about meaning and use, do you agree with William Labov's statement (in Sociolinguistic Patterns, 1972, p. 187) that ‘it is difficult to avoid the common-sense conclusion that the object of linguistics must ultimately be the instrument of communication used by the speech
community, and if we are not talking about that language, there is something trivial in our proceeding’?
It's misleading to speak of ‘the instrument of communication’ and ‘the speech community’. Any individual is not a member of just one speech community; there are large numbers of overlapping speech communities which he belongs to, and it is not implausible to extend the term ‘speech community’ even to cover really minimal speech communities, like a speech community consisting of a single person in his communications with himself (in, say, notes that he writes for his own future reference; it should be observed that the language one uses for that purpose often differs markedly from the language of larger speech communities that he belongs to). Also, a speech community may have several instruments of communication. What Labov says in the passage you cited is correct in the sense that any instance of language use is connected with a community within which that language use is taking place: a person's choice of what he does in speaking, how he does it, and what he does along with speaking are influenced by his knowledge of the community that provides part of the setting of his utterance. Labov's work has been extremely valuable for pointing out respects in which the community and the individual speakers interact, ways in which the speaker's knowledge of his language involves not just what have usually been thought of as linguistic units but also factors relating to the setting and purpose of utterance. I thus am quite happy to have direct reference to linguistic communities and to factors of speech situations appear in the statement of linguistic rules. One thing, though, that has been suggested in some of Labov's work that I don't agree with is the idea that it doesn't make sense to speak of individual speaker competence. I think you can perfectly reasonably talk about individual speaker competence for something that involves interaction between that individual and a community. Of course,
you have to keep in mind that the individual speaker competences are not restricted to private languages or to speech within totally homogeneous communities, but they involve one's knowledge of communities that he belongs to, as well as strategies for communicating with people whose language is not exactly the same as his. Sorting out what goes on in actual speech must be in terms of interactions between linguistic knowledge, knowledge of the difference among various varieties of language, and ways of accommodating this knowledge to unfamiliar varieties of the language. That,
by the way, is one aspect of language that I don't think has been given as much attention as it deserves, though at least Charles-James Bailey, of the Technical University at Berlin, has laid a lot of emphasis on it.
Grammar and Meaning, which contains virtually everything you wrote on syntax and semantics during the years 1964-1971, was published in Japan in 1973. Have there been any major changes in your views since you wrote the last article in that anthology?
I have already mentioned that I now pay much more attention than before to conversational implicature, or more generally, relationship of language to matters of language use and relationship of speakers to situations. To answer the question properly, I'd really need to consult a list of my publications, since my memory is not up to the task of providing an instant overview of my last 5 years' intellectual development. I've been attempting to do more serious and more detailed studies of areas of lexicon, just in order to have more perspective on problems of lexicon than I had in some of my earlier papers, which were very much open to the criticism that you mentioned before, that they are very anecdotal and the same examples appear over and over again. I've also been very concerned with logic and with working out the details of the relevant branches of logic, so as to obtain a clearer picture than I had when I wrote the papers in Grammar and Meaning of what can be done in logic and how it can relate to natural language. I'm in the process of writing a textbook called Everything that linguists have always wanted to know about logic (but were ashamed to ask), in which I deal with both the sorts of things that are normally covered in elementary logic courses and a lot of areas that are not normally covered but are of some relevance to linguistics, for example, presuppositional logic, fuzzy logic, and various applications of possible world semantics. I attempt to go critically through these things, identifying both real and apparent discrepancies between things in formal logic and things in natural language, showing how various apparent discrepancies can be explained away (say, by conversational implicature), and where the discrepancies are real, exploring what you can do to revise the logic so as to get rid of the discrepancies. There are real discrepancies in the logic of conditional propositions: the logicians' ‘material
conditional’ is nowhere near an adequate rendition of conditional propositions. In particular, the logicians' material conditional satisfies the law of contraposition: from p ⊃ q (‘if p, then q’) you can infer ~q ⊃ ~p (‘if not q,
then not p’). But that doesn't work in general in natural language: if you do contraposition on a sentence of English, you reverse temporal and/or causal connections between the clauses. For example, the contrapositive of ‘If I do heavy exercise, my heart beats faster’ is ‘If my heart doesn't beat faster, I don't do heavy exercise’, which doesn't mean the same thing and, in fact, has the reverse temporal and causal connections. The popular claim that ‘p only if q’ means the same as ‘if p, then q’ is also wrong, though the contrapositive of the latter, ‘not p if not q’ turns out to be a good paraphrase. ‘My heartbeat goes over 100 only if I lift heavy weights’ can be paraphrased as ‘My heartbeat doesn't go over 100 if I don't lift heavy weights’, but not as ‘If my heartbeat goes over 100, then I lift heavy weights’. In fact, only if is not, as most logicians seem to think, an idiom, but is a perfectly productive combination of only with if. I've attempted to develop an analysis of conditional propositions that agrees with standard logic in the realm where standard logic works but diverges from it in appropriate ways, for example, accounting for the reversal of temporal and causal relations under contraposition, and accounting for the range of modification that if allows in combinations like only if, even if, except if, and especially if.
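McCawley's contrast with classical logic can be summarized in standard notation. As a reference point, the following is a sketch of the textbook truth-functional equivalences he has in mind (the horseshoe ⊃ is the material conditional; the last line is the ‘popular claim’ about only if that he rejects for natural language):

```latex
% Classical equivalences for the material conditional p \supset q.
% These hold truth-functionally; McCawley's point is that the natural-language
% conditional does not obey contraposition in general, and that English
% "p only if q" is not simply "if p, then q".
\begin{align*}
p \supset q \;&\equiv\; \lnot q \supset \lnot p
  && \text{(contraposition)}\\
p \supset q \;&\equiv\; \lnot p \lor q
  && \text{(truth-functional definition)}\\
\text{``$p$ only if $q$''} \;&\stackrel{?}{\equiv}\; p \supset q
  && \text{(the textbook claim he disputes)}
\end{align*}
```

On the classical account all three lines are unobjectionable; McCawley's heartbeat examples show that the first and third fail as paraphrase relations in English, while ‘not p if not q’, the contrapositive of the third, survives.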
How does one justify putative semantic structures? One piece of evidence, as you have pointed out, is that there are sentences containing adverbs (as in I closed the door temporarily) where the adverb does not refer to a surface structure constituent, but modifies a constituent in the semantic deep structure. This would justify an analysis of close as CAUSE - BECOME - CLOSED.
What is the most important evidence for the justification of semantic deep structures?
I think it doesn't make sense to rank evidence in order of relative importance - such a ranking can only be made on the basis of subjective evaluations of individual arguments that ignore the historical context of the argument; how good something is as an argument for some conclusion depends on what it is an argument against, and the set of alternatives one must consider changes from one year to the next. To return to your original question, ‘How does one justify putative semantic structures?’, the justification must be in terms of interaction between your proposal and whatever you can get your hands on. The adverb arguments are, of course, a case in point: the proposed semantic
structures allow the adverbs to modify the sort of thing that they normally modify and allow you to avoid analyses in which you posit spurious extra senses for the adverbs. Of course, you don't have a special sense of temporarily in a sentence like He closed the door temporarily. I'm going to be sort of vague in my answer to this. As I said a minute ago, what you're after is interactions between the thing that you are hypothesizing and anything you can get your hands on. I mean literally that. I want to leave open the entire gamut of possible interactions, possible things that you could look at. The actual things that linguists have generally gotten into have been grammatical and relatively tractable semantic things, things such as adverb-scope, as in the example that you were talking about. I'd be a lot happier if the range of things that people were able to get their hands on and relate to these hypotheses could be expanded considerably. At the moment what I've got is a fragment of a list of things that I've been happy with as arguments for particular details of semantic structures, but the whole range of things in logic and in grammatical phenomena, like support for modification and so forth, the whole gamut of psychological correlates, which has been explored only fragmentarily, will have to be looked at. Jerry Fodor at MIT, who disagrees with me on a lot of particular analyses, has been getting into some interesting psychological experiments that may very well be very productive with regard to justifying particular analyses or to showing that certain supposed similarities aren't quite as much of similarities as they have been taken to be. This very interesting series of experiments I know about only through a five-minute conversation with Fodor, when I saw him at MIT in November, so I can't give you anything more than just a fragmentary sketch of what he's doing. He's applying the sort of experimental paradigm
that Pim Levelt developed in a number of papers over the last seven or eight years, presenting written sentences to subjects, taking all of the possible three-member subsets of the words in those sentences, and asking the subjects to judge relative relatedness: which two of these three words are most related to each other. Fodor, however, uses this procedure in a quite different way than Levelt did. Levelt treated it as a way of establishing experimentally what the syntactic constituent structure of sentences is. If you look at the actual figures in Levelt's papers, it becomes pretty clear that quite a large number of factors besides constituent structure affected the subjects' judgements of degree of relatedness among the words. For example, in sentences like John washed
the dishes and Mary swept the floor, where two sentences of parallel syntactic structure were conjoined, the informants gave high degrees of relatedness to words that were in parallel positions in the two clauses (say, John and Mary), though, obviously, the subject of the first clause and the subject of the second clause do not comprise a single syntactic constituent. Fodor is using this experimental procedure to isolate factors other than constituent structure. He takes the position that it is a test for any of the factors, and there are a lot of them, that affect informants' judgements of relatedness, and if you hold the syntactic structure constant and vary other things such as the choice of lexical items, you can get information having a bearing on the analysis of those lexical items. Fodor used sentences with words that clearly are semantically complex (say, doubt, which is generally agreed to be a combination of ‘believe’ and something negative), sentences having words that appear to be semantically simple (say, touch), and sentences with controversial words like kill, so as to see whether the underlying structures involving lexical decompositions that have been proposed might be reflected in the results of the experiment. According to Fodor, if the sentence is underlyingly simple, the subject and object nouns ought to be perceived as more closely related than if there is an underlying structure in which the surface subject originates in one clause and the surface object in another, as in my analysis of kill, in which the surface subject originates three clauses above the surface object. As I recall what Fodor told me in November, his subjects reported essentially the same degree of relatedness between subject and object with kill as with semantically simple verbs, though with some other semantically
simple verbs (I don't recall which ones) they reported a significantly lower degree of relatedness between subject and object. Assuming that the results are, in fact, consistent and fit this description, this would provide evidence that at least some of the hypothetical lexical decompositions have a kind of psychological reality that others don't. I'm saying this not so much because I'm convinced that his experiments work right - I just don't know enough about them to tell whether they do - but because I think this is a very valuable direction to pursue and I would like to see more experiments that attempt to get that sort of access to information about semantic structures.
Talking about semantic structures, are there limits to what can be expressed in a semantic deep structure? For example, in your article
‘Tense and Time Reference in English’ (in Fillmore and Langendoen, Studies in Linguistic Semantics) you give the following sentences:
1. Have you seen the Monet exhibition?
2. Did you see the Monet exhibition?
and you observe that the former of these carries the presupposition that the exhibition is still running. Nevertheless the latter would have to be used if speaking to a person who one knew would not be able to go and see the exhibition. How does one account for this aspect of the meaning of a sentence in a semantic deep structure?
For that particular sentence I'd go along with what I said in that paper, namely that this particular use of the present perfect involves a quantifier, so you have embedded within the question the sentence ‘There is an event of your visiting the Monet exhibition’, with an existential quantifier. The domain of the existential quantifier has to be a stretch of time in which it's possible for the embedded proposition to be true, ‘possible’ in the sense of ‘possible’ that would be relevant to the purposes of the interchange. This would mean, then, that, if you're talking about an exhibition that no longer is running, it's of course no longer possible for the person to visit it, and thus the period of relevant values for the time variable would not include the present. Those cases would require the use of the past tense rather than the present perfect: A condition for the present perfect is that the range of possible values for the variable has to include the present. Of course the other case that you were talking about, where the exhibition is still running but the person in question cannot visit it, because he's confined to bed or confined to a wheelchair and will remain there until the exhibition is over, would be another case of the same thing, where the judgment that you have to say ‘Did you see the Monet exhibition?’ rather than ‘Have you seen the Monet exhibition?’ depends upon your judgment that that factor rules the present out of the domain of possible values for the time variable. What I'm doing is dividing the problem into two parts. One is the semantic representation of each of these things. Does the range of values include the present? Does it not include the present? The other is, do the actual facts of the situation conform to this semantic structure?
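McCawley's quantificational treatment can be sketched in rough first-order notation (an editorial rendering; the predicate and domain symbols are invented for illustration):

```latex
% 'Have you seen the Monet exhibition?' queries, roughly:
\exists t \,\bigl[\, t \in D \;\wedge\; \mathrm{see}(\mathrm{you},\,\mathrm{exhibition},\, t) \,\bigr]
% where D is the stretch of time in which the visit is possible,
% 'possible' in the sense relevant to the purposes of the interchange.
% Present perfect felicitous only if:  \mathrm{now} \in D
% Past tense required when:            \mathrm{now} \notin D
```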
Some linguists believe that it is necessary to distinguish between what is asserted and what is presupposed. I am thinking, for example, of
Fillmore's verbs of judging, like accuse, which presupposes that the situation one is accused of is bad, or the verb realize, which presupposes the truth of its object complement (as in John realizes that Mary is a fool) and of Lakoff's example Jóhn called Máry a vírgin and then shé insulted hím. Other linguists disagree. Ruth Kempson, for example, (in Presupposition and the Delimitation of Semantics, 1975) claims that ‘all the problems raised by presupposition are in fact pseudo-problems for semantics, since no concept of presupposition has any place within the semantics of natural language’. Do you agree with this?
No, I don't agree with it, and I also harbour the suspicion that Ruth Kempson doesn't really agree with it either, at least that's the way I interpret some of the things that she says further on in her book. Let me just comment on a number of things that raise their heads in your question. First, the term ‘presupposition’ has been used to cover quite a broad range of things and a lot of the criticism that Ruth had in her book related to linguists' invocation of the notion of ‘semantic presupposition’ (that is, a proposition that must be true in order for the proposition in question to have a truth-value at all). There are also various notions of pragmatic presupposition, in the sense of conditions that have to be met in order for it to be felicitous to use a sentence at all, which need not reflect any semantic presupposition. So you can perfectly well have a sentence that expresses a proposition that has got a truth-value, but where some felicity condition is violated. The arguments that Ruth gave in the first few chapters of the book were arguments against interpreting a lot of the standard examples in terms of semantic presupposition: she was arguing that you can perfectly well say that the sentences in question express propositions that are either true or false, rather than lacking a truth-value in cases where the supposedly presupposed proposition is not true. That of course leaves open the question whether you've still got a viable and perhaps even necessary notion of pragmatic presupposition. There's been some extremely good work done on pragmatic presupposition by Lauri Karttunen in various papers. The most accessible one is the one in the first issue of Theoretical Linguistics,4 in which he presents a notion of pragmatic presupposition in terms of
well-formedness relative to a context, where he has this special notion of context as that set of
propositions that at the particular point in the discourse can be taken for granted by the parties to the discourse. He treats sentences and, in some cases, pieces of sentences, as incrementing the context in this sense and speaks of particular sentences as being well-formed, if the context, this set of propositions, entails the ‘presupposed’ proposition. I think Karttunen's treatment is beautiful and solves a lot of problems that, I think, are really serious problems about what it is that is odd about various sentences and why it is that certain others that contain the same sort of embedded sentences are not odd. So I'm happy with the notion of pragmatic presupposition. I share Ruth Kempson's worry that semantic presupposition may be irrelevant to linguistics. When I said a minute ago that I had doubts as to whether Ruth herself really disbelieves in presupposition entirely, I was thinking of, for example, her discussion of definite singular noun-phrases, the celebrated Bertrand Russell example The King of France is bald. Russell's analysis was in terms of a formula that's literally: ‘There is an X such that X is King of France, no one other than X is King of France and X is bald’. Discussion of this, for reasons that I do not understand, has always concentrated on the first and third of these terms, but has ignored the second, the term that says ‘No one else is King of France’. Ruth Kempson in her discussion of this says, in a very off-hand manner, in the process of doing something else, that the proposition that there is a King of France and the proposition that there's only one King of France have quite different status.5 She claims that ‘The King of France is bald’ is false rather than truth-value-less, if there is no King of France. But she isn't so happy with saying that in the case where there's more than one King of France, and so she
at least wants to distinguish between the roles of those two things. I would do likewise and my understanding of the sentence is that the proposition that there's only one King of France is a presupposition in one or other of these senses. I would doubt that it's a semantic presupposition, but it's at least something that the speaker must be committed to in order for this to be a felicitous utterance.
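Karttunen's context-incrementation idea, together with the Russellian uniqueness conjunct McCawley highlights, can be caricatured in a few lines of code. This is an editorial toy, not Karttunen's formalism: entailment is crudely modelled as set membership, and all proposition names are invented for illustration.

```python
class Sentence:
    def __init__(self, content, presupposes=()):
        self.content = content
        self.presupposes = set(presupposes)

class Context:
    """The set of propositions the parties to the discourse can take for granted."""
    def __init__(self, granted=()):
        self.granted = set(granted)

    def admits(self, sentence):
        # Well-formed relative to this context iff every presupposed
        # proposition is already 'entailed' (here: contained).
        return sentence.presupposes <= self.granted

    def increment(self, sentence):
        # An accepted sentence adds its content (and presuppositions)
        # to what can henceforth be taken for granted.
        self.granted |= sentence.presupposes | {sentence.content}

bald = Sentence("the King of France is bald",
                presupposes={"there is a King of France",
                             "there is at most one King of France"})

empty = Context()
royal = Context({"there is a King of France",
                 "there is at most one King of France"})

assert not empty.admits(bald)   # infelicitous out of the blue
assert royal.admits(bald)
royal.increment(bald)
```

Keeping uniqueness as a separate presupposed proposition mirrors McCawley's point: a context can grant existence while withholding uniqueness, and the sentence is then inadmissible for that reason alone.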
As far as I know there is no introduction to transformational theory in terms of the generative semantics model. Why is it that even the most recent introductions to transformational grammar (e.g. Akmajian and Heny, An Introduction to English Transformational Syntax (1975),
Huddleston, An Introduction to English Transformational Syntax (1976), and Keyser and Postal, Beginning English Grammar (1976)) have little or nothing to say on generative semantics?
There actually is one introductory book with a very strongly generative-semantic approach, by Parisi and Antinucci, published last year, which is an English translation of a book that was published in Italian a year or two before. It's got a very nondescript title, Essentials of Grammar, but it does adopt a strongly generative-semantic approach. In looking through it, when I got hold of it a couple of months ago, the thought struck me that it's pretty close to the sort of book that I might well have written in 1970, had I ever written a book in 1970, which means that there are lots of things that I'm not happy with in it, but it is still a pretty good presentation of what generative semanticists were doing in the early seventies. Regarding the other books you mentioned, the Keyser and Postal book at least displays generative-semantic attitudes at various places. Keyser and Postal give a lot more attention than the other authors to semantic considerations and a lot less consideration to purely formal matters, and are less concerned with the question ‘Is this ungrammatical?’ than with ‘What is it that's odd about this?’. The Keyser and Postal book is sort of agnostic with regard to a lot of the theoretical issues, but it takes points of view at various places that are at least consonant with things that would be near and dear to a generative-semanticist's heart, and it is much closer to hard-core generative semantics than to Chomsky's ‘Extended Standard Theory’. To answer the remaining part of your question, the sole reason why there isn't a real generative-semantics textbook is sloth on the part of the potential authors. I can offer no explanation other than that.
Would you agree that, upon the whole, there is very little humour in linguistics? I ask you this question because you seem to be one of the few exceptions. Linguists tend to take themselves very seriously, don't you think?
Well, some linguists do, some linguists don't. I certainly can think of lots of linguists with whom I've had rollickingly funny times. My impression is that linguists, on the average, have got much more in the way of humorous competence than do practitioners of a lot of other subjects. There also seems to be quite a bit of variation among different
schools of linguistics as to the amount of humour that can be found. I guess linguistics up until, say, the early sixties was fairly uniformly pretty straight and it was only after that that humour became more visible, not exclusively among transformational grammarians, though I think the construction of example sentences was developed into an art form in the middle to late sixties, partly by people like Paul Postal, who is a great constructor of bizarre and beautiful examples. Interesting, amusing and enlightening example sentences can be found in the works of linguists of quite a variety of different persuasions and there are a number of linguists that I can think of that are masters of the art of joke-telling, such as Haj Ross. I certainly would not agree with the suggestion that there is little humour in linguistics. A large quantity of linguistic humour appears in a festschrift that was published in my honor a few years ago.6 There is also a new journal that is about to appear which will contain a lot of linguistic humour, namely Maledicta, subtitled ‘the journal of verbal aggression’, edited by Reinhold Aman in Milwaukee. It will contain articles on cursing, dirty jokes, and offensiveness of various kinds, and the first issue will contain a paper by Quang Phuc Dong entitled ‘Three lexicographic notes on individuation’, which is, among other things, a refutation of the popular misconception that the word turd means ‘piece of shit’. As Quang points out, while it is true that every turd is a piece of shit, it is not true that every piece of shit is a turd, because if you divide a turd in two, you
have two pieces of shit but not two turds. The individuation, Quang observes, is that imposed by the contraction of the anal sphincter and not just any arbitrary individuation.
Do you have any hobbies? I understand that you are a gourmet and an excellent Chinese cook. Apart from cooking, do you find time for anything else? Do you read novels, for instance?
I read quite a lot of science fiction, though not much other fiction. Science fiction actually ties in with my professional concerns, since it is such a great source of linguistic examples. For example, it was only in reading a science fiction story by Isaac Asimov that I realized something quite important about the much-discussed expressions ‘The Morning Star’ and ‘The Evening Star’. At one point in this story
‘Heredity’, one of the characters uses the expression ‘The Morning Star’ to refer to something in the sky that he's looking at. But this scene takes place on Mars, and the celestial body that this character is referring to is not Venus but Earth. The fact that this is a perfectly normal use of the expression shows that the expressions ‘The Morning Star’ and ‘The Evening Star’ do not have constant reference, that they are ‘shifters’, in Jakobson's sense. They mean ‘celestial body that manifests itself briefly in the morning/evening’, and what celestial body that is depends on where the observer is. I'm also very much into music. I play piano, recorder, and guitar and spend much time reading through the collected works of various composers. Indeed, over a period of a couple of years I carried out a project of playing through all 555 Scarlatti sonatas, leaving out about a dozen that I simply couldn't bang my way through. I have a record collection that includes about 170 of the Bach cantatas, plus lots of Mozart and Mahler and Carl Nielsen and Hindemith and a number of other composers that I'm fond of. I'm also involved in political thinking and action of a very anti-political sort. Specifically, I'm quite active in the Libertarian movement in the United States. While my activities have consisted mainly in publishing letters in newspapers, in which I advocate the dismantling of those state institutions that I find especially pernicious (public schools, the Social Security System, the CIA), I was a Libertarian Party candidate for Trustee of the University of Illinois last year and received 27,542 votes (only 980,000 less than I needed to win). While I'm a Libertarian of the anarchist variety, I am happy to use existing political machinery such as elections as a vehicle for propaganda against state oppression and state robbery. I take a strongly anarchistic position in all
of my thinking, as in the talk that I gave here a couple of days ago on ‘Scientific revolutions and the market for ideas’, which was an argument for a very anarchistic, specifically, anarcho-capitalist approach to the philosophy of science. My thinking in this regard has been helped greatly by the work of Paul Feyerabend, particularly by his wonderful book Against Method: Outline of an Anarchistic Theory of Knowledge (London: New Left Books, 1975), and I have also derived much benefit from reading works by economists of the Austrian School (Ludwig von Mises, Friedrich von Hayek, Murray Rothbard), whose view of economics as dealing with choice by individuals is of immediate applicability to the study of intellectual transactions (such as an individual's acceptance of an idea) as well as to exchanges.
This interview was conducted by F.G.A.M. Aarts on the occasion of the Conference on Empirical and Methodological Foundations of Semantic Theories for Natural Language, which was held at the University of Nijmegen, March 14-18, 1977. The interviewer would like to thank Professor McCawley for his cooperation and Carlos Gussenhoven for help with the transcription of the text.
Lauri Karttunen, ‘Presupposition and linguistic context’, Theoretical Linguistics 1 (1974), 182-194.
A. Zwicky, P. Salus, A. Vanek, and R. Binnick, editors, Studies out in Left Field: Defamatory Essays presented to James D. McCawley on the Occasion of his 33rd or 34th Birthday, Edmonton: Linguistic Research, Inc.