## March 31, 2005

### Deutsche Mavens

I suppose this should not come as a surprise, but it isn't just English that has language mavens. Bastian Sick (sic) writes a column called "Zwiebelfisch" in Der Spiegel, in which he chastises Germans for their non-standard grammar, and gives advice about pronunciation. I didn't know what "Zwiebelfisch" meant, but it has an entry in the German Wikipedia, from which I learned that it is a printing and publishing term for a single letter that is erroneously set in a different font. Like this, I guess.

Here Sick berates native speakers for forming the perfect of "anfangen" (to begin) with the auxiliary verb "sein", e.g.

- Ich bin angefangen.
- Er ist angefangen.
- Sie sind angefangen, etc.

Sick says that the only correct way is to use the auxiliary "haben":

- Ich habe angefangen.
- Er hat angefangen.
- Sie haben angefangen, etc.

Duden agrees with him on the formation of the perfect of "anfangen", of course, as did all my old German teachers. Sein + angefangen can actually make sense as a *Zustandspassiv* (statal passive) construction, but construed that way it does not mean what the speakers intend (namely that, say, she has begun something), but rather that she is (in the state of having been) begun. So there is no haven for those Duden-bucking non-conformists there.

In other posts Sick discusses the perfect of "stehen" (to stand), and answers questions about the correct way to pronounce "Mecklenburg," and whether one knocks "an der Tür" (on the-DAT door) or "an die Tür" (on the-ACC door.) He also takes on the idea that Rechtschreibung (correct spelling) plays no role in email, and opines that "Smileys ersetzen keine Interpunktion." (Smileys don't replace punctuation.)

This reminded me of Lynne Truss in *Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation*:

Anyone interested in punctuation has a dual reason to feel aggrieved about Smileys, because not only are they a paltry substitute for expressing oneself properly; they are also designed by people who evidently thought the punctuation marks on the standard keyboard cried out for ornamental functions. What's this dot-on-top-of-a-dot thing for? What earthly good is it? Well, if you look at it sideways, it could be a pair of eyes. What's this curvy thing for? It's a mouth, look! Hey, I think we're on to something. (193)

The German case is interesting in that, when Germans talk about "Rechtschreibung", they are talking about a set of rules that was amended in the 1990s in what is referred to as *die Rechtschreibreform*, and there is a recognised *document* which lays down these new standards for written German. The reforms were introduced to German schools in 1996, while I was teaching in a German high school near Kiel (yes, I too was an English language teacher; it's sort of like the Foreign Legion for British university students.) Overnight the mundane "Schiffahrt" (travel by ship) was replaced with the brilliant and lunatic "Schifffahrt" (formed by joining "Schiff" (ship) and "Fahrt" (journey) and keeping all three f's.) The German media adopted the new rules en masse in 1999, but the leading broadsheet *Frankfurter Allgemeine Zeitung* only stuck it out for a year before going back to the old ways.

The end of this post seems like a good excuse to sneak in a link to Mark Twain's "The Awful German Language." I actually used this text when teaching English to a group of talented 18-year-olds in a German high school. Bad experience.

What is this passage about?

The man is being very rude about our language. I think it is stupid.

That's interesting. What do the rest of you think? Is Mark Twain trying to insult German?

(silence)

I agree with Inga. He is stupid. He writes stupid things.

So, er...no-one thinks it's meant to be humorous then?

No, because it is not funny.

45 minutes to go.

And there is no recovering from accidentally offending your audience...

Posted by logican at 1:51 AM | TrackBack

## March 29, 2005

### Life and Logic

Martin Davis reviews the Fefermans' biography of Tarski in American Scientist.

Posted by logican at 12:53 PM | TrackBack

## March 28, 2005

### The Lone Linguist

Brett Hyde has accepted a permanent offer from Wash U, which will mean that he continues in his role as the university's only linguist. Brett is extremely popular with students, even scoring a site where they list memorable things he has said. I think my favourites are:

"Usually my first speech after an assignment is the 'say no to drugs' speech."and:

"Who can use 'razzmatazz' in a sentence?"

Posted by logican at 9:51 PM | TrackBack

### Tomorrow's World: quote.u.licio.us?

A number of logic- and language-related weblogs have been reporting on the usefulness of del.icio.us, a site to help one collect and manage bookmarks, and CiteULike (glowing references here and here), a site for collecting and managing references, which allows the importing of BibTeX files. I hear they are very simple to use, which makes it especially galling that I can't get the bookmarklets from either site to work in Safari 1.0.3. With "log javascript exceptions" turned on in the debug menu I get the error:

```
:2: TypeError - Value undefined (result of expression encodeURIComponent) is not object. Cannot be called.
```

But I was thinking that what I really need is CiteULike combined with del.icio.us, with links to print.google.com thrown in, as well as an efficient method for saving *quotations*. I spend hours leafing through books looking for the place where the author said that thing that I kind of remember. Half the time, when I find it, the author hasn't said it in a sufficiently committed or quotable form anyway. (The notable exception to this is A.J. Ayer, who can be relied upon to say everything you kind of thought he said, in simple and bite-sized chunks.) I am sure it would save me time if I could just log these things when I see them. Even now I am tempted to waste 30 minutes thumbing through *Language, Truth and Logic*, looking for the perfect quote to support my semi-humorous point...

29/03/05 Update: And this, after a year and a half of perfect contentment with Safari, is how I ended up with Firefox. Relationships are funny things.

Posted by logican at 6:15 PM | TrackBack

## March 25, 2005

### How to put proof trees in weblog entries

One thing that helped me in setting up this weblog was the trail left by people who had gone before me, documenting their techniques and methods on the web.

In particular, after drawing a blank with Wash U's helpdesk, I learned about my current host from this amazingly helpful post, about Multiblog from elise and about anti-spam measures from Matt's comments.

In homage to the same spirit, here is how I put prooftrees into the previous post. No doubt there are better and more elegant and more techie ways to do this, but then, that isn't really the point, is it? But if anyone does have some alternatives or improvements to suggest, the comments would be a very good place for them. This method presupposes some familiarity with LaTeX, and some familiarity with html and CSS. If you don't have this familiarity, but are interested in learning, you can pick up the basics from the sites I have linked to. You'll also need to know how to use ftp. A much more serious flaw: this method presupposes that you have access to Adobe Photoshop. (Added 28/03/05: see the comments for non-photoshop alternatives.)

Still not deterred?

1. Download Paul Taylor's prooftree package from CTAN.

2. Get LaTeX to call it by putting the following line in your document's preamble:

```latex
\usepackage{prooftree}
```

3. Use LaTeX to create your proof tree. I put each proof tree in a separate .tex file. Greg's page PnC Example Prooftrees provides a good introduction to the prooftree package, as well as a couple of useful definitions for resizing line-labels. Here is the source for the third proof tree in my previous post:

```latex
\documentclass{article}
\usepackage{prooftree}
\usepackage[mathcal]{euler}
\def\usng#1#2{\using{\textrm{\tiny{$[#1$#2$]$}}}}
\usepackage{latexsym, amsmath}
\begin{document}
$$
\begin{prooftree}
\[\[\[ A \rightarrow B \quad \[ \justifies A \usng{}{1}\]\justifies B \usng{\rightarrow}{E}\] \[\justifies B\rightarrow C \usng{}{2}\] \justifies C \usng{\rightarrow}{E}\] \justifies A\rightarrow C \usng{\rightarrow}{I,1} \] \justifies (B\rightarrow C)\rightarrow(A\rightarrow C) \usng{\rightarrow}{I, 2}
\end{prooftree}
$$
\end{document}
```

And here's what it produces:

4. Get the output file into .pdf format. If you're a TeXShopper, this is the normal kind of output file from LaTeX, but if you're using a LaTeX application that outputs a .dvi file, you'll have to use one of the many .dvi to .pdf conversion procedures, which are described elsewhere. Try here.

5. Open the .pdf file in Photoshop. Photoshop will ask you if you want to "rasterise" the file and for some details about how you want to do this. Don't just use the default resolution, or your images will be very fuzzy and low-definition. My images were produced by setting the rasterising resolution to 300 pixels per inch.

6. Put the kettle on. Rasterising takes *ages* (up to 5 mins!)

7. When it is finished, Photoshop will have your page open in a window. Do not be alarmed if it is covered in grey and white gingham; that will not be a part of the final image.

8. Zoom in on your image so that you can see it properly. Use the crop tool to cut the page down around the proof - you don't want an entire page on your website with only a tiny proof in the middle. (This step can be skipped if you want an entire page on your website with only a tiny proof in the middle.)

9. In the file menu, choose "save for web" and when the save options come up, choose "gif 128" as the format in which to save the picture. There are other acceptable options, but this is one way to get a file that is small (my proofs are all 4k), and in a format that can be used on the web. Small image files are especially important to readers on a modem connection, but they help everyone's page-loading speed and they keep your bandwidth and disk space usage down. This can save you time and money. (This step can be skipped if you don't care about other people, time or money.)

10. Ftp your finished file to your server in BINARY mode. (In ftp in a unix terminal you can change the transfer type to binary by typing "bin" after the ftp prompt.)

11. You might need to set the permissions of the .gif file. 644 seems to work fine...Help!

12. You can add the image to your entry using the html img tag like this:

```html
<img alt="proof of C from A->(B->C), A and B" src="http://your-site.org/filepath/filename.gif" width="300">
```

The src attribute is essential as it gives the path to the image file. In Movable Type I found I had to put the full path to the image in, otherwise the images didn't show up in the archives. (If your directory structure is different I imagine you might be able to use a relative path.) The alt attribute is strongly encouraged by the web standards guys. It means that those who are using text-only browsers, or who are visually impaired, will be able to get a sense of what they're missing. The width attribute is optional and allows you to adjust the size of the image on your page.

13. I wanted the images centered, so I wrapped the img-tag in div tags like this:

```html
<div class="img"> <img src="http://www.logicandlanguage.net/images/entry/blogproof.gif" width="300"> </div>
```

which are governed by the following bit of CSS:

```css
.img {
    text-align: center;
}
```

If you are using Movable Type, you can add this to the "stylesheet" template; otherwise you need to find the right file that ends in .css. Try it out; you can always go back and delete it later.

14. That's it. Feel free to post advice/improvements/alternatives in the comments.

Posted by logican at 1:43 AM | TrackBack

## March 23, 2005

### Normalisation/local reduction (Ch2 of Proof & Counterexample)

In a previous post I discussed A.N. Prior's 'tonk' connective and local reduction, and lots of smart people left comments, or discussed the issue on their own blogs. Greg Restall, in particular, pointed me to chapter two of the proof theory book he is writing. Greg's book project is likely to be particularly interesting to e-philosophers (that's you), because he is posting it online as a Wiki *as he writes it*, and inviting participation and comments.

Anyway, I have now read chapter two, and I thought I would write a little about what I have learned, as a follow-up to the Tonk post, and perhaps as a prelude to some more philosophical posts on 'tonk', the truth predicate and meaning.

1. First of all, I have been reminded of the tree method of writing natural deduction proofs, which I haven't really thought about since I first read Gentzen. Simple tree diagrams are pretty easy to read. Here is one:

The formula on the bottom line is our conclusion; any formula which does not have a horizontal line above it is a premise. The line above a formula tells us how it was derived; when we want to derive something from (say) two formulas, we draw a single horizontal line under them both and write the derived formula under that line. In the simple tree above, C is the conclusion, A→(B→C), A and B are all premises, and B→C is an intermediate step, derived from A→(B→C) and A using the rule of [→E]. Here is a simple proof in which premises get discharged:

In this case we have assumed A at the beginning, then discharged it using [→I] on the second line. We indicate that a premise has been discharged by drawing a line above it and writing the number of the step where it was discharged next to that line (number assignment is arbitrary; it is just a way of matching up discharged premises with the places where they were discharged. The orderly-minded will, no doubt, wish to start at 1 and move up through the natural numbers, and the rest of us should refrain from alarming our readers by following suit.) Finally we use [→I] again to get the conclusion A→(A→A). It's worth noting here that in this case we have 'discharged' a premise we never assumed. Some logics take a dim view of this kind of thing, as we shall see...but first, a less simple example of a proof, for you to look on in awe:
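For anyone who wants to typeset the proof just described, it might be written with Paul Taylor's prooftree package (see the "How to put proof trees in weblog entries" post above). The following is an untested sketch that assumes the \usng line-label macro defined in that post:

```latex
\begin{prooftree}
  \[\[\justifies A \usng{}{1}\]
    \justifies A\rightarrow A \usng{\rightarrow}{I,1}\]
  \justifies A\rightarrow(A\rightarrow A) \usng{\rightarrow}{I,2}
\end{prooftree}
```

Notice that the label 2 on the bottom line matches no assumption above it: that is the vacuous discharge.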

2. Different logics have different policies about what kinds of premise-discharging are allowed. At the most liberal end, we have standard logics (including classical logic.) They are liberal in two degrees; they allow vacuous premise discharges (as in the last step of the second proof above) and they allow more than one premise to be discharged at once, as in the last line of this proof:

If we allow more than one premise to be discharged at once, but do not allow vacuous discharging, then the result is a relevant logic. The second proof above is thus not relevantly acceptable, since it uses vacuous discharge in the last step. If we allow neither vacuous discharge nor duplicate discharge, the result is a linear logic (important to computer scientists, I believe) and if we allow vacuous but not duplicate discharging, the resulting logic is *affine.* Affine logics include, but are not exhausted by, intuitionistic logics.

We are dealing with logics quite generally here, independently of questions about which is the "one true logic," so we want to be able to talk sensibly about all of them. If we are going to talk about logics which do not allow multiple discharges, it is no use thinking of the premises, or premises and assumptions in force at any particular step, as a *set*. These logics care about whether there are two occurrences, or merely one occurrence, of A in the premises and assumptions, but the set {A, A, B, C} is the same set as {A, B, C}. So we think of our premises as forming a *multiset*, which is a bit like a set, except that elements can be members more than once, so that "A, A, B, C" is a different multiset from "A, B, C" (disappointingly, multisets don't come with any cool bracket notation; we just write them out as lists in quotation marks.)
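The set/multiset distinction is easy to see concretely. Here is a tiny illustration of my own (nothing from Greg's book), in Python, with `collections.Counter` standing in for multisets:

```python
from collections import Counter

# As sets, {A, A, B, C} and {A, B, C} are identical: duplicates collapse.
assert {"A", "A", "B", "C"} == {"A", "B", "C"}

# As multisets they differ: the first has two occurrences of A.
assert Counter(["A", "A", "B", "C"]) != Counter(["A", "B", "C"])
```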

In this chapter we are restricted to logics with just two rules: [→I] and [→E]. This is because Greg is attempting to keep his treatment of normalisation simple (he believes that the basic idea *is* simple and that you shouldn't have to be a mathematician to grasp it.) Check out his introduction for more about the goals of the book.

3. Methodology: in a sense, it is easy to demonstrate that there is a proof of some formula or argument; we just exhibit the proof. Without resorting to model theory, it is trickier to show that there is no proof of something. It turns out that one way to do it is to show that there is no *normal proof* from the premises to the conclusion, since i) it is often easier to show that there is no normal proof than it is to show directly that there is no proof at all and ii) it turns out that if there is a proof, there is a normal proof. The climax of the chapter is the proof of that last conditional. I am not going to go through the proofs here - this is just an explanation and summary of the results. There are two main results: the subformula theorem and the normalisation theorem.

5. Normal Proofs: a proof is normal iff the concluding formula A→B of an [→I] step is not at the same time the major premise of an [→E] step.

In the schematic proof on the left below, the formula A→B is both the conclusion of an [→I] step and the major premise of an [→E] step. For now I will call such pairs "non-normal pairs" (Greg calls them critical pairs, but from the discussion with Aaron Stump on the website (scroll down to see it) it seems as if he no longer likes this terminology.) We can take non-normal pairs out by rewriting the proof as in the schematic proof on the right:

When we generalise the definition of "normal proof" to cover the other connectives, it seems that the 'tonk' proof that I was interested in here is bound to count as non-normal:

6. The Subformula Theorem: normal proofs satisfy the Subformula Theorem: if π is a normal proof from premises X to conclusion A, every formula in π is in sf(X,A), where sf(X,A) is the set of subformulas of X and A. Non-normal proofs do not, in general, satisfy the subformula theorem, as this example shows:

Here the formula A→A is a subformula of neither the premise, nor the conclusion. Clearly the inference with 'tonk' would not satisfy a generalised version of the subformula theorem either, since it contains formulas which are not in sf(P, Q), namely 'P tonk Q.'
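The set sf(X,A) is easy to compute mechanically. Here is a toy Python sketch of my own (atoms are strings, conditionals are nested tuples, and the particular premise and conclusion are made up for illustration rather than taken from the example above):

```python
def subformulas(f):
    """A formula is an atom (a string) or an implication ('->', a, b);
    return the set of all its subformulas, including f itself."""
    if isinstance(f, str):
        return {f}
    _, a, b = f
    return {f} | subformulas(a) | subformulas(b)

def sf(premises, conclusion):
    """sf(X, A): the subformulas of the premises X and the conclusion A."""
    out = set()
    for g in list(premises) + [conclusion]:
        out |= subformulas(g)
    return out

# Toy example: premise B, conclusion B -> B.
assert sf(["B"], ("->", "B", "B")) == {"B", ("->", "B", "B")}
# A -> A is a subformula of neither, so it cannot occur in a normal proof.
assert ("->", "A", "A") not in sf(["B"], ("->", "B", "B"))
```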

And, finally

7. The Normalisation Theorem: if there is a proof π from X to A, there is a normal proof π' from X' to A.

- If π is linear, so is π' and X=X'.
- If π is affine, so is π' and X' is a submultiset of X.
- If π is relevant, then so is π', and X' is a modest supermultiset of X, possibly containing more repeats.
- If π is standard, then so is π', and X' is a variant of X found by possibly deleting some elements from X and duplicating others.

X' is a submultiset of X iff whenever a is an n-fold member of X, a is an m-fold member of X' for some m less than or equal to n. (m may be zero.)

e.g. "A,A,B,C" is a submultiset of "A,A,B,B,C,D"

X' is a modest supermultiset of X iff whenever a is an n-fold member of X for some n>0, a is an m-fold member of X' for some m greater than or equal to n, and whenever a is a member of X' to some non-zero degree, it is also a member of X.

e.g. "A,A,A,B,C" is a modest super multiset of "A,B,C"

The normalisation theorem turns out to be easy to prove in the linear and affine cases (where there are no duplicate discharges), and a bit harder to prove in general. The basic idea in the easy case is that you take the non-normal proof, and remove the non-normal pairs one at a time, using the method outlined in 5 above. Proofs have a finite length, and the proof gets shorter each time a pair is removed, so eventually there will be no more pairs to remove. The result is a normal proof.

But if we allow duplicate discharges, as we do in standard and relevant logics, the proof might not get smaller when we remove a non-normal pair, since we may be writing the proof of A many times, in place of every time we assumed A. In these cases, how can we know that the process of normalisation will come to an end?

Complexity: the complexity of a formula is the number of connectives it contains.

(In order to make the following definition readable on the web, sequences are written within parentheses (curved brackets.) This is not the correct notation and I am looking for a better alternative.)

Non-Normality: the measure of non-normality of a proof is a sequence (c1, c2,..., cn) of numbers such that ci is the number of non-normal pairs of formulas of complexity i. The sequence for a proof stops at the last non-zero value. Sequences are ordered with their last number as most significant, that is, (c1,...,cn) > (d1,...,dm) iff n>m; or, if n=m, when cn>dn; or, if cn=dn, when (c1,...,cn-1) > (d1,...,dn-1).
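This ordering is just a last-element-first lexicographic comparison, with longer sequences dominating. A Python sketch of my own reconstruction of the definition:

```python
def more_non_normal(c, d):
    """True iff measure c > measure d. Sequences stop at their last
    non-zero entry, and the last (highest-complexity) entry is most
    significant, so length dominates and ties compare from the end."""
    if len(c) != len(d):
        return len(c) > len(d)
    return c[::-1] > d[::-1]  # tuple comparison is lexicographic

# One non-normal pair of complexity 3 outweighs many of lower complexity:
assert more_non_normal((0, 0, 1), (5, 9))
# Equal length: compare from the most significant (last) entry down.
assert more_non_normal((1, 2, 3), (9, 1, 3))
```

Since there is no infinite strictly descending chain of such sequences, repeatedly lowering the measure must eventually terminate, which is what the lemma below needs.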

Lemma - Non-Normality Reduction: given a proof π with some non-normal pair, π reduces to π' with a lower measure of non-normality.

So even though you might not have fewer non-normal pairs, when you take one of them out, you will have a proof with a lower non-normality measure. Keep going, and eventually the process will return a normal proof. Thus every proof can be transformed into a normal proof.

Caveat: as the detail of the normalisation theorem makes clear, except in the linear case, the normal proof will not necessarily proceed from the same multiset of premises as the original.

Posted by logican at 3:22 PM | TrackBack

### Gödel in the New Yorker

Jim Holt tells the story of Einstein and Gödel to readers of the New Yorker.

Posted by logican at 2:22 AM | TrackBack

## March 22, 2005

### Logic Happens

Logic happens, but *where* does it happen? Australia, apparently, and if your argument is good enough, it could happen to you too...

Posted by logican at 6:21 AM | TrackBack

## March 21, 2005

### Inference vs Implication

When I was a graduate student at Princeton (many days ago), we used to joke that Gilbert Harman had only three kinds of question for visiting speakers:

- Aren't you ignoring < insert recent result in psychology >?
- Aren't you assuming that there is an analytic/synthetic distinction?
- So you say, < insert one of the speaker's claims >, but isn't that just conflating inference and implication?

These questions had a tendency to induce harmania in the subject, and the inference/implication point (which is what this post is really about) sometimes seems to me to have the following odd property: hardly anyone gets it if you explain it to them in conversation (I didn't get it that way), but everyone understands it if they read Chapter 1 of *Change in View*.

So naturally I want to try it out here. The following claims are ubiquitous and false:

- Logic is the study of the principles of reasoning.
- Logic tells you what you should infer from what you already believe.

Each overstates the responsibilities of logic, which is the study of what follows from what - implication relations between interpreted sentences; one can know the implication relations between sentences without knowing how to update one's beliefs.

Suppose, for example, that S believes the content of the sentences A and B, and comes to realise that they logically imply C. Does it follow that she should believe the content of C? No. Here are two counterexamples:

1. Suppose C is a contradiction. Then she should not accept it. What should she do instead? Perhaps give up belief in one of the premises, but which one? Logic does not answer the question - as we know from prolonged study of paradoxes - because logic only speaks of implication relations, not about belief revision.

2. Suppose she already believes not-C. Then she might make her beliefs consistent by giving up one of the premises, or by giving up not-C. Or she might suspend belief in all of the propositions and resolve to investigate the matter further at a later date.

Hence these questions about inference and belief revision - about what she should believe given i) what she already believes and ii) facts about implication - go beyond what logic will decide. That's not to say that logic is never relevant to reasoning or belief revision, but it isn't the science *of* reasoning and belief revision. It's the science of implication relations.

Convinced? Gil has a short and very clear discussion of this, and the pernicious consequences of ignoring it, in the second section of his new paper (co-authored with Sanjeev Kulkarni) for the Rutgers Epistemology Conference.

Posted by logican at 9:05 PM | TrackBack

## March 20, 2005

### Alternative APA

The American Philosophical Association holds its Pacific Division Meeting in San Francisco from 23rd to 27th March, 2005, and at the moment they still plan to hold it at a hotel under boycott from the union UNITE HERE. Sally Haslanger has posted the letter from UNITE HERE asking philosophers to boycott the hotel, as well as links to information about the alternative APA (also hosted on the APA website) and a very sensible letter about it all from Lawrence Blum. Brian Weatherson hosts a serious discussion about whether the benefits that young philosophers get from the Pacific APA should outweigh the needs of the hotel workers to some extent. Brian thinks 'yes.' I think 'no.'

I am a young philosopher. Among other things that involves having an excellent education, which I can only expect to get better, and some official recognition of that fact (my PhD.) I have research skills, high-speed access to the internet on my beloved 12-inch PowerBook, and, these days, I have a weblog. I am fortunate in all these things and they give me a kind of power which most hotel workers don't have: they allow me a voice and a *respectful* audience, and with that comes some influence over the world around me.

A year ago or so I read Barbara Ehrenreich's *Nickel and Dimed*, in which she takes minimum- or low-wage jobs (waitress, maid, cleaner, etc.) for a year and reports on her experiences. One of the most obvious features of those experiences - familiar from my own undergraduate experiences with such jobs - is that you get no respect; your employer doesn't listen to you because she doesn't care what you think; you may be expected to empty out your pockets and submit to searches at any time; your schedule is rearranged with little respect for your convenience; and the contract between employee and employer is treated like a beneficent gift from the employer, which she may retract at a moment's notice. The employee is treated as if they were almost powerless. And they kind of are - on their own. The thing about *organisation*, of which unions are a form, is that it lets a lot of people with very little power work together. And together they have more power and stand a better chance of getting all the benefits of respect: being listened to, being treated carefully, not being taken for granted, not being threatened.

Of course, power is just power, and both employees and employers can do good or bad things with it. But that's true of the vote too, and we don't need to know how someone is going to use their vote in order to decide whether or not they should have one. For the most part people in low-paying jobs are too vulnerable and too easy to exploit. They should have some more power over their lives. If a group of them have got together to try to take back some control and respect, then yes, they might inconvenience us, there might be interests of ours that are harmed in the process and it might be that young philosophers have their interests harmed more than most. But not so much that it would justify stamping on a serious attempt to improve the lives of low-wage workers.

I've opened the comments on this, because, well, it's that kind of issue. Comments are welcome from ANYONE, no matter what they think. The Clinical Attitude is encouraged. Respect for other commenters is required.

Added 21/03/2005: Here's the link to Brian's post giving the APA's reasons for staying at the boycotted hotel.

Posted by logican at 12:07 PM | TrackBack

### America's lexicographical sweetheart

Saturday's New York Times has an article on hip, young dictionary writers.

Quick quiz: which of the following are cool?

- Playing the trumpet
- Wearing pink glasses
- Wearing Nike's Converse sneakers
- Having a phonetic chart tattooed on your back
- Serving drinks in blue glasses
- Being young
- Wearing bespoke suits
- Having written a history of the word 'fuck'

Part 2: Which of them make you a good lexicographer?

(Of course, the answers to these questions might vary if some candidates can read mirror writing, or have *really* flexible necks.)

The New York Times has an obnoxious registration policy, so feel free to sign in as "hateyoumore" ("hateyou" was already taken.) Your password is "hateyoumore" too, and if anyone asks, you're a clergyman who makes over US$150,000 per annum.

Posted by logican at 1:04 AM | TrackBack

## March 19, 2005

### Hawaiian's Phoneme Inventory (yawn?)

Tenser, said the Tensor has a linguistics quiz designed to highlight the small core of languages that get used as standard examples in linguistics. The quiz contains questions like:

Name a head-initial language

and

Name a language with basic word order SOV

The author predicts that in many cases, the answers of those educated in linguistics will match his own (in this case, English and Japanese respectively.)

Languagehat is mildly critical of the situation, supposing it shows that linguistics education is unimaginative. And maybe Languagehat is right. But whether or not it is a bad thing to have a core of examples like this depends on what you tend to do with them.

It would be bad if these examples were all that was used to generate and support general theories. New and non-standard cases make good tests of generalities, so we should get off the beaten track and see if a theory that works at home also works in less familiar territory.

But there are some tasks that a well-known core of examples will perform *perfectly.* Want to show that there *are* languages with noun classifiers? Then Mandarin Chinese is all you will ever need, and it really doesn't matter if everyone studying linguistics in the last 30 years has learned the same example. Existential claims, unlike general theories, can be well supported by a single example.

As can negative generalisations (e.g. "it is not the case that all..." - negations of general theories), since they are really just existential claims in disguise. Certain examples may be standard because they are clear counterexamples to an otherwise plausible or popular theory. Think that dialects are always mutually intelligible? Uh-uh: Chinese shows otherwise (at least according to the quiz.) There's no need to leave the beaten path here.

So, while it might be right that the quiz shows that linguistics lacks imagination, it does not necessarily show a flaw in its methodology.

We might also note that other disciplines have a core of standard examples as well. Off the top of my head, some examples would probably include: physics - Young's double slit experiment in optics, the Michelson-Morley experiment, Rutherford's gold foil experiment. Philosophy - Putnam's robot cats from Mars, Jackson's "Mary"-example, Kripke's Gödel case. And maybe psychology: the Milgram experiments, split-brain patients, the Müller-Lyer illusion.

Posted by logican at 1:47 AM | TrackBack

## March 17, 2005

### House-Warming in the Blogosphere

I'm back from California and it has been interesting to track the ripples from the opening of this weblog. In my third post I wrote that a Google search for "tonk" and "local reduction" returned only a multiple-choice exam for the Indian civil services, while one for "local reduction" and "deduction" returned sheaves of computer science lecture notes. CS guy pointed out yesterday that the same searches now return my post and Brian Weatherson's in first place.

Posted by logican at 4:18 AM | TrackBack

### Talk Postponed

My talk at the University of Alberta - originally scheduled for today - has been postponed until the 21st of April. (If only such reprieves were to be had for the wishing...)

Posted by logican at 4:10 AM | TrackBack

## March 7, 2005

### Quietness

I will be away from my computer until the 16th of March, so this site will be quiet until then.

Posted by logican at 3:44 AM | TrackBack

## March 6, 2005

### 'Tonk' and Local Reduction

Recently, I was talking to a computer scientist about the idea that the introduction and elimination rules for a connective give the connective's meaning, and I mentioned Prior's 'tonk' as an example of a connective that has introduction and elimination rules, though some argue that it fails to be meaningful. CS guy had never heard of 'tonk,' so I explained that 'tonk' is a binary connective with an introduction rule that allows you to get 'P tonk Q' from 'P' (which is like the introduction rule for '∨') and an elimination rule that allows you to get 'Q' from 'P tonk Q' (which is like the elimination rule for '∧.')

'Tonk' is a very exciting and powerful connective; for example, it allows me to demonstrate that the conclusion follows from the premises in the following argument:

There is life on Earth.

---------------------------

There is life after death.

Here is the proof:

Let P be 'There is life on Earth' and Q be 'There is life after death.'

P (our premise)

P tonk Q (by tonk-intro)

Q (by tonk-elim)

There, one of the fundamental questions solved in a heartbeat. Except that we can also give a proof of the following:

There is life on earth.

-----------------------------

There is no life after death.

So now we are back where we started, lost and confused and existentially angsty, and perplexed about 'tonk' to boot. 'Tonk' is too powerful: any natural deduction proof system in which you can prove any theorem at all - say, 'P → P' - will be trivialised by the addition of the tonk rules (by which I mean that it will be possible to prove P for arbitrary P.)
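For the programmatically minded, the damage can be simulated in a few lines. This is a toy sketch of my own devising - the string representation and function names are made up, not any standard proof-assistant format - which mechanises tonk-intro and tonk-elim and uses them to "derive" an arbitrary conclusion from an arbitrary premise:

```python
# A toy model of Prior's tonk rules. Formulas are plain strings;
# this representation is invented for illustration only.

def tonk_intro(p, q):
    """From P, infer 'P tonk Q' for any Q at all (like ∨-intro)."""
    return f"({p} tonk {q})"

def tonk_elim(formula):
    """From 'P tonk Q', infer Q (like the right half of ∧-elim)."""
    assert formula.startswith("(") and " tonk " in formula
    return formula[1:-1].split(" tonk ", 1)[1]

def derive_anything(premise, conclusion):
    """Given any premise, 'prove' any conclusion via a tonk detour."""
    step = tonk_intro(premise, conclusion)   # P tonk Q   (tonk-intro)
    return tonk_elim(step)                   # Q          (tonk-elim)

print(derive_anything("There is life on Earth", "There is life after death"))
# → There is life after death
```

The point of the sketch is just that `derive_anything` never inspects its arguments: any premise yields any conclusion, which is exactly the triviality complained of above.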

CS guy understood this all very quickly, of course. I tried to explain why this might seem to need explanation; if the rules for 'tonk' merely give its meaning, how can they be wrong? And if they are not wrong, how can it be wrong to add them to our proof system? But of course it must be wrong to add them to our proof system, because they make it unsound. We seem to need a way to distinguish a connective like 'tonk' from one like '∨' and explain what is wrong with 'tonk,' yet not wrong with the other connectives. Then I mentioned that I thought a standard response to the 'tonk' example was to say that 'tonk,' unlike the good connectives, failed to express a truth-function.

CS guy said brightly: isn't it just that 'tonk' doesn't have the local reduction property, whereas the other connectives do?

And I said: local reduct - huh?

It turns out that the usual cast of connectives - ¬, ∨, ∧, → and ↔ - have the following property: if anywhere in a proof there is an application of the introduction rule for a connective, followed immediately by an application of the elimination rule for that same connective *which takes the result of the introduction rule as its major premise*, then the introduction and elimination pair can be eliminated, leading to a more direct proof of the conclusion.

I should probably track down a presentation of the result, but for now I will give a couple of examples, which should convey the general idea of what this local reduction property amounts to. Suppose we have a proof that contains an instance of ∧-intro, followed immediately by ∧-elim on the result, like this:

P

Q

P ∧ Q (∧-intro)

P (∧-elim)

Then clearly the middle two steps are superfluous. We can obtain a more direct proof of P from the first premise. Here's a slightly less obvious case. Suppose we have a proof that contains an application of ∨-intro, followed by an application of ∨-elim, where the result of the ∨-intro is the major premise in the ∨-elim, e.g.:

¬Q

P

P∨Q (∨-intro)

(new subproof)

P

P

(end of subproof)

(new subproof)

Q

⊥ (from ¬Q and Q)

P (from ⊥)

(end of subproof)

P (∨-elim)

Again, the conclusion can be reached in a more direct way.
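The ∧ case, at least, can even be carried out mechanically. Here is a toy sketch - my own representation, with proof steps as (formula, rule, premise-indices) triples, not any standard format - that detects an ∧-intro whose result is immediately consumed by an ∧-elim, and replaces the detour with a direct citation of the already-proved conjunct:

```python
# Toy local reduction for ∧. Each proof step is a triple
# (formula, rule, premises), where premises are indices of earlier
# steps. This representation is invented for illustration only.

def reduce_and_detours(proof):
    """Rewrite each ∧-elim whose major premise is an ∧-intro so that it
    cites the original conjunct directly. (The now-unused ∧-intro step
    would be garbage-collected by a fuller normaliser.)"""
    reduced = []
    for formula, rule, premises in proof:
        if rule == "∧-elim":
            src_formula, src_rule, src_premises = proof[premises[0]]
            if src_rule == "∧-intro":
                left, right = src_premises
                # The conjunct was already proved earlier; cite it directly.
                keep = left if proof[left][0] == formula else right
                reduced.append((formula, "repeat", [keep]))
                continue
        reduced.append((formula, rule, premises))
    return reduced

proof = [
    ("P", "premise", []),
    ("Q", "premise", []),
    ("P ∧ Q", "∧-intro", [0, 1]),
    ("P", "∧-elim", [2]),
]
for step in reduce_and_detours(proof):
    print(step)
```

On the example from above, the final step comes out as `("P", "repeat", [0])`: the conclusion now leans on the first premise directly, just as the prose says it should.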

But it seems clear that 'tonk' does not have this local reduction property. Consider my argument

P (our premise)

P tonk Q (by tonk-intro)

Q (by tonk-elim)

which I used to show that 'there is life after death' followed from 'there is life on earth.' This just is an application of tonk-intro followed by tonk-elim. But, intuitively, without this step, the proof would not have been possible at all.

My impression is that discussions of 'tonk' have proceeded without awareness of the local reduction property, an impression that is reinforced by the fact that a google search for "local reduction" and "tonk" returns just a single hit - a multiple-choice geography exam for the Indian Civil Services. A google search for "local reduction" and "deduction", however, returns a host of computer science sites, the first of which reveals that the local reduction property has a twin, the local *expansion* property: connectives with this property can be reintroduced by their introduction rule after an application of the elimination rule. Connectives with the reduction property preserve the soundness of a system (tonk, of course, lacks it and destroys soundness,) while connectives with the expansion property preserve completeness.
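To make the twin property concrete for ∧: local expansion says a derivation of a conjunction can always be harmlessly "expanded" by an eliminate-then-reintroduce detour. A minimal sketch, again with a made-up string representation of my own:

```python
# Toy local expansion for ∧: given a derived conjunction, produce the
# three-step detour that eliminates both conjuncts and then rebuilds
# the conjunction. Representation invented for illustration only.

def expand_conjunction(conjunction):
    """Return the expansion of a step proving 'P ∧ Q':
    eliminate each conjunct, then reintroduce the conjunction."""
    left, right = conjunction.split(" ∧ ", 1)
    return [
        (left, "∧-elim"),
        (right, "∧-elim"),
        (conjunction, "∧-intro"),
    ]

print(expand_conjunction("P ∧ Q"))
# → [('P', '∧-elim'), ('Q', '∧-elim'), ('P ∧ Q', '∧-intro')]
```

Tonk, by contrast, has no such well-behaved detours in either direction: as shown above, its intro-elim pair is not removable, since removing it removes the only route to the conclusion.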

Posted by logican at 11:10 PM | TrackBack

### On Comments

I feel ambivalent about comments. Should I allow them? Encourage them? I might have turned to my stated influences for guidance, but of course, consequently.org has comments, and languagelog does not. So they are no help.

Movable Type, the publishing platform behind this weblog, allows for three approaches to comments. One can allow the *hoi polloi* to trample all over one's carefully crafted weblog, misunderstanding one's posts, adding their comments about their dogs and getting into private and badly edited conversations. This is, of course, the path of the generous spirit (let them have their misunderstandings, and their dogs and their conversations - this weblog is big enough for it all.) But the generous spirit and the egomaniac seem to be going the same way (leave comments for me! leave comments for me!!!) And turning on comments would involve relinquishing control of the content of my weblog, which I am loath to do, especially when it is still so small.

The option at the other extreme is to leave the comments off. But one of the attractions of a weblog is that it is a good way to get decent comments on ideas. What if Greg Restall already knows the answer to the problem I have been puzzling over in an entry? Do I not want to make it as easy as possible for him to put me straight? I might go years not knowing the answer to a question I care about, just because I was too much of a control freak to turn on the comments.

Movable Type also offers a third way: comments are submitted as usual but have to be approved by me before going up on the site. This might look like a good compromise, but of course, compromise two bad options and you get an unmitigatedly bad option. Think about it: in practice, which comments could I quietly kill? The spam, the threats, the abuse. (Hey, it could get lively over here!) But suppose someone submits a comment that is on topic, but which I think is stupid, or confused, or just too long. Could I, in all conscience, kill it? No. Definitely not. If I am going to have the appearance of welcoming discussion then I have to take it all. So the third way only offers an illusion of compromise.

Here is a tentative plan. For posts like the present one, I will leave the comments off. If you want to tell me something, feel free to email me. For posts where I think discussion could be interesting and useful, I will..., well, ..., I will think about it some more.

Posted by logican at 12:39 PM | TrackBack

### Funakoshi and Language Change

Philosophers of language often think of language in very abstract, idealised ways, and so it can be useful for us to have a stock of stories about language "in the wild" to keep our feet on the ground. (Gareth Evans' Madagascar story seems like a good example of this; it makes us much more careful about how we formulate the causal theory of reference.) So here is a real-world story about language change:

In his autobiography, Gichin Funakoshi (1868-1957), the founder of shotokan karate, reports on a controversy over the interpretation of the expression 'karate.' First he tells us how the controversy arose:

The Japanese language is not an easy one to master, nor is it always quite so explicit as it might be: different characters may have exactly the same pronunciations, depending upon the use. The expression 'karate' is an excellent example. 'Te' is easy enough, it means 'hand(s).' But there are two quite different characters that are both pronounced *kara*; one means 'empty,' and the other is the Chinese character referring to the Tang dynasty and may be translated 'Chinese.' [...] So should our martial art be written with the characters that mean 'empty hand(s)' or with those that mean 'Chinese hand(s)'?

It turns out that a case can be made for both these interpretations. Funakoshi relates that before he moved from Okinawa to Tokyo in the 1920s:

it was customary to use the character for 'Chinese' rather than that for 'empty' to write 'karate'

But anyone who has seen *The Karate Kid* might recall that Daniel's sensei, Mr Miyagi, explains the meaning of 'karate' as *empty* hand, and indeed, when I was a kid, *everyone* (that is, everyone under 4 feet tall,) knew that 'karate' meant 'empty hand(s).' So how did we get from the predominant 'Chinese' interpretation on Okinawa, to 'empty' in popular Western culture?

Answer: Funakoshi found 'empty' more appropriate (Chinese boxing is very different from karate; karate is the art of fighting without weapons, that is, with empty hands; and he appreciated the connection with the Buddhist doctrine of emptiness), and he simply *decided* to start writing it that way.

My suggestion initially elicited violent outbursts of criticism in both Tokyo and Okinawa, but I have confidence in the change and have adhered to it over the years. Since then, it has in fact gained such wide acceptance that the word 'karate' would look strange to all of us now if it were written with the Chinese 'kara' character.

Funakoshi then goes on to standardise the names of shotokan kata, but finishes the section modestly:

I have no doubt whatsoever that in the future, as times change, again and then again, the kata will be given new names. And that, indeed, is as it should be.

I think part of the reason that Funakoshi was successful in establishing the 'empty hand(s)' interpretation was the social deference offered him by the small linguistic community comprising karate-ka worldwide. Hollywood probably deferred to them, and all the under-four-feet-tall humans of my acquaintance deferred to Hollywood.

If everyone had ignored Funakoshi, the language would not have changed. So this seems to be a real-world example of deference as a mechanism of language change.

(I wonder what would have happened if Hollywood had made up an alternative interpretation of 'karate' from scratch?)

Posted by logican at 3:32 AM | TrackBack

### Beginnings

Despite my strong enthusiasm for my new weblog, I haven't a clue where to begin, so I shall opt for the old cop-out of writing about writing (though I promise not to write about having nothing to write - it isn't true anyway.)

In my dreams, this weblog is a cross between languagelog and consequently.org, only even more beautiful than consequently.org, and even wittier and more eclectic than languagelog. In my dreams. In practice, as you can see, logicandlanguage.net is still growing into its beauty, and I don't have the breadth of knowledge of a single languagelogger (language lumberjack?), never mind all sixteen of them. But I hope you are wondering what this weblog will be like, and sometimes aspirations can be a good guide to the future.

The explicit plan - open to revision as we progress - is to post a couple of hundred words a day on a topic related to logic, the philosophy of logic, or the philosophy of language, very broadly construed. Jokes, links to news stories and anecdotes are all on the cards, but so is formal logic, semantics, and maybe even a little philosophy of math(s).