November 13, 2012

Logic Teaching

There's a post about teaching logic (and related things) over on the Leiter Blog right now. I think I might try to get hold of a copy of the Papineau book just to get some good teaching ideas.

Posted by logican at 9:23 AM | TrackBack

October 22, 2008

Logic Job At Calgary

The Department of Philosophy at the University of Calgary is currently advertising for a tenure-track position (Assistant Professor). "The Department is seeking candidates who are able to teach a range of courses in logic, from elementary formal logic to the advanced levels, including the meta-theory of first-order logic, undecidability, incompleteness, and non-classical logics. The area of specialization for this position is Logic or a related field of study." Deadline for applications is November 21.

For more information, see

Posted by logican at 12:33 PM | TrackBack

April 3, 2008

Blackboard Tiles

This stuff is great. I've been teaching a slightly-harder-than-usual logic course this semester and I really wanted a blackboard for my office, for practicing proofs on.

One of those things that I think good logic students quickly realise is that it's one thing to be able to follow a proof in class, and quite another to be able to reproduce it yourself in homework or on a test. Well, one of the things that I've learned from teaching logic is that it is one thing to be able to scribble a proof out on a notepad, and another to be able to present it clearly on a blackboard during a lecture.

Why? Well, it has something to do with the fact that one's notepad is uebersichtlich (surveyable at a glance) - scrawling out some complicated instance of an axiom isn't that hard if the axiom is at the top of your page, but it can be a bit harder when that axiom is 2 blackboards back, or on the other side of the room. (My logic classroom has 6 huge boards that scroll past each other - I rather like that, but it can make it easy to lose the first part of a proof.) So I think that for me to write a proof on the board requires that I know more of the proof off by heart than when I'm just writing it on paper. Second, of course, there's just more pressure when 30, or 60, eyes are on you, all waiting to be reminded what the induction hypothesis 2 boards ago actually was. And third, when I'm putting a proof on the board I'm often talking at the same time. And as teachers everywhere know, talking goes faster than writing, so you're basically running two trains of thought at once anyway.

So I'd been yearning for a blackboard in my office, and then I found this stuff. It consists of flexible blackboard tiles that stick to your wall. They're removable and re-positionable (they come off my white-painted wall easily, without leaving a mark, and stick right back on) and, surprisingly, it's really easy to write on them with chalk and clean them off. (I imagine if your wall is a different colour from your chalk you'll end up with a chalk-coloured "halo" around the board, though.) They're a bit smaller than they look in the photo - each tile is about the size of a US letter sheet of paper - and I ended up buying 2 packs of 4. Also, I think the tiles are a little prone to getting scratched by the chalk - I can imagine having to buy some more after a couple of years or so. But they look great on my wall and they do the job (every Tuesday and Thursday morning before my logic lecture...)

Posted by logican at 3:06 PM | TrackBack

February 7, 2008

Initiatives in Logic

It looks like Jan Srzednicki (whom Melbourne logicians all refer to very familiarly as "Shredo") has a new book out. It's a little on the pricey side, even for a logic book, but it looks very interesting.

Posted by logican at 11:16 AM | TrackBack

March 7, 2007

Goedel's Theorem in the Wild

From Pynchon's Gravity's Rainbow:

''Yeah well," as film critic Mitchell Prettyplace puts it in his definitive 18-volume study of King Kong, ''you know folks, he did love her." Proceeding from this thesis, it appears that Prettyplace has left nothing out, every shot including out-takes raked through for every last bit of symbolism, exhaustive biographies of everyone connected with the film, extras, grips, lab people... even interviews with King Kong Kultists, who to be eligible for membership must have seen the movies at least 100 times and be prepared to pass an 8-hour entrance exam...And yet, and yet: there is Murphy's Law to consider, that brash Irish proletarian restatement of Goedel's Theorem - when everything has been taken care of, when nothing can go wrong, or even surprise us...something will. So the permutations 'n' combinations of Pudding's Things That Can Happen in European Politics for 1931, the year of Goedel's Theorem, don't give Hitler an outside chance. So, when the laws of heredity are laid down, mutants will be born... (it's page 275 in the Penguin edition)

My favourite part is the idea of Pudding writing a book called Things That Can Happen in European Politics.

Posted by logican at 9:53 PM | TrackBack

February 27, 2007

JC's Column

I spent some time in the departure lounge of Calgary airport on Friday, with Agustin "the Mexican Multiplier" Rayo and JC Beall, and JC mentioned how annoying he found it that some philosophers used the expressions "philosophical logic" and "philosophy of logic" interchangeably. In fact, he thought he might write something up about it and try to get people to take notice. Not being one to stomp on a worthy cause, I asked him whether he'd let me post such a thing to a blog. He agreed, and so I give you JC's Column (an occasional series?) Reform or perish...


Terminological Theme: Philosophical Logic, Philosophy of Logic, and Formal Philosophy.

There's reason to think that confusion exists over the terminology of "philosophical logic" and "philosophy of logic". It would do the profession -- and, perhaps, aspiring graduate students -- well to have uniform terminology. While terminological differences certainly exist across the English-speaking countries (e.g., in parts of the UK, "philosophical logic" is often synonymous with "philosophy of logic", though not so in Oz), here is a fairly standard -- though admittedly (perhaps perforce) vague -- classification, one that, if broadly adopted, would at least diminish some of the confusion.

A. Formal Philosophy: formal (mathematical) methods used in the service of philosophy.

(This comprises a lot, including philosophical logic, some areas of mathematical logic, decision theory, what Branden calls "formal epistemology", some areas of foundations of mathematics, some incarnations of philosophy of logic, some incarnations of philosophy of language, and much more. Similarly, some work in metaphysics -- particularly, formal ontology, formal mereology, etc. -- would certainly fall under this banner. So, this category is perhaps the broadest category, but it's worth including here. What is crucial is that formal, mathematical methods -- as opposed to just using symbols as abbreviations, etc. (!) -- are essential.)

B. Philosophical Logic: formal logic (usually, applied maths) in the service of philosophy; in particular, a formal account of *consequence* for some philosophically interesting fragment of discourse.

[If we take Logic to be concerned with *consequence*, then philosophical logic aims to specify -- in a formal, precise way -- the consequence relation over some philosophically significant fragment of our language. (Usually, this is done by constructing a formal "model language", and proposing that the logic of the target "real language" is relevantly like *that*.) Usually, philosophical logic overlaps a lot with formal semantics, but may often be motivated more by philosophical concerns than by linguistic data. Work on formal truth theories -- i.e., specifying the logic of truth -- is a familiar example of work in philosophical logic, as are the familiar modal and many-valued accounts of various expressions, and similarly concerns about 'absolute generality' and the *consequence* relation governing such quantification, and much, much else. What is essential, as above, is a specification of a given *consequence* relation for the target, philosophically interesting phenomenon. Whether the consequence relation is specified "semantically", via models, or proof-theoretically is not critical -- although the former might often prove to be heuristically better in philosophy.]
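(An aside from me, not JC: one way to make the "many-valued accounts" he mentions concrete is strong Kleene three-valued logic, where sentences can take a third, indeterminate value, and validity is taking the designated value on every assignment. Here's a toy sketch in Python - the encoding of the three values as 0, 0.5 and 1 is just my own convenience:)

```python
from itertools import product

# Strong Kleene (K3) connectives. Values: 0 = false, 0.5 = indeterminate,
# 1 = true. Only 1 is "designated", i.e. counts as really true.
def k3_not(a):
    return 1 - a

def k3_or(a, b):
    return max(a, b)

def k3_and(a, b):
    return min(a, b)

def valid(formula, atoms):
    """K3-valid iff the formula takes value 1 on every assignment."""
    return all(formula(dict(zip(atoms, vals))) == 1
               for vals in product([0, 0.5, 1], repeat=len(atoms)))

# Excluded middle, p v ~p, is classically valid but fails in K3:
lem = lambda v: k3_or(v['p'], k3_not(v['p']))
print(valid(lem, ['p']))  # False - it only gets value 0.5 when p does
```

Swap in different tables (Lukasiewicz's conditional, say) and you get a different consequence relation over the same language - which is exactly the sort of choice a philosophical logician, in JC's sense, argues about.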

C. Philosophy of Logic: philosophy motivated by Logic; philosophical issues arising out of a given, specified logic (or family of logics).

[While competence in (formal) logic is often a prerequisite of good philosophy of logic, no formal logic or, for that matter, formal methods need be involved in doing philosophy of logic. Of course, philosophy of logic often overlaps with philosophy of language -- as with many areas of philosophy. The point is that philosophy of logic, while its target *may* be mathematical or formal, needn't be an instance of either philosophical logic -- which essentially involves formal methods -- or, more broadly, formal methods. A lot of work on "nature of truth" might be classified as philosophy of logic (though much of it probably isn't motivated by logic, and so shouldn't be so classified), and similarly for "nature of worlds" etc. Whether the classification is appropriate depends, in part, on the given project -- e.g., whether, as with Quine and Lewis, one is directly examining the commitments of a particular logical theory, as opposed to merely reflecting on "intuitions" concerning notions that are often thought to be logically significant. The point, again, is just that philosophy of logic is a distinct enterprise from philosophical logic, each requiring very different areas of competence, and each targeted at different aims.]

It would be useful if the profession, in general, but especially *practitioners* adopted terminology along the above lines. Of course, there's still room for confusion, and the foregoing hardly cuts precise joints. It might be useful to discuss refinements to the above terminological constraints.

One more -- just for those who might be wondering:

D. Mathematical Logic: formal logic in the service of (usually classical!) mathematics, as well as various subfields of mathematics. (E.g., standard limitative theorems and classical metatheory are mathematical logic, as is reverse mathematics, many aspects of category theory, many aspects of set theory, areas of abstract algebra, areas of recursion theory, and so on. Mathematicians need have no interest in philosophy to engage in such areas, in contrast with the philosophical logician who is driven to use "mathematical methods" in an effort to clarify the consequence relation of some philosophically interesting "discourse". There's more to be said here, but this is chiefly a post about A, B, and C.)

** One note: it may well be that anyone talented in B is interested in C, but it hardly follows that one who is talented in B is talented in C. Similarly, one who is talented in C may well have little talent or interest in B. My hunch is that, on the whole, those who do B (or do it well) are usually talented in C. It's unclear whether those with a talent in D are naturals for B or C -- or A, for that matter -- but one can think of excellent philosophers who also engaged directly in D. (The obvious such folks were also good at A, B, and C, as well as D. Russell comes to mind, as does Kripke, but there are others.)

Posted by logican at 3:16 PM | TrackBack

February 20, 2007

Logicians taking a break

Low-resolution (though not necessarily fuzzy) logicians at Banff: Kenny Easwaran, Richard Zach, Audrey Yap

Posted by logican at 3:38 PM | TrackBack

February 19, 2007

the lark of a definite, precisely formulated formal system

I'm writing this post during the lunchbreak of the first day of Richard Zach's Mathematical Methods in Philosophy conference at Banff International Research Station for Mathematical Innovation and Discovery. It promises to be an awesome conference - Kenny is here as well, so maybe he'll be posting about it too. This morning we kicked off with Branden Fitelson, who read out Bob Meyer's manifesto:

Do not be deceived, Establishment pigs (this means you too, Establishment dogs). The subservience of past generations of logicians does not mean that we shall bear forever our treatment as animals (you barnyard fowl). We are human beings (you swine). You are living in a day when logicians will not any longer endure your taunts, your slurs, your insults (you filthy vermin). In the name of A. N. Whitehead and B. Russell we gather; in the spirit of R. Carnap and A. Tarski, we march; by the word of W. V. O. Quine, we shall prevail. Beware you snakes of the Philosophical Power Structure, which you have created and which you maintain to put down the logician; you have caged the eagle of reason, the dove of wisdom, and the lark of a definite, precisely formulated formal system, with exact formation rules, a recursive set of axioms, and clear and cogent rules of inference, and you have made them your pigeons. Oh, you filterable viruses, we will shake you off and fly once more.

(I've just realised - in searching for the text - that Greg Restall is blogging about the conference too. I guess reporting on this one is kind of overdetermined - look out for further posts from Richard Zach too. Greg has his laptop in the sessions, so that might be the place to go for the most up-to-the-minute reporting! I'm pretty amazed that Google had him linked already though - those little Googlebots must be much faster than they used to be.)

Branden also announced that the Stanford Encyclopedia of Philosophy is widening its "Inductive Logic and Decision Theory" area to "Formal Epistemology", with Branden Fitelson and Al Hajek joining Brian Skyrms and Jim Joyce as editors.

And finally, he also said that Studia Logica is broadening its scope to include "Formal Philosophy", and there are several new editors with special issues coming up.

According to Branden, the scope of "Formal Epistemology" is everything in formal philosophy that isn't metaphysics. Except that it also includes the foundations of probability. That doesn't leave a lot out, so if you have a good formal paper and you were wondering whether it would count, it probably does.

OK, I'm off to walk through snowy pines before coming back to hot chocolate and JC Beall and Michael Glanzberg talking about paradoxes. (Did I mention that the programme is amazing?) Wish you were here...

Posted by logican at 2:22 PM | TrackBack

January 26, 2007


Well, the first 2 weeks of teaching from Fitting and Mendelsohn's First-Order Modal Logic have gone well (after some havering over which book I should be using). I'm loving teaching the class and the students seem to be coming to grips with their first problem set (due today!). So, so far the book seems like a good choice.

A couple of minor thoughts on it: it seems to me that after the introduction of semantics for K, T, D, K4, B, S4 and S5 in the first half of the first chapter, the discussion of tense logic from pages 24-27 represents a missed opportunity to draw out the parallels. The section discusses four different ways to understand tensed sentences, two of which involve treating tensed sentences as equivalent to first order classical formulas in which one simply quantifies over time instants, and the suggestion that we use operators like 'G' (it will always be the case that) and 'F' (it will be the case that) like '[]' (box) and '<>' (diamond) is discussed in less than half a page and left at the intuitive level, e.g.:

For example, the thesis []P->P, fails. Just because it will always be true that P, it does not follow that P is true now. (p. 27)

But this isn't justified with reference to any particular model theory - we're just supposed to get it from our intuitive understanding of 'it will always be the case that.' This would have been understandable if the writer couldn't assume that his reader would be able to follow a discussion involving putting the semantics for those operators into a definition of truth in a model, and considering what formulas involving them come out valid for different classes of frames. But the book has just done exactly that for ordinary modal logic, considering what formulas come out valid if we assume that the accessibility relation is e.g. transitive and reflexive, or serial. So it would have been a great place to at least mention that you can get different tense logics by putting different restrictions on the "earlier than" relation (partial ordering, total ordering, total ordering with no maximum or minimum, density ... etc). Given this, I've been supplementing our discussion of tense logic with John P. Burgess' "Basic Tense Logic" from volume II of the Handbook of Philosophical Logic.
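In that spirit, here is the sort of explicit countermodel to []P->P (reading '[]' temporally, as 'G') that the book leaves at the intuitive level - a quick sketch of my own in Python, not anything from Fitting and Mendelsohn:

```python
# A two-instant model: time 0 is earlier than time 1, and the atomic
# sentence P is true at time 1 only. 'G' quantifies over strictly later
# times, so the "accessibility" relation here is irreflexive.
times = {0, 1}
earlier = {(0, 1)}
valuation = {'P': {1}}

def holds(formula, t):
    op = formula[0]
    if op == 'atom':   # atomic sentence: look it up in the valuation
        return t in valuation[formula[1]]
    if op == 'G':      # true at t iff the subformula is true at every later time
        return all(holds(formula[1], u) for (s, u) in earlier if s == t)
    if op == 'imp':    # material conditional
        return (not holds(formula[1], t)) or holds(formula[2], t)

# G P -> P fails at time 0: P holds at every later time, but not now.
gp_implies_p = ('imp', ('G', ('atom', 'P')), ('atom', 'P'))
print(holds(gp_implies_p, 0))  # False
```

Make 'earlier' reflexive (add (0, 0) and (1, 1)) and the formula comes out true at every time in this model - the familiar correspondence between []P->P and reflexive frames, carried over to the temporal setting.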

A second (even more) minor point: like any logic textbook, there are some typos. Here's one:

p. 26 - the interpretations of

are given as




respectively. But, first of all, 'John wins the election' is an infelicitous atomic sentence to use (what do we make of: at every time in the future, John wins the election?) 'John is happy' would have been much less confusing. (There's nothing funny about: at every time in the future, John is happy.)

And secondly, those conjunctions should surely be conditionals, otherwise 'G John is happy' means every time is later than the present time and John is happy at it, and 'H John is happy' means every time is earlier than the present and John is happy at it!
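Spelled out (in my notation - I'm not reproducing the book's displays), the intended clauses put a conditional inside the quantifier:

```latex
G\varphi \text{ is true at } t \iff \forall t'\,(t < t' \rightarrow \varphi \text{ is true at } t')
H\varphi \text{ is true at } t \iff \forall t'\,(t' < t \rightarrow \varphi \text{ is true at } t')
```

With a conjunction in place of the conditional, G John is happy would indeed claim that every time whatsoever is later than the present and has John happy at it.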

This, of course, is small stuff, and so far I would recommend the book to anyone who needs to teach modal logic to philosophers who are not logicians. Also Fitting has his own site of errata for the book here.

Posted by logican at 12:24 PM | TrackBack

January 9, 2007


Call for Papers

Eleventh Conference on Theoretical Aspects of Rationality and Knowledge
June 25-27, 2007
Brussels, Belgium


University of Namur, CORE - University of Louvain, CEREC - Facultés
Universitaires Saint-Louis, and National Fund for Scientific Research

About the Conference

The mission of the TARK conferences is to bring together researchers
from a wide variety of fields, including Artificial Intelligence,
Cryptography, Distributed Computing, Economics and Game Theory,
Linguistics, Philosophy, and Psychology, in order to further our
understanding of interdisciplinary issues involving reasoning about
rationality and knowledge. Topics of interest include, but are not
limited to, semantic models for knowledge, belief, and uncertainty,
bounded rationality and resource-bounded reasoning, commonsense
epistemic reasoning, epistemic logic, knowledge and action, applications
of reasoning about knowledge and other mental states, belief revision,
and foundations of multi-agent systems.

Submissions are now invited to TARK XI. Strong preference will be given
to papers whose topic is of interest to an interdisciplinary audience,
and papers should be accessible to such an audience. Papers
should 1) contain enough information to enable the program
committee to identify the main contribution of the work; 2) explain the
significance of the work -- its novelty and its practical or theoretical
implications; and 3) include comparisons with and references to relevant
literature. For submission, click on the Submission tab at

Abstracts should be no longer than ten double-spaced pages (4,000
words). Optional technical details such as proofs may be included in an
appendix. An email address of the contact author should be included.
Papers arriving late or departing significantly from these guidelines
risk immediate rejection. One author of each accepted paper will be
expected to present the paper at the conference.
Economists should be aware that special arrangements have been made with
certain economics journals (in particular, with International Journal of
Game Theory, Games and Economic Behavior, Journal of Economic Theory,
Econometrica, Theory and Decision, and Mathematical Social Sciences) so
that publication of an extended abstract in TARK will not prejudice
publication of a full journal version.

Registration fees

Early registration (before 30th April 2007): 260 EUR (100 EUR for students).
Late registration (from 1st May 2007): 350 EUR (150 EUR for students).
The Registration fee includes: a reception at the Roy d'Espagne, three lunches, a gala dinner at Hotel Metropole, morning and afternoon coffee breaks, and one copy of the proceedings.

Accommodations: see
Key Dates

Submission of Abstracts: January 30, 2007
Notification of Authors: March 28, 2007
Camera ready copy of accepted papers: April 30, 2007

Conference: June 25-27, 2007

Program Committee:

Sergei Artemov, CUNY
Vincent Conitzer, Duke University
Lance Fortnow, University of Chicago
Aviad Heifetz, The Open University, Israel
Franz Huber, California Institute of Technology
Adam Kalai, Georgia Institute of Technology
Ron Lavi, Technion
Jerome Lang, Institut de Recherche en Informatique de Toulouse (IRIT)
Martin Meier, Instituto de Analisis Economico
Dov Samet (chair), Tel Aviv University
Burkhard Schipper, University of California, Davis
Robert Stalnaker, MIT
Marc Pauly, Stanford
Muhamet Yildiz, MIT

Program Chair

Dov Samet,
Faculty of Management
Tel Aviv University
Tel Aviv
phone: +972 3 640 6999
email: dovs at

Local Organizers

Vincent Vannetelbosch
University of Louvain
Voie du Roman Pays 34
B-1348 Louvain-la-Neuve
Tel: 0032 10 47 41 42
Fax: 0032 10 47 43 01
email: vannetelbosch at

Pierre-Yves Schobbens
Institut d'Informatique
University of Namur
Rue Grandgagnage, 21
B-5000 Namur
Tel: 0032 81 72 49 90
Fax: 0032 81 72 49 67
email: pys at

Ana Mauleon
Facultés Universitaires Saint-Louis
Boulevard du Jardin Botanique 43
B-1000 Brussels
Tel: 0032 2 211 79 35
email: mauleon at

Conference Chair

Joseph Y. Halpern
Computer Science Department
Cornell University
Ithaca, NY 14853
phone: +1 607 255 9562
fax: +1 607 255 4428
e-mail: halpern at

Posted by logican at 1:32 PM | TrackBack

December 5, 2006

Kurt Gödel Centenary Research Prize Fellowships

(Organized by the Kurt Gödel Society with support from the John Templeton Foundation)

The Kurt Gödel Society is proud to announce the commencement of the research fellowship prize program in honor and celebration of Kurt Gödel's 100th birthday.

The research fellowship prize program sponsored by the John Templeton Foundation will offer:

two Ph.D. (pre-doctoral) fellowships of $60,000 US per annum for two years
two post-doctoral fellowships of $80,000 US per annum for two years
one senior fellowship of $120,000 US per annum for one year

The purpose of the fellowship is to support original research in mathematical logic, meta-mathematics, philosophy of mathematical logic, and the foundations of mathematics. This fellowship is to carry forward the legacy of Gödel, whose works exemplify deep insights and breakthrough discoveries in mathematical logic.

The selection will be made based upon an open, international competition.
An international Board of Jurors chaired by Professor Harvey Friedman will oversee the process.
The finalist papers will be published in a special issue of a premier journal in mathematical logic.



Goal and Criteria of Merit:

In pursuit of similar insights
and discoveries, we adopt the following criteria of merit for evaluating Fellowship applications:

1. Intellectual merit, scientific rigor and originality of the submitted paper and work plan. The paper should combine visionary thinking with academic excellence.

2. Potential for significant contribution to basic foundational understanding of logic and the likelihood for opening new, fruitful lines of inquiry.

3. Impact of the grant on the project and likelihood that the grant will make this new line of research possible.

4. The probability that the pursuit of this line of research is realistic and feasible for the applicant.

5. Qualifications of the applicants evaluated via CV and recommendation letters*
(*recommendation letters are not required for senior applications).


Original fellowship proposals are invited from all fields of mathematical logic (such as Computability Theory, Model Theory, Proof Theory, Set Theory), meta-mathematics,
the philosophy of mathematics, and the foundations of mathematics, insofar as the research has strong relevance or resemblance to the Gödelian insights and originality.

Preliminary Timeline

December 1, 2006. Announcement
June 15, 2007. Submissions deadline
October 2007. Jury decision due on papers to be published
December 15, 2007. Final versions due
January 2008. Jury decision on winners due
February 2008. Award Ceremony
Mar.-Sept. 2008. Commencement of the Fellowships

Posted by logican at 11:22 PM | TrackBack

November 16, 2006

Logic Textbooks

I'm teaching an advanced philosophy of language course next semester, and I've decided to focus on issues in the philosophy of language where it helps to have some technical background. The idea of the course will be to alternate between time spent doing techie stuff - which will be assessed by way of problem sets - and time spent doing philosophy that relates to the techie stuff, which will be assessed by way of papers.

We're going to start out with some modal logic, and I'm wondering about textbooks. I've actually already put my request in to the bookstore to get some copies of the latest Hughes and Cresswell, but since then my colleague José has suggested that I switch to this book by Fitting and Mendelsohn instead. And at a first glance, it does look pretty promising, and it contains exercises and - this is important, I think - among the proof systems it employs are axiomatic systems. I remark on this because in the last couple of years there have been several logic textbooks - by authors who I otherwise love and respect - which use tableaux as the main proof method. And that isn't what I want.

But it's hard to know whether a logic textbook is good from a cursory glance. (They're kind of like universal statements; you can know that one is bad from a single data point, but knowing that one is good is very difficult.) So I was wondering, have any of you used the Fitting and Mendelsohn book? Do you have any thoughts about it, or other similar books?

Posted by logican at 3:01 PM | TrackBack

April 8, 2006

The world is a stranger and better place than you might think...

Greg Restall's papers via the iTunes music store!

(And Amazon is trying to persuade Richard Zach to buy Goedel's album. I'd say something like "what's next? a promo video for Harvey Friedman's next paper?!" but the facts are already so strange...)

Posted by logican at 7:39 PM | TrackBack

December 12, 2005

New Goedel's Theorems Book

Via FOM I note that the first 15 chapters of Peter Smith's book An Introduction to Goedel's Theorems are available online. He's looking for feedback, so if you have a comment on any part of the book, you might contact him...

Posted by logican at 1:26 PM | TrackBack

October 24, 2005

Every Natural Number is Interesting

Reading through a written version of John P. Burgess' lecture "Tarski's Tort" (described by John as "a sermon on the evils of calling model theory "semantics", preached at Notre Dame on Saint Patrick's Day, 2005" (Amen)) I came across the following proof that every natural number is interesting:

Suppose that not every natural number is interesting. Then there is a non-empty class of natural numbers which are non-interesting. Since the natural numbers are well-ordered, this class must have a least member - call it n. But if n is the least uninteresting natural number, then n is interesting for that reason. Contradiction. So every natural number is interesting.
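For fun, the structure of the argument can be rendered as a little program - a whimsical sketch of my own, with an arbitrary starting set of "uninteresting" numbers:

```python
# Start with some numbers provisionally marked uninteresting, then
# repeatedly apply the observation in the proof: the *least* uninteresting
# number is interesting for that very reason, so it leaves the set.
# Well-ordering guarantees min() is defined whenever the set is non-empty,
# so the loop terminates with nothing left.
uninteresting = {4, 9, 11, 30}

while uninteresting:
    least = min(uninteresting)   # exists by the well-ordering of the naturals
    uninteresting.remove(least)  # ...and is thereby promoted to interesting

print(uninteresting)  # set() - every number ended up interesting
```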

John will be in Calgary along with Kit Fine and Alasdair Urquhart for the U of C's logic mini-symposium on November 4th and 5th. See you there...


Note to self: stop calling model-theory "semantics."

Posted by logican at 12:13 PM | TrackBack

August 10, 2005

A note on Kit Fine's Rigidity Axiom

I've been reading Kit Fine's "The Logic of Essence" (Journal of Philosophical Logic, vol. 24 1995 pages 241-273) and just wanted to record a puzzle I had about one of the axioms. It's the axiom of rigidity that I am puzzling over, and it looks like this:


There are two bits of unusual notation in there. The first is Fine's essentiality operator. If you want to formalize the sentence

It is a property of singleton Socrates that it has Socrates as a member.

one way to do it is as Rab, where a refers to the set, b to Socrates and R expresses the set-membership relation. But suppose you want to formalise something like

It is an essential property of the singleton set of Socrates that it contain Socrates as a member.
and you want to do it in such a way that the logical force of "essential" isn't lost.

Well, you might try making "essentially" a kind of modal operator (represented here by the box) and write:


Fine argues (quite convincingly) that this would be wrong, since this next sentence would be translated the same way and is false, where the above sentence is true:

It is an essential property of Socrates that he is a member of the set singleton Socrates.

(Having trouble getting the right intuitions about these sentences? Fine thinks of essential properties of an object as being those that are had by virtue of its nature. The thought is that it is part of the nature of singleton Socrates that it contain Socrates, though it is not part of Socrates' nature that he be a member of any set---it doesn't even follow from his nature that sets exist. Fine's project in this paper is that of "developing a logic of essence, not now as a fragment of a modal system, but as a system in its own right" (241))

Instead we have, for each predicate F, an operator:


and the predicate F picks out the subjects of the essentialist claim - that is, the objects (if any) whose natures underwrite the truth of the claim. We call predicates playing this role "delimiters."

To formalise "Singleton Socrates essentially has Socrates as a member", we need a predicate which applies only to singleton Socrates (singleton-Socratises? λx(x = Socrates)?), say F, and then we use the associated operator:


This can be read as "Rab is true in virtue of the nature of the objects that F". So where a refers to singleton Socrates, R expresses the membership relation, b refers to Socrates himself and F expresses the property of singleton-Socratising, this formula says that Socrates is a member of singleton-Socrates in virtue of the nature of singleton Socrates.

There is another piece of unusual notation that you need to understand the rigidity axiom. Fine doesn't only represent predicates using single letters. He also uses lambda abstraction and some abbreviations of expressions so formed. Most relevant here, he writes (y) for λx(x=y), where x is the first variable distinct from y (i.e. a predicate for the property of being identical to y), and in using such abbreviations in delimiters, he sometimes abbreviates them even further by removing the parentheses. Hence


can be read as "A is true in virtue of the nature of objects which are identical to y" (that is, of course, in virtue of the nature of y.)

Finally, it seems that if we have more than one predicate in the delimiter space, separated by a comma, that means that the inside formula is true in virtue of the objects which fall under the disjunction of the predicates. (I say "it seems" because I can't find a place in the text where this is stated, but I think it makes sense of what comes later.)

So now we have the resources to understand the rigidity axiom:


Roughly what this says is that if Px is true, then it is true in virtue of the natures of x and the objects which fall under P. Even more roughly, Px is true (if it is) in virtue of the natures of the things which it is about. Fine says that the axiom is "clearly correct" and gives a proof to demonstrate this correctness:

For if x is one of the objects x1, x2..., say xi, then it is true in virtue of the nature of x that it is xi, and hence true in virtue of the nature of x1, x2...that x is one of x1, x2....

What I'm puzzling over is this. I'm not sure that sentences are always true in virtue of the natures of the objects that they are "about," that is, the objects which they refer to or which fall under the predicates they contain. Take a sentence like "Fred is a frog or it is not the case that Fred is a frog." Is that true in virtue of the nature of Fred, or of froggy things? One reason why you might think it isn't is that it would be true regardless of Fred's nature, or the nature of froggy things. The expressions could even apply to entirely unfroggy objects which are entirely unlike Fred, and the sentence would still be true.

So does that mean that there's something wrong with Fine's proof? Er, no, though in the spirit of the intuition I've just drawn on I might deny that it is true in virtue of the nature of x that x is identical to xi. (After all, things are self-identical regardless of their natures, aren't they?)

But what I suspect is really going on here is that there are several senses of "in virtue of". Channeling the Quine of "Carnap and Logical Truth" for a second, we might say that there's a sense of "in virtue of" on which "Fred is a frog or it is not the case that Fred is a frog" is true in virtue of Fred's nature, and a sense in which it is not. It really doesn't make any difference which we choose to use (and so Fine is perfectly justified in using his sense.)

Here's another place where the assumption comes in - in the following rule of Fine's system:


This says that if you've proved A as a theorem, then it is also a theorem that A is true in virtue of the natures of the objects contained in the proposition expressed by A (Fine is working with a very Russellian conception of propositions), which seems like a fairly explicit commitment to exactly the claim I was puzzled about, namely that sentences are always true in virtue of the natures of the objects they refer to or which satisfy their predicates.

"in virtue of" is puzzling...

Posted by logican at 9:12 AM | TrackBack

July 26, 2005

Adventures in Flatland

I have been reading Lloyd Humberstone's 2D modal logic retrospective, "Two Dimensional Adventures," (Philosophical Studies 118 (March 2004) 17-65) in the hope that it might help me with a problem in a paper I'm writing. I thought I would write up some notes about it here.

Interesting thing number one: Humberstone distinguishes the sense of "two-dimensional" with which his paper is concerned from a thousand other senses with which that expression is used. Did you know that Max gives a logic of two dimensional formulas? (As in "formulas constructed out of columns of ordinary formulas joined by connectives") or that Pratt refers to ordinary modal logics as two dimensional?

The kind of two dimensional logic we (Professor Humberstone, me, and you, gentle reader) are interested in is a semantics which treats the truth of a formula as relativised to elements of the set U0 × U1 (i.e. as relative to an ordered pair of, say, a time and a place, or a possible world and...another possible world). Humberstone goes on to distinguish two kinds of 2D semantics in this sense. A 2D semantics is homogeneous if U0 = U1, and heterogeneous if U0 ≠ U1. A semantics within which the truth of a formula is relativised to, say, a time and a possible world would be 2D in our sense, only heterogeneously so, whereas Humberstone intends to restrict his study to homogeneous cases and observes that "the clauses we offer for various operators in the definition of truth (w.r.t. a pair of points in a model) would not make sense without this assumption of homogeneity, since they involve things like moving the occupant of one of the two coordinates into the position occupied by the other."
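To make the homogeneous picture concrete, here is a toy evaluator of my own (not Humberstone's notation): truth is relative to a pair of points from one set of worlds, and one operator moves the occupant of the first coordinate into the second. The worlds, valuation and operator names are all made up for illustration.

```python
# A toy homogeneous 2D semantics: truth is relative to a PAIR (x, y)
# of points drawn from one and the same set of worlds.

WORLDS = {0, 1}
VAL = {"p": {(0, 0), (0, 1)}}            # hypothetical: pairs where atom p is true

def holds(formula, x, y):
    """Evaluate a formula at the pair of points (x, y)."""
    op = formula[0]
    if op == "atom":
        return (x, y) in VAL[formula[1]]
    if op == "not":
        return not holds(formula[1], x, y)
    if op == "box":                      # necessity in the second coordinate
        return all(holds(formula[1], x, z) for z in WORLDS)
    if op == "dagger":                   # move the first coordinate into the second
        return holds(formula[1], x, x)
    raise ValueError(op)

p = ("atom", "p")
print(holds(("dagger", p), 0, 1))        # True:  p is evaluated at (0, 0)
print(holds(("dagger", p), 1, 0))        # False: p is evaluated at (1, 1)
```

Note that the "dagger" clause only makes sense because both coordinates range over the same set, which is exactly Humberstone's point about homogeneity.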

I thought it was interesting that Humberstone's conception of a two dimensional modal logic comes apart from David Kaplan's even at this early stage. Kaplan's formulas are true with respect to pairs of circumstances of evaluation (possible worlds) and contexts of utterance (ordered quadruples of agent, time, location and world) and the conceptual difference between the two is an important part of his theory.

Humberstone ends the section with a joke:

This distinction between homogeneous and heterogeneous cases of two-dimensionality arises at the level of informal motivation, it should be added, since the ordered pairs could themselves be considered as single indices in their own right. The fact that we are privately thinking of these indices as ordered pairs of points need not be mentioned as long as the models...are subjected to conditions guaranteeing that they are either isomorphic to or at least indistinguishable for the logical purposes at hand from models in which truth is relative to pairs of points. Kuhn calls this procedure ... "flattening" a two-dimensional modal logic, though the term is not quite apt, since a two-dimensional object is already about as flat as things can get.

Posted by logican at 11:45 PM | TrackBack

July 25, 2005

Tea with Lloyd and Su

I went along to the philosophy department at Monash yesterday to hang out with logicians Su Rogerson and Lloyd Humberstone. They have been teaching a rather advanced logic course to a small group of brave undergraduates, and so I got to see them in action before tea.


Here's just one of the things that I learned from Su and Lloyd yesterday:

Some famous Axioms


The system which takes B, C and I as axioms (with uniform substitution and modus ponens as the rules) is known as BCI. BCI is the implicational fragment of Linear Logic. If we add contraction (W) we get BCIW, which is the implicational fragment of the best known relevant logic, R. Adding K (known to relevant logicians as "irrelevance"(!)) instead of W gets us BCK (I follows from K and C.) BCK (under that name, at least) was first studied by Meredith. If we add both K and W we get BCKW, the implicational fragment of intuitionistic logic. To get the implicational fragment of classical logic we need only add Peirce's Law:


So what we have looks like this:


All these axioms possess a confusing multitude of names, so I should mention that B is also known as "prefixing" and C gets called "permutation."
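For reference, here are the usual formulations of these axioms (the standard presentations from the combinator-logic literature; as just noted, naming conventions vary):

```latex
\begin{align*}
\mathbf{B}&\colon\ (A \to B) \to ((C \to A) \to (C \to B)) && \text{(prefixing)}\\
\mathbf{C}&\colon\ (A \to (B \to C)) \to (B \to (A \to C)) && \text{(permutation)}\\
\mathbf{I}&\colon\ A \to A && \text{(identity)}\\
\mathbf{K}&\colon\ A \to (B \to A) && \text{(``irrelevance'')}\\
\mathbf{W}&\colon\ (A \to (A \to B)) \to (A \to B) && \text{(contraction)}\\
\text{Peirce}&\colon\ ((A \to B) \to A) \to A
\end{align*}
```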

Posted by logican at 8:42 PM | TrackBack

July 10, 2005


There's a new volume of the Australasian Journal of Logic up on the web, and I'm happy to say that it includes my first ever book review - for Goldfarb's Deductive Logic textbook. There are also (a little less self-centeredly) articles entitled "Constant Domain Quantified Modal Logics Without Boolean Negation", "Tonk Strikes Back", "Basic Relevant Theories for Combinators at Levels I and II" and "Justification of Argument Schemes". The AJL is a refereed, internet-only, open-access (i.e., it's FREE) journal (virtues it shares with Philosophers' Imprint). I think they're both very good things.

Posted by logican at 8:18 PM | TrackBack

June 10, 2005

Kripke's Incompleteness Proof

Michael De, over at Lumpy Pea Coat, was wondering whether there is any literature on Kripke's version of Gödel's incompleteness proof. As he puts it:

Kripke gave a proof of the incompleteness theorem through ventriloquism in a lecture at Beijing University in 1984 using a very different (algebraic) approach, but I cannot find it anywhere. Does it exist? Has the lecture been transcribed? Anyone?

Richard Zach came through with the most important source, Putnam's "Nonstandard Models and Kripke's Proof of the Gödel Theorem" Notre Dame J. Formal Logic 41, no. 1 (2000), 53–58.

But I could also remember a talk that David Lewis gave at the University of Melbourne on Kripke and a version of the incompleteness proof, and Allen Hazen, who was in attendance, had turned out to have some rare notes that were relevant to the topic. Unfortunately it's not a lost Kripke manuscript, or even notes on a Kripke lecture, but, as Allen writes:

Notes Graham Solomon (Philosophy Dept, Wilfrid Laurier University, deceased) sent me, coming from Bill Demopoulos (Philosophy Department, U of W Ontario), of -- not Kripke's original lecture, but -- a lecture by ... you guessed it ... Hilary Putnam.

Allen went on, in subsequent email:

Kripke, when he actually does write, gives marvelously followable expositions, and the proof is interestingly different from G's. More different than is George Boolos's proof, which I assume you are familiar with (ch. 26 of "Logic, Logic and Logic").

(In my opinion, it's worth listening when Allen says "I assume you are familiar with..." All through my Australian breaks from grad school he completed that sentence with the names of brilliant and wonderful things I'd never come across.)

He has also kindly given me permission to make the Putnam/Kripke/Solomon/Demopoulos notes available here, for anyone who's interested:

Posted by logican at 1:43 AM | TrackBack

May 31, 2005

Some Animadversions on Hume's Law

(Note added 1/6/05: you know, that word doesn't mean anything like what I thought it meant...)

I'm back from FEW. Actually, I was back yesterday, but I returned with an idea for a paper which I might be able to write relatively quickly - as in, maybe a couple of weeks, rather than a couple of years - so I've been writing away and putting off this blog entry until a time when I'm feeling less productive.

But I really want to reply to the three people who commented on the last post. This reply might not be all that satisfactory for now, but it's probably more satisfactory than being ignored while your interlocutor writes a different paper...

So, hi to Brendan, who wrote:

Suppose ought implies can. So if I ought to do it, I can. Presumably what I can do is descriptive: it follows from my personal abilities, human nature, social influences etc.

Now if ought implies can, then if I cannot do it, I ought not to. Modus Tollens.
But now if what I can do is descriptive then what I ought to do is determined by what I can do. In which case(here is the question), are we not making out normative claims based on descriptive claims and Hume is wrong?

Well, the contrapositive of "if I ought to do it, then I can do it" is "if I can't do it, then it's not the case that I ought to do it" (and not "I ought not to do it", which means something else). But with that minor adjustment you have a great point, against which the following fact will no doubt seem horrendously inadequate: there's no alethic necessity operator in the language of the strong deontic logic for which we proved our two Hume's Law theorems.
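The difference between "it's not the case that I ought to" (¬O A) and "I ought not to" (O ¬A) is easy to see in a toy Kripke model for deontic logic. This is my own illustration; the worlds, accessibility relation and valuation are all made up.

```python
# A toy Kripke model for the deontic operator O ("it ought to be the
# case that"), showing that ¬O(p) and O(¬p) come apart.

WORLDS = {0, 1, 2}
IDEAL = {0: {1, 2}}          # from world 0, worlds 1 and 2 are deontically ideal
P = {1}                      # p holds at world 1 only

def O(prop, w):
    """O(prop) holds at w iff prop holds at every ideal world accessible from w."""
    return all(v in prop for v in IDEAL[w])

not_p = WORLDS - P

print(not O(P, 0))           # True:  ¬O(p), since ideal world 2 lacks p
print(O(not_p, 0))           # False: not O(¬p), since ideal world 1 has p
```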

Peter Vranas had a similar question about the paper at FEW, though what he said was that this is the implication we ought to worry about:


So, given our approach, one obstacle to answering this question properly is that we haven't been working in a rich enough framework; Greg and I need to take a look at bi-modal logics and see if we can still get our approach to work.

But I can make some points about how, intuitively, it ought to go. I used to think we had a great answer to the Ought-Implies-Can question, which was a variant of a structurally similar objection to our version of Russell's law (the claim that you can't derive a universal sentence from a particular one). Suppose someone argues that since Fa is entailed by ∀xFx, the contrapositive, i.e.


is a counterexample to Russell's law.

This is not a counterexample to our formulation of Russell's law, because the conclusion - though it contains a universal quantifier - is not genuinely universal on our definition, but rather a particular sentence. Intuitively, it's just ∃x¬Fx in disguise.

One obstacle to carrying this over to Hume's Law - apart from our lack of a good model theory for a bi-modal logic - has been my messing around with the definition of normativity at the end of the paper. Unfortunately ¬OA really is normative on our disjunctive definition, even though it is preserved under normative extension, because it is fragile under normative translation.

I think there is lots more to be said here, but - for time reasons - I have to leave it there for now. (In particular I should probably have included a brief summary of the "Barriers" paper in the post above, for readers who don't have a clue what I'm talking about.) Last time we talked to Gerhard Schurz about this he pointed out that the symmetric inter-model relations (like history-sharing and normative translation) support symmetric barrier theses (such as Hume's second law) whereas the anti-symmetric relations support directional barrier theses (e.g. you can derive a particular from a universal, but not vice versa.)

So for the next post I promise a concise summary of the ideas in the paper (which is linked to from the previous post) and proper replies to Ralph and Dilip, and I'll talk about another serious problem that Peter raised in his comments at FEW.

Posted by logican at 9:39 AM | TrackBack

April 4, 2005

Dummett on Harmony, Conservative Extensions and Local Reduction/Normalisation

This post will be a brief discussion of a family of related concepts - local reduction/normalisation, conservative extension and harmony - in the light of Dummett's "Circularity, Consistency and Harmony" in The Logical Basis of Metaphysics.

One thing that emerged in the comments here and here is that it is easy to get confused about whether Dummett's concept of harmony is identical with the satisfaction of local reduction. The idea of local reduction is reasonably straightforward: a logical constant has the local reduction property just in case, whenever its introduction rule is used to derive some formula A, and its elimination rule is then immediately used, taking A as the major premise, the proof can be rewritten without that pair of steps. (See this post for more detail, with the caveat that the post ignores the complicated literature on the subject of local reduction with respect to negation.)

Harmony and Meaning
Harmony is slippery because it is something which Dummett has an intuitive grip on and is trying to make more precise. The idea is introduced through talk of meaning. Dummett thinks that verificationists and pragmatists have been investigating different parts of the meaning-elephant: verificationists identify the meaning of a statement with what is needed to establish its truth (verify it), pragmatists identify the meaning of a sentence with its consequences, but Dummett thinks neither is sufficient:

Someone would not be said to understand the phrase 'valid argument', for instance, if he knew only how to establish (in a large range of cases) that an argument was valid but had no idea that, by accepting an argument as valid, he has committed himself to accepting the conclusion if he accepts the premises. The analogue holds good for a great many expressions...(213)

Rather, the meaning of an expression encompasses both principles governing how to verify it, and principles governing what follows from it. He wonders if these rules might somehow be in tension. Could they be inconsistent? Or might it be that one rule could be too lax or too restrictive given the other? Such a situation would be a failure of harmony. Since it is difficult to isolate the principles governing a particular expression in a natural language, Dummett takes the logical constants as a case study, since their verification and consequence principles are clearly established in their introduction and elimination rules.

Harmony and Logic
Dummett has other reasons for being interested in the logical constants as well. His interest in harmony stems from his interest in rejecting or accepting change in logic. He thinks that fear of lack of harmony makes us wary of changing logics. He gives two examples, first quantum logic's proposal to weaken the classical [VE] rule, so that this classically valid argument form becomes invalid:


And second the (fictional) proposal to strengthen counterfactual logics (John P. Burgess used to call these and modal logics "neo-classical logics") so that the following argument form is valid:


Wittgensteinian Conventionalism
One possible view about such new proposals in logic - which Dummett attributes to Wittgenstein - is that any combination of introduction- and elimination- rules defines a legitimate logical constant. Some of these constants are not in use in any natural or formal language, but that is merely a matter of convention. In principle we could even adopt connectives like 'tonk' that allowed us to derive contradictions. (I suppose this might be the explanation of the Wittgenstein quotation at the beginning of Graham Priest's chapter on Paraconsistent Logic in the second edition of the Handbook of Philosophical Logic: "Indeed, even at this stage, I predict a time when there will be mathematical investigations of calculi containing contradictions, and people will actually be proud of having emancipated themselves from 'consistency.'")

Dummett is looking to resist the Wittgensteinian conventionalist view by finding a way to criticise some combinations of rules. The concept of harmony appears to promise a foundation for this criticism. It looks as if what is wrong with 'tonk' is that the introduction and elimination rules are not in harmony; we can derive far too much from a claim of the form 'A tonk B', given what was required to establish it.

Harmony and two notions of Conservative Extension
Dummett suggests that the notion of a conservative extension can help us to make the notion of harmony more precise. Conservative extension is normally defined over theories, and a theory T2 is a conservative extension of T1 if i) it can be obtained from T1 by adding new expressions, along with axioms (or, according to Dummett, rules of inference) which govern those expressions and ii) "if we can prove in it no statement expressed in the original restricted vocabulary that we could not already prove in the original theory." (218)

(Just a thought, but it seems to me that the ideas of language, theory and proof system are all mixed up together here, a la Tarski and Carnap.)

Dummett then turns to natural languages, wondering whether there could be any disharmony in English, and says "if there is disharmony [between the rules governing an expression E,] it must manifest itself in consequences not themselves involving the expression E, but taken by us to follow from the acceptance of a statement S containing E."

And then: "A conservative extension in the logicians' sense is conservative with respect to formal provability. In adapting the concept to natural language, we must take conservatism or non-conservatism as relative to whatever means exist in the language for justifying an assertion or an action consequent upon the acceptance of an assertion. The concept thus adapted offers at least a provisional method of saying more precisely what we understand by 'harmony': namely that there is harmony between the two aspects of the use of any given expression if the language as a whole is, in this adapted sense, a conservative extension of what remains of the language when that expression is subtracted from it."(218-9)

I think this is wrong. It seems to me that it confuses lack of harmony (as he characterised it before: a lack of fit between introduction and elimination rules) with a particular KIND of lack of harmony. If the elimination rule of an expression licences too much, given what was needed to introduce the expression, then yes, we will be able to prove sentences not containing that expression which we were not able to prove before, and we will have a non-conservative extension. But if the elimination rule is too restrictive given the introduction rule, then it doesn't seem that we will be able to prove any more than before. Suppose, for an example, we start out with a classical propositional logic containing only '→' and '¬' and we add 'V', governed by the usual introduction rule, but taking as an elimination rule the weak, quantum version given in the picture above. Surely the result is a conservative extension of the original, on Dummett's definition. And if we had added the usual classical 'V' it would still be a conservative extension - all three systems are just classical proof systems, some of which are easier to use than others. Yet how could the introduction rule for 'V' be harmonious with two non-equivalent elimination rules?

Local Reduction
In chapter 11 of the same book, entitled "Proof Theoretic Justifications of the Logical Laws," we get another provisional definition of harmony, this time just for the logical constants, and it is here that local reduction/normalisation comes in:

"The analogue, within the restricted domain of logic, for an arbitrary logical constant c, is that it should not be possible, by first applying one of the introduction rules for c, and then immediately drawing a consequence from the conclusion of that introduction rule by means of an elimination rule of which it is the major premiss, to derive from the premisses of the introduction rule a consequence that we could not otherwise have drawn."

(I couldn't find an argument for this identification.)

Dummett has just claimed that susceptibility to local reduction (aka normalisation) (within some system) is the formal analogue of being a conservative extension of a theory (once that idea is adapted for natural language.) But since the notion of a conservative extension was originally a formal notion anyway, it looks as if he thinks that a logical constant will be susceptible to local reduction with respect to some proof system just in case adding it to that proof system results in a conservative extension. And, as noted earlier, he thinks that the rules governing a connective are in harmony with respect to a proof system just in case adding that connective to a proof system results in a conservative extension. So it seems that, for logical constants anyway, Dummett identifies all three of the notions we have touched on. (I'm thinking he is a lumper, rather than a splitter.)

Two Questions
There are two questions that I would be very interested in the answers to, if anyone out there knows them:

1) I have argued that Dummett is mistaken in identifying the idea that the rules for a logical constant are in harmony (with respect to some proof system) with the idea that adding the constant to a proof system will result in a conservative extension. If the elimination rule allows too little, rather than too much, to be derived (given the introduction rule), then the addition will still result in a conservative extension, even though the rules are not in harmony. Am I right, or am I hopelessly confused on this and being unfair to Dummett?

2) Dummett says that adding a constant to a proof system will result in a conservative extension of that system if, and only if, all the "local peaks" (or non-normal pairs) can be removed - that is, if, and only if, the new connective has the local reduction property with respect to that proof system. I could not find the argument for this in Dummett, but he might be right anyway. Is he? What IS the relationship between local reduction (a.k.a. normalisation) and conservative extension?

Posted by logican at 5:32 PM | TrackBack

April 3, 2005

Truth and Tonk

I've always been sympathetic to the Tarskian idea that languages can be inconsistent, and in particular to the idea that the truth-predicate is responsible for making natural languages inconsistent (yes, even when I'm being careful to distinguish proof systems from languages and even after reading Herzberger on the topic.) Sometimes, when I thought about this, I liked to think of the truth predicate as a kind of less obvious 'tonk'; introducing it to our language had made it possible to derive contradictions (such as the Liar paradox.)

One might wonder how 'true' had managed to survive in our language, though it is hard to imagine that 'tonk' ever could. But I thought that since the problem with 'true' was a bit harder to discover than the problem with 'tonk', and since it didn't get in the way of our everyday thinking in the same way, and since a truth predicate governed by simple disquotational rules like these ('T' is our truth predicate):


was easy to use, the simple truth predicate had been allowed to remain, even though 'tonk' never would be.
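The disquotational rules in question are presumably the standard introduction and elimination pair (my reconstruction, writing ⟨A⟩ for a name of the sentence A):

```latex
[\mathrm{TI}]\ \dfrac{A}{T\langle A\rangle}
\qquad\qquad
[\mathrm{TE}]\ \dfrac{T\langle A\rangle}{A}
```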

But since the discussion of local reduction/normalisation in posts here and here, I've noticed something that does not sit very well with the view that 'true' is like 'tonk'. Though 'tonk' does not have the local reduction property, 'true' surely does, and it is easy to show this by showing that any proof containing an instance of [TI], followed immediately by an instance of [TE], can be converted into a proof without those steps, like this:
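Schematically, the conversion runs (my reconstruction of the reduction step):

```latex
\dfrac{\dfrac{\begin{array}{c}\vdots\\ A\end{array}}{T\langle A\rangle}\;[\mathrm{TI}]}
      {A}\;[\mathrm{TE}]
\quad\rightsquigarrow\quad
\begin{array}{c}\vdots\\ A\end{array}
```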


It is tricky to know exactly what follows from this. Though it is always tempting to take technical ideas as obviously underwriting exciting philosophical conclusions (just as it is tempting to take experimental results in psychology as having sweeping consequences in ethics), I suspect that hastiness in this area is more likely to lead to publication than to the right answer. Maybe there are such consequences to be had here, but there is no harm in going slow...

Following Michael Kremer's suggestion in the comments, I have been reading some of Michael Dummett's The Logical Basis of Metaphysics. In chapter 9, "Circularity, Consistency and Harmony," Dummett has a bit to say about the relationship between the local reduction property (though he doesn't call it that), conservativeness, harmony and the nature of deduction. So in the next post I'll have a go at reconstructing his position.

(Who says philosophy weblogs can't do dramatic tension? logican cares about those surfers who read this weblog for the plot.)

Posted by logican at 6:49 PM | TrackBack

March 23, 2005

Normalisation/local reduction (Ch2 of Proof & Counterexample)

In a previous post I discussed A.N. Prior's 'tonk' connective and local reduction, and lots of smart people left comments, or discussed the issue on their own blogs. Greg Restall, in particular, pointed me to chapter two of the proof theory book he is writing. Greg's book project is likely to be particularly interesting to e-philosophers (that's you), because he is posting it online as a Wiki as he writes it, and inviting participation and comments.

Anyway, I have now read chapter two, and I thought I would write a little about what I have learned, as a follow-up to the Tonk post, and perhaps as a prelude to some more philosophical posts on 'tonk', the truth predicate and meaning.

1. First of all, I have been reminded of the tree method of writing natural deduction proofs, which I haven't really thought about since I first read Gentzen. Simple tree diagrams are pretty easy to read. Here is one:

The formula on the bottom line is our conclusion; any formula which does not have a horizontal line above it is a premise. The line above a formula tells us how it was derived; when we want to derive something from (say) two formulas, we draw a single horizontal line under them both and write the derived formula under that line. In the simple tree above, C is the conclusion, A→(B→C), A, and B are all premises, and B→C is an intermediate step, derived from A→(B→C) and A using the rule of [→E]. Here is a simple proof in which premises get discharged:

In this case we have assumed A at the beginning, then discharged it using [→I] on the second line. We indicate that a premise has been discharged by drawing a line above it and writing the number of the step where it was discharged next to that line (number assignment is arbitrary; it is just a way of matching up discharged premises with the places where they were discharged. The orderly-minded will, no doubt, wish to start at 1 and move up through the natural numbers, and the rest of us should refrain from alarming our readers by following suit.) Finally we use [→I] again to get the conclusion A→(A→A). It's worth noting here that in this case we have 'discharged' a premise we never assumed. Some logics take a dim view of this kind of thing, as we shall see...but first, a less simple example of a proof, for you to look on in awe:
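A sketch of the two trees just described, in bussproofs notation (my rendering; the discharge labels follow the convention explained above):

```latex
% First tree: C from premises A->(B->C), A and B.
\begin{prooftree}
  \AxiomC{$A \to (B \to C)$}
  \AxiomC{$A$}
  \RightLabel{$[\to\mathrm{E}]$}
  \BinaryInfC{$B \to C$}
  \AxiomC{$B$}
  \RightLabel{$[\to\mathrm{E}]$}
  \BinaryInfC{$C$}
\end{prooftree}

% Second tree: A->(A->A), a genuine discharge followed by a vacuous one.
\begin{prooftree}
  \AxiomC{$[A]^{1}$}
  \RightLabel{$[\to\mathrm{I}]^{1}$}
  \UnaryInfC{$A \to A$}
  \RightLabel{$[\to\mathrm{I}]^{2}$ (vacuous)}
  \UnaryInfC{$A \to (A \to A)$}
\end{prooftree}
```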

2. Different logics have different policies about what kinds of premise-discharging are allowed. At the most liberal end, we have standard logics (including classical logic.) They are liberal in two degrees; they allow vacuous premise discharges (as in the last step of the second proof above) and they allow more than one premise to be discharged at once, like in the last line of this proof:

If we allow more than one premise to be discharged at once, but do not allow vacuous discharging, then the result is a relevant logic. The second proof above is thus not relevantly acceptable, since it uses vacuous discharge in the last step. If we allow neither vacuous discharge nor duplicate discharge, the result is a linear logic (important to computer scientists, I believe) and if we allow vacuous but not duplicate discharging, the resulting logic is affine. Affine logics include, but are not exhausted by, intuitionistic logics.

We are dealing with logics quite generally here, independently of questions about which is the "one true logic," so we want to be able to talk sensibly about all of them. If we are going to talk about logics which do not allow multiple discharges, it is no use thinking of the premises, or premises and assumptions in force at any particular step, as a set. These logics care about whether there are two occurrences, or merely one occurrence, of A in the premises and assumptions, but the set {A, A, B, C} is the same set as {A, B, C}. So we think of our premises as forming a multiset, which is a bit like a set, except that elements can be members more than once, so that "A, A, B, C" is a different multiset from "A, B, C" (disappointingly, multisets don't come with any cool bracket notation, we just write them out as lists in quotation marks.)
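The set/multiset contrast is easy to see in code. Python's collections.Counter makes a serviceable stand-in for multisets (this is just my illustration; the quoted-list notation above has no direct Python analogue):

```python
# Sets collapse repeated members; multisets do not.
from collections import Counter

premises_once  = Counter(["A", "B", "C"])
premises_twice = Counter(["A", "A", "B", "C"])

print({"A", "A", "B", "C"} == {"A", "B", "C"})    # True: the sets are identical
print(premises_twice == premises_once)            # False: the multisets differ
```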

In this chapter we are restricted to logics with just two rules: [→I] and [→E]. This is because Greg is attempting to keep his treatment of normalisation simple (he believes that the basic idea is simple and that you shouldn't have to be a mathematician to grasp it.) Check out his introduction for more about the goals of the book.

3. Methodology: in a sense, it is easy to demonstrate that there is a proof of some formula or argument; we just exhibit the proof. Without resorting to model theory, it is trickier to show that there is no proof of something. It turns out that one way to do it is to show that there is no normal proof from the premises to the conclusion, since i) it is often easier to show that there is no normal proof than it is to show directly that there is no proof at all and ii) it turns out that if there is a proof, there is a normal proof. The climax of the chapter is the proof of that last conditional. I am not going to go through the proofs here - this is just an explanation and summary of the results. There are two main results, the subformula theorem, and the normalisation theorem.

5. Normal Proofs: a proof is normal iff the concluding formula A→B of an [→I] step is not at the same time the major premise of an [→E] step.

In the schematic proof on the left below, the formula A→B is both the conclusion of an [→I]-step and the major premise of a [→E]-step. For now I will call such pairs "non-normal pairs" (Greg calls them critical pairs, but from the discussion with Aaron Stump on the website (scroll down to see it) it seems as if he no longer likes this terminology.) We can take non-normal pairs out by rewriting the proof as in the schematic proof on the right:
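Schematically, the rewriting works like this (my reconstruction, writing π for the proof of A and π' for the derivation of B from the assumption A):

```latex
% Non-normal pair: ->I immediately followed by ->E on the same formula.
\dfrac{\dfrac{\begin{array}{c}[A]^{1}\\ \pi'\\ B\end{array}}{A \to B}\;[\to\mathrm{I}]^{1}
       \qquad
       \begin{array}{c}\pi\\ A\end{array}}
      {B}\;[\to\mathrm{E}]
\quad\rightsquigarrow\quad
% Reduct: plug the proof of A in for the discharged assumption.
\begin{array}{c}\pi\\ A\\ \pi'\\ B\end{array}
```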

When we generalise the definition of "normal proof" to cover the other connectives, it seems that the 'tonk' proof that I was interested in here is bound to count as non-normal:

6. The Subformula Theorem: normal proofs satisfy the Subformula Theorem: if π is a normal proof from premises X to conclusion A, every formula in π is in sf(X,A), where sf(X,A) is the set of subformulas of X and A. Non-normal proofs do not, in general, satisfy the subformula theorem, as this example shows:
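The example image hasn't survived here, but a standard instance (my reconstruction, consistent with the discussion that follows) is a proof from premise A to conclusion A that takes a detour through A→A:

```latex
\begin{prooftree}
  \AxiomC{$[A]^{1}$}
  \RightLabel{$[\rightarrow I]^{1}$}
  \UnaryInfC{$A \rightarrow A$}
  \AxiomC{$A$}
  \RightLabel{$[\rightarrow E]$}
  \BinaryInfC{$A$}
\end{prooftree}
```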

Here the formula A→A is a subformula of neither the premise nor the conclusion. Clearly the inference with 'tonk' would not satisfy a generalised version of the subformula theorem either, since it contains formulas which are not in sf(P, Q), namely 'P tonk Q.'

And, finally

7. The Normalisation Theorem: if there is a proof π from X to A, there is a normal proof π' from X' to A.

X' is a submultiset of X iff whenever a is an n-fold member of X, a is an m-fold member of X' for some m less than or equal to n. (m may be zero.)

e.g. "A,A,B,C" is a submultiset of "A,A,B,B,C,D"

X' is a modest supermultiset of X iff whenever a is an n-fold member of X for some n>0, then a is an m-fold member of X' for some m greater than or equal to n, and whenever a is a member of X' to some non-zero degree, it is also a member of X.

e.g. "A,A,A,B,C" is a modest supermultiset of "A,B,C"
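Both relations are easy to check mechanically; here is a sketch using Python's `Counter` as the multiset (again my own illustration):

```python
from collections import Counter

def is_submultiset(y, x):
    """y is a submultiset of x: each element occurs in y
    at most as often as it occurs in x (possibly zero times)."""
    return all(y[a] <= x[a] for a in y)

def is_modest_supermultiset(y, x):
    """y is a modest supermultiset of x: every element of x occurs in y
    at least as often as in x, and y contains no elements absent from x."""
    return all(y[a] >= x[a] for a in x) and all(a in x for a in y)

# The examples from the post:
assert is_submultiset(Counter("AABC"), Counter("AABBCD"))
assert is_modest_supermultiset(Counter("AAABC"), Counter("ABC"))
# "A,B,C,D" is not a modest supermultiset of "A,B,C": D is a new element.
assert not is_modest_supermultiset(Counter("ABCD"), Counter("ABC"))
```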

The normalisation theorem turns out to be easy to prove in the linear and affine cases (where there are no duplicate discharges), and a bit harder to prove in general. The basic idea in the easy case is that you take the non-normal proof, and remove the non-normal pairs one at a time, using the method outlined in 5 above. Proofs have a finite length, and the proof gets shorter each time a pair is removed, so eventually there will be no more pairs to remove. The result is a normal proof.

But if we allow duplicate discharges, as we do in standard and relevant logics, the proof might not get smaller when we remove a non-normal pair, since we may end up copying the proof of A many times, once for each discharged assumption of A. In these cases, how can we know that the process of normalisation will come to an end?

Complexity: the complexity of a formula is the number of connectives it contains.

(In order to make the following definition readable on the web, sequences are written within parentheses (curved brackets.) This is not the correct notation and I am looking for a better alternative.)

Non-Normality: the measure of non-normality of a proof is a sequence (c1, c2,..., cn) of numbers such that ci is the number of critical pairs of formulas of complexity i. The sequence for a proof stops at the last non-zero value. Sequences are ordered with their last number as most significant, that is, (c1,...,cn) > (d1,...,dm) iff n>m; or, if n=m, when cn>dn; or, if cn=dn, when (c1,...,cn-1) > (d1,...,dn-1).
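To make the ordering concrete, here is a sketch in Python (my own illustration; formulas are nested tuples like ('→', 'A', 'B'), and a measure is just a list of counts ending at its last non-zero value):

```python
def complexity(formula):
    """Number of connectives: atoms are strings, compound formulas
    are tuples (connective, subformula, ...)."""
    if isinstance(formula, str):
        return 0
    return 1 + sum(complexity(sub) for sub in formula[1:])

def more_nonnormal(c, d):
    """c > d in the non-normality ordering: the last (highest-complexity)
    entry is most significant, so a longer sequence is automatically
    greater, and equal-length sequences are compared from the end."""
    if len(c) != len(d):
        return len(c) > len(d)
    for ci, di in zip(reversed(c), reversed(d)):
        if ci != di:
            return ci > di
    return False  # the sequences are equal

# (A → B) → C contains two arrows:
assert complexity(("→", ("→", "A", "B"), "C")) == 2
# Many low-complexity critical pairs count for less
# than a single higher-complexity one:
assert more_nonnormal([0, 1], [100])
```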

Lemma - Non-Normality Reduction: given a proof π with some critical pair, π reduces to a proof π' with a lower measure of non-normality.

So even though you might not have fewer non-normal pairs when you take one of them out, you will have a proof with a lower non-normality measure. Since this ordering admits no infinite descending sequences, the process must eventually terminate, returning a normal proof. Thus every proof can be transformed into a normal proof.

Caveat: as the detail of the normalisation theorem makes clear, except in the linear case, the normal proof will not necessarily proceed from the same multiset of premises as the original.

Posted by logican at 3:22 PM | TrackBack

March 22, 2005

Logic Happens

Logic happens, but where does it happen? Australia, apparently, and if your argument is good enough, it could happen to you too...

Posted by logican at 6:21 AM | TrackBack

March 6, 2005

'Tonk' and Local Reduction

Recently, I was talking to a computer scientist about the idea that the introduction and elimination rules for a connective give the connective's meaning, and I mentioned Prior's 'tonk' as an example of a connective that has introduction and elimination rules, though some argue that it fails to be meaningful. CS guy had never heard of 'tonk,' so I explained that 'tonk' is a binary connective with an introduction rule that allows you to get 'P tonk Q' from 'P' (which is like the introduction rule for '∨') and an elimination rule that allows you to get 'Q' from 'P tonk Q' (which is like the elimination rule for '∧.')

'Tonk' is a very exciting and powerful connective; for example, it allows me to demonstrate that the conclusion follows from the premise in the following argument:

There is life on Earth.
Therefore, there is life after death.

Here is the proof:

Let P be 'There is life on Earth' and Q be 'There is life after death.'

P (our premise)
P tonk Q (by tonk-intro)
Q (by tonk-elim)

There, one of the fundamental questions solved in a heartbeat. Except that we can also give a proof of the following:

There is life on Earth.
Therefore, there is no life after death.

So now we are back where we started, lost and confused and existentially angsty, and perplexed about 'tonk' to boot. 'Tonk' is too powerful and any natural deduction proof system in which you can prove any theorem at all - say, 'P → P' - will be trivialised by the addition of the tonk rules (by which I mean that it will be possible to prove P for arbitrary P.)
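The triviality is mechanical enough to script; here is a toy sketch (my own illustration, with formulas as strings or tuples):

```python
def tonk_intro(a, b):
    """tonk-intro: from A, infer A tonk B, for any B you like."""
    return ("tonk", a, b)

def tonk_elim(f):
    """tonk-elim: from A tonk B, infer B."""
    assert f[0] == "tonk"
    return f[2]

def prove_anything(premise, goal):
    """Given any derivable premise, derive an arbitrary goal."""
    return tonk_elim(tonk_intro(premise, goal))

assert prove_anything("There is life on Earth",
                      "There is life after death") == "There is life after death"
```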

CS guy understood this all very quickly, of course. I tried to explain why this might seem to need explanation; if the rules for 'tonk' merely give its meaning, how can they be wrong? And if they are not wrong, how can it be wrong to add them to our proof system? But of course it must be wrong to add them to our proof system, because they make it unsound. We seem to need a way to distinguish a connective like 'tonk' from one like '∨' and explain what is wrong with 'tonk,' yet not wrong with the other connectives. Then I mentioned that I thought a standard response to the 'tonk' example was to say that 'tonk,' unlike the good connectives, failed to express a truth-function.

CS guy said brightly: isn't it just that 'tonk' doesn't have the local reduction property, whereas the other connectives do?

And I said: local reduct - huh?

It turns out that the usual cast of connectives - ¬, ∨, ∧, → and ↔ - have the following property: if anywhere in a proof there is an application of the introduction rule for a connective, followed immediately by an application of the elimination rule for that same connective which takes the result of the introduction rule as its major premise, then the introduction and elimination pair can be eliminated, leading to a more direct proof of the conclusion.

I should probably track down a presentation of the result, but for now I will give a couple of examples, which should convey the general idea of what this local reduction property amounts to. Suppose we have a proof that contains an instance of ∧-intro, followed immediately by ∧-elim on the result, like this:

P (premise)
Q (premise)
P ∧ Q (∧-intro)
P (∧-elim)

Then clearly the middle two steps are superfluous. We can obtain a more direct proof of P from the first premise. Here's a slightly less obvious case. Suppose we have a proof that contains an application of ∨-intro, followed by an application of ∨-elim, where the result of the ∨-intro is the major premise in the ∨-elim, e.g.:

P (premise)
P ∨ Q (∨-intro)
(new subproof, assuming P: derive P)
(end of subproof)
(new subproof, assuming Q: derive P)
(end of subproof)
P (∨-elim)

Again, the conclusion can be reached in a more direct way.
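Under the Curry-Howard reading, local reduction is just rewriting proof terms; here is a sketch for the ∧ case (my own illustration, not something from the conversation with CS guy):

```python
def reduce_step(term):
    """One local-reduction pass over a proof term. Terms are tuples:
    ('pair', a, b) is ∧-intro; ('fst', t) and ('snd', t) are ∧-elim.
    An elimination applied directly to an introduction is a detour,
    and reduces to the relevant component."""
    if not isinstance(term, tuple):
        return term
    if term[0] == "fst" and isinstance(term[1], tuple) and term[1][0] == "pair":
        return reduce_step(term[1][1])
    if term[0] == "snd" and isinstance(term[1], tuple) and term[1][0] == "pair":
        return reduce_step(term[1][2])
    return (term[0],) + tuple(reduce_step(t) for t in term[1:])

# Proving P by pairing p with q and then projecting reduces to just p:
assert reduce_step(("fst", ("pair", "p", "q"))) == "p"
```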

But it seems clear that 'tonk' does not have this local reduction property. Consider my argument

P (our premise)
P tonk Q (by tonk-intro)
Q (by tonk-elim)

which I used to show that 'there is life after death' followed from 'there is life on earth.' This just is an application of tonk-intro followed by tonk-elim. But, intuitively, without this step, the proof would not have been possible at all.

My impression is that discussions of 'tonk' have proceeded without awareness of the local reduction property, an impression that is reinforced by the fact that a Google search for "local reduction" and "tonk" returns just a single hit - a multiple-choice geography exam for the Indian Civil Services. A Google search for "local reduction" and "deduction", however, returns a host of computer science sites, the first of which reveals that the local reduction property has a twin, the local expansion property. Connectives which have this property can be reintroduced by their introduction rule after an application of the elimination rule. Connectives with the reduction property preserve the soundness of a system (tonk, of course, does not have it, and destroys soundness), while connectives with the expansion property preserve completeness.

Posted by logican at 11:10 PM | TrackBack