
April 29, 2005

Mothers of Invention

I had thought that Putnam, Harman and Kripke had the imaginative-counterexamples-to-necessary-truths market sewn up, but I have just learned that I was wrong. You might remember that Putnam, for example, argued that "all cats are animals" expresses a contingent truth by describing a situation in which we would say the sentence is false: if it turned out that all the things which we call "cats" were (extremely sophisticated and well-disguised) robot spies from Mars, then we would say that "all cats are animals" is false.

Harman described a similarly unlikely situation that would make the sentence "red is a colour" false, and though I forget the exact description and I don't have the book handy, this is roughly it: if we discovered that we have the visual experience of redness in response to a certain frequency of sound emitted by an object, rather than in response to a certain wavelength of light, then we would say that "red is a colour" was false.

Kripke turns a similar trick for "Gold is a yellow metal" (which was one of Kant's examples) and there's a good chance that I'll be posting more about what these kinds of examples show, since I have quite a bit to say about it.

These examples are sometimes - though not always - used in defence of radical empiricisms, where by radical empiricism I mean versions of empiricism which reject any kind of non-a posteriori knowledge, even knowledge based in meaning. One criticism of such views is that they cannot explain de dicto necessities, such as "2+2=4" or "triangles have three sides" or "all cats are animals": even if we allow that experience may somehow bring us to believe such claims, how could it ever bring us to believe that the claims are necessary?

One response open to the radical empiricist is to say that the claims are not necessary; we only think that they are because we are not imaginative enough to come up with the possible situations which would make them false.

In the history and philosophy of mathematics, something like the radical empiricist view is usually associated with John Stuart Mill, and so I should not have been surprised to learn from Coffa's The Semantic Tradition from Kant to Carnap (though I was a little surprised - I had mistakenly thought these hyper-imaginative counterexamples were a relatively recent phenomenon) that Mill also ran the no-imagination-crazy-counterexample defence.

In arithmetic, for example, our commitment to the law that 2+2=4 would vanish if whenever two pairs of things "are placed in proximity or are contemplated together, a fifth thing is immediately created and brought within the contemplation of the mind engaged in putting two and two together." The production of this fifth thing must be "instantaneous in the very act of seeing, [s]o that we never should see four things by themselves as four: the fifth thing would be inseparably involved in the act of perception by which we would ascertain the sum of the two pairs."...Clearly Mill was thinking about adding up things like rabbits or cows, not things like solutions of third degree equations or Roman consuls. As Frege would point out in the Grundlagen (1884, secs 7 and 8), the latter are not easily "placed in proximity" or involved in "acts of perception." A world in which, when someone adds the first two Roman consuls to the next two, a fifth one appears, presumably with his distinct proper name, his own political record, and so on, is not a world at all, but the product of a confused mind; for in that world the decision to add would alter the past, and on pain of contradiction there could not be one person adding a group of objects and another not. (47-48 of Coffa)

Coffa's choice of rabbits, and then cows, as examples of things which Mill must have had in mind confused me on a first reading, since these are creatures capable of reproduction, and in fact rabbits are famous for breeding like...rabbits. Solutions to equations and Roman consuls cannot, so the examples might suggest that the crucial thing is that certain objects, when left together long enough, can produce more objects. But I think Coffa recognises that Mill's counterexample is even more imaginative than this.

Sometimes when one watches television, channels which feel bound to protect their viewers or subjects will cover a part of the image with a solid black rectangle. Sometimes the rectangle covers a suspect's eyes, in an effort to protect the suspect's anonymity, and sometimes it covers a subject's genitalia, in an effort (presumably) to preserve the delicate flower of our viewing innocence. Now suppose your eyes and perceptual system did this kind of thing automatically (I can't help wondering whether there are mental disorders like this.) And now, to take things a step further, suppose that instead of covering up something with a rectangle, your perceptual system instead creates an extra object whenever two pairs of objects are brought together. For example, suppose you look to your left and see two apples, then you look over to your right, and see two more apples. Then you close your eyes, reach out to the right and grab the apples that are over there. You bring them over to the apples on the left and then open your eyes and look left. What you see is the four original apples, plus a fifth - the product of your perceptual system under these kinds of circumstances. This, I'm guessing, is the kind of thing Mill is getting at with his:

[The production of this fifth thing must be] instantaneous in the very act of seeing, [s]o that we never should see four things by themselves as four: the fifth thing would be inseparably involved in the act of perception by which we would ascertain the sum of the two pairs.


It is very unclear to me that Mill's story can be told consistently. Whilst I think the non-standard story about the apples is perfectly clear, and conceivable, Mill takes it that this story can be generalised, so that "we never see four things by themselves as four." But suppose I see two (sterilised) rabbits. How many rabbit ears do I see? Mill's answer should be five, but, I wonder, where is the extra ear attached? Is it just floating in space near the whole rabbits? Perhaps Mill would tranquilly absorb this bizarre consequence - it's not as if he is trying to persuade us this is actually what happens with our perceptual systems. But suppose I am counting attached rabbit ears? Then perhaps Mill could say the extra rabbit ear is attached in an abnormal place. But suppose I am counting rabbit ears attached in the normal place? Can we fit two ears in one normal place? And here's the clincher, I think: suppose I want to consider both how many ears attached to normal-looking rabbits there are and how many normal-looking rabbits there are. If there are two normal-looking rabbits there, then, on Mill's assumptions about our perceptual systems, what we should see is exactly five ears-attached-to-normal-looking-rabbits and two normal-looking rabbits. But that is surely something that it is not possible for our perceptual systems to represent.

Mill's attempt at a counterexample to the claim that "2+2=4" expresses a necessary truth reminded me of my old German linguistics teacher Chris Beedham, who once tried to convince me, not just that "1+1=2" was contingent, but that it was not really true, since if you add one raindrop to another raindrop, the result is one larger raindrop, not two. I think my response at the time was to say that he should not think of addition as bringing things together (and perhaps doing a little pushing to overcome the surface tension.) And though I wasn't quite sure of the correct positive response, I was fairly sure (and am still sure) about the negative part: this example doesn't show that 1+1 is not 2, and the problem with the example is something to do with the misinterpretation of the addition symbol.

People are always trying to talk me out of believing the basic truths of arithmetic. On a Greyhound bus two years ago I met an "FBI interrogator" who argued that "our math" was only true for us, since a different culture might interpret "2" as meaning 5 - in which case "2+2=4" would be false and "2+2=10" would be true. I imagine - though I do not recall - that he was a little less careful about the use/mention distinction than I have been in the retelling of this argument. And my grounds for thinking this are that the argument as a whole is based on a confusion between a sentence and its content: yes, relative to some other language, "2" might refer to 5, and that would make the sentence "2+2=4" false (with respect to that language), but that does not show that what the sentence says (namely that 2+2=4) could be false, since in that language "2+2=4" does not say that 2+2=4, but that 5+5=4. This didn't convince the FBI interrogator, but then, he wasn't really listening and we moved on to talking about whether it was ok to "get a bit rough" with suspects if the crime they were charged with was especially horrible...he was wrong about that too. Greyhound - it's the new Clapham omnibus.

Posted by logican at 1:18 AM | TrackBack

April 28, 2005

Google Definitions

Warning: causal theory of reference geekery ahead.

At a 1962 conference in Helsinki (the same conference at which Kripke presented "Semantic Considerations on Modal Logic"), Ruth Barcan Marcus said the following:

[T]o discover that we have alternative proper names for the same object we turn to a lexicon, or, in the case of a formal language, to the meaning postulates, ...[o]ne doesn't investigate the planets, but the accompanying lexicon.

(Aside: this Barcan Marcus quote is taken from John P. Burgess' "Quinus ab Omni Naevo Vindicatus", a paper which John usually refers to excitedly and mysteriously as "the paper with the Latin title." The title, he explains in the paper, echoes Saccheri's Euclides ab Omni Naevo Vindicatus, or Euclid Freed of Every Blemish, and the paper defends (successfully in my opinion, and I have an extremely fractious relationship with Quine's writings) Quine's argument that de re modality cannot be reduced to de dicto modality.)

Of course, Barcan Marcus' claim here looks crazy. One cannot always tell that a single object has been given two names just by looking up the meanings of the names in the dictionary. The discovery that 'Hesperus' names the same object as 'Phosphorus', for example (they are both names for Venus), required substantial empirical research in astronomy, not just the consultation of a dictionary. I used to think this quote was an example of someone saying something crazy because an attractive theory seems to imply it. And I thought the train of thought probably went something like this: names are just tags and their meanings are just the objects tagged - like names in modal logic. How does one find out about the meaning of a linguistic expression? One looks it up in the dictionary (right?). So if two names have the same meaning (tag the same object), then we'll be able to tell from the dictionary entries.

And you might think that Barcan Marcus' comment contained some important and radical semantic ideas but wasn't yet very clear on one of the epistemic possibilities that could go along with those ideas: namely, that two names, e.g. "Hesperus" and "Phosphorus", could have the same meaning without it being possible to tell, on the basis of one's semantic competence alone, that a sentence expressing the identity of the object(s) referred to is true - without being able to tell, for example, that "Hesperus is Phosphorus" is true. (Forgivably, of course; no-one else got there until Kripke's "Naming and Necessity" lectures at Princeton 8 years later.)

However I have just discovered Google Definitions. If one feeds Google the expression "define: " followed by the term one wants defined (it's a little too late for corner quotes), it will return definitions from all over the web. So naturally I had to feed it all the old philosophical examples, and behold:

Hesperus
evening star: a planet (usually Venus) seen at sunset in the western sky
www.cogsci.princeton.edu/cgi-bin/webwn

Phosphorus
Phosphorus means Venus when it is seen in the morning (the morning star).
en.wikipedia.org/wiki/Phosphorus_(morning_star)

There you go, Ruth Barcan Marcus was right and all that hard astronomy was for nothing.

Except, er, not. My new electronic super-lexicon is surprisingly quiet on the identity of the referent of '2' with the referent of either '{0,{0}}' (a la von Neumann) or with '{{0}}' (a la Zermelo), and it didn't have anything to say about the identity of Plumwood with Routley. (Though it did tell me that "Londres" was a name for London, and that "Tully" is a name for Cicero.)

Posted by logican at 9:54 AM | TrackBack

April 25, 2005

Ideas of Imperfection

I'll try not to get into the habit of introducing "new" philosophy weblogs that are actually a few days older than mine, but I can't help this one. Listen up: Kieran Setiya has a weblog. Yep, Kieran. If you know him, 'nuff said, I think. If not, I can only say that that man can say more interesting things over dinner than most people think of in a lifetime. I shan't say more, since it would only be gooey and embarrassing gushing over Kieran, but go and read his stuff. He comes highly recommended.

In his post from the 14th of March, on Marjorie Garber's book Academic Instincts, he writes:

I do take issue with the following remark:

Virtually everyone in the humanities envies the philosophers, but the philosophers, some of them at least, aspire to the condition of law. Or, alternatively, to the condition of cognitive science.

This description of philosophers is both peculiar and false. Some aspire to the condition of law? I don't follow. Does she mean that they want to be lawyers – a remark on adversarial style? Or that they wish they could legislate the world to fit their image of it? In any case, what is striking about philosophers, for the most part, is rather their peculiar self-confidence: their lack of envious insecurity.

This reminded me of Tim Schroeder's short essay "What are you going to do with that?" (where "that" refers to your newly minted philosophy degree.) His protagonist, harangued at parties, protests:

But philosophy trains the intellect. It does not simply fill one up with facts which are soon outdated, but makes one an all-purpose reasoner, clear and lucid in speech and on the page. The skills of philosophers are in demand in business. Philosophers are hired to be ethical consultants at hospitals. Philosophers get into prestigious law programs (but here we get into bad faith, because people who really love philosophy feel about the law the way lovers of Belgian chocolate feel about the Hershey Corporation).

I think he and Kieran are right about law envy - I've never encountered that one, perhaps because philosophers who wish to become lawyers usually can. What I have encountered (squirming in my own rotten soul, no less) is mathematics envy. And sometimes some theoretical physics envy. And sometimes, I think it would have been fun to be a hairdresser...and there's always linguistics and computer science. Perhaps we simply realise that it would be great to have knowledge of the fields which border on our own, and hence admire and envy the people who really do have that knowledge.

Posted by logican at 2:29 PM | TrackBack

April 23, 2005

Modern Logic

Is it a pop star? A model? No, it's senior blogger and web-neighbour Richard Zach.

(Thanks to Marianne and Andrew for putting me on to this.)

In searching for that interview, I also turned up this one, in his own h2So4. The second interview covers a wider range of topics, allowing Professor Zach to respond to the question:

Why do you (on occasion) work for no money?

with:

What do you mean? I've never worked for money in my life.

Posted by logican at 10:23 PM | TrackBack

April 21, 2005

Antimeta

There's a new blog on the philosophy of mathematics here. (Well, it's newish - older than this blog, yet younger than the year.) At the helm is Kenneth Easwaran, a Berkeley graduate student studying for his qualifying exams. So far topics covered include fictionalism and platonism, logicism, logical consequence and conservative extensions.

In his first post Kenneth says he started the blog "to give myself a place to write up my thoughts on various things that I'm reading in preparation for my qualifying exam. Hopefully some of the thoughts on these and related issues will eventually turn into research projects that will be worth discussing."

This makes a lot of sense to me. When I have graduate students I think I might start a group blog and encourage each of them to contribute something every week.

Posted by logican at 10:39 PM | TrackBack

April 20, 2005

Swims like a dolphin in the deep grammar of Ku

Nicole Kidman has apparently learned a fake language for her new film, The Interpreter. Lots of entertainment sites are reporting on this and some of them have been saying some pretty strange things:

Scriptwriters invented the obscure African language of Ku as part of the plot for Nicole Kidman's interpreter who claims to overhear an assassination plot. (Entertainment Northeast)

I suppose one way to understand this is as saying that, even in the fiction, the language is obscure, unlike, say, Tolkien's Quenya, which, according to the fiction, is a major language of Middle Earth. (Hope I'm right about that.)

But it could also be read as saying that a couple of scriptwriters came up with a language for the film and - big surprise this - no major peoples have adopted it, so that (tragically) their language remains obscure.

But that's not the end of the slightly odd things said about this new film. Here's the Guardian's film critic:

Nicole speaks Ku like a native.

(What natives?) And hilariously, he goes on:

Nicole swims like a dolphin in the deep grammar of Ku. And her pronunciation is frankly top drawer.

Unfortunately I wasn't able to track down any text in Ku. I wanted to feed it to the language identifier and then run away.

Posted by logican at 12:42 AM | TrackBack

April 19, 2005

Searching for Logic

Adam Morton - holder of the Canada Research Chair in Epistemology and Decision Theory - spoke here on Friday. His talk was a flurry of impressive and confusing ideas concerning the relation of something he calls searching to three other topics - logic, the teaching of logic, and human reasoning.

Adam's main example of searching is using a computer in a library to do a Boolean search of a database for records. We can get the library computer to search for all the English books by putting "English" in as our search string, and then it will return the records for each of the English books in the library. We can also use the three Boolean operators to form more complex search strings: "English AND Non-fiction" will return the records for each of the English non-fiction books, "German AND ¬Non-fiction" will return the records for the German fiction, and "Play OR Biography" will return the records of books that are either plays or biographies.
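For concreteness, here is a tiny sketch of the kind of search involved. This is only my illustration - the record format and the matches function are invented, not anything from Adam's class or from a real library system:

# A toy model of Boolean search: each record is just a set of tags, and a
# query is either a tag or a nested tuple built from AND, OR and NOT.
records = [
    {"English", "Non-fiction"},
    {"English", "Fiction", "Play"},
    {"German", "Fiction"},
    {"German", "Non-fiction", "Biography"},
]

def matches(record, query):
    if isinstance(query, str):          # a bare tag
        return query in record
    op = query[0]
    if op == "AND":
        return matches(record, query[1]) and matches(record, query[2])
    if op == "OR":
        return matches(record, query[1]) or matches(record, query[2])
    if op == "NOT":
        return not matches(record, query[1])
    raise ValueError("unknown operator: " + op)

# "German AND NOT Non-fiction" picks out the German fiction:
print([r for r in records if matches(r, ("AND", "German", ("NOT", "Non-fiction")))])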

Adam maintains - on the basis of an experimental logic class which he taught last year (sounds cool, right?) - that if you teach this, and then explain deduction in terms of it, it helps to even out the heartbreaking teacup of a grade-graph familiar to many intro logic teachers, which records the fact that half the class found logic easy, and half the class were failing from the get-go. He found that students learned searching readily, and that it was simple to justify some of the less intuitive argument forms in terms of searching. Here is (what I remember of) Adam's dramatisation of two ways of teaching the disjunction rules:

Disjunction 1a


Adam: 'George skis in spring OR George likes spinach, therefore, George likes spinach' is not a valid argument.
Students: Yes it is. George loves spinach. He eats it all the time. Even in class. Look! There he goes again!
Adam: But suppose the last sentence wasn't 'George likes spinach' but 'George likes apples'...
Students: But the last sentence IS 'George likes spinach.' THIS is the argument we're talking about, stop changing the subject!

Disjunction 1b


Adam: Roughly, an argument is valid iff any record which is a search result for each of the premises is also a search result for the conclusion. For example, suppose we have as our premise 'George skis in the spring OR George likes spinach' and as our conclusion 'George likes spinach.' This is not a valid argument because a record for 'George skis in spring' is a search result for the premise, but not for the conclusion. See?
Students: Indeed, Professor Morton. You are very wise.

Disjunction 2a


Adam: George skis in the winter, therefore George skis in the winter OR George likes spinach.
Students: What? You are crazy, sir! Where did the spinach come from?
Adam: Erm...

Disjunction 2b


Adam: Any search result for "George skis in the winter" is also a search result for "George skis in the winter OR George likes spinach." So the argument is valid, see?
Students: But of course, Professor Morton, even the smallest child knows that.

I think he is right that the second approach to explaining the rule is likely to be the more convincing. But it should not really be surprising that we can explain these arguments in terms of searching, because being a result for a search-string is very closely related to being a model which satisfies a statement, and we can already explain validity in terms of satisfaction by models: an argument is valid just in case any model which satisfies the premises satisfies the conclusion. Similarly, on the searching picture, an argument is valid just in case any record which is a result for each of the premises is also a result for the conclusion.
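To make the parallel vivid, here is a rough sketch of validity as truth-preservation across models, for propositional sentences. The tuple encoding and the valid function are my own invention for illustration; nothing here comes from Adam's talk:

from itertools import product

# Sentences are atoms ("Ski") or nested tuples ("not", A), ("and", A, B),
# ("or", A, B). A model assigns True/False to each atom; an argument is
# valid iff every model satisfying all the premises satisfies the conclusion.

def atoms(sentence):
    if isinstance(sentence, str):
        return {sentence}
    return set().union(*(atoms(part) for part in sentence[1:]))

def satisfies(model, sentence):
    if isinstance(sentence, str):
        return model[sentence]
    op = sentence[0]
    if op == "not":
        return not satisfies(model, sentence[1])
    if op == "and":
        return satisfies(model, sentence[1]) and satisfies(model, sentence[2])
    if op == "or":
        return satisfies(model, sentence[1]) or satisfies(model, sentence[2])

def valid(premises, conclusion):
    letters = sorted(set().union(*(atoms(s) for s in premises + [conclusion])))
    for values in product([True, False], repeat=len(letters)):
        model = dict(zip(letters, values))
        if all(satisfies(model, p) for p in premises) and not satisfies(model, conclusion):
            return False                # found a countermodel
    return True

# Disjunction introduction is valid; the converse inference is not.
print(valid(["Ski"], ("or", "Ski", "Spinach")))       # True
print(valid([("or", "Ski", "Spinach")], "Spinach"))   # False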

In my experience it is usually the case that justifications of logical laws proceed best through semantics. It is common, for instance, for students to balk at the idea that explosion

[image: the explosion argument form - from a contradiction, anything follows]

is valid. And one good way to explain its validity is to say: look, by definition an argument is valid if and only if every model which satisfies the premises satisfies the conclusion. No model can satisfy contradictory premises, so trivially, every model which satisfies contradictory premises satisfies the conclusion. Hence the argument is valid. (Then say reassuring things about it being a degenerate case, just a consequence of our definition, 'valid' is a technical term etc.) This works for disjunction introduction too (once one has explained about inclusive and exclusive disjunction, but I think you have to do that for searching as well.)
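(Using the same toy valid checker sketched above - still just an illustration - explosion passes for exactly this reason: the search for a countermodel comes up empty because no model makes both premises true.)

print(valid(["P", ("not", "P")], "Q"))   # True: no countermodel exists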

Explaining these inferences by teaching searching first, and then explaining them in terms of searching, might have the following pedagogical advantages over the traditional semantic explanation:

One. Even students who never really understand the explanations might learn the useful practical skill of Boolean searching. I don't think we should scoff at this. One of the nice things about undergraduate teaching is that even when we fail to get a student to do good philosophy, we can still help them to improve their writing skills, close-reading skills, and maybe research skills like Boolean searching. This is very cool and likely to be of use to them in later life. Similarly, a basic logic course which gives half the class a grounding in logic, and half the class C+s and the ability to do Boolean searches, is surely better than a basic logic course which gives half the class a grounding in logic, and leaves the other half in tears with C-s, and no new skills.

Two. Students can see the point of learning to search, and everyone learns better with a motivation.

But here are some worries you might have about teaching logic this way; the first two are just developments of things that Adam touched on himself:

One. Maybe the success of teaching logic through searching is a product of the extra excitement and interest involved in teaching such an experimental class. If that were true we might expect results to tail off as the method became standardised. (Well, that might be the explanation, but we should not assume that. We could say the same thing about any new, successful teaching method, and presumably some of those would be better than the methods we use now.)

Two. Searching isn't so helpful when it comes to thinking about conditionals. Students are puzzled when asked to search for "if it is English then it is non-fiction". This matters because the arguments known as "the paradoxes of material implication" - for example, these guys:

[image: examples of the paradoxes of material implication]

are traditional sticking points with students, just like disjunction introduction. (You might think this is fine though, since teaching conditionals is a bit tricky anyway. I might have more to say about this in a later post, since I thought Goldfarb's explanation of the truth-table for the conditional in his recent Deductive Logic was one of the best I have seen.)

Three. New logic students already have difficulty separating syntax and semantics and often find it difficult to understand the point of a completeness proof. I worry that this method risks confusing them further by mixing up the semantic notions with the more syntactic-looking database records. Maybe this worry is unfounded; after all, when we give Tarski-models for first order logic, we represent them using linguistic items - numerals, brackets etc. Maybe the database entries are just like that. But in that case shouldn't we be saying that what we're searching for is not itself a database entry, but whatever that entry represents? (like a book?)

Four. I'm a bit worried that searching encourages sloppiness with respect to the objects of certain properties. In talking about searching it's natural to end up talking about searching for "things that are in English" and then end up saying "being in English entails either being in English or being non-fiction, since everything that is in English is either in English or non-fiction." Though I am sure this makes perfect informal sense, it isn't the way we normally talk about entailment: entailment is a relation between interpreted sentences (or rather a set of interpreted sentences and a sentence. Or multisets of interpreted sentences, or...(sigh)), not between properties (or even predicates.) Learning to be sensitive to such things is one of the tasks that is difficult for some students, and encouraging insensitivity to it early on might not be a kindness. (Though, thinking about it, one standard exercise is to ask students to demonstrate that "intransitivity entails irreflexivity", i.e. to show that ∀x ∀y ∀z ((Rxy & Ryz) → ¬Rxz) entails ∀x ¬Rxx - the usual derivation is sketched just after this list of worries. Perhaps by the time we start talking like this their understanding of validity is already safely entrenched.)

Finally, I have worries about introducing all this complicated build-up to the stuff we actually want to teach. There is potential for introducing all kinds of confusion. We'll be multiplying the definitions of validity, the kinds of objects students have to think about when thinking about logic, and with that the potential for confusing those objects and definitions. So a database record is kind of, but not entirely, like a model, but more familiar to non-logicians. This makes it helpful, but treacherous.
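Here, as promised in worry Four, is the derivation usually expected in that exercise - my own reconstruction, written out line by line rather than in any particular textbook's format:

% Intransitivity entails irreflexivity.
\[
\begin{array}{lll}
1. & \forall x \forall y \forall z\,((Rxy \wedge Ryz) \rightarrow \neg Rxz) & \mbox{premise}\\
2. & (Raa \wedge Raa) \rightarrow \neg Raa & \mbox{from 1, instantiating } x, y, z \mbox{ with } a\\
3. & Raa & \mbox{assumption, for reductio}\\
4. & Raa \wedge Raa & \mbox{from 3}\\
5. & \neg Raa & \mbox{from 2 and 4}\\
6. & \neg Raa & \mbox{3--5, reductio, discharging 3}\\
7. & \forall x\,\neg Rxx & \mbox{from 6, since } a \mbox{ was arbitrary}
\end{array}
\]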

Lots of these worries are things that might be allayed by the details. Surely we can be as strict in the way that we talk about searching as we are when we talk about satisfaction. Perhaps this class is best aimed at high school students? Perhaps the class might only be aimed at students who are never going to need to follow a completeness proof. Perhaps there is a way of teaching the conditional through searching. If someday a version of this idea could help to overturn those teacups, then I'm all for it.

Adam also had some interesting things to say about the relation between searching and human reasoning, so I might talk about that in the near future.

Posted by logican at 4:28 PM | TrackBack

April 18, 2005

Mavenry on Crack (or just me on crack?)

Damn. After I posted this Mark Liberman informed me that the site I'm criticising is in fact a parody site. Sorry for the misplaced indignation - these are crazy times.

----------------------

Linguist mocks crazy old mavens.

Mavens go insane:

If we are not mistaken, 'web site' ought to be 'Web-site', since 'Web' is a proper noun, 'sitemeter' is simply a monstrous mangle that should at the very least be capitalised, and the spelling 'humor' is one more example of the kind of linguistic bigotry that demands the English-speaking world accede to the vulgar American way of doing things.

Plus check out their endorsement from famous racist Kilroy-Silk:

It is time to rescue Great Britain from the hands of foreigners, and S.P.E.C.S. have risen to the challenge.

They also lambast BBC newsreader Huw Edwards (who is Welsh) for splitting infinitives:

We have always insisted that being a non-native is no excuse for this vandalism of the airwaves, and we shall be monitoring the speech of Mr. Edwards very closely indeed.

And I have always insisted that being old and paranoid is no excuse for this sort of nonsense. These particular fellow-citizens of mine are about more than the usual self-important and badly-researched pedantry, and I hereby call them on it. I'm not sure we have much choice but to tolerate it, but ignore it, and your children will be next.

Posted by logican at 12:01 PM | TrackBack

April 17, 2005

First Time Reading

The reading exams for ancient philosophers just got a whole lot harder. Scientists at Oxford have used infrared imaging to make the documents known as the Oxyrhynchus Papyri - 400 000 fragments of Greek and Roman writings - legible.

When it has all been read - mainly in Greek, but sometimes in Latin, Hebrew, Coptic, Syriac, Aramaic, Arabic, Nubian and early Persian - the new material will probably add up to around five million words. Texts deciphered over the past few days will be published next month by the London-based Egypt Exploration Society, which financed the discovery and owns the collection.

Via languagehat, who links to Martin Robertson's translation of a fragment of a poem by Arkhilokhos (I particularly recommend that link.) That fragment was discovered just 30 years ago and the new haul promises more from the same writer.

Just one more reason why it would have been worse to be born 100 years earlier...

Posted by logican at 12:23 PM | TrackBack

April 16, 2005

"It's not even a warning system,"

says Biznel. "It's better thought of as an information system."

Rewording reduces media frenzies?

Posted by logican at 12:57 PM | TrackBack

April 14, 2005

Open-Access Journals

Wired reports on the growing number of open-access (free) journals in academia. The article represents this as happening "despite concerns about the ethics of pay-for-play publishing" but the first two open-access journals that come to mind are the Australasian Journal of Logic and Philosophers' Imprint. I know one cannot pay to publish in the AJL, and I will eat my copy of Naming and Necessity if you have to pay to publish in Philosophers' Imprint.

Posted by logican at 1:23 AM | TrackBack

April 13, 2005

Melbourne on the Air

Joanne Faulkner writes:

This week the La Trobe Philosophy Radio Program looks at Public Philosophy. Discussants include Stan van Hooft (Deakin), Michelle Irving (Heart of Philosophy), and David Miller (The Existentialist Society).

The program can be streamed live at: http://www.subfm.org
Friday 15 April, 2 pm.

Find a flyer for the program here: http://mc2.vicnet.net.au/home/ltppc/public.pdf

Most previous programs are archived at: http://www.subfm.org/sophiaaudio.htm

----

For those of you who don't run off to Australia whenever you get an opportunity, La Trobe and Deakin are universities in Melbourne and Heart of Philosophy is an organisation that has been running philosophy cafes in the city. My good friend Matt Carter presented one on artificial intelligence on Wednesday.

Posted by logican at 10:41 AM | TrackBack

April 12, 2005

Journal Selection

I spent some time today thinking about where I should send a paper that I have nearly finished. This discussion over at the Leiter Reports was helpful, even though it is no simple matter to form sensible beliefs about journals based on it. I think I have learned what people consider to be a reasonable time to wait for a response from a journal (three months is good), the kinds of problems people have with journals (extended response time, unhelpful reports from referees, disrespectful editors and referees), and I have a better feel for which journals are thought of as "the best" and which count as "second tier."

My tenure clock has only just started to tick, and I like the paper, so my plan is to send it to a top journal. But in checking out the websites of a few journals I was surprised to find that several of them do not accept electronic submissions.

To an outsider this seems strange. I would have thought the editor would simply want to email the paper to referees. It is faster and cheaper than posting it, and it is what has happened both times I've refereed papers. (Yes, quake in your stylish yet affordable boots, AJP and CJP authors, they let a young punk like me write reports on your work.)

It isn't very likely that the editors are not computer-minded; doesn't publishing a journal involve a lot of messing around with computer files and, well, possibly Quark or InDesign or LaTeX? (Am I being hopelessly naive about what publishing a journal involves?)

Incidentally, I think I'm going with PPR, (at least, it was their address I wrote on the strangely concrete envelope that I bought from the bookstore this afternoon) because, well, (only slightly embarrassed) I kind of want to publish in the same journal as Tarski...

Decisions are funny things.

Posted by logican at 11:46 PM | TrackBack

April 11, 2005

The Life of Blogs

The first day in the life of a new-born analytic philosophy weblog brings non-paradoxical self-reference.

Posted by logican at 11:25 PM | TrackBack

Teaching Logic

Richard Zach has posted a wealth of material from the ASL session on logic instruction and graduate training, including MP3s, slides and notes from the talks and some related links.

Posted by logican at 4:18 PM | TrackBack

Local News

There are a number of logic and language related talks coming up in the philosophy department at the University of Alberta in the next two weeks:

"Searching for Logic" - Adam Morton Friday April 15th, 3.30pm in Humanities 4-29.

"Truth in Virtue of Meaning" - Gillian Russell (that's me) 21st April, 12 noon in Humanities 4-29.

"Mathematical Notation and Philosophical Analysis" - Jamie Tappenden - Friday, April 22nd, 3.30pm in Humanities 4-29.

Posted by logican at 3:15 PM | TrackBack

Orwell and Racism in Word Choice

One of the commentors on a recent post wrote the following in connection with Orwell's "Politics and the English Language":

I find it hard to take seriously a writer who prefers Anglo-Saxon words to those of Latin or Greek origin, a full 2000 years after the Roman occupation of Britain. Nothing Orwell says makes me believe this is anything but plain, old-fashioned racism.

Is Orwell's position on avoiding expressions with foreign origins racist?

Maybe. I like to think that Orwell would have wanted us to be on the look out for racism in his work, and to highlight it when we see it. (And I would want people to do that with mine - unpleasant though the accusations and realisation would undoubtedly be.) Here is what Orwell says about Greek and Latin words, so that you can judge for yourself:

Pretentious diction. Words like phenomenon, element, individual (as noun), objective, categorical, effective, virtual, basic, primary, promote, constitute, exhibit, exploit, utilize, eliminate, liquidate, are used to dress up a simple statement and give an air of scientific impartiality to biased judgements. Adjectives like epoch-making, epic, historic, unforgettable, triumphant, age-old, inevitable, inexorable, veritable, are used to dignify the sordid process of international politics, while writing that aims at glorifying war usually takes on an archaic colour, its characteristic words being: realm, throne, chariot, mailed fist, trident, sword, shield, buckler, banner, jackboot, clarion. Foreign words and expressions such as cul de sac, ancien régime, deus ex machina, mutatis mutandis, status quo, gleichschaltung, weltanschauung, are used to give an air of culture and elegance. Except for the useful abbreviations i.e., e.g., and etc., there is no real need for any of the hundreds of foreign phrases now current in the English language. Bad writers, and especially scientific, political, and sociological writers, are nearly always haunted by the notion that Latin or Greek words are grander than Saxon ones, and unnecessary words like expedite, ameliorate, predict, extraneous, deracinated, clandestine, subaqueous, and hundreds of others constantly gain ground from their Anglo-Saxon numbers. The jargon peculiar to Marxist writing (hyena, hangman, cannibal, petty bourgeois, these gentry, lackey, flunkey, mad dog, White Guard, etc.) consists largely of words translated from Russian, German, or French; but the normal way of coining a new word is to use a Latin or Greek root with the appropriate affix and, where necessary, the -ize formation. It is often easier to make up words of this kind (deregionalize, impermissible, extramarital, non-fragmentary and so forth) than to think up the English words that will cover one's meaning. The result, in general, is an increase in slovenliness and vagueness.

And then later, in his summary, one of his six recommendations is:

Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

Though, of course, this is then tempered by recommendation 6 which reads:

Break any of these rules sooner than say anything outright barbarous.


I want to suggest that we should count this as a form of racism if the ground for using an English word in place of a foreign word is merely that it is English in origin and the other is foreign in origin (henceforth I will just say foreign and English words.) Of course it might be tempting to point out that Orwell's ground for rejecting foreign words is that they are pretentious, not merely that they are foreign. But "pretentious" is a normative expression, and if the sole reason they are felt to be pretentious is that they are foreign, then this is merely an evasion. The crucial question will be whether there is any other reason not to use them, or any other reason for calling them pretentious.

One reason suggests itself: they are, for the expected audience, harder to understand. Many writers have an obligation to write in such a way that they can be understood. Most serious non-fiction writers have an obligation to write in such a way that it is hard for them to be misunderstood. So perhaps we should avoid foreign words - not because they are foreign - but because we care about communicating something to our audience, and foreign words will make it harder to do that.

But this defence of Orwell runs into trouble when we look at Orwell's list of foreign words: phenomenon, element, individual (as noun), objective, categorical, effective, virtual, basic, primary, promote, constitute, exhibit, exploit, utilize, eliminate, liquidate; cul de sac, ancien régime, deus ex machina, mutatis mutandis, status quo, gleichschaltung, weltanschauung; expedite, ameliorate, predict, extraneous, deracinated, clandestine, subaqueous. While we might make a case that someone who used "mutatis mutandis" - in the context of, say, a letter to the local paper - was being inconsiderate of his audience, I don't think we can claim that "element", "individual" or "basic" are hard for English speakers to understand. So that reason isn't good enough.

One trick of Orwell's that I think is very effective in convincing the reader that he is on to something here is his translation of a verse from the Bible into what he calls "modern English." Here is the original:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Here is the translation:

Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

First passage - kind of beautiful. Second passage - not. Is the second passage harder to understand? Well, the first is both poetic and archaic. "Happeneth"? "neither yet bread to the wise"? It requires a little interpretation. What it does have over the second is vivid imagery - it uses specific examples where the second passage is more general - and the first passage has a greater proportion of monosyllabic words.

Are generality and abstractness responsible for unclarity? Not on their own. The languages of mathematics and logic deal with unusually abstract and general issues and yet they are also unusually precise.

Is there any reason to link polysyllabic words with lack of clarity? At first blush this might seem ridiculous. Some languages - such as Latin and German - are much more apt to stick words and bits of words together than English is. (Patently I could do with some help from a friendly linguist here.) Does (or did) this tendency make sentences written in them less clear to their native speakers? That seems very unlikely. So it seems silly to favour mono- over polysyllabic words in general.

But could it be that when polysyllabic words are imported into English they encourage unclarity? Perhaps some of the information available - just by looking at the word - to someone who understands Latin affixes and morphology may not be available to someone who does not. But in many cases the original meaning of the word is now irrelevant to the meaning of the word in English anyway. Exciting new discoveries about what the Greeks really meant by "idea" should not change our views on the meaning of the English word "idea". And none of this threatens the fact that "individual", "element" and "exploit" are all perfectly well understood by most native English speakers.

I see no good, non-xenophobic justification of the claim that we should avoid foreign expressions in "Politics and the English Language". So why does Orwell say that we should not use them? Perhaps part of the explanation is some British jingoism, combined with a romantic attachment to the past of the kind that Orwell himself criticises in his own anti-racist essay "Notes on Nationalism". This is a kind of racism. But if that is part of the explanation, I suspect a full explanation would include i) Orwell's antipathy towards the British middle classes and the way they use language and ii) some careless over-generalisation from poor uses of foreign expressions.

I suppose this only goes to show that you can write essays against xenophobia, run off to Spain to fight for POUM (the Workers' Party of Marxist Unification) in the civil war, and point out that the pigs can get away with saying things like "All animals are equal, but some animals are more equal than others", and still fuck it all up - in print, no less - yourself.

I think there's probably a lot more to be said here, so I might write this up in more detail later on. Comments are very welcome on this one (and if you sign into Typepad, you can comment without waiting for the comment to be approved.)

Posted by logican at 2:30 PM | TrackBack

April 9, 2005

Unzeitgemässe Betrachtungen

So I was wondering,....which philosophers would have had weblogs, had they lived long enough?

And which not?

I'm thinking yes for Wittgenstein; the Investigations demonstrates clear blog-yearning. Yes for Nietzsche - the wit, the aphorisms, the opportunity to hold forth on the state of the culture to an audience at the click of a mouse. And yes for Quine - Quiddities too was itching to be a blog and there would be a ton of bizarre posts about things like the miscalculated areas of small countries.

But no for Kant (couldn't begin to get started on the lack of seriousness, depravity of it all etc.) No for Kripke (he has lived long enough, obviously, but by reputation he lacks the kind of thick-skinned imperviousness to the pressures of publicity that weblogs are meant to require) and no for Epicurus, because he would much rather be out having dinner with his friends.

Though you know what? Kripke should totally have an anonymous weblog.

Posted by logican at 2:32 AM | TrackBack

April 8, 2005

Unkindness of Mavens

I was surprised to see that Fay Weldon has taken up language mavenry. Weldon is the author of a good and disturbing novel called Down Among the Women, which holds the curious distinction of being one of three texts ever to have made me feel physically sick. I recommend it. (Yeah, really. The other two are Kafka's "Der Heizer" and Bret Easton Ellis' "The Rules of Attraction" - I seem to be able to stomach graphic physical violence, but not social confusion. Yes, I know that's stupid.) But Weldon has also just written a totally lame-arse opinion piece for the most recent Sunday Times, called "Language - Not Another Euphemism" and it combines all the usual elements:

1. unsupported and wild claims about the influence of language on thought:

Our ideas of what we are and what we want to be are limited by the language we use. The hanging, dangling participle has no conclusion [...] We ourselves, not just our participles, are hanging, dangling, and strangling in verbiage and euphemism.

2. tiresome (and in this case, tasteless) exaggeration:

It’s all velvet-glove stuff, I fear, disguising the fingers at the throat. It has been going on a long time. Today’s version of Work Makes You Free over the camp gates turns smoothly into Making Work Choices Easier.

3. and all this based on complaints about utterly trivial things, in this case, a trend in advertising:

It’s the “ing” word I’m talking about; not a participle hanging on its own, or dangling likewise, it’s the “hanging and dangling participle”. I claim it as a new grammatical usage, and it’s everywhere. It’s the Home Office logo: Building a Safe, Just and Tolerant Society: it’s there on the social-services minibus: Driving for the Caring Community. (How about “Caring for the Driving Community” on your parking ticket? Coming soon, no doubt.) Creating Opportunity Worldwide, claims the British Council (Really? Sounds like a business plan). ACAS: Making Work Work (hello, in there? Anyone?).

Weldon reminds us that Orwell was doing this before her, in Politics and the English Language. But Orwell's essay was great. (Yes, even though Geoff Pullum despises it. He's right about everything else, he's just wrong about Orwell.) The basic message of Orwell's essay is STOP BEING SO FUCKING PRETENTIOUS AND THINK ABOUT WHAT YOU'RE SAYING. Lots of brilliantly clever people needed to hear that message at some point. But that is a long way from the message that the "hanging, dangling participle" is going to lead them straight to the concentration camp.

Meanwhile, this weblog's favourite Maven is on topic this week, with a column entitled "Der Pabst ist Tod! Der Pabst ist Tod!" (the pope is death, the pope is death!) This is, naturally, a long awaited opportunity for him to go all editor-y on your German ass, because you should have written "der Pabst ist tot! Der Pabst ist tot!" Yep, the Pope is death, and the German language is völlig durcheinander geraten (completely muddled), and Sick's the man to sort us all out. (And in doing so he provides so many euphemisms for "almost dead" that I could probably rewrite the dead parrot sketch in German with an ecclesiastical twist...)

Posted by logican at 1:47 AM | TrackBack

April 7, 2005

Officialese

It's US tax filing time, which means that it is time for my annual confrontation with the bizarre question 12 on form 8843 [88kb pdf]:

Were you present in the United States as a teacher, trainee or student for any part of more than 5 calendar years?

Take ANY period of time shorter than 5 years. Say, 20 minutes during 2004. That is a part of the 10 year block between 1996 and 2006, which is more than 5 calendar years. So it looks to me as if, if I have ever been to the US at all (as a teacher, trainee or student), I should answer 'yes' to that question.

But this cannot be what they really mean, otherwise they would have asked "have you ever been present in the United States as a teacher, trainee or student?" And even we aliens know that it is best not to upset the IRS. So I have to figure out what they meant to ask. And it is kind of difficult to express, which is probably why it turned out so badly on the form. But I think it is this:

Have there been 6 or more calendar years in which you were present in the United States as a teacher, trainee or student for part of the year?

Not elegant, perhaps, but at least it asks the right question.
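To make the contrast concrete, here is a throwaway sketch of the two readings. The stays and every name in it are made up purely for illustration:

from datetime import date

# Hypothetical presence data: (first day, last day) of each stay in the US
# as a student.
stays = [(date(2001, 9, 1), date(2001, 12, 20)),
         (date(2002, 1, 5), date(2002, 4, 30)),
         (date(2004, 8, 25), date(2004, 8, 25))]   # the 20 minutes in 2004

# The question as (I think) intended: were there 6 or more calendar years
# containing at least some presence?
years_with_presence = {year for start, end in stays
                       for year in range(start.year, end.year + 1)}
print(len(years_with_presence) >= 6)   # False: only 2001, 2002 and 2004

# The literal reading: any presence at all is part of some stretch of more
# than 5 calendar years, so the answer is 'yes' whenever there is any stay.
print(len(stays) > 0)                  # True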

Posted by logican at 2:40 PM | TrackBack

Even More Harmony

Lest you think that I am the only person to come down with tonk-fever, Charles Stewart has started a harmony page over on Greg Restall's Wiki, and Matt Weiner (newly of Texas Tech - congratulations Matt!) has an interesting post on truth and local reduction.

Posted by logican at 2:35 AM | TrackBack

April 4, 2005

Dummett on Harmony, Conservative Extensions and Local Reduction/Normalisation

This post will be a brief discussion of a family of related concepts - local reduction/normalisation, conservative extension and harmony - in the light of Dummett's "Circularity, Consistency and Harmony" in The Logical Basis of Metaphysics.

One thing that emerged in the comments here and here was that it is easy to get confused about whether Dummett's concept of harmony is identical with the satisfaction of local reduction. The idea of local reduction is reasonably straightforward: a logical constant has the local reduction property just in case, whenever its introduction rule is used to derive some formula A and its elimination rule is then immediately applied with A as the major premise, the proof can be rewritten without that pair of steps. (See this post for more detail, with the caveat that the post ignores the complicated literature on the subject of local reduction with respect to negation.)
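To fix ideas, here is the standard local reduction ('detour removal') step for conjunction - a textbook example of my own choosing, not a quotation from Dummett:

% An \wedge-introduction whose conclusion is immediately the major premise of
% an \wedge-elimination is a removable detour. \Pi_1 and \Pi_2 stand for the
% derivations of A and B; the reduced proof on the right keeps only \Pi_1.
\[
\dfrac{\;\dfrac{\dfrac{\Pi_1}{A} \qquad \dfrac{\Pi_2}{B}}{A \wedge B}\;[\wedge\mathrm{I}]\;}{A}\;[\wedge\mathrm{E}]
\qquad\mbox{reduces to}\qquad
\dfrac{\Pi_1}{A}
\]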

Harmony and Meaning
Harmony is slippery because it is something which Dummett has an intuitive grip on and is trying to make more precise. The idea is introduced through talk of meaning. Dummett thinks that verificationists and pragmatists have been investigating different parts of the meaning-elephant: verificationists identify the meaning of a statement with what is needed to establish its truth (verify it), pragmatists identify the meaning of a statement with its consequences, but Dummett thinks neither is sufficient:

Someone would not be said to understand the phrase 'valid argument', for instance, if he knew only how to establish (in a large range of cases) that an argument was valid but had no idea that, by accepting an argument as valid, he has committed himself to accepting the conclusion if he accepts the premises. The analogue holds good for a great many expressions...(213)

Rather, the meaning of an expression encompasses both principles governing how to verify it, and principles governing what follows from it. He wonders if these rules might somehow be in tension. Could they be inconsistent? Or might it be that one rule could be too lax or too restrictive given the other? Such a situation would be a failure of harmony. Since it is difficult to isolate the principles governing a particular expression in a natural language, Dummett takes the logical constants as a case study, their verification and consequence principles being clearly set out in their introduction and elimination rules.

Harmony and Logic
Dummett has other reasons for being interested in the logical constants as well. His interest in harmony stems from his interest in rejecting or accepting change in logic. He thinks that fear of lack of harmony makes us wary of changing logics. He gives two examples: first, quantum logic's proposal to weaken the classical [VE] rule, so that this classically valid argument form becomes invalid:

[image: the classically valid argument form in question]

And second, the (fictional) proposal to strengthen counterfactual logics (John P. Burgess used to call these and modal logics "neo-classical logics") so that the following argument form is valid:

[image: the counterfactual argument form that the strengthened logic would validate]


Wittgensteinian Conventionalism
One possible view about such new proposals in logic - which Dummett attributes to Wittgenstein - is that any combination of introduction and elimination rules defines a legitimate logical constant. Some of these constants are not in use in any natural or formal language, but that is merely a matter of convention. In principle we could even adopt connectives like 'tonk' that allowed us to derive contradictions. (I suppose this might be the explanation of the Wittgenstein quotation at the beginning of Graham Priest's chapter on Paraconsistent Logic in the second edition of the Handbook of Philosophical Logic: "Indeed, even at this stage, I predict a time when there will be mathematical investigations of calculi containing contradictions, and people will actually be proud of having emancipated themselves from 'consistency.'")

Dummett is looking to resist the Wittgensteinian conventionalist view by finding a way to criticise some combinations of rules. The concept of harmony appears to promise a foundation for this criticism. It looks as if what is wrong with 'tonk' is that the introduction and elimination rules are not in harmony; we can derive far too much from a claim of the form 'A tonk B', given what was required to establish it.
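For reference, here is the usual presentation of Prior's rules for 'tonk' and the derivation that causes the trouble (my rendering, not Dummett's own notation):

% 'tonk' pairs the introduction rule of \vee with the elimination rule of \wedge.
\[
\dfrac{A}{A \mathbin{\mathrm{tonk}} B}\;[\mathrm{tonk\,I}]
\qquad\qquad
\dfrac{A \mathbin{\mathrm{tonk}} B}{B}\;[\mathrm{tonk\,E}]
\]
% Chaining the two rules takes us from any A to any B, so adding 'tonk' to a
% proof system is a wildly non-conservative extension.
\[
\dfrac{\dfrac{A}{A \mathbin{\mathrm{tonk}} B}\;[\mathrm{tonk\,I}]}{B}\;[\mathrm{tonk\,E}]
\]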

Harmony and two notions of Conservative Extension
Dummett suggests that the notion of a conservative extension can help us to make the notion of harmony more precise. Conservative extension is normally defined over theories, and a theory T2 is a conservative extension of T1 if i) it can be obtained from T1 by adding new expressions, along with axioms (or, according to Dummett, rules of inference) which govern those expressions and ii) "if we can prove in it no statement expressed in the original restricted vocabulary that we could not already prove in the original theory." (218)
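In symbols, the second condition comes to something like this (my formulation of the definition just quoted): for every sentence φ of the original, restricted vocabulary,

\[
T_2 \vdash \varphi \quad\Longrightarrow\quad T_1 \vdash \varphi .
\]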

(Just a thought, but it seems to me that the ideas of language, theory and proof system are all mixed up together here, a la Tarski and Carnap.)

Dummett then turns to natural languages, wondering whether there could be any disharmony in English, and says "if there is disharmony [between the rules governing an expression E,] it must manifest itself in consequences not themselves involving the expression E, but taken by us to follow from the acceptance of a statement S containing E."

And then: "A conservative extension in the logicians' sense is conservative with respect to formal provability. In adapting the concept to natural language, we must take conservatism or non-conservatism as relative to whatever means exist in the language for justifying an assertion or an action consequent upon the acceptance of an assertion. The concept thus adapted offers at least a provisional method of saying more precisely what we understand by 'harmony': namely that there is harmony between the two aspects of the use of any given expression if the language as a whole is, in this adapted sense, a conservative extension of what remains of the language when that expression is subtracted from it."(218-9)

I think this is wrong. It seems to me that it confuses lack of harmony (as he characterised it before: a lack of fit between introduction and elimination rules) with a particular KIND of lack of harmony. If the elimination rule of an expression licenses too much, given what was needed to introduce the expression, then yes, we will be able to prove sentences not containing that expression which we were not able to prove before, and we will have a non-conservative extension. But if the elimination rule is too restrictive given the introduction rule, then it doesn't seem that we will be able to prove any more than before. Suppose, for example, we start out with a classical propositional logic containing only '→' and '¬' and we add 'V', governed by the usual introduction rule, but taking as an elimination rule the weak, quantum version given in the picture above. Surely the result is a conservative extension of the original, on Dummett's definition. And if we had added the usual classical 'V' it would still be a conservative extension - all three systems are just classical proof systems, some of which are easier to use than others. Yet how could the introduction rule for 'V' be harmonious with two non-equivalent elimination rules?


Local Reduction
In chapter 11 of the same book, entitled "Proof Theoretic Justifications of the Logical Laws," we get another provisional definition of harmony, this time just for the logical constants, and it is here that local reduction/normalisation comes in:

"The analogue, within the restricted domain of logic, for an arbitrary logical constant c, is that it should not be possible, by first applying one of the introduction rules for c, and then immediately drawing a consequence from the conclusion of that introduction rule by means of an elimination rule of which it is the major premiss, to derive from the premisses of the introduction rule a consequence that we could not otherwise have drawn."

(I couldn't find an argument for this identification.)
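
To see the condition in its simplest case, here is the textbook local peak for conjunction and its reduction (my sketch, not Dummett's):

% A local peak: a conjunction-introduction followed immediately by a
% conjunction-elimination whose major premiss is the sentence just
% introduced (assumes amsmath for \text and \dfrac).
\[
\frac{\dfrac{A \qquad B}{A \land B}\ (\land\text{-I})}{A}\ (\land\text{-E})
\qquad\text{reduces to}\qquad
A
\]
% Nothing is provable via the detour through the conjunction that was
% not already provable from the premisses A and B directly, which is
% just what Dummett's condition requires.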

Dummett has just claimed that susceptibility to local reduction (aka normalisation) (within some system) is the formal analogue of being a conservative extension of a theory (once that idea is adapted for natural language.) But since the notion of a conservative extension was originally a formal notion anyway, it looks as if he thinks that a logical constant will be susceptible to local reduction with respect to some proof system just in case adding it to that proof system results in a conservative extension. And, as noted earlier, he thinks that the rules governing a connective are in harmony with respect to a proof system just in case adding that connective to a proof system results in a conservative extension. So it seems that, for logical constants anyway, Dummett identifies all three of the notions we have touched on. (I'm thinking he is a lumper, rather than a splitter.)

Two Questions
There are two questions that I would be very interested in the answers to, if anyone out there knows them:

1) I have argued that Dummett is mistaken in identifying the idea that the rules for a logical constant are in harmony (with respect to some proof system) with the idea that adding the constant to a proof system will result in a conservative extension. If the elimination rule allows too little, rather than too much, to be derived (given the introduction rule) then the addition will still result in a conservative extension, even though the rules are not in harmony. Am I right, or am I hopelessly confused on this and being unfair to Dummett?

2) Dummett says that adding a constant to a proof system will result in a conservative extension of that system if, and only if, all the "local peaks" (or non-normal pairs) can be removed - that is, if, and only if, the new connective has the local reduction property with respect to that proof system. I could not find the argument for this in Dummett, but he might be right anyway. Is he? What IS the relationship between local reduction (a.k.a. normalisation) and conservative extension?

Posted by logican at 5:32 PM | TrackBack

April 3, 2005

Truth and Tonk

I've always been sympathetic to the Tarskian idea that languages can be inconsistent, and in particular to the idea that the truth-predicate is responsible for making natural languages inconsistent (yes, even when I'm being careful to distinguish proof systems from languages and even after reading Herzberger on the topic.) Sometimes, when I thought about this, I liked to think of the truth predicate as a kind of less obvious 'tonk'; introducing it to our language had made it possible to derive contradictions (such as the Liar paradox.)

One might wonder how 'true' had managed to survive in our language, though it is hard to imagine that 'tonk' ever could. But I thought that since the problem with 'true' was a bit harder to discover than the problem with 'tonk', and since it didn't get in the way of our everyday thinking in the same way, and since a truth predicate governed by simple disquotational rules like these ('T' is our truth predicate):

blogproof21a.jpeg

was easy to use, the simple truth predicate had been allowed to remain, even though 'tonk' never would be.
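
(In case the image doesn't display: the rules I have in mind are, I take it, just the disquotational pair below, where the corner quotes form a name of the sentence A. The LaTeX is mine, so the picture may differ in detail.)

% Disquotational rules for the truth predicate T (assumes amsmath and
% amssymb; \ulcorner A \urcorner is a name of the sentence A).
\[
\frac{A}{T(\ulcorner A \urcorner)}\ \text{[TI]}
\qquad\qquad
\frac{T(\ulcorner A \urcorner)}{A}\ \text{[TE]}
\]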

But since the discussion of local reduction/normalisation in posts here and here, I've noticed something that does not sit very well with the view that 'true' is like 'tonk'. Though 'tonk' does not have the local reduction property, 'true' surely does, and it is easy to show this, by showing that any proof containing an instance of [TI], followed immediately by an instance of [TE], can be converted into a proof without those steps, like this:

blogproof21b.jpeg
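
(Again, in case the image doesn't load: the reduction I have in mind is roughly the following, though my LaTeX may differ from the picture in detail.)

% A [TI] step followed immediately by a [TE] step applied to the
% sentence just obtained (assumes amsmath for \text and \dfrac)...
\[
\frac{\dfrac{A}{T(\ulcorner A \urcorner)}\ \text{[TI]}}{A}\ \text{[TE]}
\qquad\text{reduces to}\qquad
A
\]
% ...collapses back onto the derivation of A we started with, so the
% detour through the truth predicate can always be removed.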

It is tricky to know exactly what follows from this. Though it is always tempting to take technical ideas as obviously underwriting exciting philosophical conclusions (just as it is tempting to take experimental results in psychology as having sweeping consequences in ethics), I suspect that hastiness in this area is more likely to lead to publication than to the right answer. Maybe there are such consequences to be had here, but there is no harm in going slow...

Following Michael Kremer's suggestion in the comments, I have been reading some of Michael Dummett's The Logical Basis of Metaphysics. In chapter 9, "Circularity, Consistency and Harmony," Dummett has a bit to say about the relationship between the local reduction property (though he doesn't call it that), conservativeness, harmony and the nature of deduction. So in the next post I'll have a go at reconstructing his position.

(Who says philosophy weblogs can't do dramatic tension? Logicandlanguage.net cares about those surfers who read this weblog for the plot.)

Posted by logican at 6:49 PM | TrackBack

April 1, 2005

Proof in Print

Proof and Beauty in The Economist.

Posted by logican at 3:25 PM | TrackBack

Diversosphere

Blogging Beyond the Men's Club, via Arts and Letters Daily:

And at the Harvard conference, Suitt challenged people to each find 10 bloggers who weren't male, white or English-speaking—and link to them. "Don't you think," she says, "that out of 8 million blogs, there could be 50 new voices worth hearing?"

Not being one to care that the challenge wasn't issued to me personally, here are the first 5 of my 10:

I've elected to interpret "10 bloggers who weren't male, white or English speaking" as equivalent to "10 bloggers with the following property: either they aren't male, or they aren't white, or they aren't English-speaking" (though it could also be interpreted more strictly as equivalent to "10 bloggers who are neither white, nor male, nor English-speaking".)

A couple of thoughts on this: it has been observed that the web, whilst allowing for gratuitous self-publication, also allows for an unusual degree of privacy. It is possible to publish a weblog without telling anyone your real name, your gender, or your ethnicity. That said, I read recently (if I find the link I'll add it) that the majority of bloggers do choose to reveal their real names, where "majority" turns out to be 55%. Often this tells us something about their gender. Sometimes it tells us something about their ethnicity. Others post photographs of themselves. Q. Who would be likely to publish their gender and ethnicity in this way? A. Those who suspect that it is to their advantage. Q. Who would be more circumspect about revealing these things? A. Someone who had found that they had worked to their disadvantage in the past, and felt they would be taken more seriously if no-one knew these things. It would be interesting to know whether the proportion of non-white or non-male blogs is higher among the blogs that remain anonymous. And it would be interesting to know whether that varied depending on the topic. Being female might seem to be more of an advantage if one is blogging about women's issues, and less of an advantage if one is blogging about programming languages...

Added later in the day: here are 5 more weblogs, this time they are all weblogs in logic, language or philosophy:

Majikthise- analytic philosophy and liberal politics
Diana's Bloggerific Musings - Philosophy
The X-Bar - Two linguists walk into an X-bar...
For the Record - Jessica Wilson
Sappho's Breathing

My experience has been that it is much easier to find blogs that seem to be written by women than it is to find blogs that seem to be written by non-whites. Part of the explanation is that it is harder to tell the skin-colour of a blogger than it is to tell their gender. But in the case of philosophy blogs, I think the reason is also that there are very few non-whites in the discipline. (The bar to my promoting blogs in other languages is, of course, my own ignorance.)

Posted by logican at 1:59 AM | TrackBack