Saucers of Mud

February 2, 2025

“Deepity” is just a word

Filed under: Uncategorized — matt w @ 1:20 pm

I was just introduced to a new concept, “deepity.” While I think the concept may be useful, the example used to introduce it is terrible.

Daniel Dennett apparently coined the word “deepity” in a speech to the American Atheists. Neil van Leeuwen sums it up:

A deepity, as Dennett characterizes it, is a sentence or other utterance that has more than one interpretation; it has “two readings and balances precariously between them. On one reading it is true but trivial, and on another it is false but would be earth-shattering if true.”

Dennett’s toy example of a deepity is this: “Love is just a word.”

Consider these two readings:

  1. “Love” is just a word.
  2. Love is just a word.

The first is obviously true—the string of four letters inside the quotation marks really just is a word. But the second, while it seems profound in some way, is false: as Dennett points out, love might be an emotion, or a relation, or a commitment, or many possible things, but it’s not a word! Words are strings of sound or written marks that constitute a unit of language—love itself is not one of those.

It seems useful to have the idea of an utterance that equivocates between two meanings, using the obviously true one to trick the hearer into believing the pernicious one. (Though we’d do well to be more precise as to how we think that works; there are probably many different ways.) But I don’t think “‘Love’ is just a word” is one of those.

To begin with, reading 1 isn’t obviously trivial. “‘Love’ is a word” is trivial. “‘Love’ is just a word” depends on the contrast class invoked by “just.” Maybe the contrast class is with words that signify important concepts, so 1 is true if and only if “love” doesn’t signify an important concept. This is non-trivial (and indeed false), and it’s plausibly what the sentence is meant to convey.

Even if reading 1 is trivial, or reading 2 self-contradictory, that doesn’t mean they don’t convey something meaningful. We use tautologies and contradictions to implicate meaningful statements all the time. Is “Boys will be boys” a deepity? “You’re older than you’ve ever been and now you’re even older?” If you’re 25, and the 25-year-old you’ve been dating for a month says “We’re not getting any younger,” it is appropriate to run screaming even though what they’ve said is trivially true. Because what they meant is perfectly clear. Which is also true, from the other direction, if they say “‘Love’ is just a word.”

Maybe it is a deepity when a tautology is used to implicate something meaningful. “Boys will be boys” sounds unimpeachable but the conveyed meaning, that there is nothing that can be done to stop boys from engaging in [whatever toxic behavior is being discussed], is contentful and terrible. It’s a problem with the notion of “deepity” if it doesn’t rule this out, because there’s nothing extraordinary or novel about using a tautology to implicate something meaningful and possibly pernicious. It’s one rhetorical strategy among others. What makes it worse than using a snappy rhyme to get people to nod along to something they maybe shouldn’t?

What “‘Love’ is just a word” expresses is quite clear. It’s a kind of nominalism or irrealism about the concept of love. If you wanted to explain nominalism about games to your neighbor at the bar, “‘Game’ is just a word” would be a great way to start. Here’s the first use of “love is just a word” that I could find on the Internet Archive (from “Words, Words, Words” by Winona Godfrey, Smith’s Magazine Vol. 27 No. 1 [April 1918], p. 126):

Walter knew that love is just a word; he knew that it wasn’t real; he knew the real things were dingy rooms and mutton stew and crying babies and dirty dishes; and he wasn’t going to let a sweet-sounding word lure him.

Does anyone not understand what “love is just a word” means here? It’s certainly not Dennett’s alleged second meaning, that the concept of love is a word. What we have here is a sentence that has a reading that is trivially true (or trivially false, since it doesn’t have quotation marks), and that reading is being used to implicate something highly contentious but contentful. Is there anything interesting about that?

December 10, 2024

ChatGPT invents a new rake and steps on it

Filed under: Uncategorized — matt w @ 11:16 am

I wasn’t expecting this:

There is an island where everyone can see each other’s eyes but not their own. One hundred people have blue eyes and one hundred people have brown eyes. It is taboo for anyone to discuss eye color. If someone comes to know that they themselves have blue eyes, they must leave the island that day at midnight.

One day an oracle (who has green eyes) arrives on the island and says “Someone on this island has blue eyes, and it’s Joey.”

Who leaves the island when? Explain your reasoning step by step.

[ChatGPT 4o mini’s response]

This is a classic logic puzzle involving induction, and the key to solving it lies in the way the islanders reason based on what they observe and the oracle’s statement. Let’s break it down step by step:

The setup:

  • There are 200 people on the island: 100 with blue eyes and 100 with brown eyes.
  • Everyone can see everyone else’s eyes but not their own.
  • It is taboo to discuss eye color, and each person must figure out their own eye color based on what they observe.
  • If someone discovers they have blue eyes, they must leave the island at midnight that same day.

The oracle’s statement:

  • The oracle (who has green eyes, so it’s irrelevant for our puzzle) says: “Someone on this island has blue eyes, and it’s Joey.”
  • Joey is presumably one of the people with blue eyes, and now the entire island knows that there is at least one person with blue eyes — Joey. This statement, however, does not directly tell anyone anything new about their own eye color, but it starts the reasoning process.

Step-by-step reasoning:

Day 0: The Oracle Speaks

  • The oracle says, “Someone on this island has blue eyes, and it’s Joey.”
  • Everyone with blue eyes now knows that Joey has blue eyes.
  • Importantly, they do not yet know their own eye color. They can see other people’s eye colors, but they do not have enough information to conclude their own eye color just from the oracle’s statement. Joey, being one of them, has blue eyes, but that doesn’t immediately tell anyone anything about their own eye color.

Day 1: The First Day

  • Each person with blue eyes will start to reason based on what they see.
  • Every person with blue eyes can see 99 others with blue eyes (since there are 100 people with blue eyes in total).
  • If a person with blue eyes thought they had brown eyes, they would see 99 other people with blue eyes, and they would think, “If I had brown eyes, then all 99 other people with blue eyes would see only 99 blue-eyed people, and each of them would think, ‘I see 99 blue-eyed people, so if I had brown eyes, there would only be 99 blue-eyed people in total.'”
  • However, none of these 99 blue-eyed people leave on the first day. This is the key! They wait to see what happens.

Day 2: The Inductive Step

  • Since no one left on Day 1, it must mean that each person with blue eyes sees at least 99 other people with blue eyes and thus knows they are not the only one. Therefore, each of them must also have blue eyes, because if they had brown eyes, the 99 other blue-eyed people would have left on Day 1.
  • On Day 2, all 100 people with blue eyes will realize they must have blue eyes, because they saw that no one left on Day 1. Their reasoning is based on the fact that if they had brown eyes, all the other 99 blue-eyed people would have left on Day 1, but they didn’t. Since they didn’t leave, they must also have blue eyes.
  • Therefore, on Day 2, all 100 people with blue eyes will simultaneously come to the conclusion that they have blue eyes and will leave the island at midnight.

Summary:

  • Day 0: The oracle announces that Joey has blue eyes, but this doesn’t immediately cause anyone to leave.
  • Day 1: No one leaves, because each blue-eyed person is waiting to see if others will leave, and no one leaves.
  • Day 2: All 100 blue-eyed people realize that they have blue eyes (because they saw that no one left on Day 1), and they all leave at midnight on Day 2.

Thus, the answer is that all 100 people with blue eyes will leave on Day 2 at midnight.
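For the record, the puzzle can be checked by brute force on a small instance. Here’s a sketch in Python (mine, not ChatGPT’s): worlds are eye-color assignments, agents can’t distinguish worlds that differ only in their own eye color, the oracle’s announcement prunes the model, and each midnight everyone publicly observes who left, which prunes it further. The function name `simulate` and the 3-blue/3-brown instance are just for illustration; 100 and 100 would need 2^200 worlds.

```python
from itertools import product

def simulate(n_blue, n_brown, announcement, max_days=20):
    """Run the island puzzle as a public-announcement model.

    Agents 0..n_blue-1 are blue-eyed; the rest are brown-eyed. A world
    is a tuple of eye colors; agent i cannot distinguish two worlds that
    agree on every agent except i. `announcement` is a predicate on
    worlds that prunes the initial model. Returns a list of
    (day, set of agents who leave that day) pairs.
    """
    n = n_blue + n_brown
    actual = ('blue',) * n_blue + ('brown',) * n_brown
    worlds = [w for w in product(('blue', 'brown'), repeat=n)
              if announcement(w)]

    def knows_own_color(i, w):
        # i knows their color in w iff every world i can't rule out
        # assigns i the same color that w does.
        return all(v[i] == w[i] for v in worlds
                   if all(v[j] == w[j] for j in range(n) if j != i))

    departed, history = set(), []
    for day in range(1, max_days + 1):
        # Who would leave tonight in each world: agents who learn they
        # have blue eyes (brown-eyed deducers are under no obligation).
        def leavers(w):
            return frozenset(i for i in range(n)
                             if i not in departed and w[i] == 'blue'
                             and knows_own_color(i, w))
        tonight = leavers(actual)
        # Public observation: prune worlds predicting a different exodus.
        pruned = [w for w in worlds if leavers(w) == tonight]
        if tonight:
            history.append((day, set(tonight)))
            departed |= tonight
        elif len(pruned) == len(worlds):
            break  # no one left and nothing was learned; fixed point
        worlds = pruned
    return history

# Classic announcement: "someone on this island has blue eyes."
print(simulate(3, 3, lambda w: 'blue' in w))     # [(3, {0, 1, 2})]
# The oracle's version: "...and it's Joey" (Joey is agent 0).
print(simulate(3, 3, lambda w: w[0] == 'blue'))  # [(1, {0})]
```

On this way of modeling it, the classic announcement sends all the blue-eyed islanders away together on day 3 (day n for n blue-eyed), but naming Joey short-circuits the induction: Joey leaves that night and no one else ever leaves. The reason is that “it’s Joey” rules out only worlds where Joey’s eyes are brown, which everyone but Joey had already ruled out by looking at him, whereas the classic announcement rules out the all-brown world, which is what seeds the cascade.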

August 4, 2024

An example about fathers and presupposition that people should stop using

Filed under: Uncategorized — matt w @ 11:23 pm

Here’s another complaint about a linguistics example. Unlike the last one, I am quite sure that at least one of the complaints I am going to make is accurate. And I think it probably has some actual bearing on the linguistic phenomenon in question.

This is from the literature on the principle of Maximize Presupposition–see Nadine Bade’s overview here. Maximize Presupposition, due to Heim (1991), is the principle (roughly) “Make your contribution presuppose as much as possible.” This explains why a sentence like
(1a) #All of John’s arms are broken
is bad or at least disconcerting. This sentence doesn’t presuppose that John has only two arms, while the alternative sentence
(1b) Both of John’s arms are broken
does presuppose that John has only two arms. And if John is a human, and not Zaphod Beeblebrox, he has no more than two arms. So “All of John’s arms are broken” doesn’t maximize presupposition, and it sounds bad.

So far so good! The problem is that an example that runs throughout the literature, going back to Heim and serving as the first example in Bade’s overview, is
(2a) #A father of the victim arrived at the crime scene.
Bade explains that since (2a) does not presuppose that the victim has a unique father, while the alternative
(2b) The father of the victim arrived at the crime scene
does presuppose that the victim has a unique father, the utterance of (2a) requires that the uniqueness presupposition not be part of the common ground (since otherwise Maximize Presupposition would be violated). Bade says, “The resulting inference is contradictory to common knowledge…: the speaker does not believe that there is a unique father of the victim.”

But this inference is not contradictory to common knowledge, because the victim might have more than one father. Maybe it was not unreasonable to overlook this possibility in 1991. It is now! I could probably stop this post here, saying “Stop using an example that assumes that a person can only have one father!” (“The spleen of the cadaver was weighed” should work.) But I think there’s a little more to it.

Scenario: it’s common knowledge that Heather has two mothers (Mama Jane and Mama Kate). Georgie wants to know if Heather can come over. Georgie’s parent talks to Jane and says:
(3a) ???I talked to a mother of Heather’s.
OK, Georgie’s parent does not say that. Almost no native speaker would say that, I don’t think, even given the common knowledge that Heather has two mothers. Nor would Georgie’s parent say
(3b) #I talked to the mother of Heather
which is terrible six ways. What is normal is to say
(3c) I talked to Heather’s mother
(“Heather’s mom,” really) or, if it might be important that the parent only talked to one,
(3d) I talked to one of Heather’s mothers.

Now one issue here is that “a/the X of Heather” is horribly stilted. This is also true, to a lesser extent, in (2a/b); (2b) is not terrible, but usually I’d say “the victim’s father arrived at the crime scene.” (Also a problem in the version of (1b) Bade gives, “#All/Both arms of John are broken,” which I don’t find idiomatic either way.) We can get around this by using a multi-word phrase that can’t take the possessive ‘s.

Scenario: Heather has left her backpack at school. Both teacher and principal know that Heather has two mothers and that Heather left the backpack. The teacher talks to Kate, and tells the principal about it, in a context in which Heather’s having left her backpack is more relevant than her specific identity:
(4a) ?I talked to a mother of the girl who left the backpack.
This still sounds bad to me! Honestly I would rather say
(4b) I talked to the mother of the girl who left the backpack
because I think the definite description often just doesn’t presuppose uniqueness. “William is the son of the king of England” is fine. But if (4b) was unsatisfactory for some reason, I think the teacher would say
(4c) I talked to one of the mothers of the girl who left the backpack.

Even aside from same-sex parents, I often find “a” to be weird even when uniqueness isn’t presupposed. Substituting “parent” into (3a) doesn’t make it much better, and I think still leaves (4a) kind of sketchy. Or take this:
(5a) #The volume of The Lord of the Rings is called The Two Towers.
(5b) A volume of The Lord of the Rings is called The Two Towers.
(5c) One of the volumes of The Lord of the Rings is called The Two Towers.
Here (5a) is flat-out unacceptable, unless the context makes it clear that there is only one volume that could be relevant. (5b) is… not great, not terrible? It’s not disconcerting like (1a), but I would be much more likely to say (5c).

So if I have a tentative suggestion here, it’s that maybe the scale that is sometimes posited to explain Maximize Presupposition should be not <“a,” “the”> but <“a,” “one of the,” “the”>, where “one of the” presupposes that there aren’t many of the thing in question, more or less. That might explain why (3a) and (4a) are bad in terms of Maximize Presupposition; the alternatives with “one of the” presuppose that Heather doesn’t have very many mothers, and that’s an available presupposition.

But what I am certain of is that people should stop using (2a)/(2b) in ways that assume that the victim can’t have more than one father, because that’s not true.

[PS: Occupational hazard; earlier today I was thinking some very dark political thoughts in my interior monologue, and then I thought, “Hmm, the sentence I just thought to myself doesn’t satisfy Maximize Presupposition.”]

July 31, 2024

Complaint about a linguistics example

Filed under: Linguistics — matt w @ 11:24 pm

This is a very minor complaint, but I’ve been reading a fair number of linguistics papers and keep getting thrown by examples that seem slightly off. (I have one I want to talk about more soon.)

For this one, in “Intensional Anaphora” by Ezra Keshet and Steven Abney (https://semprag.org/index.php/sp/article/view/sp.17.9/sp.17.9), the authors develop a very interesting system to (in part) address the phenomenon that you can’t say “Andrea might be eating a cheeseburger. It is very large.”

But they also say (pp. 4-5) that it is impossible to say “Andrea might be eating a cheeseburger. They are very large!” with the intended meaning “Any burgers she might be eating are very large.” But this seems quite possible to me? “Andrea might be eating a cheeseburger. They are very large, but she is very hungry” seems fine. Even when the intent is not to say that all cheeseburgers are very large, but all the cheeseburgers she might be eating are very large.

Now that I have typed this out, I think the issue might (!) be that the authors are talking about a counterfactual reading of “might,” whereas the most natural reading of “Andrea might be eating a cheeseburger” in isolation is epistemic. But I think even if we force a counterfactual reading, without a specific antecedent, “They are very large” can still be acceptable.

Scenario: Andrea’s friends are mourning in the kitchen at her wake. One says “I wonder what Andrea would be doing right now.” The other says, “She might be eating a cheeseburger. They are very large, but she had a huge appetite.” (Apologies to Andrea, I did not realize I would be killing her off when I started talking about this example.)

Keshet and Abney go on to say

the intuition is sharpened further when the predicate involves a specific location in time and space:

(4)#They are on the kitchen counter.

There is clearly no reading of (4) where the various cheeseburgers Andrea might be eating—lamb burgers, imitation meat burgers, turkey burgers, etc.—are all on the kitchen counter.

But again I do not get this. “Andrea might be eating a cheeseburger. They are on top of the fridge, but she was very tall.” (We’re at the wake again.)

I think the issue is that the intended reading is one in which “Andrea might be eating a cheeseburger” covers a wide variety of cheeseburgers that may not exist, and that may even be very different from the cheeseburgers that do exist in our world. At the wake, the cheeseburgers Andrea’s friends are talking about are the very cheeseburgers that are at the wake. (Maybe.) In which case I think it would be better to stick with examples, like some others they give, that don’t have plausible readings on which they’re felicitous.

EDIT: I forgot to add that this paper put me in a cranky mood by saying “Since medieval times, scholars have analyzed intensional operators, such as modals, as quantifiers over worlds, parallel to determiners, which quantify over individuals” with a footnote to Simo Knuuttila’s Stanford Encyclopedia article on medieval theories of modality (https://plato.stanford.edu/entries/modality-medieval/). That article does not say that medieval scholars analyzed intensional operators in terms of possible worlds! Every occurrence of “worlds” in the SEP article refers to using present-day logical machinery to reconstruct medieval analyses.

For instance, Knuuttila says “many historians have used the terminology of possible worlds semantics without the metaphysical and formal details of the modern theory” and cites (among others) Caterina Dutilh Novaes’s Formalizing Medieval Logical Theories pp. 90-91, where Dutilh Novaes says “The use of this 20th-century logical apparatus to account for an aspect of medieval logic may seem inappropriate and anachronistic at first sight” and then disclaims the metaphysical baggage of possible worlds as giving truth-values to all propositions.

Of course the distinction between possible worlds and whatever the medievals may have used is not important for the project of the paper–it’s not about the medieval usages, and in any case the analysis would surely go through if the quantification was over possibilities or situations or something rather than possible worlds as such. But that’s exactly why I’m salty about this. One can use possible worlds without projecting them and their whole machinery back onto philosophers who didn’t use them, and projecting them back makes them seem more entrenched than they should be.

(Keshet and Abney cite von Fintel and Heim’s lecture notes [https://dspace.mit.edu/bitstream/handle/1721.1/53741/24-973Spring-2005/NR/rdonlyres/Linguistics-and-Philosophy/24-973Spring-2005/19058019-AB15-4760-868A-52AEC2CAFA3C/0/intensnlsemantic.pdf], which attribute the idea to Leibniz and say there are precedents in medieval literature, which is perfectly true!)

August 3, 2023

“Must We Know What We Say?”: Belated Postscript and Reply to DeRose, Part I

From a long time ago: Here’s part of a footnote from Keith DeRose’s 2009 book The Case for Contextualism Vol. 1 (p. 98, the continuation of footnote 20 from p. 97) about my paper “Must We Know What We Say?” (JSTOR link, open final draft link):

[Text: While I am discussing Weiner’s paper, I will briefly take the opportunity here to register my reactions to his proposed counter-examples to the knowledge account of assertion at (2005: 230-1). These are proposed as cases in which the assertions are proper, though the speakers don’t know the propositions they are asserting to be true. But it seems to me very far from clear that the speakers don’t know the propositions they are asserting in these examples. Indeed, it wouldn’t seem to me at all unusual for the speakers to have instead outright said, ‘I know that…’ in these examples. And where the speakers do refrain from claiming knowledge, making do with mere flat-out assertion in these examples, it would be perfectly appropriate for their conversational partner to ask, ‘How do you know that?’ If the speaker then admits that she doesn’t know, it seems that she is then barred from making the flat-out assertion. (On contextual analysis, it seems likely that the epistemic standards have been raised in such cases.) At any rate, where one sticks to examples where it actually is fairly clear that the speaker doesn’t know the proposition she’s asserting, I think one will find it hard to come up with cases where the assertion is also clearly appropriate. At least, haven’t encountered any counter-examples that I’ve found convincing.]

It’s obviously well past time for me to respond to this, but:

It’s perfectly congenial to what I say in that paper that the speakers know the propositions they assert in my examples. I even say so in the paper. This may be surprising to people who know about the paper, because if it’s known for anything, it’s known for arguing against the knowledge account of assertion and for the truth account. Nevertheless, at the end, in a passage I don’t think anyone ever discusses, I say:

We want our beliefs to be stable, but we also want them to justify assertion. Why should the former outweigh the latter when we ascribe knowledge?
We might think of rescuing the knowledge account by ruling that proper asserters do have knowledge; not by invoking an independent theory of knowledge…, but by identifying the epistemic position required for knowledge with that required for assertability. Then we would have an assertability account of knowledge rather than a knowledge account of assertion.

Which means that I’m willing to allow that the speakers in my examples have knowledge—if we define “knowledge” as “whatever it takes for a speaker to assert properly in a given context.” I give the examples of Jack Aubrey watching the French fleet and saying “The French will attack at nightfall!” or Sherlock Holmes glancing at a crime scene and (uncharacteristically) saying “This is the work of Moriarty!” Both these, I stipulated, are more or less hunches that wouldn’t resist counterevidence.* But I don’t mind if you call them knowledge. You can say that Aubrey knows that the French will attack at nightfall because of his long naval experience, if you like.
*Which is why it’s uncharacteristic of Holmes, who would never make an assertion like that without being able to explain his exact reasoning. In a footnote to his dissertation, Aidan McGlynn correctly rakes me over the coals for getting Moriarty wrong as well. But never mind that.

The thing here is that I meant my paper as an attack on knowledge-first epistemology. One way to knock the concept of knowledge off its pedestal is to argue that it doesn’t play important roles like being the norm of assertion. Another way is to drain the concept of its significance. That is, to say that when we say “Jackie knows that it’s raining,” we’re not getting at a concept that we can build our epistemology on. Which might seem strange, if it’s true that Jackie can properly assert that it’s raining iff she knows it’s raining. How could that not be a fundamental concept? It could not be a fundamental concept if the fact that Jackie knows that it’s raining depends on the fact that she can properly assert that it’s raining, not vice versa.

At the time, my thinking was more friendly to contextualism than it became later. On p. 240 of “Must We Know What We Say?” I even suggest that DeRose’s account (as presented in “Assertion, Knowledge, and Context”) can yield that the Aubrey and Holmes assertions are knowledge while lottery assertions aren’t. But I also argued that the cost is that this means that “knowledge” isn’t robust to counterevidence; they ought to retract their assertions in the face of the slightest counterevidence. That stability is specifically something that Williamson invoked as making knowledge important. He argued that someone who knows that p will be more likely to stick with their actions based on p than someone with a mere justified true belief that p, so knowledge explains actions better than justified true belief does. So my idea was partly: If saving the knowledge account of assertion means giving up the stability of knowledge, it means giving up one of the big motivators of knowledge-first epistemology.

Contextualism seemed even more incompatible with knowledge-first epistemology, though. On its face, if the meaning of “know” varies indexically with context, then there is no such thing as knowledge. The word “me” means whoever the speaker of the context is, but there’s no Me, only different people who may be speakers. You can’t have a philosophical theory of Me-first anything. If “know” worked like “me,” so that it meant “know by whatever standard of knowledge is in effect in the context,” then the non-linguistic concepts “know” picked out would be a bunch of knowledge-by-standards, not a single concept of knowledge. And I figured that these would vary largely by the strength of justification required, so this would push us to a justification-first conception of epistemology.

(There have definitely been attempts to reconcile contextualism and knowledge-first epistemology, like Jonathan Jenkins Ichikawa’s Contextualizing Knowledge (2017). But that’s what I was thinking at the time.)

So at the time I was fine with contextualism. I would be happy to endorse everything DeRose said in the footnote, including that asking “How do you know?” could raise the standards so that the speaker would have to qualify their assertion. (Though I also thought that, in contexts where the speaker was playing a hunch, “How do you know?” wouldn’t be an appropriate thing to say. When John McLaughlin says “The next man or woman to walk on the moon will be Chinese” there’s no point in saying “How do you know?”)

Later I decided even contextualism didn’t go far enough. Not only is knowledge not a thing in itself, our use of “know” is basically a way to put a gold star on a belief, saying that it does whatever we want belief to do in a given situation. Sometimes we just want the belief to be there, and believing is enough for knowledge. With assertability, whatever makes something assertable usually makes it knowledge. I’ll talk about that more in another post.

November 5, 2022

Back-Yard Rain Probability Gauge from Science Made Stupid by Tom Weller

Filed under: Uncategorized — matt w @ 6:10 pm

I wanted to post about the Rain Probability Gauge from Science Made Stupid by Tom Weller (out of print, hosted with Weller’s permission at https://drive.google.com/file/d/0B241HCXaGuT8TzZhYXNJS25EWEk/view?resourcekey=0-G5WyeyFZRFpq0CFMEkHiLQ), but the text is too long for me to alt-text on Twitter, so here it is in what I hope is accessible form!

Testing Rain for Probability

You’ve probably heard the weatherman predict a “30% chance” or a “70% probability” of rain. You can check the chance of rain having fallen for yourself with a back-yard rain probability gauge.

Let’s say it rained during the night. What were the chances of that rain occurring?

  1. Check the gauge—which is marked in inches just like a regular rain gauge—for the level of rainwater, and mark it down. This represents the level of actual rainfall (which will always be the same as the level of probable rainfall.)

  2. Next, check the level of nonprobable rainfall (which you can also think of as probable nonrainfall). Since nonprobable rain is lighter than probable rain, the nonrain will float on top of the rainwater.

Probabilities, of course, are invisible. To render them measurable, the rain probability gauge contains a probability float to mark the level of nonprobable rain. A probability float can be made of any material less probable than rain, and hence lighter. Except in very dry parts of the world, this presents no problem; an entry stub from the Publishers Clearing House Sweepstakes will do nicely. Alternatively, a few drops of statisticians’ ink can be added to the column to make it visible.

  3. To the two levels, apply the formula actual rain divided by total probable & nonprobable rain = % chance

In the illustration, 3 inches of rain divided by 10 inches of norain gives .30, telling you that the three inches of rain that fell did so as a result of a 30% chance of rain.

[Illustration: a tube with markings. Water filling up the bottom three markings are labelled “Actual Rain.” Seven more empty markings are labelled “Non-Probable Rain.” A stub of paper at the top is labelled “Probability Float.”]

If it has not rained, and the gauge is dry, proceed as follows:

  1. Mark down the level of the probability float.
  2. From a watering can or garden hose, slowly add water to the column until the probability float starts to rise.
    This approach is based on the fact that the bottom of the gauge contains a certain level of probable rain, just as before, but without any actual water to make it visible. Since real rain must contain equal volumes of water and the probability of water, the probability in the bottom of the column will absorb just its own volume of the water you add, and no more.
  3. Measure the level of water and the new level of the float.
  4. Subtract from the water level a volume of water equal to the rise in the probability float, as this represents water in excess of the probability level.
  5. Divide this figure by the total capacity of the gauge, thus deriving the odds from which your dry spell resulted.

October 29, 2020

Myron Orfield, “Deterrence, Perjury, and the Heater Problem”

Filed under: Uncategorized — matt w @ 1:05 pm

I was trying to read Myron Orfield’s important University of Colorado Law Review article, “Deterrence, Perjury, and the Heater Problem,” and Westlaw’s incompetently designed website made me do about half an hour of browser wrangling before it allowed me to view it even though my university library pays for a subscription, so I thought I should perform a public service and upload it. It can be found at mattweiner.net/orfield. (The bibliographic information is available at that link.)

Academic publishers exist to prevent people from viewing academic work and should be destroyed.

November 21, 2019

They’re Not Wrong, But He’s Not Wrong Either, It’s Weird

Filed under: Uncategorized — matt w @ 9:34 am

August 19, 2019

Rachel McKinney on Standards

Filed under: Uncategorized — matt w @ 11:47 pm

Excellent Twitter thread from Rachel McKinney on how too much of the discussion of trans rights in philosophy fails “to meet normal content-neutral standards of evidence-based inquiry.”

August 18, 2019

Open Thread for Epistemological Discussion

Filed under: Uncategorized — matt w @ 11:22 am

In the event that anyone wants to talk about anything epistemological here, go for it!
