
Philos Stud (2008) 139:171–180
DOI 10.1007/s11098-007-9109-9

Williamson on inexact knowledge


Anna Mahtani

Received: 21 November 2006 / Accepted: 8 April 2007 / Published online: 24 May 2007
© Springer Science+Business Media B.V. 2007

Abstract Timothy Williamson claims that margin for error principles govern all cases of inexact knowledge. I show that this claim is unfounded: there are cases of inexact knowledge where Williamson's argument for margin for error principles does not go through. The problematic cases are those where the value of the relevant parameter is fixed across close cases. I explore and reject two responses to my objection, before concluding that Williamson's account of inexact knowledge is not compelling.

Keywords Vagueness · Inexact knowledge · Epistemic theory · Williamson · Margin for error · Reliabilism

1 Introduction

Glance quickly through the pages of this article, and you will gain some knowledge of the number of words.1 You will learn that there are more than 2,000 words, and fewer than 20,000 – but you won't know exactly what the word-count is. As Timothy Williamson would put it, you have inexact knowledge of the number of words. Or imagine that you are standing in a vast crowd in a stadium, trying to assess the number of people present (Williamson 1994: 217). A quick look around will give you some knowledge of the number of people, but your knowledge will again be inexact. Williamson uses examples like these to introduce the expression 'inexact knowledge', claiming that it is a widespread and easily recognised cognitive phenomenon (Williamson 1994: 227). Williamson then gives his account of inexact knowledge. Using certain general epistemological claims, he argues that whenever knowledge is inexact, a margin for error principle applies (Williamson 1994: 226–227). This, according to
1 By 'the number of words', I mean the number of word-tokens, not the number of word-types.

A. Mahtani (✉)
3 Elm Road, Redhill, Surrey RH1 6AJ, UK
e-mail: anna_mahtani@hotmail.com


Williamson, is the underlying nature of inexact knowledge (Williamson 1994: 227). In this paper I raise an objection to Williamson's account of inexact knowledge. I begin in section 2 by sketching Williamson's account. I then outline my objection in section 3. In sections 4 and 5, I explore but ultimately reject two possible responses to my objection. In section 6 I conclude that Williamson's account of inexact knowledge fails.

2 Williamson's account of inexact knowledge

To illustrate Williamson's account of inexact knowledge, suppose that an editor glances quickly through this article and forms the belief that it contains at least 6,000 words. Suppose also that this article happens to contain exactly 6,000 words. The editor's belief is true, but – according to Williamson – it is too unreliable to count as knowledge. The article could easily have contained one word fewer without the editor noticing the difference and adjusting her beliefs accordingly. In a close case, the article contains fewer than 6,000 words, but the editor retains her (now false) belief that it contains at least 6,000 words. On Williamson's account, it follows that the editor's belief in the actual case – though true – could easily have been false, and so is too unreliable to count as knowledge. Supposing that this paper contains exactly 6,000 words, then, the editor cannot know from a hasty glance that it contains at least 6,000 words. If the editor believes that it contains at least 6,000 words, then her belief will be too unreliable to count as knowledge. Plausibly, for similar reasons the editor also cannot know that this paper contains at least 5,999 words. If the editor believes that this paper contains at least 5,999 words, then her belief – though true – will be too unreliable to count as knowledge, for the paper could easily have been two words shorter without the editor noticing the difference and adjusting her beliefs accordingly. Perhaps for similar reasons the editor also cannot know that the paper contains at least 5,900 words, or even 5,000 words. It does not follow, however, that the editor cannot even know that this paper contains at least 1,000 words: it is not the case that this paper could easily have been 5,000 words shorter without the editor noticing the difference and adjusting her beliefs accordingly.
In general, the editor can know that the paper contains at least n words only if n is sufficiently smaller than the actual number of words. As Williamson would put it, the editor's belief can count as knowledge only if it leaves a sufficient margin for error. M1 below, with positive c, is a margin for error principle. This principle governs the editor's knowledge of the number of words in this article, given that all she knows about the number of words has been gathered from a hasty glance:

M1: The editor can know that there are at least n words only if there are at least n + c words.

Margin for error principle M2 below (with positive d) will also hold, for similar reasons. Just as the editor cannot know at a glance that this paper contains at least 6,000 words, so she cannot know at a glance that it contains no more than 6,000 words. If she believes that it contains no more than 6,000 words, then her belief will be true, but it will be too unreliable to count as knowledge: the paper could easily have contained one word more without the editor noticing the difference and adjusting her beliefs accordingly.

M2: The editor can know that there are no more than n words only if there are no more than n − d words.
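The shape of M1 and M2 can be sketched as a toy model. This sketch is mine, not Williamson's: the function names are illustrative, the margin constants c and d are arbitrary choices, and 'satisfies' here just means 'leaves the required margin for error'.

```python
# Toy model of margin for error principles M1 and M2.
# The margins c and d are illustrative constants, not values from the text.

def satisfies_m1(n, actual_words, c=500):
    """M1: 'there are at least n words' leaves a margin for error
    only if there are at least n + c words in fact."""
    return actual_words >= n + c

def satisfies_m2(n, actual_words, d=500):
    """M2: 'there are no more than n words' leaves a margin for error
    only if there are no more than n - d words in fact."""
    return actual_words <= n - d

# With exactly 6,000 words, the belief 'at least 6,000 words' leaves
# no margin for error, while 'at least 1,000 words' leaves ample margin.
print(satisfies_m1(6000, 6000))   # False
print(satisfies_m1(1000, 6000))   # True
print(satisfies_m2(20000, 6000))  # True
```

Nothing in the model fixes how large c and d must be; on Williamson's account that depends on how unreliable the glance is.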


Williamson argues that margin for error principles like M1 and M2 apply in all cases of inexact knowledge: this is his margin for error meta-principle (Williamson 1994: 227). Let us take another case of inexact knowledge (Williamson 1994: 217–227) as a further illustration. Suppose that a spectator standing in a crowd looks casually about her and forms the belief that there are at least 10,000 people present. If there are exactly 10,000 people present, then according to Williamson the spectator's belief is too unreliable to count as knowledge. The crowd could easily have contained one person fewer without the spectator noticing the difference and adjusting her beliefs accordingly: in a close case, there are not at least 10,000 people present, but the spectator believes that there are. On Williamson's account, it follows that the spectator's belief in the actual case could easily have been false, and so is too unreliable to count as knowledge. The spectator can know that there are at least 10,000 people present only if there are more than 10,000 people present. More generally, for some positive c the spectator can know that there are at least i people present only if there are in fact at least i + c people present. For similar reasons, for some positive d the spectator can know that there are no more than i people present only if there are in fact no more than i − d people present. Williamson claims to have captured the underlying nature of inexact knowledge (Williamson 1994: 227): margin for error principles apply whenever a person has inexact knowledge with regard to some parameter (e.g. the number of words in a paper, the number of people in a crowd). These margin for error principles are supposed to follow from certain general epistemic principles – in particular, the claim that a certain sort of reliability is required for knowledge. Where knowledge is inexact, the thought is that beliefs that do not leave a margin for error are too unreliable to count as knowledge.
Having sketched Williamson's account of inexact knowledge, I turn now to my objection. I claim that there are cases of inexact knowledge that do not fit Williamson's account.

3 My objection

Where a person has inexact knowledge with regard to a parameter, Williamson's account seems to demand that the value of the parameter could easily have been different. In other words, the account seems to demand that there is a close case where the parameter takes a different value. Recall the case of the crowd. According to Williamson, the spectator cannot know that the crowd contains at least 10,000 people because there could easily have been fewer than 10,000 people without the spectator noticing the difference and adjusting her beliefs accordingly. This assumes that the value of the relevant parameter (in this case, the number of people in the crowd) could easily have been different – i.e. that it varies across close cases. The problem is that there are examples of inexact knowledge where the value of the relevant parameter does not vary across close cases. We can modify the crowd example to illustrate this. Suppose that the number of people in the crowd has been extremely carefully controlled by the event organisers. The bouncers at the entrance have been given strict instructions to let in exactly 10,000 people (from an enormous queue of people trying to get in), and they carry out this instruction very carefully. They let people in one by one, and carefully record each admittance. We can suppose that they have a rigorous checking system, and that all other entry points are carefully guarded. The closest case where there are not exactly 10,000 people present is some remote case where several of the bouncers fall unexpectedly ill at the same time. In every close case, the number of people present is


the same. Nevertheless a member of the crowd, who does not know the instruction that the bouncers have been given, can look about her and gain inexact knowledge of the number of people present. The fact that there could not easily have been any other number of people present does not prevent her from having inexact knowledge. In general, a person can have inexact knowledge with regard to some parameter even if the value of that parameter could not easily have been different – i.e. even if the value of that parameter is the same in every close case. This sort of example of inexact knowledge – where the value of the relevant parameter is fixed across close cases – does not seem to fit Williamson's account. On Williamson's account, if a person with inexact knowledge forms a true belief without leaving a margin for error, then her belief should be too unreliable to count as knowledge. The idea is that such a belief could easily have been false: in a close case, the value of the parameter is slightly different, but the person in question notices no difference and keeps the same – now false – belief. This will not work if the value of the relevant parameter is fixed across close cases. Thus Williamson's account of inexact knowledge does not work for every case of inexact knowledge: it fails where the value of the relevant parameter is fixed across close cases. I explore two responses to this objection, neither of which is successful. The first response is to insist that in every case of inexact knowledge, the value of the relevant parameter does vary across close cases. This first response would deal with my modified crowd example above by claiming that there are in fact close cases where the number of people in the crowd is different.
The second response involves accepting that in some examples of inexact knowledge the value of the relevant parameter is fixed across close cases, but claiming that in every such example, whenever a person forms a belief without leaving a margin for error, the content of her belief will vary across close cases. I explore each response in turn, explaining why neither is successful.

4 First response

Let us reconsider my modified crowd example. In this example, the spectator has inexact knowledge of the number of people present, even though the number of people present has been very carefully controlled. In the last section I assumed that the number of people present could not easily have been different – that in all close cases the number of people in the crowd is the same. I did not assume that there could not possibly have been a different number of people present. That would be unreasonable: after all, the bouncers could have fallen ill (although in fact, we can suppose, they are all quite well), or someone could have killed a bouncer to get into the stadium (though in fact, we can suppose, no-one even considered doing this). My assumption was just that these scenarios are distant possibilities rather than things that could easily have happened – i.e. rather than things that happen in close cases. This assumption seems reasonable and natural. It could be objected, however, that I am misunderstanding what is meant by 'could easily'. Perhaps in this context, whether a case could easily have obtained depends simply on whether it could have obtained without the person in question (here, the spectator) noticing any difference. Under this interpretation, there could easily have been one person fewer in the crowd, simply because there could have been one person fewer without the spectator noticing any difference. The fact that the number of people present has been very carefully controlled is irrelevant. Under this interpretation, a case is close


– i.e. could easily have obtained – iff it could have obtained without the spectator noticing any difference. Williamson did not intend 'could easily' to be interpreted in this way. In a passage where he is showing how a margin for error principle governs a person (x)'s knowledge, Williamson writes: 'Not all circumstances indiscriminable by x from the actual ones could very easily have occurred (consider brains in vats)' (Williamson 1996: 40). The alternative interpretation that we are considering here (according to which a case could easily have obtained iff it could have obtained without the person in question noticing any difference) is not the interpretation that Williamson intended. Nevertheless we can still consider whether Williamson should adopt this alternative interpretation to deal with my objection. If he adopts this alternative interpretation of 'could easily', Williamson can deal with my modified crowd example. Under the alternative interpretation of 'could easily', there could easily have been one person fewer in the crowd, for there could have been one person fewer in the crowd without the spectator noticing any difference. Given a corresponding alternative interpretation of 'close', we can say that there are close cases where there are fewer than 10,000 people present in the crowd. Plausibly, the spectator would fail to adjust her beliefs across these close cases, for she notices no difference between them. Thus if the spectator believes truly in the actual case that there are at least 10,000 people present, then her belief will be too unreliable to count as knowledge: there will be close cases where there are not at least 10,000 people present, but the spectator believes that there are. It looks like a margin for error principle is operating here, as Williamson's account demands. By adopting the alternative interpretation of 'could easily', then, Williamson can handle my modified crowd example. The problem is that this will not work as a general strategy.
A person can have inexact knowledge with regard to some parameter even when the parameter necessarily takes a certain value. Where some parameter necessarily takes a certain value, the value of the parameter simply could not have been different. If the value of the relevant parameter simply could not have been different, then it could not easily have been different either – even under the alternative interpretation of 'could easily' that we are considering. Under that interpretation, something could easily have obtained if it could have obtained without the person in question noticing any difference. Something that simply could not have obtained is obviously not something that could have obtained without the person in question noticing any difference – and so is not something that could easily have obtained under the alternative interpretation of 'could easily'. To put the point another way: a condition that holds in all possible cases obviously holds in all close cases – i.e. in all cases that could easily have obtained – no matter how 'close' and 'could easily' are interpreted. Here is an example of inexact knowledge where the relevant parameter necessarily takes a particular value. A pupil is given a grid of all the numbers between 1 and 100. First she shades in the number 1. Then she looks for the lowest unshaded number, which is 2, and she shades in every number greater than 2 that is divisible by 2. Then she looks for the next unshaded number, which is 3, and she shades in every unshaded number greater than 3 that is divisible by 3. She carries on in this way until for every unshaded number n, every number on the grid greater than n that is divisible by n has been shaded. She knows that this exercise will leave her with just the prime numbers unshaded. She admires her sheet – without counting the unshaded numbers – and then hands it in. She is left with inexact knowledge of the number of primes between 1 and 100.
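The pupil's shading procedure is the Sieve of Eratosthenes. A minimal sketch of it (the code and the function name are my illustration, not part of the paper's example):

```python
def sieve(limit=100):
    """Mirror the pupil's procedure: shade in 1, then for each unshaded n
    shade every larger number on the grid divisible by n; the numbers
    left unshaded are exactly the primes up to limit."""
    shaded = [False] * (limit + 1)
    shaded[0] = shaded[1] = True           # the pupil first shades in 1
    for n in range(2, limit + 1):
        if not shaded[n]:                  # n is the lowest unshaded number
            for m in range(2 * n, limit + 1, n):
                shaded[m] = True           # shade multiples of n above n
    return [n for n in range(2, limit + 1) if not shaded[n]]

primes = sieve(100)
print(len(primes))  # 25
```

The count the pupil declines to make is a necessary fact: in every possible case the procedure leaves the same 25 numbers unshaded, which is just the point the example turns on.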
How can Williamson's account of inexact knowledge deal with this example? Suppose that the pupil believes that there are at least n primes, when there are in fact exactly n


primes – a belief that leaves no margin for error. On Williamson's account, her belief should be too unreliable to count as knowledge. But in every possible case the content of her belief is true: if there are exactly n primes between 1 and 100, then there are necessarily exactly n primes between 1 and 100. The value of this parameter could not easily have been different: the value does not vary across close cases. Alternative interpretations of 'could easily' and 'close' will not help here, because the value of the parameter is a necessary fact that holds in every case, close (under whatever interpretation) or otherwise. Perhaps Williamson could dismiss this problematic example by claiming that the pupil does not have inexact knowledge of the number of primes. What she has inexact knowledge of is the number of unshaded squares on her grid, and that is not problematic: the number of unshaded squares on her grid could have been different (because the pupil could have made an error in her shading), and – with the right understanding of 'could easily' and 'close' – we can suppose that the number of unshaded squares on her grid could easily have been different. But why shouldn't looking at the completed grid give the pupil inexact knowledge of the number of primes between 1 and 100, as well as inexact knowledge of the number of unshaded squares? Williamson cannot claim that we have inexact knowledge only of what we perceive directly, for his own examples of inexact knowledge contradict this: according to Williamson, testimony can give us inexact knowledge of the physical characteristics of a criminal (Williamson 1994: 217). Neither can Williamson claim that we have inexact knowledge only of what is contingent, for (as I explain in the next section) he claims that we have inexact knowledge concerning vague concepts, and our knowledge there is of a necessary matter.
There seems to be no good reason to exclude our pupil's knowledge of the number of primes between 1 and 100 from the category of inexact knowledge. There are examples of inexact knowledge, then, where the value of the relevant parameter does not vary across close cases. To respond to this objection by adjusting the interpretations of 'close' and 'could easily' is not a good strategy, for it will not allow us to deal with all the problematic cases. In particular, it will not allow us to deal with cases of inexact knowledge where the relevant parameter necessarily takes a particular value. In the next section, I consider (and ultimately reject) an alternative way that Williamson could handle these examples.

5 Second response

We have been looking at examples of inexact knowledge where the value of the relevant parameter does not vary across close cases. These examples look like a problem for Williamson, and it is not at all obvious how he should deal with them. However, there is one special sort of inexact knowledge that Williamson's account can handle, even though the value of the relevant parameter is fixed across close cases – and indeed across all cases. This is our knowledge involving vague concepts. In this section, I look at how Williamson handles this case of inexact knowledge, and then consider whether we can extend this strategy to deal with every example of inexact knowledge where the value of the relevant parameter is fixed across close cases. I argue that we cannot. I begin by explaining what Williamson means by our knowledge involving vague concepts (Williamson 1994: 217). Williamson is a defender of the epistemic theory of vagueness, according to which vague terms have sharp boundaries, though we cannot know where these boundaries lie. For example, the epistemic theorist would claim that in any


given context there is some minimum number of grains needed to make a heap, though we cannot know what this number is. The epistemic theorist faces the challenge of explaining why we are ignorant in this way. In response to this challenge, Williamson claims that our knowledge involving vague concepts is inexact, and like all cases of inexact knowledge, it is governed by margin for error principles. We can know that n grains make a heap only if n grains are more than enough to make a heap – just as the casual spectator can know that there are at least i people in the crowd only if there are more than i people in the crowd. A belief that n grains make a heap, where n is the minimum number needed, would be too unreliable to count as knowledge. Given that we cannot know that n grains make a heap unless n grains are more than enough to make a heap, we will be unable to know for any n that n grains make a heap and n − 1 grains do not. In other words, we will be unable to know where the boundary to 'heap' lies. But why are we unable to know that n grains make a heap, unless n grains are more than enough to make a heap? Why should a belief that n grains make a heap, where in fact n is the minimum required, be too unreliable to count as knowledge? Is it that the belief could easily have been false, because there is a close case where n grains don't make a heap? As Williamson is well aware, the number of grains needed to make a heap in a given context does not vary across close cases. In fact it does not vary across cases at all. Vague facts supervene on precise ones (Williamson 1994: 202): if in the actual case n grains make a heap, then the same holds in every possible case. In other words, if it is true that n grains make a heap (in a certain context), then it is necessarily true. And as Williamson writes, 'A necessary truth is true in all cases; a fortiori, it is true in all cases similar to the case in which it is a candidate for being known' (Williamson 1994: 230).
Here we have an example of inexact knowledge where the value of the relevant parameter (the number of grains needed to make a heap in a given context) is fixed across all close cases – and in fact across all possible cases. How does Williamson handle this example? Suppose that I believe that n grains make a heap, when in fact n is the minimum required. Williamson claims that my belief is too unreliable to count as knowledge – but not because I could easily have believed the very same thing falsely. Instead Williamson argues that there are close cases where my belief has a different, false content.2 The thought is that we could easily have had very slightly different vague concepts. The content of our concepts depends on our use, and our use could easily have been slightly different without our noticing any difference. A slight change in use could bring about a slight change in the content of a vague concept. Thus instead of the concept heap, I could easily have had the concept heap*, where it takes one more grain to make a heap* than it takes to make a heap. Furthermore, I could easily have had the concept heap* rather than heap without noticing the difference and adjusting my beliefs accordingly. So if, in the actual case, I believe that n grains make a heap, then in a close case I believe instead that n grains make a heap*. And if n grains are the minimum number needed to make a heap, then n grains don't make a heap*. Thus my belief in a close case that n grains make a heap* is false. On Williamson's view, this makes my belief in the actual case – that n grains make a heap – too unreliable to count as knowledge. I can know that n grains make a heap only if

2 It may seem strange to say that a particular belief could have had a different content. Keefe suggests an alternative way that Williamson could argue: 'Perhaps the reference to the same belief could be dropped: whether it counts as the same belief in the counterfactual circumstance may not matter, just whether the corresponding belief you would have had would have been false' (Keefe 2000: 74). On this view, a belief that n grains make a heap may be unreliable because in a similar case the corresponding (different) belief is false. The points I make in this paper would still stand if this alternative were adopted.


n grains are more than enough to make a heap – i.e. only if, for some positive c, n − c grains make a heap. We can see then how a belief with a necessarily true content (say that n grains make a heap, where n is in fact the minimum required) can be too unreliable to count as knowledge. The content of the actual belief could not easily have been false, but the belief could easily have had some other, false content. Sainsbury uses a related strategy to explain the unreliability of a different belief. He writes: 'Suppose you come to believe a mathematical truth in some random way (for example, you simply guess at the sum of two large numbers, and by good fortune you are right). It is clear that you could easily have been wrong, that is, you could easily have guessed otherwise' (Sainsbury 1997: 909). As Sainsbury stresses, the content of your actual belief could not easily have been false: mathematical truths are necessary. What makes your actual belief unreliable is that it could easily have had a false content instead. Here the thought is not that your belief involves a vague concept that could easily have had a slightly different content. Rather the thought is that your belief could easily have involved different concepts altogether: instead of believing, say, that the sum of 245 and 573 is 818, you could easily have believed that it is 816. Thus your (true) belief that 245 + 573 = 818 is too unreliable to count as knowledge. In the rest of this section, I consider whether we could use this sort of strategy to deal with the problematic examples of inexact knowledge that I have highlighted. I argue that we cannot. Let's begin by recalling the modified crowd example. In this example our spectator believes that there are at least 10,000 people present, when in fact there are exactly 10,000 people present, and (because the crowd-size is carefully controlled by the bouncers) there could not easily have been any other number.
The spectator's belief lacks a margin for error, and so on Williamson's account it should be too unreliable to count as knowledge. Yet in every close case, the content of the actual belief is true. The idea that we are exploring here is that the belief is unreliable not because the content of the actual belief could easily have been false, but because the belief could easily have had some other, false content instead. For example, perhaps in some close case, instead of believing truly that there are at least 10,000 people present, the spectator believes falsely that there are at least 10,001 people present, or that there are fewer than 10,000 people present. This is plausible if we suppose that the spectator's belief was formed carelessly. We could suppose that she just picked the number 10,000 at random, so she might easily have considered whether there are at least 10,001 people present, instead of whether there are at least 10,000 people present. This might easily have led her to believe (falsely) that there are at least 10,001 people present, instead of believing (truly) that there are at least 10,000 people present. Or perhaps her judgement that there are at least 10,000 people – rather than fewer than 10,000 people – was just an impulsive guess that could easily have gone the other way. So she might easily have believed (falsely) that there are fewer than 10,000 people, rather than believing (truly) that there are at least 10,000 people present. The problem is that not all examples will fit this model. In some examples of inexact knowledge, beliefs are formed carefully rather than recklessly, even if they fail to leave a margin for error. We can adapt the crowd example to illustrate this. Suppose that the spectator is firmly focussed on the question of whether there are at least 10,000 people present: perhaps she has read that at least 10,000 tickets have been sold and she is wondering whether this is true.
Thus in every close case, she forms a belief about whether there are at least 10,000 people. We can also suppose that she is inclined to overestimate the number of people present. Perhaps her eyesight is bad and she is mistaking a row of trees for extra people; perhaps the noise and movement of the crowd make it seem bigger than it really is. Let us suppose that if asked to estimate the number present, our spectator would


say that there are around 13,000. Her judgement that there are at least 10,000 people is not a mere whim. From her perspective, it is a safe judgement that leaves ample margin for error. She is firmly convinced that there are at least 10,000 people present, and could not easily have believed otherwise: in every close case, she forms a belief with the same content.3 In a sense, her belief is perfectly safe, for it could not easily have had a different content, and that content could not easily have been false. Perhaps in response Williamson could insist that there must be a close case where the spectator's belief has a different, false content. He might claim that there must be a close case where the spectator is focussed on a different question (e.g. 'are there at least 10,001 people?' rather than 'are there at least 10,000 people?'). Or he might claim that there must be a close case where the spectator considers the same question but comes to a different judgement (e.g. that there are fewer than 10,000 people, rather than that there are at least 10,000 people). To make either of these claims would be unreasonably dogmatic. We can design the example so that the question 'are there at least 10,000 people present?' is enormously significant to the spectator: perhaps her whole aim in visiting the stadium is to assess the claim made in the press that at least 10,000 tickets have been sold. And we can design the example so that her judgement will go the same way in every close case. Our spectator is mistaking a row of trees for a row of people, and we can suppose that her eyesight would need to improve dramatically (as it could not easily do) to avoid this error. In every close case, she has the same bias, and makes the same judgement. There is no justification for insisting that there must be a close case where her belief has a different content.
Thus we are faced with an example of inexact knowledge where the same, true belief is formed in every close case, even though it leaves no margin for error. Williamson has given us no good reason to think that a margin for error principle applies here – though this is what his account of inexact knowledge demands. It is not hard to construct similar examples. To construct such an example, first ensure that the value of the relevant parameter is fixed across close cases. The value might be carefully controlled by people (as bouncers can control the size of a crowd); it might be carefully controlled by a machine (as a mechanical device can be used to keep a train at a constant speed of 100 mph); or the value of the parameter might be a physical or logical necessity (as the freezing point of an element at a given pressure is a physical necessity, and the number of primes between 1 and 100 is a logical necessity). Then the person in question – the person with inexact knowledge with regard to the parameter – should form a true belief that fails to leave a margin for error. Ensure that the content of the belief is the same in every close case. To do this, make the person consider the same question across all close cases, and make her judgement biased so that in every close case she answers the question in the same way. We can ensure that the person considers the same question in every close case by making it somehow interesting or significant to her: perhaps a commuter considers whether the train is travelling faster than 100 mph because this is a round number; perhaps a pupil considers whether the freezing point of water is below 0 degrees centigrade because this is the question in her text book. And there are plenty of reasons why a person's judgement regarding the value of a parameter might be biased: a commuter might feel like she is travelling more slowly than she is when the track is smooth and the
3

This is not quite accurate. If Williamson is right that the content of our vague concepts varies across close cases, then the content of the spectator's belief will vary across close cases, because 'person' is a vague category. In some close case, for example, she might believe that there are 10,000 people* present, where the concept 'person*' is subtly different from the concept 'person'. Assuming, however, that there are no borderline-people present in the crowd, these subtle shifts in concept content will not affect the truth-value of the belief.
train is well sound-proofed; a cold object might feel warmer than it is to a pupil who is losing sensation in her hands.

Thus we can construct numerous examples which do not fit Williamson's account. In these examples, a true belief is formed that lacks a margin for error. On Williamson's account of inexact knowledge, this belief ought to be too unreliable to count as knowledge. Yet the belief could not easily have had a different content, and the content of the belief could not easily have been false. In a sense, the belief is perfectly safe: at any rate, Williamson's remarks about reliability give us no reason to class the belief as unreliable. Thus we have no reason to think that margin for error principles govern these cases of inexact knowledge. Williamson's margin for error meta-principle – his claim that margin for error principles apply in every case of inexact knowledge – is unfounded.

6 Conclusion

Williamson's account of inexact knowledge is not compelling. He claims that margin for error principles apply wherever knowledge is inexact, but this claim is unfounded: there are cases of inexact knowledge where his argument for margin for error principles does not go through. In response to this objection, Williamson could reduce the scope of his account, claiming to give the underlying nature of only some, rather than all, cases of inexact knowledge. But an account of some sub-category of cases of inexact knowledge – where the sub-category is specially cut to fit the account – is not very interesting. An account of all cases of inexact knowledge, on the other hand – where the term 'inexact knowledge' picks out a natural category, which can be introduced with examples – would be a substantial discovery. In this paper I have argued that Williamson's attempt to give such an account is unsuccessful.

References
Keefe, R. (2000). Theories of vagueness. Cambridge: Cambridge University Press.
Sainsbury, M. (1997). Easy possibilities. Philosophy and Phenomenological Research, 57.
Williamson, T. (1994). Vagueness. London: Routledge.
Williamson, T. (1996). Wright on the epistemic conception of vagueness. Analysis, 56.