
Mentalism and Epistemic Transparency


ABSTRACT: Epistemic transparency is central to the debate between factive and non-factive versions of mentalism about evidence. If evidence is transparent, then factive mentalism is false, since factive mental states are not transparent. However, Timothy Williamson has argued that epistemic transparency is a myth, since there are no transparent conditions except trivial ones. This paper responds by drawing a distinction between doxastic and epistemic notions of transparency. Williamson’s argument succeeds in showing that no conditions are doxastically transparent, but it fails to show that no conditions are epistemically transparent. Moreover, this is sufficient to reinstate the argument against factive mentalism.


  1. Knowledge-First Epistemology

In Knowledge and Its Limits, Timothy Williamson develops a distinctive approach to epistemology, which he sums up in the slogan: ‘knowledge first’. Instead of explaining knowledge in terms of justification and other epistemic notions, Williamson explains justification in terms of knowledge and thereby inverts the traditional order of explanation. A central plank of Williamson’s knowledge-first epistemology is his claim that knowledge is a mental state.

Mentalism about evidence is the thesis that one’s evidence is determined by one’s mental states.1 Traditionally, proponents of mentalism have supposed that one’s evidence is determined by one’s non-factive mental states. However, if knowledge is a factive mental state, then there is logical space for a factive version of mentalism on which one’s evidence is determined by one’s factive mental states, rather than one’s non-factive mental states.

This position in logical space is occupied by Williamson’s epistemology. Williamson (2000: Ch.1) argues that knowledge is a factive mental state; indeed, it is the most general factive mental state in the sense that all factive mental states are determinate ways of knowing. Moreover, Williamson (2000: Ch.9) argues that one’s knowledge determines one’s evidence, since one’s total evidence just is the total content of one’s knowledge. This entails a factive version of mentalism on which one’s evidence is determined by one’s factive mental states.

An influential source of resistance to Williamson’s epistemology stems from what he calls ‘the myth of epistemic transparency’. On this view, one’s evidence is transparent in the sense that one is always in a position to know which propositions are included in one’s total evidence. However, if one’s evidence is transparent, then it cannot be determined by one’s knowledge, since one’s knowledge is not transparent in the sense that one is always in a position to know which propositions one knows. More generally, if mentalism is true, then one’s evidence is transparent only if the mental states that determine one’s evidence are themselves transparent.2 Thus, epistemic transparency provides the basis of the following line of argument against factive mentalism:


  1. Evidence is transparent

  2. Evidence is transparent only if the mental states that determine evidence are themselves transparent

  3. Factive mental states are not transparent

  4. Therefore, evidence is not determined by factive mental states

Williamson’s response is to argue that epistemic transparency is a myth – a quaint relic of Cartesian epistemology. He argues that only trivial conditions are transparent in the sense that one is always in a position to know whether or not they obtain. If non-factive mental states are no more transparent than factive mental states, then this undermines one of the central motivations for rejecting factive versions of mentalism in favour of non-factive versions.3 Therefore, Williamson’s rejection of the myth of epistemic transparency plays a central role in motivating his distinctive brand of knowledge-first epistemology.


  2. The Anti-Luminosity Argument

A condition is transparent if and only if one is always in a position to know whether or not it obtains. A condition is luminous if and only if one is always in a position to know that it obtains when it does. So, a condition is transparent if and only if it is strongly luminous – that is, one is always in a position to know that it obtains when it does and that it does not obtain when it does not. If there are no luminous conditions, then there are no transparent conditions either.

Williamson’s anti-luminosity argument is designed to establish that there are no luminous conditions except trivial ones, which hold in all cases or none.4 The argument exploits a tension between the assumption that there are some luminous conditions and the assumption that knowledge requires a margin for error. These assumptions jointly entail a tolerance principle, which is falsified by any sorites series of pairwise close cases that begins with a case in which C obtains and ends with a case in which C does not obtain. So, the following assumptions yield a contradiction:



  1. Luminosity: C is luminous, so if C obtains, then one is in a position to know that C obtains

  2. Margins: If one is in a position to know that C obtains, then C obtains in every close case

  3. Tolerance: If C obtains, then C obtains in every close case (from 1, 2)

  4. Gradual Change: There is a series of close cases that begins with a case in which C obtains and ends with a case in which C does not obtain

To illustrate the problem, Williamson asks us to consider a morning on which one feels cold at dawn and then gradually warms up until one feels warm at noon. The process is so gradual that one cannot discriminate any change in one’s condition from one moment to the next. By hypothesis, one feels cold at dawn. By the definition of luminosity, if feeling cold is a luminous condition, then one is in a position to know that one feels cold at dawn. By the margin for error principle, it follows that one feels cold a moment later, since this is a relevantly close case. By repeating these moves, we generate the conclusion that one feels cold at noon. However, this contradicts the initial stipulation that one feels warm, rather than cold, at noon.
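The sorites reasoning just described can be set out schematically. The abbreviations are mine: assume the morning divides into moments $t_0, \ldots, t_n$ with each $t_{i+1}$ close to $t_i$, write $C_i$ for 'one feels cold at $t_i$' and $K\phi$ for 'one is in a position to know that $\phi$'.

```latex
\begin{align*}
&\text{(i)}   & &C_0                      & &\text{one feels cold at dawn (stipulation)}\\
&\text{(ii)}  & &C_i \rightarrow KC_i     & &\text{Luminosity, for each } i\\
&\text{(iii)} & &KC_i \rightarrow C_{i+1} & &\text{Margins: } t_{i+1} \text{ is close to } t_i\\
&\text{(iv)}  & &C_n                      & &\text{from (i) by repeated application of (ii) and (iii)}\\
&\text{(v)}   & &\neg C_n                 & &\text{one feels warm at noon (stipulation)}
\end{align*}
```

Since (iv) and (v) are contradictory, at least one of the stipulations, Luminosity, or Margins must be rejected.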

Williamson urges that we should resolve the contradiction by denying that there are any luminous conditions. But why not deny instead that knowledge requires margins for error? The answer is that the margin for error principle is motivated by independently plausible assumptions – in particular, that knowledge requires safety from error and that there are limits on our powers of discrimination.

According to the safety requirement, one knows that a condition C obtains only if one does not falsely believe that C obtains in any close case. The rationale is that if one’s beliefs are not safe from error, then they are not sufficiently reliable to count as knowledge. Still, there is a gap to be bridged between safety and margins. I can know that C obtains, even if C does not obtain in every close case, so long as there is no close case in which I falsely believe that C obtains. This need not impugn my knowledge so long as my powers of discrimination are sufficiently sensitive to the difference between cases in which C obtains and cases in which C does not obtain.5

The margin for error principle does not hold without exception, but only given the further assumption that we cannot discriminate between close cases. However, it is question-begging to assume that one is not in a position to know the conditions that make the difference between close cases. A more neutral assumption is that one’s doxastic dispositions are less than perfectly sensitive to the difference between close cases. However, we cannot assume a tolerance principle on which one believes that C obtains only if one believes that C obtains in every close case. In Williamson’s example, one’s degree of confidence that one feels cold may gradually decrease until one falls below the threshold for believing that one feels cold. Still, if one’s powers of discrimination are limited, then one’s degree of confidence cannot differ too radically between close cases. This is what we need in order to derive margins from safety.

Let us restrict our attention to Williamson’s example in which one gradually warms up between dawn and noon. First, we may assume that throughout the process, one does everything that one is in a position to do, so if one is in a position to know that C obtains, then one knows that C obtains. Second, we may assume that one’s powers of discrimination are limited, so one’s degree of confidence that C obtains cannot differ too radically between close cases. Third, we may assume that knowledge requires safety from error and so one knows that C obtains only if C obtains in every close case in which one has a similarly high degree of confidence that C obtains. Given these assumptions, we may conclude that the margin for error principle is true, if not in general, then at least in this specific example:


  1. Position: If one is in a position to know that C obtains, then one knows that C obtains

  2. Discrimination: If one knows that C obtains, then one has a high degree of confidence that C obtains in every close case

  3. Safety: If one knows that C obtains, then C obtains in every close case in which one has a high degree of confidence that C obtains

  4. Margins: If one is in a position to know that C obtains, then C obtains in every close case

But if the margin for error principle is true, then there are no luminous conditions.
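The derivation of Margins can be made explicit. The notation is mine: write $P$ for 'one is in a position to know that $C$ obtains', $K$ for 'one knows that $C$ obtains', and, for each close case $x$, $H_x$ for 'in $x$ one has a high degree of confidence that $C$ obtains' and $C_x$ for '$C$ obtains in $x$'.

```latex
\begin{align*}
&\text{Position:}       & &P \rightarrow K\\
&\text{Discrimination:} & &K \rightarrow H_x                   & &\text{for every close case } x\\
&\text{Safety:}         & &K \rightarrow (H_x \rightarrow C_x) & &\text{for every close case } x\\
&\text{Margins:}        & &P \rightarrow C_x                   & &\text{for every close case } x
\end{align*}
```

Margins follows by chaining: from $P$ we get $K$; from $K$ we get both $H_x$ and $H_x \rightarrow C_x$ for each close case $x$; hence $C_x$.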

Broadly speaking, responses to Williamson’s anti-luminosity argument can be divided into two categories: offensive and defensive. Offensive responses reject the conclusion of the argument and so take on the burden of rejecting at least one of its premises.6 By contrast, defensive responses accept the conclusion of the argument, but engage in a kind of damage limitation exercise.7

My strategy in this paper is defensive. I will concede that Williamson’s argument establishes that there are no luminous conditions. However, I will attempt to limit the damage by relocating the epistemic asymmetry between factive and non-factive mental states. Recall that what is at stake in this debate is the motivation for rejecting factive mentalism in favour of non-factive mentalism. The defensive strategy aims to show that even if there are no luminous conditions, there is nevertheless an epistemic asymmetry between factive and non-factive mental states. As long as there is some epistemic criterion of quasi-transparency that is satisfied by non-factive mental states, but not by factive mental states, we can reinstate the original form of argument against factive mentalism.

Can Williamson’s anti-luminosity argument be generalized in such a way as to establish that there is no relevant epistemic asymmetry between factive and non-factive mental states? As we have seen, Williamson’s knowledge-first epistemology is motivated by the claim that there is no such epistemic asymmetry. Thus, he writes: “Any genuine requirement of privileged access on mental states is met by the state of knowing p. Knowing is characteristically open to first-person present-tense access; like other mental states, it is not perfectly open.” (2000: 25)




  3. The Lustrous and the Luminous

One defensive strategy is proposed by Selim Berker (2008), who suggests that even if there are no luminous conditions, there may be some lustrous conditions. A condition is luminous if and only if one is always in a position to know that it obtains if it does. By contrast, a condition is lustrous if and only if one is always in a position to justifiably believe that it obtains if it does. So, a condition is lustrous, but not luminous, if one is always in a position to believe justifiably, if not knowledgeably, that it obtains if it does.

Can Williamson’s anti-luminosity argument be extended to show that there are no lustrous conditions? Consider the following:



  1. Lustrousness: C is lustrous, so if C obtains, then one is in a position to justifiably believe that C obtains

  2. Margins: If one is in a position to justifiably believe that C obtains, then C obtains in every close case

  3. Tolerance: If C obtains, then C obtains in every close case (from 1, 2)

  4. Gradual Change: There is a series of close cases that begins with a case in which C obtains and ends with a case in which C does not obtain

As before, the argument relies upon a margin for error principle. However, it is more plausible that margins for error are required in the case of knowledge than justified belief. If justified belief is non-factive, then the margin for error principle is false, since one may be in a position to justifiably believe that C obtains even if C does not obtain in the actual case, which is the closest of all possible cases. As Williamson himself remarks, only factive conditions are subject to margin for error principles.8

This point has limited value for proponents of the defensive strategy. After all, the aim is to identify an epistemic criterion that is satisfied by non-factive mental states, but not factive mental states. Arguably, however, factive mental states satisfy the criterion of lustrousness. For instance, if one sees that p, then one is in a position to justifiably believe that one sees that p.9 Certainly, no factive mental state is strongly lustrous in the sense that one is always in a position to justifiably believe that it obtains when it does and that it does not obtain when it does not. For instance, it is not the case that if one does not see that p, but merely seems to see that p, then one is in a position to justifiably believe that one does not see that p, but merely seems to see that p. Therefore, proponents of the defensive strategy must argue that some non-factive mental states are strongly lustrous in order to motivate an epistemic asymmetry.

The problem is that if C is strongly lustrous, then justified belief that C obtains is factive. The argument is straightforward. If justified belief that C obtains is non-factive, then there are cases in which one justifiably believes that C obtains and yet C does not obtain. But if C is strongly lustrous and C does not obtain, then one is in a position to justifiably believe that C does not obtain. It follows that one is in a position to justifiably believe that C obtains and that C does not obtain. And yet one is never in a position to justifiably believe a contradiction. Therefore, if C is strongly lustrous, then one is in a position to justifiably believe that C obtains if and only if C obtains.
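The reductio can be displayed as follows, again in my own notation: $J\phi$ abbreviates 'one is in a position to justifiably believe that $\phi$', and $C$ abbreviates 'the condition $C$ obtains'.

```latex
\begin{align*}
&\text{(i)}   & &JC \wedge \neg C           & &\text{supposition: justified belief that } C \text{ obtains is non-factive}\\
&\text{(ii)}  & &\neg C \rightarrow J\neg C & &\text{strong lustrousness of } C\\
&\text{(iii)} & &JC \wedge J\neg C          & &\text{from (i) and (ii)}\\
&\text{(iv)}  & &\neg(JC \wedge J\neg C)    & &\text{one is never in a position to justifiably believe a contradiction}
\end{align*}
```

Since (iii) and (iv) conflict, the supposition in (i) must be rejected: if $C$ is strongly lustrous, then $JC$ holds if and only if $C$ obtains.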

If Margins is restricted to strongly lustrous conditions, then the objection from factivity is blocked. Still, further argument is needed in order to establish that the restricted version of Margins is true. After all, justified true belief is factive, but it does not generally require a margin for error. Gettier cases typically involve a subject who justifiably believes that a condition C obtains, which does obtain in the actual case, but not in all or even most of the closest non-actual cases. For example, in Alvin Goldman’s (1976) fake barn case, Henry has a justified true belief that there is a barn on the road ahead, but he doesn’t know this, since his belief is false in most of the closest non-actual cases. One possible reaction to Williamson’s anti-luminosity argument is that it uncovers a new kind of Gettier case in which being close to the margin for error is like being in fake barn country – that is, one is in a position to form a justified true belief, which is not sufficiently reliable to count as knowledge.10

In what follows, I provide an argument for the restricted version of Margins. The argument is motivated by reflection on Ernest Sosa’s (2003) version of the problem of the speckled hen. Following Sosa, we can ask: what explains the difference between the following pair of cases?


  1. If I experience 3 speckles, then I am in a position to justifiably believe that I experience 3 speckles

  2. If I experience 48 speckles, then I am not in a position to justifiably believe that I experience 48 speckles

Why am I in a position to form a justified belief in the one case, but not the other?

One strategy for solving the problem appeals to facts about the determinacy of the representational content of experience. My experience in the first case represents that the hen has 3 speckles, whereas my experience in the second case does not represent that the hen has 48 speckles: it is simply not that determinate. Even if I do experience 48 speckles, it is a further question whether my experience represents that there are exactly 48 speckles; indeed, there may be no determinate number n such that my experience represents that there are exactly n speckles. If so, then my experience in the first case provides justification to believe that I experience 3 speckles, but my experience in the second case does not provide justification to believe that I experience 48 speckles. In general, my experience provides justification to believe that I experience n speckles if and only if my experience represents n speckles.

The problem with this response is that it fails to generalize to other examples. All we need to generate the problem is a case in which the determinacy of experience is more fine-grained than one’s powers of discrimination in judgement. For instance, experience represents objects as having not just determinable shades, such as red, but also more determinate shades, such as red-48 and red-49. Nevertheless, I may be unable to discriminate these shades in judgement: perhaps I can tell them apart when presented simultaneously, but not when presented in sequence.11 In that case, my experience might represent a shade as red-48, although I am no better than chance in judging whether I am experiencing red-48 or red-49. Once again, we can ask: what explains the difference between the following pair of cases?


  1. If I experience red, then I am in a position to justifiably believe that I experience red

  2. If I experience red-48, then I am not in a position to justifiably believe that I experience red-48

Why am I in a position to form a justified belief in the one case, but not the other?

Sosa’s solution appeals to safety.12 My belief that I experience red is safe from error, since I would not believe that I experience red unless I were to experience red. By contrast, my belief that I experience red-48 is not safe from error, since I could easily believe that I experience red-48 if I were to experience red-47 or red-49. Likewise, in Sosa’s original example, my belief that I experience 3 speckles is safe from error, since I would not believe that I experience 3 speckles unless I were to experience 3 speckles. By contrast, my belief that I experience 48 speckles is not safe from error, since I could easily believe that I experience 48 speckles if I were to experience 47 or 49 speckles.

The problem with Sosa’s solution is that it does not generalize to beliefs about the external world. However, the problem of the speckled hen arises for beliefs about the external world as well as the internal world. Consider the following pair of cases:


  1. If I experience red, then I am in a position to justifiably believe that there is something red

  2. If I experience red-48, then I am not in a position to justifiably believe that there is something red-48

Why am I in a position to form a justified belief in the one case, but not the other?

In this context, safety is a red herring. If I am hallucinating a red object and I believe that there is something red, then my belief is false. Still, there is an intuitive difference between the epistemic status of my belief that there is something red and my belief that there is something red-48. And yet safety cannot explain the difference, since no false belief is safe from error. What we need in order to explain the intuitive difference is not safety from error, but rather safety from lack of justification.

Absent defeaters, my experience provides justification to believe that p if and only if it represents that p. If I believe that p, however, my belief is justified only if it is based in a way that is counterfactually sensitive to the representational content of my experience, which provides my justification to believe that p. As Sosa rightly insists, an actual match in content is not sufficient. Given the limits on my powers of discrimination, this condition is satisfied in the first case, but not the second. My belief that there is something red is counterfactually sensitive to the content of my experience, since I would not easily believe that there is something red unless my experience represents that there is something red. By contrast, my belief that there is something red-48 is not counterfactually sensitive to the content of my experience, since I could easily believe that there is something red-48 when in fact my experience represents that there is something red-47 or red-49.

What we need to solve the problem of the speckled hen is not a safety requirement of counterfactual sensitivity to the facts, but rather a basing requirement of counterfactual sensitivity to one’s evidence, which determines which propositions one has justification to believe. One justifiably believes that C obtains only if one’s justifying evidence obtains in every close case in which one has a similarly high degree of confidence that C obtains. If one believes that C obtains on the basis of justifying evidence E, but there is a close case in which one’s justifying evidence E does not obtain and yet one believes or has a similarly high degree of confidence that C obtains, then one’s belief is unjustified.13

What is required for a belief to be justified is not counterfactual sensitivity to the facts, but rather counterfactual sensitivity to one’s justifying evidence. In the special case of strongly lustrous conditions, however, there is no distinction between one’s justifying evidence and the facts. If C is strongly lustrous, then one is in a position to form a justified belief that C obtains if and only if C obtains and moreover because C obtains. In that case, the condition C that justifies one’s belief is one and the same as the fact that one’s belief is about. Since there is no distinction in this case between one’s justifying evidence and the facts, we can derive a local safety condition, which requires counterfactual sensitivity to the facts, from a more general basing condition, which requires counterfactual sensitivity to one’s justifying evidence. Thus, for any strongly lustrous condition C, we can argue as follows:


  1. Basing: If one justifiably believes that C obtains, then one’s justifying evidence E obtains in every close case in which one has a high degree of confidence that C obtains

  2. Identity: If one justifiably believes that C obtains, then C is identical to one’s justifying evidence E

  3. Safety: If one justifiably believes that C obtains, then C obtains in every close case in which one has a high degree of confidence that C obtains
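The argument above can be rendered schematically, with abbreviations of my own: $J$ for 'one justifiably believes that $C$ obtains', and, for each close case $x$, $H_x$ for 'in $x$ one has a high degree of confidence that $C$ obtains', $E_x$ for 'one's justifying evidence $E$ obtains in $x$', and $C_x$ for '$C$ obtains in $x$'.

```latex
\begin{align*}
&\text{Basing:}   & &J \rightarrow (H_x \rightarrow E_x)     & &\text{for every close case } x\\
&\text{Identity:} & &J \rightarrow (E_x \leftrightarrow C_x) & &\text{since } C \text{ just is the justifying evidence } E\\
&\text{Safety:}   & &J \rightarrow (H_x \rightarrow C_x)     & &\text{for every close case } x
\end{align*}
```

Safety follows immediately: given $J$, any close case $x$ in which $H_x$ holds is one in which $E_x$ holds (Basing), and hence one in which $C_x$ holds (Identity).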

Next we can derive Margins from Safety by the following argument:

