Monday, December 31, 2012

2.2 Evidence vs. Reliability

Regarding the topic of deontological and non-deontological justification, one way of conceiving of reasons for beliefs is as either (1) being entitled to believe something because I'm under no obligation not to believe it or (2) something that arises from some kind of minimally natural process, perhaps a cognitive one. In my view, both could be true. For example, the first could be a kind of explanation for ordinary beliefs. But you could also credit the second as a kind of scientific explanation for why people believe what they believe.

This next topic concerns what makes a belief reasonable, and the two competitors are evidence and reliability. Even a summary examination of these two positions shows that it's not entirely clear what 'evidence' or 'reliability' would be. But here's a shot. Does a belief become reasonable by being arrived at in the right way, through some kind of cognitive process (reliability), or does a belief become reasonable in light of evidence providing verification or falsification?

This kind of argument seems strange to me since, as with the last distinction, both could be true. Whatever we count as evidence for a belief could provide a reason for a belief, and whatever correct cognitive processes we used, consciously or non-consciously (the latter because it's just a natural process), could provide reasons for a belief. The problem is that the senses in which we're using these terms are imprecise: 'reasons for a belief,' 'evidence,' 'reliability,' and the like could be made precise or defined in terms of an explanatory theory of knowledge, but to argue at cross purposes without settling how we're going to use these concepts and make them part of a science of knowing something is pointless and just so much armchair theorizing.

Friday, December 28, 2012

2.1 Deontological and Non-Deontological Justification

This will be a short post, I think, but basically the question for a theory of knowledge here is: What are the criteria for believing something? In the language of philosophers, it's: "What is the epistemic justification for my belief?" (The latter, I think, sounds a lot uglier and unnecessarily verbose.) Traditionally, philosophers have approached this question in two ways. One suggestion is that saying I have a reason to believe something is equivalent to saying I'm under no obligation not to believe it, given whatever further specific criteria somebody would want to give; this view has been called deontological justification. Another suggestion is that saying I have a reason to believe something is equivalent to saying that I believe it because the proper processes, perhaps cognitive processes that allow me to make certain factual connections, for example, result in the belief I have; this has been dubbed non-deontological justification.

Basically, the distinction comes down to this. Either I have a reason to believe something because I'm not under obligation not to believe it or I have a reason to believe something because some correct processes or other brought about the belief. The word 'obligation' in the former part of the sentence there is what makes the belief a matter of deontology, which is related to one's duty to do something. The latter is about some kind of natural mechanism (perhaps) that results in the belief.

It seems as though both hypotheses could be true. We could believe, at least at the level of informal speech, that I have a good reason to believe something because, well, I can't think of anything that would count as evidence against it. And we could believe, at the level of explanation related to causes and mechanisms, that I believe what I do because of a certain kind of cognitive process, and when my cognitive processes have not failed me, then I have beliefs with good reasons behind them.

Argument to the contrary would appear to be so much argument at cross purposes. They are just two different ways of analyzing phenomena, namely in this case, belief formation and good reasoning. The world is big enough for both.

Tuesday, December 25, 2012

1.2 The Gettier Problem

Generally speaking, scientists formulate a hypothesis, model some feature of the world, and check the hypothesis against the model. The model could be adjusted to better accommodate the feature of the world, the hypothesis could be modified in light of the model, and so the complexity of explanatory theories goes. I do not pretend to be an expert about the sciences, and perhaps some people would disagree with this characterization of ideas in the sciences. Nevertheless, I think it is true, at least in rough form.

Likewise, philosophers make their own models in the capacity of what are typically called thought experiments. As far as thought experiments go, philosophers construct a model, the thought experiment, which is some idealized version of the world, and then check a hypothesis against it. This process is in no wise as decisive or clear as with the sciences because the whole theory construction process here takes place within the mind of the philosopher as part of his or her own internal thought process, in discussion with other people, or perhaps in the course of written or spoken argument (in the non-pejorative sense, hopefully).

Regarding the theory of knowledge, the hypothesis that what should count as knowledge is "justified true belief" was tested by Edmund Gettier. He proposed two scenarios, two thought experiments, basically, where a person would be said to have, typically, justified true belief but not knowledge. I will just give the first scenario. In it, two men, Smith and Jones, are up for the same job. Smith knows that Jones has ten coins in his pocket, which seems like a random fact but is relevant in just a moment. Smith also heard from the president of the company the two men are interviewing with that Jones was going to get the job. Smith thinks out loud, right there in the waiting room, while Jones is doing the interview, "The man with ten coins in his pocket is going to get the job."

It turns out, however, that Smith gets the job and he happens to have ten coins in his pocket. So regarding the thought, "The man with ten coins in his pocket is going to get the job," it's true, and Smith has a rational explanation for why it's true, namely that he knows Jones has the coins in his pocket and he heard from the president that Jones was going to get the job. Yet intuitively Smith doesn't know it: his justification ran through a false belief about Jones, and his belief came out true only by luck. So the thought experiment demonstrates, contrary to the hypothesis, that knowledge can't be formulated in terms of justified true belief.
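Schematically (my own rendering, not Gettier's notation), Smith's reasoning runs:
(1) Jones will get the job, and Jones has ten coins in his pocket. [justified, but false]
So, (2) the man who will get the job has ten coins in his pocket. [validly inferred from (1), and true]
The step from (1) to (2) is a valid generalization, so (2) inherits whatever justification (1) had; but (2) ends up true because of facts about Smith, facts which played no part in that justification.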

Gettier's thought experiments demonstrate not only that knowledge cannot be formulated in terms of justified true belief; they also hint at the possibility that definitions for ordinary language concepts are not sufficient to capture all the features people want when they use the concept. In this case, the definition is not adequate to account for the concept KNOW.

Saturday, December 22, 2012

1.1 Knowledge as Justified True Belief

I don't know many magic tricks, but I know of one card trick. I can do the card trick to varying degrees of success, depending on the audience. This is how the trick goes.

I have a deck of cards. I show the person the underside of the deck to see that it's not a trick deck. I then give the person the six of hearts and the nine of diamonds. I ask the person to insert the cards anywhere in the deck they'd like. They do. I ask the person to tap the top of the deck, and then I tap the top of the deck. I snap my fingers, and then I pop the two cards out, holding them in my left hand while catching the rest of the deck in my right hand.

Or did I?

If I can make you believe that I popped the two cards out, then the trick has worked and the magic remains a mystery. However, if you examine the trick more critically, you'll see that the 'real magic' was in getting you to accept a false premise, namely that the two cards you put in were the ones that I popped out.

The discipline of philosophy, sometimes accidentally, works like a magic trick. If a philosopher gets you to accept a false premise, perhaps one even he or she believes, then you are well on your way to accepting the conclusion. This causes unnecessary problems because if the premises were examined more critically, just as one would examine the situation more carefully with a card trick, the so-called mystery would disappear.

That a false premise would be accepted regarding the theory of knowledge is quite ironic. It is with a comprehensive theory of knowledge, finding out how we know what we know and if we know anything at all, that we would want to avoid such mistakes, it seems. But I think the problem begins right at the beginning: Plato's Theaetetus.

In the Theaetetus, the character of Socrates comes to the conclusion that knowledge is something like true belief with a rational explanation. Contemporary philosophers restate this conception of knowledge as "justified true belief." So, it would follow, a theory of knowledge would deal with the component parts of the concept of knowledge. It would ask such questions as (1) What counts as a rational explanation for our beliefs?; (2) What is the nature of truth in relation to our knowledge?; and (3) What is a belief, anyhow?
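For reference, contemporary philosophers standardly schematize the analysis like this (the same form Sosa's definition further down this page takes):
S knows that p iff
(a) it is true that p;
(b) S believes that p; and
(c) S is justified in believing that p.
The three questions above just interrogate clauses (c), (a), and (b), in that order.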

Although philosophers have not necessarily proceeded to answer each of these questions as systematically as just posed, these questions have all been issues for classical and contemporary epistemology--that is, issues for ancient and modern ideas about knowledge.

The major premise of the argument in the Theaetetus is that to know what knowledge is, or to do a thorough investigation of when we have knowledge, means to give the term full definition. Without definition, we would not know if we could or do possess knowledge. Even if not always explicit, this is a tacit assumption of several contemporary philosophers as well. But this premise is false.

As has been explicated from readings of Ludwig Wittgenstein (but which could just as easily be argued without reference to his name), there are several instances when we know something without having to provide a definition for it. Take the example of a game. There are several different kinds of activities and events which we understand as games: soccer, chess, golf, and so on. We would never have to give the concept GAME full definition to know when we are playing a game or what a game is. If philosophers took this implication seriously, they would realize that to know what the concept KNOWLEDGE means is not to be able to provide it with full definition, along the lines of "justified true belief," or, more verbosely, "true belief with a rational explanation," but instead perhaps to be able to identify instances when the concept is applicable.

At this point, someone might object that even though for ordinary purposes we do not have to give full definition to the concept KNOWLEDGE to have an understanding of knowledge, for the sake of philosophy we do. Philosophy, it might be argued, is a discipline that applies conceptual analysis to ordinary terms and tries to imbue them with more precise meanings using logical rigor. So, to object to conceptual analysis is to object to philosophy itself, and at this point objectors and philosophers must part ways.

My answer is that if this is what philosophy amounts to, count me out. If philosophy is just the analysis and attempted clarification of common sense concepts, or at the very least if that is what analytic philosophy is, then it could still be asked why this method is being used. If what I previously said is true, it is a dead end to think of the philosophical enterprise as providing definition to what people really mean when they apply a concept to a situation, since not only does no definition seem to apply to these sundry cases, no definition could.

I would like to consider an analogy to the sciences. Even though people apply the concepts WEIGHT or WEIGH in various ways in daily life, physicists do not attempt to provide a definition of the various ways in which people use this concept. Rather, the concept WEIGHT has been defined in such a way that it fills out explanatory theories relative to the domain of physics. And then weight in this new sense can be measured and can help explain the way the world works.
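To make the contrast concrete: in Newtonian mechanics the stipulated concept is simply the gravitational force on a body,

W = mg,

where m is the body's mass and g is the local gravitational field strength. The definition is chosen for the work it does inside the theory, not for fidelity to everyday talk of 'weighing' things.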

As Paul Thagard proposed in his article "Eleven Dogmas of Analytic Philosophy" (2012), philosophy can just as easily be seen as theory construction rather than conceptual analysis. And the theory would then have to be relevant to a specific domain. In the case of language, for example, it might be hypothesized that language is an innate faculty, and that all human beings are born with an innate knowledge of language. This innate knowledge would not be defined as a specific language. That would be hard to defend on empirical grounds, to say the least. Rather, the knowledge would be equivalent to, or defined in terms of, computational principles and parameters that allow for the acquisition of what we might call natural languages (like English, French, Korean, and so on).

Likewise, knowledge relative to other domains could be defined and revised in other terms. We might even find some unifying conception of what knowledge would be, although that might not be necessary. Anyway, the larger point is that not only will knowledge as "justified true belief" not cut it, it's not even clear why we would want this understanding of knowledge.

Knowledge in ordinary language is an honorific term. It is a term we use evaluatively to capture a whole host of phenomena. As Noam Chomsky (2012) put it when asked what knowledge is:
Knowledge is something we seek to attain... The more we succeed in gaining some level of understanding, the more we approach some ideal [certainty]... But to the extent that we think we're approaching it, we give it the honorific term 'knowledge.' But there's no way of answering what knowledge is. It's what we hope to attain and know that we can only approach.
There is no essence regarding what 'knowledge' is. It is something someone could define precisely relative to some science, if it is to have any value at all, and it will continue to have sundry applications in ordinary speech, being applied to various phenomena. Which is just fine.

Knowledge

The Stanford Encyclopedia of Philosophy has a wonderful entry on epistemology. It's located here: http://plato.stanford.edu/entries/epistemology/. Given that the reader I've been working with to get my material right is getting pretty drab, and it has a bunch of articles I don't enjoy reading, I'm going to use the table of contents for the epistemology entry at the Stanford Encyclopedia of Philosophy as my guiding light to talk about the issue. Each subsequent entry will take up one of these sub-points and will be where I address that sub-point. I'll see how it goes. At any rate, below is the way in which the Stanford Encyclopedia of Philosophy organizes its epistemology entry, and you can think of the next posts on this blog as a sort of engagement with that material.
1. What is Knowledge?
1.1 Knowledge as Justified True Belief
1.2 The Gettier Problem

2. What is Justification?
2.1 Deontological and Non-Deontological Justification
2.2 Evidence vs. Reliability
2.3 Internal vs. External
2.4 Why Internalism?
2.5 Why Externalism?

3. The Structure of Knowledge and Justification
3.1 Foundationalism
3.2 Coherentism
3.3 Why Foundationalism?
3.4 Why Coherentism?

4. Sources of Knowledge and Justification
4.1 Perception
4.2 Introspection
4.3 Memory
4.4 Reason
4.5 Testimony

5. The Limits of Knowledge and Justification
5.1 The Case for Skepticism
5.2 Skepticism and Closure
5.3 Relevant Alternatives and Denying Closure
5.4 The Moorean Response
5.5 The Contextualist Response
5.6 The Ambiguity Response
5.7 Knowing One Isn't a BIV

6. Additional Issues
6.1 Virtue Epistemology
6.2 Naturalistic Epistemology
6.3 Religious Epistemology
6.4 Moral Epistemology
6.5 Social Epistemology
6.6 Feminist Epistemology

Friday, December 21, 2012

Skeptical problems

Here is a summary of parts of three articles about skeptical problems. I will do my best to keep the summaries short, and share what I think about them.

The problem James Van Cleve addresses in "Foundationalism, Epistemic Principles, and the Cartesian Circle" is both a knowledge problem and a theological problem, which might lead to skepticism if you wanted to take it down that road. Here are the premises Van Cleve gives:
(1) I can know (be certain) that (p) whatever I perceive clearly and distinctly is true only if I first know (am certain) that (q) God exists and is not a deceiver.
(2) I can know (be certain) that (q) God exists and is not a deceiver only if I first know (am certain) that (p) whatever I perceive clearly and distinctly is true.
Put another way, (1) having a sound belief system depends on knowing that a non-deceiving God exists, but it also seems, to Descartes, that (2) knowing that a non-deceiving God exists depends on having a sound belief system.

The problem Laurence BonJour addresses in "Can Empirical Knowledge Have a Foundation?" is ultimately a skeptical problem, too, one that creates a kind of regress. I'll lay the argument out like this, even though he doesn't.
(1) For any of my beliefs, I must have good reasons for my beliefs.
(2) Reasons are beliefs.
(3) For any of those good reasons, I must have good reasons for those reasons ad infinitum.
So, (4) there must be an infinite set of beliefs for any beliefs that I have.
So, (5) foundationalism is false and coherentism is false by definition.
It's like this: You have to have good reasons for your beliefs but then you have to have good reasons for those reasons and good reasons for those reasons and good reasons for those reasons and so on. That seems to be the problem.
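In schematic form (my own rendering, not BonJour's notation), write J(b) for 'b is justified' and R(b′, b) for 'b′ is a good reason for b.' Premises (1)-(3) then say that for every belief b:

J(b) → there is a b′ such that R(b′, b) and J(b′),

so starting from any justified belief b, this yields an infinite chain b, b′, b′′, ... of justified beliefs, which is conclusion (4).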

Ernest Sosa's "Reflective Knowledge in the Best Circles" is an appeal to other things besides truth and coherence to account for knowledge. He writes:
Knowledge requires truth and coherence, true enough, but it often requires more: for example, that one be adequately related, causally or counterfactually, to the objects of one's knowledge, which is not necessarily ensured by the mere truth-cum-coherence of one's beliefs, no matter how comprehensive the coherence. Madmen can be richly, brilliantly coherent; not just imaginary madmen, but real ones, some of them locked up in asylums. Knowledge requires not only internal justification or coherence or rationality, but also external warrant or aptness. We must be both in good internal order and in appropriate relation to the external world.
So I didn't examine Van Cleve's or BonJour's arguments against these problems, and I didn't really examine the way in which Sosa arrives at his conclusion. But maybe I will talk about some of these things in the next post.

Tuesday, December 18, 2012

Epistemic norms and foundherentism

This post is about two articles. One of the articles is by John Pollock, called "Epistemic Norms." The other is "A Foundherentist Theory of Empirical Justification" by Susan Haack. At least one way to think about the meaning of 'epistemic norms' is as 'good reasons for your beliefs.' And at least one way to think about 'foundherentism' is as the idea that the beliefs we get from the five senses are among the beliefs a person has good reasons for, but that they get no special priority over the other beliefs a person might have. So this post is about those two things.

Pollock argues in his article that good reasons for a person's belief do not come from the outside world but rather the good reasons come from within. He writes that you can think about the view he opposes 'externalism' as either 'belief externalism' or 'norm externalism.' According to Pollock:
Belief externalism insists that correct epistemic norms must be formulated in terms of external considerations...

In contrast to this, norm externalism acknowledges that the content of our epistemic norms must be internalist, but employs external considerations in the selection of the norms themselves.
So belief externalism needs objects outside of a person's body to generate the good reasons (?) but norm externalism needs objects outside of a person's body for us to have the good reasons, which are internal beliefs. Hmm. It's not entirely easy for me to see the difference.

I don't know exactly what's going on with his argument, but he thinks that to understand when people have good reasons for beliefs, we have to accept that internal beliefs guide our actions. But belief externalism, he thinks, can't account for how internal beliefs guide our actions, perhaps absent the things of the outside world. And norm externalism can't work because we don't need external world events to make us change our good reasons when considering, for example, the conditions under which we saw something; we could reason on our own without altering those conditions. Maybe to make the point clear, suppose you saw what looked like a gun barrel on a table in a dimly lit room. But later you thought it was unlikely that it was a gun barrel and probably a pen you left there. The reasoning changed not because of changes in the conditions but because of your own changes in thought.

This frees us up for some kind of internalist view of providing good reasons. Surely more will come of that in upcoming posts, but now we need to turn to foundherentism, which is a fairly simple, plausible view.

As said above, foundherentism is "a new approach which allows the relevance of experience to empirical justification, but without postulating any privileged class of basic beliefs or requiring that relations of support be essentially one-directional." Haack makes four basic points about when people have good reasons for belief.
[1.] [J]ustification comes in degrees: a person may be more or less justified in believing something.

[2.] [T]he concepts of evidence and justification are internally connected: how justified a person is in believing something depends on the quality of his evidence with respect to that belief.

[3.] [J]ustification is personal: one may be more justified in believing something than another is in believing the same thing--because one person's evidence may be better than another's.

[4.] [J]ustification is relative to a time: a person may be more justified in believing something at one time than at another.
But in spite of all this relativity, philosopher Susan Haack doesn't believe that the good reasons we would have are radically relative. Rather, the standards of better and worse evidence are based on human nature: it's because we're the beings that we are that we think the way we do and use our basic cognitive skills, and perhaps advanced cognitive skills, the way we do when we do science or forensic investigation. She writes:
I see these standards--essentially, how well a belief is anchored in experience and how tightly it is woven into an explanatory mesh of beliefs--as rooted in human nature, in the cognitive capacities and limitations of all normal human beings.

It is sure to be objected that the evidential standards of different times, cultures, communities, or scientific paradigms differ radically. But I think this supposed variability is at least an exaggeration, the result of mistaking the perspectival character of judgments of evidential quality for radical divergence in standards of better and worse evidence.
So those were the two articles: one supporting internalism about beliefs and the other supporting sensory beliefs as being important among the set of beliefs a person could have, but not necessarily primary among them.

Next time, more stuff on skepticism.

Monday, December 17, 2012

Skepticism and Rationality and Partial Interlude

I have to admit upfront that in a way I have disdain for this kind of reasoning about what amounts to good reasons for belief. It's not so much that I don't think there could be good reasons for beliefs, but it's just that I think it's more a process of reflective equilibrium. We develop some theory in our heads for everyday understanding of what counts as good reasons for belief, or scientists do so professionally, and when we discover that this theory doesn't account for the range of beliefs we could have, we modify our theory--or we modify our beliefs. This back-and-forth dialectical approach is, I think, on the right track. But I'll continue further with this BS.

Philosopher Richard Foley argues that we should not so much try to make our beliefs match the real world, since the real world is not knowable anyway, but just try to make our beliefs rational. He calls his position egocentric rationality. It sounds really Ayn Randian but I don't think that's exactly what he's going for. I'll try to explain what he means throughout the course of this.

Foley writes that Cartesian skepticism alerted us to the problem that we can't know anything outside of ourselves.
From your skin in, everything about these situations is as it is now. And yet, from your skin out, things are drastically different from what you take them to be in the current situation. Still, you would have egocentric reasons in such situations to believe exactly what you do now.
But he thinks Cartesian skepticism, and examples of possible situations like your dreaming and so on, would, if the scenarios held, make it impossible for you to have knowledge.

You can only have knowledge, Foley thinks, if the reasons you have for beliefs match up with the way the world really is. "Knowledge, then, requires an element of luck," he writes. He continues.
Just as you can be rational and yet lacking in virtue, so too you can be rational and yet lacking in knowledge. Appreciating this can help cure the preoccupation with skepticism that has dominated modern epistemology. It can allow egocentric epistemology to be done non-defensively.
So this is where we come to egocentric rationality. Egocentric rationality is relying upon the beliefs that you have despite whether these beliefs will provide you with proof against Cartesian style skepticism. So it's basically a fancy name for something really common. Blah blah. Kind of boring to me. But I'm moving through the positions to see where we arrive.

Interlude: Dogmas of analytic philosophy

There's a new article that's been published called "Eleven Dogmas of Analytic Philosophy" by Paul Thagard. The URL is here: http://www.psychologytoday.com/blog/hot-thought/201212/eleven-dogmas-analytic-philosophy. It's a great article, and even if I don't agree with it entirely, I agree with it in basic form. I will copy and paste the body of the article, the 11 dogmas, below. Here goes. The italics have been added by me.
1. The best approach to philosophy is conceptual analysis using formal logic or ordinary language. Natural alternative: investigate concepts and theories developed in relevant sciences. Philosophy is theory construction, not conceptual analysis.

2. Philosophy is conservative, analyzing existing concepts. Natural alternative: instead of assuming that people’s concepts are correct, develop new and improved concepts embedded in explanatory theories. The point is not to interpret concepts, but to change them.

3. People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don't trust your intuitions.

4. Thought experiments are a good way of generating intuitive evidence. Natural alternative: use thought experiments only as a way of generating hypotheses, and evaluate hypotheses objectively by considering evidence derived from systematic observations and controlled experiments.

5. People are rational. Natural alternative: recognize that people are commonly ignorant of physics, biology, and psychology, and that their beliefs and concepts are often incoherent. Philosophy needs to educate people, not excuse them.

6. Inferences are based on arguments. Natural alternative: whereas arguments are serial and linguistic, inferences operate as parallel neural processes that can use representations that involve visual and other modalities. Critical thinking is different from informal logic.

7. Reason is separate from emotion. Natural alternative: appreciate that brains function by virtue of interconnections between cognitive and emotional processing that are usually valuable, but can sometimes lead to error. The best thinking is both cognitive and emotional.

8. There are necessary truths that apply to all possible worlds. Natural alternative: recognize that it is hard enough to figure out what is true in this world, and there is no reliable way of establishing what is true in all possible worlds, so abandon the concept of necessity.

9. Thoughts are propositional attitudes. Natural alternative: instead of considering thoughts to be abstract relations between abstract selves and abstract sentence-like entities, accept the rapidly increasing evidence that thoughts are brain processes.

10. The structure of logic reveals the nature of reality. Natural alternative: appreciate that formal logic is only one of many areas of mathematics relevant to determining the fundamental nature of reality. Then we can avoid the error of inferring metaphysical conclusions from the logic of the day, as Wittgenstein did with propositional logic, Quine did with predicate logic, and Kripke and Lewis did with modal logic.

11. Naturalism cannot address normative issues about what people ought to do in epistemology and ethics. Natural alternative: adopt a normative procedure that empirically evaluates the extent to which different practices achieve the goals of knowledge and morality.
As a kind of summary: after I finish all this stuff on epistemology, I will use this article as a springboard.

Saturday, December 15, 2012

Evidentialism: Good reasons for belief

Attempting to determine what good reasons you have for believing what you believe is a strange matter. It seems to me that the criteria should be determined relative to the domains. In a previous post, in an interlude, I wrote about some problems with eyewitness testimony. As astrophysicist Neil Tyson put it, eyewitness testimony is low evidence for physicists, distorted evidence for psychologists, and high evidence for the legal system. Of course, it seems to me that after each domain determines its standards for belief or evidence, people can then compare them and judge whether one is better or worse or should be changed with respect to another.

Richard Feldman and Earl Conee in "Evidentialism" want to argue for something called evidentialism. Because I am kind of tired of the logicist formulations, I'll just put it simply. Evidentialism for Feldman and Conee is the view that the justification of your belief "is determined by the quality of [your] evidence for the belief." The better the evidence, the better reason you have for your belief. Furthermore, they think you should believe things for which you have adequate evidence.
We hold the general view that one epistemically ought to have the doxastic attitudes that fit one's evidence. We think that being epistemically obligatory is equivalent to being epistemically justified.
So having good reasons for your belief is the same, according to Feldman and Conee, as believing what you ought to believe according to the best evidence. Trying your best is not enough, though, to make it the case that you have good reasons for your belief. You also have to have good evidence, plain and simple.

One of the big contenders against this view is reliabilism. Reliabilism is the view that "epistemically justified beliefs are the ones that result from belief-forming processes that reliably lead to true beliefs." In other words, according to reliabilism, you have good reasons for your belief when the cognitive processes, or processes relating your mind to the world, that produced it reliably yield true beliefs. Feldman and Conee think that this view is so broad that you could actually accept it and accept evidentialism as the proper way to fill it out.

About this debate: my view is that neither of these accounts for good reasons for belief. Think about mathematics, for example. If evidentialism is true, your beliefs about conceptual truths like those of mathematics, or logic, have good reasons supporting them only if there is good evidence for the belief. But what good evidence is there? How could you give 'good reasons' for beliefs like, for example, that the sum of 2 and 2 is 4? Do we even need to appeal to evidence? Surely there are math proofs, but for day-to-day activities we seem to think that we could have this belief without having good evidence supporting claims like these.
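For what it's worth, a 'proof' here is just an unwinding of definitions. Using the Peano-style conventions 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3), and the recursion rules a + 0 = a and a + S(b) = S(a + b):

2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4.

Whether a chain of definitional rewrites like that counts as 'evidence' in Feldman and Conee's sense is exactly what seems unclear to me.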

If reliabilism is true, then claims about mathematics and logic must have the proper belief-forming processes behind them. What could even count as such, though? What is the appropriate chain of reasoning that gets you to those beliefs? I don't think there are any.

My point is that there is no good broad account of good reasons for belief that could or would not be formulated relative to the domains in which one would ask for such criteria. And there is no catch-all 'good reasons' category for everyday experience, per se, because the world of everyday experience is not a domain; a domain is a formal field of study, an idealized or conceptualized part of the world.

So anyway, I don't think this is sufficient, and I don't like the debate very much.

Friday, December 14, 2012

Interlude: A Confession and Summary

Some of these readings on epistemology after a while are just plain boring to me. Epistemology, and so much philosophy, as much as I love it, seems either false or trivially true, and very, very few times is it useful for its contribution to the sciences or in clarifying commonsense notions. And often philosophy aspires to such logical rigor, but think, for example, about philosophers' formulations of knowledge. Do you think these formulations would be adopted by anyone? It's as though analytic philosophy is so often informed by an outdated logicism.

Anyway, I will stop ragging on epistemology in particular and philosophy in particular, and I will instead review what I have written about so far, and what my views are basically. These are my views in summary form, and I would love to expand them in the future in terms of the arguments I have for each of these.

I. Skepticism: Any argument for Cartesian skepticism is either unsound or incoherent, but most likely unsound because of its false premises.
II. Defining knowledge: The word 'knowledge' is an honorific term, which could be given this definition or that to fill out an explanatory theory, but whatever the case the scientific usages of the word 'knowledge' are quite divergent from uses by philosophers, which casts doubt on the usefulness of philosophers' conceptualization(s).
III. Foundationalism versus Coherentism: If the argument is conceived as being about whether some beliefs are basic or not, the truth is it could be conceived either way, depending on the purpose for which one is using the idea of a 'structure of belief.' What looks most useful is to assume that some concepts and structures (their principles and parameters) are in some sense basic or a priori, in some ways vindicating foundationalism, although what is basic here is not beliefs, just the concepts and structures. Beliefs are what this innate, a priori organization (which for better or worse could be called 'knowledge') generates, and the beliefs it produces cohere, in some sense vindicating coherentism, provided that some parts of knowledge are more basic than beliefs.

And I don't know what I think about epistemic justification yet, or the other areas, but I will bring just as skeptical an approach (not in the global sense, ha ha) to the arguments I will read as I have to the arguments from the philosophers I have already reviewed. I am surprised, really, to find that my view is far more sensible than, and far different from, the philosophers'. I realize that to think that I am right may be hubristic, but I think I am. And perhaps when this whole journey is over, I could give you reasons why I think what I think. We'll see.

Davidson and coherence

Philosopher Donald Davidson argues in "A Coherence Theory of Truth and Knowledge" for, you guessed it, a coherence theory of knowledge--and truth. This is what he writes:
My argument has two parts. First I urge that a correct understanding of the speech, beliefs, desires, intentions and other propositional attitudes of a person leads to the conclusion that most of a person's beliefs must be true, and so there is a legitimate presumption that any of them, if it coheres with most of the rest, is true. Then I go on to claim that anyone with thoughts, and so in particular anyone who wonders whether he has any reason to suppose he is generally right about the nature of his environment, must know what a belief is, and how in general beliefs are to be detected and interpreted.
He gives full definition to what he means by beliefs and by a coherence theory.
Beliefs for me are states of people with intentions, desires, sense organs; they are states that are caused by, and cause, events inside and outside the bodies of their entertainers.

... What distinguishes a coherence theory is simply the claim that nothing can count as a reason for holding a belief except another belief. Its partisan rejects as unintelligible the request for a ground or source of justification of another ilk.
Davidson makes an argument after that along these lines:
[1.] [A] theory of knowledge that allows that we can know the truth must be a non-relativized, non-internal form of realism.

[2.] [Y]our utterance means what mine does if belief in its truth is systematically caused by the same events and objects.

[3.] [NB. My inference:] Your utterance means what mine does. (1, 2)

[4.] [M]ere coherence, no matter how strongly coherence is plausibly defined, can not guarantee that what is believed is so. All that a coherence theory can maintain is that most of the beliefs in a coherent total set of beliefs are true.

[5.] [So...] The question, how do I know my beliefs are generally true? thus answers itself, simply because beliefs are by nature generally true. Rephrased or expanded, the question becomes, how can I tell whether my beliefs, which are by their nature generally true, are generally true? (3 & 4)
Okay, so maybe it looks like a fishy argument. I'll break it down like this.

1. "[A] theory of knowledge that allows that we can know the truth must be a non-relativized, non-internal form of realism."
Davidson doesn't so much try to refute skepticism as trust a kind of phenomenological description to give credence to an inference to the best explanation. What I mean is that Davidson seems to think that if we describe the world as correctly as possible as we perceive it, intersubjectively, then we are well on our way to giving a good explanation of how we know things about the world. This, for Davidson, seems to be realism enough. Basically, he thinks that when we look at our beliefs about what we perceive to be the most immediate things, it at least looks like our beliefs about the objects around us and events that we see are caused by the objects and events that are external to us. This is the best inference we can make, he seems to think. Of course, he wants to say that even though external objects and events are causes of these sensory beliefs, they are not themselves reasons or justifications for the beliefs. He writes:
[T]he distinction between sentences belief in whose truth is justified by sensations and sentences belief in whose truth is justified only by appeal to other sentences held true is as anathema to the coherentist as the distinction between beliefs justified by sensations and beliefs justified only by appeal to further beliefs. Accordingly, I suggest we give up the idea that meaning or knowledge is grounded on something that counts as an ultimate source of evidence. No doubt meaning and knowledge depend on experience, and experience ultimately on sensation. But this is the 'depend' of causality, not of evidence or justification.
2. [Y]our utterance means what mine does if belief in its truth is systematically caused by the same events and objects.
The way Davidson explains this is that when we listen to people talk, we appeal to a principle of charity: We assume, other factors notwithstanding, that someone is not a liar or crazy, etc., and is instead trying to express himself clearly and intelligibly using language. If this is true, if people are sincerely trying to express themselves using a given language, they are doing so if their and our worlds are the same. And incidentally, per the first premise, they must be.

4. [M]ere coherence, no matter how strongly coherence is plausibly defined, can not guarantee that what is believed is so. All that a coherence theory can maintain is that most of the beliefs in a coherent total set of beliefs are true.
This is how Davidson explains the situation.
All beliefs are justified in this sense: they are supported by numerous other beliefs...and have a presumption in favor of their truth. The presumption increases the larger and more significant the body of beliefs with which a belief coheres, and there being no such thing as an isolated belief, there is no belief without a presumption in its favor.
He thinks that it does not make much sense to ask how many beliefs a person can have since a belief is really determined by ascribing beliefs to yourself and others. And if that is the case, we can really only understand beliefs, Davidson thinks, by appeal to the body of beliefs, which by their nature are going to be mostly true, that a person has.

So Davidson's conclusion about knowledge and a coherence theory of it is that if we understand the nature of beliefs, which by definition, he thinks, are about the world and mostly true, then we will understand that we and other people mostly have true and non-coincidental beliefs about the world, and so have knowledge about the world.

Davidson does not so much address traditional philosophical problems as assert premises that look to be self-evident. And either you share his intuitions or you don't. The fact that the intuitions are so common is what makes Davidson an appealing philosopher.

What do you think? Do you think I've correctly represented Davidson's thinking here? Do you think Davidson is right?

Thursday, December 13, 2012

Interlude: Is Quine a virtue epistemologist?

In W.V.O. Quine's The Web of Belief, he tells some ways in which you might revise your beliefs in light of new evidence, or confirm or disconfirm beliefs in general. He gives what he thinks are basic scientific 'virtues' for change of belief, and they are, by his estimate:
(1) conservatism: the fewer existing beliefs the new belief would interfere with, the better;
(2) modesty: one hypothesis is more modest than another if it logically implies fewer other beliefs;
(3) simplicity: keep a single belief as simple as possible in its relation to other beliefs, but "[w]e cheerfully sacrifice simplicity of a part for greater simplicity of the whole when we see a way of doing so";
(4) generality: "[t]he wider the range of application of a hypothesis, the more general it is";
(5) refutability: "some imaginable event, recognizable if it occurs, must suffice to refute the hypothesis"; and
(6) precision: if a hypothesis states precisely its principles and parameters for measuring the occurrence of some event, then it is not easy to dismiss as coincidence.
The book is in some sense excellent, but it suffers from being a product of its time. A structuralist view of language, a behaviorist view of psychology, and a logicist understanding of science hang over the work, all positions which I will not attempt to refute here, but which I think there are good reasons not to believe.

Anyway, Quine has long been considered an advocate of 'naturalized epistemology,' whose maxim is "Make epistemology a matter of cognitive psychology that we can study as natural phenomena and empirically." Here's Quine's quote:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz., a physical human subject. This human subject is accorded a certain experimentally controlled input — certain patterns of irradiation in assorted frequencies, for instance — and in the fullness of time the subject delivers as output a description of the three-dimensional external world and its history. The relation between the meager input and the torrential output is a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology: namely, in order to see how evidence relates to theory, and in what ways one's theory of nature transcends any available evidence...But a conspicuous difference between old epistemology and the epistemological enterprise in this new psychological setting is that we can now make free use of empirical psychology.
But check that info against both his scientific virtues and also the meaning of virtue epistemology. I'll just use the definition from Stanford Encyclopedia for "virtue epistemology":
Two commitments unify them. First, epistemology is a normative discipline. Second, intellectual agents and communities are the primary source of epistemic value and the primary focus of epistemic evaluation.
So at least explicitly Quine doesn't think that epistemology is a normative discipline, that is, that it should be concerned with concepts about what it's reasonable to believe and also about norms and value and evaluation related to how we know what we know. But look at Quine's scientific virtues. Surely he would say that to have correct beliefs, and to aspire to knowledge, it is only reasonable to have beliefs according to these virtues. And these virtues would be kinds of norms toward good scientific thoughts.

Now, let's look at the second commitment of virtue epistemology. It's that people and their communities make for the source of how we can be confident we know what we know. For Quine, the focus is on scientific or proto-scientific thinking in the form of the virtues above. And the community would be us as scientific or proto-scientific inquirers.

So the question is can't you really have both? Can't you study epistemology as a series of natural phenomena as part of cognitive psychology or cognitive science and at the same time develop concepts and means to study the way we think or how it would be reasonable to think? Can't these be complementary enterprises?

Okay, now for realsies, Donald Davidson is next.

Sosa and Virtue Epistemology

Last time, I wrote about philosopher Ernest Sosa's argument against coherentism and foundationalism, two ways in which belief might be structured. I commented that I was not entirely clear on his arguments. But I'll restate them and see if I can get more clarity out of it, and then turn to his position which he calls "virtue epistemology."

Coherentism is false, Sosa believes, because when we think about our beliefs that we get from sensory experiences, they must, according to coherentism, be logically related to the rest of the system of beliefs. But it just looks clear that their justification does not require the other beliefs in the system to be justified. The reason I object so much to this point, or find it to be unclear or unsophisticated, is that it seems as though sometimes you do require other beliefs to justify the sensory beliefs. And as far as I am concerned, Wilfrid Sellars has already made this argument when he challenged Roderick Chisholm.

EXAMPLE. You are looking at a cirrus cloud. You think, "I see a cirrus cloud." Your friend asks you what you are looking at and you say, "I'm looking at a cirrus cloud." Your friend asks, "How do you know it's a cirrus cloud?" You say, "Well, cirrus clouds are long and wispy like that one up there. At least that's what I learned about them in school, anyway."

SOSA'S ANALYSIS. If you are looking at a cirrus cloud, and you think, "I see a cirrus cloud," it is not clear how that belief needs any other justification for itself than itself.

OBJECTION. In the above example, the character You expresses other reasons for why he has the belief that he does, and even if he didn't, it would demonstrate that there are several other logical connections that the belief has. My conclusion: coherentism is safe. Nahnuhnahnuh-booboo.

His critique of foundationalism seems equally hopeless and depressing and so I'm not going to look at that again, only to say that he thinks that it is not clear how, if there are basic beliefs based on sensory experiences, the sensory experiences are related to the observable world. But this to me seems like an empirical problem and doesn't demonstrate that they couldn't be. So this argument doesn't succeed too well either.

Anyway, onto Sosa's virtue epistemology. In Sosa's view, knowledge is relative to "an epistemic community."
This is brought out most prominently by the requirement that inquirers have at least normal cognitive equipment (e.g., normal perceptual apparatus, where that is relevant). But our new requirement--that inquirers not lack or blink generally known relevant information--also brings out the relativity. A vacationer in the woods may know that p well enough for an average vacationer, but he won't have the kind of knowledge his guide has. A guide would scornfully deny that the tenderfoot really knows that p. Relative to the epistemic community of guides (for that area) the tenderfoot lacks relevant generally known information, and misses relevant data that the average guide would grasp in the circumstances.
According to Sosa, there are different depths with which you could know about a subject or even a particular fact because of your expertise or lack thereof. I think Sosa's point here can be taken independent of whether Sosa's argument against traditional coherentism and foundationalism succeeds. (I would like to note this is actually contrary to what I said in the last post.) To take another example, I know that force is equal to the mass of an object multiplied by its acceleration, but an expert in classical mechanics truly knows the application of this, its relation to other propositions, its real importance as a principle in physics, etc. In Sosa's system, we can rightly understand the honorific term "knowledgeable," and it opens up the door for "knowledge" being a kind of honorific term. If this is what virtue epistemology amounts to, so be it.

Sosa also writes about trying to give definition to knowledge using virtue epistemology. But he does not know how one could.
I have no complete list of epistemic principles describing ways of arriving at a position to know or of being blocked from such a position. My suggestion is only that there are such principles, and that in any case we must go beyond the traditional emphasis by epistemology on warrant and reasoning as determinants of knowledge.
Sosa nevertheless gives two rough attempts at what knowledge would be to a virtue epistemologist. One account he gives is
that to understand knowledge we must enrich our traditional repertoire of epistemic concepts with the notion of being in a position to know (from the point of view of a K, e.g., a human being). Thus a proposition is evident (from the point of view of a K) to a subject only if both he is rationally justified in believing it and he is in a position to know (from the K point of view) whether it is true.

...S knows (from the K point of view) that p iff
(a) it is true that p;
(b) S believes that p; and
(c) there is a non-defective epistemic period (from the K point of view) for S and the proposition that p.
I won't spend too much time unpacking this but just say instead that for Sosa we can look at knowledge as being true belief acquired in some way or other that does not suffer from cognitive or intellectual limitations on the person's part.

Given that I have actually been quite charitable in unpacking this, and as open-minded as possible, I am somewhat sympathetic to the view, and in a couple posts hence will return to what I think of all this stuff. But next is philosopher Donald Davidson.

Tuesday, December 11, 2012

Sosa and the Pyramid or the Raft

Philosopher Ernest Sosa thinks that Sellars' and the coherentists' view of knowledge does not really account for our beliefs and knowledge. But he also wants to suggest that the foundationalist view doesn't fare much better.

In his essay "The Raft and the Pyramid," Sosa writes that we must choose between one view or the other: foundationalism or coherentism. Foundationalism is the idea that for some beliefs there is no other belief but that belief itself which could be a reason for believing it. If I say "I'm looking at a brown table" and try to think of a reason for believing it, the only reason is just the belief itself that I'm looking at a brown table.

Coherentism, on the other hand, is the view that there are always other reasons a person could give for believing whatever they believe. For example, if I say "I'm looking at a brown table" and ask what reason I have for believing this, I could mention that "I believe that I am seeing it under the right conditions," "I believe that I understand how certain concepts apply to certain situations when I want to make observations," etc., etc. So our beliefs fit in with our other beliefs, and that's the structure of our beliefs.

Sosa gives this argument for why coherentism is not a tenable position. Imagine this: You have a headache. You think: Oh, I have a headache. Now, think about your beliefs in the coherentist theory. Sosa writes:
Let everything remain constant, including the splitting headache, except the following: replace the belief that I have a headache with the belief that I do not have a headache, the belief that I am in pain with the belief that I am not in pain, the belief that someone is in pain with the belief that someone is not in pain, and so on. I contend that my resulting hypothetical system of beliefs would cohere as fully as does my actual system of beliefs, and yet my hypothetical belief that I do not have a headache would not therefore be justified.
I'm going to do my best to understand this argument of Sosa's. It seems to amount to this in short form. Coherentism can't be true because, if it were, I could have one true belief that is inconsistent with a bunch of false beliefs, and because it wouldn't cohere with, say, an entire system of false beliefs connected to it, there would be no reason to believe the true belief. But if this were the case, then it would be impossible to reiterate a true belief to support the belief itself. That's equivalent to saying there would be no way to justify true beliefs.

If I don't criticize this view now, then I'll forget to. I hope I'm not being unfair to the argument or making a strawman of it, but it looks as though the argument is, put more simply: coherentism can't be true because the conclusion is repugnant. "I don't like the conclusion, so boo, can't be true." Maybe I just don't understand the argument. But anyway, ultimately, Sosa believes that the system of knowledge or the system of beliefs can't just be a set of logical relations, otherwise it could just generate a bunch of false explanations. Couldn't that be true, though?

I'll just assume that Sosa is correct about coherentism as a set of logical relations to continue the story. He thinks foundationalism looks good, but it has problems. (This is also confusing to me, by the way.) The big problem for him seems to be that if we think of some beliefs as being basic, or as having no explanation other than those beliefs themselves, we could always ask if there is some more basic belief or general principle that grounds them. But if so, those beliefs are no longer basic. If not, then there could be several of these, but then what is to stop there from being infinitely many basic beliefs? So how much of a foundation is there, really? he asks.

Again, I think this so-called argument looks a little fishy. It looks something like this: "If we accept that there are too many basic beliefs, then that's not very pretty, is it? So, that has to be false." Of course, maybe I don't understand the argument correctly. Or maybe he's just appealing to Occam's razor, the preference for the simplest explanation, and saying that it would be unlikely for the structure of beliefs, our system of knowledge, to rest on a sprawling collection of foundational beliefs.

Whatever the case, Sosa wants to get to his own view, which is virtue epistemology. Sosa considers himself a kind of foundationalist, but in this sense: we can accept that there is one general principle underlying all our other beliefs, namely, that the system of beliefs is a system explained by the intellectual virtues we have.

If you ask me, this isn't obviously a more coherent view, because there is no consensus on what an intellectual virtue is. Furthermore, there are no independent reasons, reasons apart from dismissing traditional foundationalism and coherentism, that support this kind of view.

I will try to provide a basic assessment of Sosa's work in the next post. But I'll just go ahead and tease and say that it doesn't make much sense to me at all. This could be my shortcoming, and if so, so much the worse for my thinking. If not, so much the worse for him.

REFERENCES
Greco, John and Turri, John, "Virtue Epistemology", The Stanford Encyclopedia of Philosophy (Winter 2011 Edition), Edward N. Zalta (ed.), URL = http://plato.stanford.edu/archives/win2011/entries/epistemology-virtue/.

Interlude: Outsmarting the hedgehog

I don't know much about probabilities and statistics, and I know only in summary form how game theory is applied to predicting human behavior. But I do think that any statistical, probabilistic, or game-theoretic attempt to measure human behavior quantitatively in the social sciences is admirable. If it ultimately can't capture as much of human behavior as it would like, it is at least worth the effort. If it is a failure, it will simply bear no fruit, and so much the worse for the theoretical underpinnings, but not for our effort.

A new article called "How to Win at Forecasting" describes a five-year, government-funded research project that attempts to put the putative expertise of political scientists to the test.

So far, interestingly enough, the man conducting the project has said that he has noticed, in previous studies, that the people who make the best predictions not only make more modest predictions but are often people who believe that making such predictions is not possible, or not a good idea! The article calls them "foxes," as opposed to the "hedgehogs," who are more likely to think that human behavior works like clockwork and is quite predictable. The foxes tend to think human behavior is too complex to predict accurately. And yet it is the foxes who make the more accurate predictions.
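The article doesn't say exactly how the project scores its forecasters, but forecasting tournaments commonly use something like the Brier score, which punishes confident misses and so rewards exactly the foxes' kind of modesty. Here's a minimal sketch in Python; the forecasts and outcomes below are made up purely for illustration.

def brier_score(forecasts, outcomes):
    # Mean squared difference between forecast probabilities and what
    # actually happened (1 = the event occurred, 0 = it didn't).
    # 0.0 is a perfect score; 1.0 is the worst possible.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

hedgehog = [1.0, 1.0, 1.0, 1.0]  # always certain: "it will definitely happen"
fox = [0.7, 0.6, 0.8, 0.7]       # modest probabilities
outcomes = [1, 0, 1, 1]          # what actually happened

print(brier_score(hedgehog, outcomes))  # 0.25: one confident miss is costly
print(brier_score(fox, outcomes))       # 0.145: hedging pays when you're unsure

The point of a scoring rule like this is that you can't game it by always shouting certainty; the best long-run strategy is to report probabilities that honestly reflect how unsure you are.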

I'm a little sleepy, and maybe I haven't explained this as well as I'd like. But anyway, you can read the article here: http://www.edge.org/conversation/win-at-forecasting. See what you think.

Sunday, December 9, 2012

Sellars challenging "the Given"

Last time, I wrote about Roderick Chisholm's idea that at least some kinds of empirical knowledge could be foundational knowledge. This means that some knowledge we get from the senses can be so basic that we don't even have to provide reasons for it. For example, if I think something like "I see the Mac screen in front of me," I couldn't give you any other belief of mine that would be the reason for that belief, other than something like "Well, I just see the Mac screen in front of me, is all." According to Chisholm, this kind of knowledge from the senses counts as what could be called basic belief, onto which other beliefs are built.

Philosopher Wilfrid Sellars challenges this. He thinks that this kind of thinking about basic beliefs is just plain wrong. Take a common example, like the sentence or the thought "I see the color green." Sellars writes in "Does Empirical Knowledge Have a Foundation?": "one couldn't form the concept of being green, and, by parity of reasoning, of the other colors, unless he already had them." He further writes:
Now, it just won't do to reply that to have the concept of green, to know what it is for something to be green, it is sufficient to respond to green objects with the vocable 'This is green.' Not only must the conditions be of a sort that is appropriate for determining the color of an object by looking, the subject must know that conditions of this sort are appropriate.
In a nutshell, there must necessarily be a whole host of other concepts someone knows, and a whole host of conditions a person must know about, for him or her to think thoughts in that situation at all. So, with sentences like the one about green, or the first one we started with, "I see the Mac screen in front of me," I have to know the concepts that make up that thought and when I or anyone would think thoughts like that, and that entails knowledge about the situation or environment. There are even more things that Sellars didn't think about, or couldn't have, because he was a behaviorist. If I think that thought internally in propositional or sentential form, or if I were to vocalize it, I must also know something about English syntax: the roles pronouns, verbs, articles, prepositions, etc., play in the phrase structure of my language. There are more things I am leaving out, but at any rate this should be proof enough that no sentence or thought or belief about something easily perceived is so basic that we can't ask what other reasons I have for believing it.

Here is Sellars again:
[I]f it is true, then it follows, as a matter of simple logic, that one couldn't have observational knowledge of any fact unless one knew many other things as well... For the point is specifically that observational knowledge of any particular fact, e.g. that this is green, presupposes that one knows general facts of the form X is a reliable symptom of Y. And to admit this requires an abandonment of the traditional empiricist idea that observational knowledge 'stands on its own feet.'
Basically, then, observational knowledge assumes other knowledge of concepts and of conditions or parameters, and possibly, I might add, a complicated abductive or inferential process.
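Schematically, the dependence Sellars insists on might be put like this (the notation is mine, not his):

\[
K_{\mathrm{obs}}(S,\ \text{this is green}) \;\rightarrow\; K(S,\ \text{``looking green under conditions like these is a reliable symptom of being green''})
\]

That is, no piece of observational knowledge without at least one piece of general knowledge standing behind it.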

And Sellars for the next to last time:
The essential point is that in characterizing an episode or a state as that of knowing, we are not giving an empirical description of that episode or state; we are placing it in the logical space of reasons, of justifying and being able to justify what one says.
That is to say, when we think about thinking, namely beliefs, we think of them as logically interconnecting with others and allowing us opportunities to check those beliefs against one another for consistency.

Sellars' final words (which he seemed to love so much he plagiarized himself in another of his writings):
[E]mpirical knowledge, like its sophisticated extension, science, is rational, not because it has a foundation but because it is a self-correcting enterprise which can put any claim in jeopardy, though not all at once.
What do you think? More to come, same Bat Channel.

Saturday, December 8, 2012

Chisholm and "the Given"

So far, we've looked at large-scale skepticism and also at what a definition of knowledge would look like. We'll set aside whether we have adequately addressed those issues. Now we'll turn to whether or not knowledge has foundations.

Let me try to make clear what it would mean for knowledge to have foundations. It is an open question whether knowledge is like a building, where some beliefs are more fundamental than others and the rest are built upon them, or whether knowledge is like a web of beliefs, or whether it is neither a building with a foundation nor a web of belief.

Philosopher Roderick Chisholm in his paper "The Myth of the Given" wants to defend a traditional view: that knowledge has a foundation or foundations. He specifically wants to defend the view that some of the foundations of knowledge are sense-data (or the like). But he wants to reject the idea that the only foundational knowledge is the set of sense-data.

So Chisholm accepts the following traditional principles:
(A) The knowledge which a person has at any time is a structure or edifice, many parts and stages of which help to support each other, but which as a whole is supported by its own foundations.

(B) The foundation of one's knowledge consists (at least in part) of the apprehension of what have been called, variously, "sensations," "sense-impressions," "appearances," "sensa," "sense-qualia," and "phenomena."
But he rejects the following principle:
(C) The only apprehension which is thus basic to the structure of knowledge is our apprehension of "appearances" (etc.)--our apprehension of the given.
Let's look at his argument. He wants to arrive at this idea, ultimately: "What justifies me in thinking I know that n is true is simply the fact that n is true." He wants to arrive at some basic belief that is of this form. His argument is basically a disjunctive argument against all the other positions.

Chisholm tries to exhaust the possibilities thus:
(1) One may believe that the questions about justification which give rise to our problem are based upon false assumptions and hence that they should not be asked at all.

(2) One may believe that no statement or claim is justified unless it is justified, at least in part, by some other justified statement or claim which it does not justify; this belief may suggest that one should continue the process of justifying ad infinitum, justifying each claim by reference to some additional claim.

(3) One may believe that no statement or claim... is justified unless it is justified by some other justified statement..., and that [the other justified statement] is not justified unless it is justified by [the first]; this would suggest that the process of justifying is, or should be, circular.
1. One may believe that the questions about justification which give rise to our problem are based upon false assumptions and hence that they should not be asked at all.
Chisholm addresses this objection by looking at philosophers' responses and rejecting them. He thinks that previous philosophers have used certain words, 'doubt' for example, ambiguously. I don't know how relevant his objections are today, so for the sake of argument let's assume that he has correctly refuted those people. Ultimately, Chisholm thinks we can ask about the justification of our beliefs, and when we do, we learn something about ourselves. We could ask, for example, why we believe we saw someone commit a crime. We could answer, for example, that we noticed the man wore the same jacket that, as was revealed at trial, the defendant wore. Etc. So he thinks this objection doesn't hold water, because we learn something when subjecting our beliefs to questions of justification and when we ask whether or not some beliefs ground other beliefs.

2. One may believe that no statement or claim is justified unless it is justified, at least in part, by some other justified statement or claim which it does not justify; this belief may suggest that one should continue the process of justifying ad infinitum, justifying each claim by reference to some additional claim.
Chisholm responds here to a specific philosophical objection from a fellow named Hans Reichenbach, but perhaps his point could be generalized to something like this. The claim that all beliefs are based on other beliefs before them, and other beliefs before those, and so on, must itself be justified by another belief; yet on its face it looks like a fundamental belief. The paradox is that if it is a fundamental belief, it is false. If it is a non-fundamental belief, and so a belief that needs further and further justification all the way down, then it is not clear how it is anything other than either a belief that requires other beliefs so as to cohere in an overall system or some fundamental belief after all. Again, if we allow that it is not fundamental, then perhaps it is just a belief that needs other beliefs in an overall coherent system. Indeed, it is hard to imagine what further belief could justify it. This isn't exactly Chisholm's argument, but assume it's one he would give, and assume it's true, for the sake of argument.

3. One may believe that no statement or claim... is justified unless it is justified by some other justified statement..., and that [the other justified statement] is not justified unless it is justified by [the first]; this would suggest that the process of justifying is, or should be, circular.
Chisholm wants to object to some coherence theory in the following way. He writes:
If we accept the coherence theory, we may still ask, concerning any proposition... which we think we know to be true, 'What is my justification for thinking I know that... [it] is a member of the system of propositions in which everything real and possible is coherently included, or that [it] is a member of the system of propositions which is actually adopted by mankind and by the scientists of our culture circle?' And when we ask such a question, we are confronted, once again, with our original alternatives.
So according to Chisholm, we are back to our original position. He writes:
When we have made the statement "There lies a key," we can, of course, raise the question "What is my justification for thinking I know, or for believing, that there lies a key?" The answer would be "I see the key." We cannot ask "What is my justification for seeing a key?"

... When we reach a statement having the property just referred to--an experiential statement such that to describe its evidence "would simply mean to repeat the experiential statement itself"--we have reached a proper stopping place in the process of justification.

... We are thus led to the concept of a belief, statement, claim, proposition, or hypothesis, which justifies itself.

... A statement, belief, claim, proposition, or hypothesis may be said to be self-justifying for a person, if the person's justification for thinking he knows it to be true is simply the fact that it is true.
So Chisholm basically argues that some beliefs must be fundamental: first, because we can always ask further what justifies our belief that a set of beliefs coheres, and second, because in ordinary language we just 'can't' ask what justifies our believing that we see something, for example, beyond the fact that we see it.
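Chisholm's stopping point can then be stated as a schema; the symbols here are my shorthand for the quoted idea, not his:

\[
p \text{ is self-justifying for } S \;\iff\; S\text{'s justification for thinking he knows that } p \;=\; \text{the fact that } p
\]

For basic, experiential p, the right-hand side is all the justification there is, and the regress stops there.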

Friday, December 7, 2012

Interlude: The problem(s) with eyewitness testimony

Astrophysicist Neil deGrasse Tyson tweeted recently about not getting to serve on a jury: when asked whether he would allow a guilty verdict on the basis of eyewitness testimony alone, he said no. This was shocking to me. As someone who loves law and legal procedure, and who knows the crucial role eyewitness testimony plays in the courtroom, I wondered how in the world Tyson could make this claim.

I tweeted him to ask whether he really thought there would be no situations in which eyewitness testimony would be sufficient for conviction. He did not respond to my tweet. However, the following day, presumably having gotten several tweets about this, he wrote on Twitter:
Eye Witness Testimony: High evidence to the Courts. Warped evidence to the Psychologist. Low evidence to the Physicist.
It was then that I recalled what I had learned in Psych 101. And I went Google searching for the reasons psychologists find trouble with eyewitness testimony. I will synthesize some of the information I found and break down why it would be problematic to rely on eyewitness testimony alone in legal procedure or as evidence in the sciences.

The problem of eyewitness testimony really breaks down into three problem areas: a problem of perception, a problem of memory, and a problem of testimony. First, perception. Please watch the following video, and please follow the instructions it gives.

[Embedded video: a selective-attention demonstration.]
Perhaps you have seen the video above before. Whatever the case, it and videos like it demonstrate that ordinary perception requires attention to details, sometimes, astonishingly, big fat details. If people are not careful, they will miss things that are right before their eyes. As the aforementioned Neil deGrasse Tyson once said, optical illusion books should be called perceptual failure books, since what is really happening is that your brain is failing to perceive properly what is right in front of your eyes.

The second problem with eyewitness testimony is the problem of memory. Remembering a fact or experience or situation is not like retrieving a file from a file cabinet. As much as it may seem at first glance that remembering is just revisiting the same memory, each time you remember something you are literally re-membering it: reconstructing it. And this interpretive and reconstructive process begins as soon as you receive the information. Consider being in a public place and overhearing an argument. This argument will later be a make-or-break piece of evidence in a courtroom. But at the time you don't know that. You only hear keywords. It's not your conversation. You don't know the people who are having the argument. So you are filtering out a lot of what you hear, and interpreting and reinterpreting the conversation, or failing to, as you hear more. Now, imagine later you are on the witness stand and asked to recall this argument. Imagine also that you now know some basic facts about the trial, the role this argument will play in court, the value of your testimony, and so on. Even with the best intentions, you are not simply replaying a conversation you gave only half an ear to; you have since had time to reinterpret it and perhaps magnify its importance. Don't you see how this could be a problem?

Finally, let's consider the problem of testimony. As in the telephone game, where a story begins one way and is inevitably filtered as it is passed along, relaying information loses and distorts it, just as in the memory case mentioned earlier. The actual testimony, the actual way you phrase the events that happened, can easily be shaped by the person you are talking to. For example, if you are testifying about a car wreck, a lawyer's subtle substitution of the verb 'crash' for 'hit' could make you think the wreck occurred at a higher speed: 'And when did you see the defendant crash cars with Mr. Smith?' This small locution creates a frame and makes the person testifying think and speak within that frame. There are other problems with actually relaying the information as well, because sometimes one's verbal skills are not as good at conveying facts as one's head might be at remembering them. There could be a potentially big gap between what a person thinks and what a person says. Again, all of this holds even if the witness is well meaning.

Most of what I have said has focused on the use of eyewitness testimony in the courtroom. But it could just as easily be applied to eyewitness testimony in the sciences, and could cover a whole range of things: supposed miracles people have seen, which could just be simpler, as-yet-unexplained phenomena, or UFO sightings, which, um, ditto. So I think the problems with eyewitness testimony should make us all more modest in our claims, and more vigilant about what we accept when people make a claim, and when we ourselves have to make one in the role of eyewitness.

And also we have to remember that ultimately problems of eyewitness testimony are problems of perception, memory, and testimony, all of which we commonly take to be sources of knowledge.

Thursday, December 6, 2012

Nozick, methods and knowledge

Philosopher Robert Nozick challenges the Gettier cases in his essay "Knowledge and Skepticism." He proposes a new definition of knowledge which, unlike Klein's, does not rely explicitly on the introduction of new evidence to a specific person, or on whether some new proposition might come along that defeats that person's previous belief. Instead, Nozick makes knowledge depend on truth and on a method for arriving at truth.

Nozick gives the following definition of knowledge:
S knows that p via M iff

1. p is true
2. S believes, via method M, that p
3. If p weren't true and S were to use M to arrive at a belief whether p, then S wouldn't believe, via M, that p.
4. If p were true and S were to use M to arrive at a belief whether p, then S would believe, via M, that p.
I'll explain this definition. It means that you know something when it's true, you believe it's true because you've arrived at it through some method or other, and two more things hold. If the thing weren't true and you used that method to arrive at a belief about it, then you wouldn't believe it. Conversely, if the thing were true and you used that method to arrive at a belief about it, then you would believe it.
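For readers who like symbols: philosophers write these subjunctive ('were... would') conditionals with a special arrow. Using $\Box\!\!\rightarrow$ for "if it were the case that..., then it would be the case that...", and $B_M(S,p)$ as shorthand for "S believes, via method M, that p," conditions 3 and 4 come out as (my transcription, not Nozick's own notation):

\[
(3)\ \ \neg p \;\Box\!\!\rightarrow\; \neg B_M(S, p) \qquad\qquad (4)\ \ p \;\Box\!\!\rightarrow\; B_M(S, p)
\]

Together these are often called the 'tracking' conditions: the belief tracks the truth across nearby possibilities.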

This raises the question of how a person discovers the truth, or what methods one must use. Can the methods conflict, for example? But now these are empirical matters.

Wednesday, December 5, 2012

Gettier cases strike back

According to philosopher Peter Klein, we can address the Gettier cases: we can keep justified true belief and add one more component, and adding that component will give us a good definition of knowledge. To summarize Klein's position again, knowledge is true belief a person has evidence for, such that if evidence to the contrary turned up in the future, the person would no longer believe it and it would not count as knowledge. So Klein's whole account hangs on there being no true sentence, no evidence, that would make the person give up the belief; if there were such a sentence, the whole account fails.

Philosopher Gilbert Harman examines the Gettier cases in his book Thought. From the Gettier cases, Harman thinks we can glean the following principle: "Reasoning that essentially involves false conclusions, intermediate or final, cannot give one knowledge." That is, in the course of one's reasoning, a person can't pass through a false conclusion and have the resulting belief really be knowledge. Harman doesn't want to argue with this principle; he thinks it is true. But he wants to find some way to accept the principle and still address the Gettier cases.

While looking at the Gettier cases, though, he indirectly addresses and challenges Klein's argument. Harman gives three examples, but I'll just address one, and I'll try to simplify it. It is about Tom, a guy you know. You saw Tom stealing a book from the library. You tell his mother, but you hear from her that Tom has an identical twin, Buck, who loves to steal books. You don't know much about Tom's family, so you assume this is true. Another friend tells you later that Tom's mother is insane. And now you don't feel so sure about whether Tom stole the book or not. But had you never heard the wrong information, you would never have had the doubt. So do you not really know what you thought you knew?

Harman wants to avoid paradox about knowledge, and he defines the paradox like this:
"If I know that h is true, I know that any evidence against h is evidence against something that is true; so I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that h is true, I am in a position to disregard any future evidence that seems to tell against h." This is paradoxical, because I am never in a position simply to disregard any future evidence even though I do know a great many different things.
Given that you knew it was Tom before you heard the misinformation, and given that you relied on the testimony of Tom's mother later, though it was wrong, you can nevertheless be confident that you knew it all along, although you had a doubt in light of the testimony.

So Harman squares off with Klein here. He challenges Klein's fourth condition for knowledge: if we accepted it, we would have to deny that people know things in situations where new true propositions turn up, as in the counterexample, for instance the true sentence that Tom's mother testified contrary to what you knew, or the true sentence that in similar situations a person would usually trust someone's mother. Harman thinks that in such situations we actually would know Tom stole the book, in spite of the new information that created doubt, whereas Klein, if he really accepted his four conditions for knowledge, would have to deny it.

Ultimately, Harman reconfigures the definition of knowledge. It looks something like this.

(i) p is true;
(ii) S believes p at t1;
(iii) p is evident to S at t1;
(iv) One's conclusion that p is not based solely on reasoning that essentially involves false intermediate conclusions.

So, with this principle, Harman can have his counterexamples and his definition of knowledge too.
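To put the rival fourth conditions side by side, in my own rough paraphrase (neither author writes it quite this way):

\begin{align*}
\text{Klein:} &\quad \neg\exists d\,\big(d \text{ is true} \;\wedge\; \text{if } S \text{ believed } d,\ S \text{ would no longer be justified in believing } p\big)\\
\text{Harman:} &\quad S\text{'s reasoning to } p \text{ involves no essential false intermediate conclusion}
\end{align*}

Klein quantifies over all the true propositions out there, whether or not S ever encounters them; Harman looks only at the reasoning S actually went through. That difference is what lets Harman count the Tom case as knowledge.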

I hope I explained this well. What do you think? And do you think this is a better definition of knowledge?