Philosophy on LiveJournal

Disagreement

Hey kids -- how do we feel about philosophical disagreement?

It seems plausible that two reasonable epistemic agents can disagree about some pretty fundamental things: features of the world, intuitions, moral judgments, etc. In fact, that seems to happen all the time -- take philosophy and real_philosophy, for example. I'm willing to extend reasonability to lots of agents in these venues -- we're largely well-read and formally trained in philosophy and critical thinking. Lots of us are articulate and perceptive and have finely tuned senses of right and wrong. Yet, we seem to be able to disagree about some really bedrock stuff.

But perhaps we can also argue that two reasonable epistemic agents ought, ceteris paribus, reach the same conclusions if given the same evidence. If this is true, it seems like what we take to be cases of genuine disagreement will turn out to be merely the appearance of disagreement -- given the same rules, evidence and background assumptions, we'd all assent to the same sorts of statements. There's one right answer, and we'd all reach it if we'd just correctly apply the rules of rationality to the problem at hand.

Thoughts?

Comments

Which is funny, given lunkheads like vap0rtranz who just want us to homogenize, as differences (inequalities) are unjust!

I wouldn't call intuitions "bedrock." Intuitions are just shorthand for something, depending on which story you buy, but I'm inclined to believe that they're shorthand for general ideas we formed (or were given) during our upbringing. Since not everyone gets the same ideas, not everyone has the same intuitions.

However, I think that disagreement is a good thing, because it (among other things) rules out intuitions as sources of knowledge. I think that certain things -- maybe morals, maybe others -- depend on intuitions to get anywhere, but I don't think that intuitions are a good source for real knowledge because I think that they're more subjective than anything else.

That being said, I'm not sure I'm comfortable with the idea that there's only one right answer. Disagreement in this area is just an unavoidable sign that uniform consensus isn't possible. Don't get me wrong -- I might be willing to buy that if we made our experiences, beliefs, and understanding of any particular problem sufficiently similar, we might come to the same solution. However, that doesn't mean that we could ever actually reach that point. I think that much of what we take for granted is actually more personal and more subjective than we realize, and that as long as we're operating in different spheres we're never going to come to the same conclusions -- and whatever conclusions are reached are only the conclusions reached in that sphere and not in others.

Of course, I'm not saying that everything is hopelessly subjective, just that subjectivity is more of an issue than many people think it is. But since subjectivity engenders different viewpoints, the collection of those viewpoints might approximate something correct, or at least more correct than any individual viewpoint.

Thank you for a nicely thought-provoking entry.

It does seem plausible that two reasonable epistemic agents can disagree about some pretty fundamental things. Interestingly, the plausibility remains even if we not only omit the word 'reasonable,' but qualify the agents as irrational—as lacking not merely proper operation of intellect, but also the intellectus itself. For salamanders, scorpions and squirrels are all (to my knowledge) cognitive agents, and so can "disagree" with respect to certain features of the physical world, objectifying differing things due to a difference in Umwelts (briefly: individual worlds of objects of awareness).

The addition of intellect to animal intelligence creates the possibility of judgmental differences. Only intellectual animals who can add a relation of self-identity to the world, viewing it as existing in its own right, can potentially metaphysicize, speculate, ask questions about the way things "really are," questions which may generate intellectual (and not mere sense-perceptual) disagreement. Reality-modeling will inevitably differ from individual to individual, with both overlap and difference.

Reasonable epistemic agents are those which not only possess intellect, but possess it to a properly functioning degree, meeting the various constraints of rationality of which contemporary epistemologist Alvin Plantinga speaks in the final book of his series on 'warrant.' Ought two of them reach the same conclusions if given the same evidence? I am inclined to say no. Why? Because if we mean 'evidence' in the ordinary sense, including what is public to all (and don't forget: even "private" mental states are potentially public through various channels of communication), the differences in psychological and physiological constitution will generate Umweltian differences, differences in our interobjectivity. We may co-objectify the same things, but that doesn't mean we will objectify them in the same manner.

It is significant that one of the most obviously alethic absolutist traditions, the Thomist tradition, a tradition which defends the existence of Truth with a capital 'T,' has within itself an account of why this is so. It has to do with the very nature of existence. All existent things with limited perfections -- finite natures, essences that differ from their act of existence -- share in Being only to some extent, and thus also possess nonbeing. Because any finite thing is in this way a mixture of being and nonbeing, an intellectual agent can perceive it as good or bad in some respect, depending on which aspect it attends to.

Of course, if there is a truth about the being and nonbeing in a particular thing about which we are trying to reach dialogical solidarity as reasonable agents, then we should be able to arrive at the truth if we are meticulous enough. But given the nature of passion, habit, and other features of the psychology of the rational agent (which is a rational animal, and not merely a thinking thing, contra Cartesianism), it is no easy task!

Also, if the right answer does turn out to be a unity, a single Truth, we will also need to take into account the possibility that an aversion to the nature of this Truth may influence our judgments concerning that Truth. For such a possibility, read Søren Kierkegaard's Sickness Unto Death—essential reading for any student of philosophy.

If two people who receive the same rules, evidence and background assumptions are presumed to make the same choice, then we are nothing but robots. We are vending machines, and whenever F4 is pressed, we dispense a Twix bar.

I'd like to think disagreement is good, but sometimes it's just wrong. I'd rather not disagree with some fundamentalist Christian who believes the earth is 4000 years old. That kind of disagreement can go. More reasoned disagreement can stay -- and in my view that's all I've seen on these communities.

The question, however, about two people with "the same rules, evidence and background assumptions" describes a practical impossibility. Our assumptions help form who we are, and we are inevitably different from anybody else. If you're merely asking hypothetically, then I suppose yes, there is one right answer -- given specific data to work from.

The problem is, all of us -- every last human -- are missing a few marbles.

I think that shared rules, evidence, and background assumptions are not sufficient for the kind of convergence you want, at least when we take each of these terms in their fullest generality.

Here's an obscure example loaded with technical terms that I would explain if I weren't aware of how much work I still need to do tonight: there's a theory of statistical induction based on the minimum description lengths of data d and hypotheses h in an appropriate universal language. The inference rule is that one should choose the hypothesis h which minimizes

K(h) + K(d|h)

where K(h) is the Kolmogorov complexity of h -- meaning the length of the shortest possible program that generates h -- and K(d|h) is the length of the shortest program that generates d when given h as an input.

This is actually really well motivated as an inference rule, I swear to god. It turns out to be equivalent to Bayes' rule, under some pretty general assumptions.
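
(Roughly why, for the curious -- a sketch assuming the usual "universal prior" setup: take P(h) proportional to 2^(-K(h)) and P(d|h) proportional to 2^(-K(d|h)). Bayes' rule then says to prefer the h with the largest posterior, i.e. the largest P(h)P(d|h), and maximizing that product is the same as minimizing

-log2 P(h) - log2 P(d|h) = K(h) + K(d|h)

so, up to additive constants from the normalization, the MDL-optimal hypothesis is just the Bayesian maximum-a-posteriori hypothesis under that prior.)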

The problem is that Kolmogorov complexity is an uncomputable function -- no algorithm exists that returns K(h) for arbitrary h -- meaning that even if one could think about it for an infinite amount of time, there are cases of h for which you can't compute K(h).

It's late, but I think that means that convergence isn't guaranteed given shared rules, evidence, and background assumptions, even if you generously give philosophers the additional benefit of an infinite amount of time.

You could probably get around this sort of objection by adding some constraints as to what you mean by "rules, evidence and background assumptions"--requiring only computable rules, for example.
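
To make that concrete, here's a toy sketch in Python (nothing rigorous -- the helper names are made up, and zlib is just a crude, computable stand-in for K) of picking the hypothesis that minimizes a computable two-part cost:

import zlib

def code_len(b):
    # Computable stand-in for K(x): the length of the zlib-compressed bytes.
    return len(zlib.compress(b))

def two_part_cost(h, d):
    # Approximates K(h) + K(d|h): the cost of stating h, plus the extra
    # bytes needed to encode d once h is already available as context.
    k_h = code_len(h)
    return k_h + (code_len(h + d) - k_h)

data = b"01" * 32
hypotheses = [b"alternate 0 and 1", b"a run of fair coin flips", b"all zeros"]
best = min(hypotheses, key=lambda h: two_part_cost(h, data))
print(best, two_part_cost(best, data))

Of course, once you insist on a computable surrogate like this, you're already in the "added constraints" territory rather than the original formulation.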

But while I think it would be possible to get a formulation of the problem that swings your way, I have doubts about the real applicability of that sort of model. Most of these reservations are about your "shared assumptions" condition. If you are operating within a kind of classically foundationalist framework--we start with some assumptions, and make truth-preserving inferences from there onwards--then yeah, you would arrive at agreement.

But it seems like in reality we start from very different background assumptions, and our inferences (even our philosophical ones) seem to be remarkably non-monotonic. So I don't think that we can in practice claim with absolute confidence that there is a single point of convergence for all our philosophizing. There may always be, in principle, multiple attractor states. (This is my main reservation about convergence theories of truth in general....)

An inference rule that is equivalent to Bayes' rule commits us to an uncomputable function? Really?

I should read what everyone else wrote but come on, we all know I am much too lazy for that.

This might be a tangent, but I think it's related, so beware: lately I can't escape the feeling that philosophical disputes are largely, at bottom, disputes over primacy. Since primacy, in the sense I'm using it, is a made-up term, I'll define it: I think what philosophy does is manage to find two (or more) beliefs that most people would generally have, and show that they (through their logical implications) contradict each other. The philosophical dispute then becomes: which belief is really more primary? Which one is more "rational" to give up?

Here is an example: recently I was talking to a philosopher who wrote a book which concluded (among other things) that the only way moral judgments can motivate an agent to act is, at bottom, deviant. His theory was based on some conclusions of cogsci; basically, he claimed that the brain patterns that would have to occur in order for an agent to act strictly on a moral judgment have only been observed in patients with Tourette's during an episode. Now, obviously there are issues with this line of reasoning. But it got me wondering: if some finding in cogsci really led to the consequence that acting on moral judgments would be identical to having an episode of Tourette's, why should we conclude that moral judgments can't motivate an agent to act and not that cogsci is bullshit?

Ok, I think I'm rambling. Blah blah blah.

Just curious, how does the study map the brain patterns for a moral judgment?

I think saying "two reasonable epistemic agents" ought to agree needs to be strengthened to "two ideally rational epistemic agents" ought to agree. You can be reasonable without being fully reliable, and so if you just "get something wrong" you could disagree with someone even given the same rules, evidence and background assumptions: you'd just be wrong about some of the entailments.

What do you mean 'ideally'?

Well, part of the problem I think is the way we frame our epistemic inquiries themselves. I think there are two strains of concerns going on: (1) can we produce a set of rational rules that address some of the most fundamental features of ontology or moral judgments; and (2) what constitutes "evidence" in such epistemic inquiries?

As far as the first part goes, we'd need some agreed-upon and justified set of second-order rules and criteria for rational inquiry in general; we can likely manage that, although it's certainly subject to disagreement at this point for some. Regarding the second part, if we take the evidence to be "intuitions", i.e. ones contemplated purely as necessary intuitions of general ontology or morality, then such intuitions are considered intellectually, without any appeal to experience -- we are concerned with fundamental matters, after all. If we take the evidence to be something empirical, gathered from repeated iterations of data-sets and so forth, then whatever we get by applying rules to it is only going to be contingent, deniable and not really fundamental.

I think we can safely dismiss evidence of anything fundamental if we have to appeal to experience time and again for our evidence. But can we dismiss rational intuitions? After all, can't we come up with agreed-upon rational intuitions that coalesce as brute starting points, from which we can generate more rational truths concerning the world? Here, I just plain don't think we can come upon any strict agreed-upon intuitions that we can balance out amongst rational agents.

Any such rational intuition is really problematic because a justification for such intuitions will have to masquerade as something universally true independent of the perspective of the subject's necessary conditions for knowledge, while still being really dependent on the subject to start with. Even if there is a convergence of rational intuitions by rational agents, they are certainly subject to denial or will meet otherwise-conflicting rational intuitions. Not only are we going to fail to produce objectively valid, epistemic evidence for fundamental ontological conditions, but we are also going to fail with regard to moral conditions. Either way, we're just plain not going to form any fundamental knowledge about ontology or morality by applying certain rational rules to intuition pumps.

We might have to look elsewhere, change our normative considerations about such epistemic inquiries, or stop holding our breath for an absolute epistemological revelation concerning such first principles.

So it's currently 3am, and I hope this makes some sense.

I don't know if you're familiar with Calhoun's theory of emotion, but it's rather interesting in that half of the paper in which she introduces the theory is spent rejecting cognitivism, the view that emotions are beliefs, before she introduces her own cognitive theory of emotion. Where cognitivism has traditionally gone wrong, she thinks, is the assumption that if emotions are cognitive, they have to be beliefs. She argues that emotions are caused not by beliefs, but rather by associations and patterns of attention at a prereflective level, which she calls "cognitive sets." The contents of cognitive sets are not beliefs, but it is possible upon careful introspection to bring the contents of the cognitive sets into the realm of our beliefs. This, she argues, would be a big step towards becoming an ideally rational agent.

Perhaps the concept of a cognitive set can also explain the type of disagreement you bring up. The members of this community may be reasonable and intelligent, but we're far from ideally rational. Maybe a source of disagreement about very basic stuff stems from some prereflective preference we have towards accepting some premises over others. If we were ideally rational, the contents of this unconscious cognitive set would be brought to the light of our beliefs and could be critically examined and subsequently accepted or rejected. But, as it stands, if all of us hold prereflective dispositions towards the way we see the world, we're never going to have complete convergence in our arguments.

So philosophy has come 4,000 years to repeat Socrates? My optimism has shot up dramatically.

But perhaps we can also argue that two reasonable epistemic agents ought, ceteris paribus, reach the same conclusions if given the same evidence.

I don't see how this can be true if the available evidence is insufficient to reach a certain conclusion.

The Language Game.

“It seems plausible that two reasonable epistemic agents can disagree about some pretty fundamental things: features of the world, intuitions, moral judgments, etc.”

Actually it is possible that the SAME epistemic agent can have different perspectives about the same object. When I was an undergraduate, I worked in a cafeteria after school. Another guy and I were scrubbing pots and pans while talking about philosophy. He did not believe that it was reasonable for two people to have two perspectives about the same subject. I asked if he would believe two people could have a reasonable disagreement if I could prove to him that he could perceive the exact same thing in two different ways. He said yes.

I told him that the following experiment comes from Berkeley: I asked him to put some ice in a bucket with some water and then put one of his hands in it. He did this while we continued to talk. After a few minutes I asked him to place both his hands into the rinse water. He did so, and the look on his face told me he had an idea of what I was going to ask next. I asked him what temperature the rinse water felt like. He said it depended on which hand he used to guess. So I asked if he now thought it possible that two people could have different perspectives about the same thing, and he said yes.

The problem is fundamental to the human condition. What makes it worse is when people try to be obscure, vague, disrespectful, or imprecise, among the many other things that go on here, especially hypocrisy. I recall being slandered by a moderator in a community where it was made explicit that slander was not allowed. Given these conditions, it is no wonder that partisanship is the rule and not the exception here. Now the next task is to select a few teammates and play the language game. =P

There's one right answer, and we'd all reach it if we'd just correctly apply the rules of rationality to the problem at hand.


We're only rational insofar as it's been evolutionarily advantageous; beyond that, we're irrational. Moreover, philosophers, I gather, are still defining the rules of rationality -- thanks especially to the limits and challenges raised by cognitive science.

I think communicative discourses, such as philosophy or science, help to develop rationality both as a concept and within the participants. In the former, rational rules are sought out as the agents disagree, so that somebody wins. In the latter, participants learn to value careful judgment so that they can both challenge the concept of rationality and contribute to it within their chosen discourse, such as, again, philosophy. This is progress, but I don't think it ever ends, because irrationality always has some presence; and when it is overwhelming, it actually reverses our progress, e.g., the Dark Ages.

I think a fight to the death is the only reasonable choice.

The problem of universals leads reasonable people to different conclusions from the same evidence. All evidence is given as particulars. Rejecting nominalism depends on transcendental arguments that fall short of syllogisms.

Reasonable consensus regarding the problem of other minds likewise founders in the gap between syllogistic and analogical inference.
