Monday, 13 November 2017


On Subjectivism:
My Opinion Is As Good As Your Truth

    "There is nothing so ridiculous that has not at some time been said by some philosopher."              - Oliver Goldsmith
At our November 8th meeting where we discussed "Post-truth," the OED's 2016 "Word of the Year," we were instructed by some of our friends that truth doesn't exist.  More precisely, the claim was that objective truth does not exist. What we call 'truth' is really just subjective opinion, or "truth as seen by you." "There are no facts; only interpretations."  What you believe is true for you, and what I believe is true for me. Everyone is entitled to their opinion, and one opinion is as good as another.  What people call news is just "fake news," if I don't like it. Everyone is biased.  There is no way to decide who is telling the truth about anything.  For every fact, there is an "alternative fact."  That's just how it is.  Hillary Clinton won the popular vote? - that's just your opinion.  Maybe so, maybe not.  Trump claims he won it, and who are we to say he is wrong?  The earth is flat?  If you say so; why not? 
This sophomoric point of view is called epistemological or cognitive subjectivism (close cousin to epistemological relativism).  It's a 'theory' that has gained a strong following in the last 50 years or so in western culture, thanks to postmodern philosophy and social media, producing as their final triumph the election of Donald J. Trump as the President of the United States.  Thanks to the arch-subjectivist Trump, we are now officially living in a "post-truth" age.  Radical postmodern epistemology is moribund now in the halls of academia, but its popular version lives on in the thinking of millions of our fellow citizens, especially in social media where mere opinion seems to be all that matters.  Try to argue for something you think is true in 140 characters.

Opinion: a belief or judgment about something, not necessarily based on fact.
There is, of course, a subjective aspect to any thought we may entertain, because thinking is carried out by conscious persons or subjects. If I think sharks are endangered worldwide, that opinion is subjective because it's mine. I'm the one who believes it. Every opinion takes place first in somebody's mental world. That's the realm of subjectivity: our thoughts, imaginings, feelings, dreams, sensations. In that respect, subjectivists are right to claim that all beliefs or statements about the world are subjective opinions. Thing is, some of them are also true. Sharks are endangered worldwide, and I can produce the facts to support my opinion. It's not a mere opinion. Opinions that are true have an objective aspect as well as a subjective one. Of course, if I do not know the facts about sharks, I can offer only a subjective opinion.
Now subjectivists may not roll over before this commonsensical objection.  They may claim that the so-called 'facts' are also nothing more than opinions, so we will need to come up with a stronger, more philosophical critique. 
Our search will not be in vain. Subjectivism and other radical relativisms collapse under careful scrutiny. First up is a practical problem. We can be quite confident that subjectivists do not act on their own theory in the real world. When checking out at the grocery counter, they don't want to hear from the clerk that in her opinion the amount owed is x number of dollars. They will insist on the actual total, demanding a digital recalculation if necessary. If diagnosed with a serious illness, they will seek a second opinion in an attempt to get a better handle on the truth about their condition.
Secondly, subjectivism is pseudo-philosophy, passing itself off as the real thing.  Philosophy has always been understood as the love or pursuit of wisdom but is more precisely defined as reflective "inquiry into knowledge, truth, reality, reason, meaning, and value" (A.C. Grayling, Ideas That Matter).  Since there can be no wisdom and no rational inquiry without truth, subjectivism is not really a philosophical theory at all, but rather a kind of anti-philosophy, a negation of the philosophical project altogether and, by the way, of science as well.  
However, the main philosophical problem with subjectivism is that it is self-defeating.  The subjectivist wants us to believe that there is no truth, only opinion, but he does not offer his view as a mere opinion.  No indeed.  He wants us to accept it as a truth, an objectively true generalization about all knowledge claims made by anyone anywhere.  Subjectivism, he insists, is the one true or correct way to think about beliefs.  But his theory says there is no truth, no true beliefs about the world, only opinions.  This is incoherent.  The subjectivist is caught in a performative contradiction, a trap of his own making.  If he believes his theory, he can't teach it, and if he teaches it, he can't believe it.  
That is why during the discussion we heard no argument in support of subjectivism, only a loud insistence that "There is no objective truth; only subjective opinions!"  There can be no supporting argument, of course, for that would require reasons leading logically to the subjectivist's conclusion, reasons which would have to be true in order to be supportive of the theory.  Moreover, all logical reasoning presupposes certain a priori truths, such as the rule 'Modus Ponens': If a statement P implies Q, and P is true, then it follows necessarily that Q is true.* But the theory says there is no truth, so there can be no logic, no true reasons, and the whole project collapses into nonsense.
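For readers who like to see the machinery, here is Modus Ponens written out in the Lean proof assistant - a sketch added purely for illustration; the post itself states the rule only in prose:

```lean
-- Modus Ponens in Lean: from a proof h of P → Q and a proof p of P,
-- the application h p yields a proof of Q.
example (P Q : Prop) (h : P → Q) (p : P) : Q := h p
```

The formalization makes the same point as the prose version: even the most minimal step of reasoning presupposes that some statements are true.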
It may often be a difficult needle to find in the global haystack of opinions, but …


[Image from The X-Files (of course)]

Postmodern subjectivism/relativism seems to be fading from the academic world (see, e.g., The Passing of Postmodernism here), so most discussions of relativism these days focus on moral relativism. For my treatment of that topic, see the next post on this site.

Critique of Ethical Relativism
Ethical relativism is the philosophical theory that there is no such thing as objective moral truth.  All judgments about right and wrong, good and evil, are relative to cultural traditions.  The version of ER discussed last night (PC meeting, December 2013) is based on a generalization I call Descriptive Relativism 1:  different cultures hold conflicting beliefs about right and wrong.  That is a well-known fact which does not provide a basis for any interesting ethical conclusions.  Different cultures hold different beliefs about all sorts of things:  origin of the world; causes of diseases; the best form of government, etc.  So what?
A more sophisticated premise would be this one; call it Descriptive Relativism 2: different cultures hold conflicting fundamental beliefs about right and wrong. The distinction is important because many apparent moral differences among cultures disappear when they are traced back to their deeper ethical assumptions. For example, the Aztecs believed human sacrifice was justified because they thought it was necessary to ensure the sun would rise each day. We share the underlying value judgment (that practices which preserve the society in important ways are justified) but disagree about the human sacrifice, because we know sunrises are governed by physical laws and are entirely unaffected by anything humans might do here on earth, even dramatic rituals like human sacrifice. Both the Aztecs and we moderns agree that policies which benefit our societies in important ways are good. That's a shared fundamental value. So the dispute is about factual assumptions about how the universe works, not about fundamental values.
For a contrasting example, consider the Navajo game called Chicken Polo.  They bury a chicken in the ground up to its neck and then compete on horseback to see who can knock the chicken’s head off.  They freely admit that chickens feel pain but see nothing wrong in the game nonetheless.  Our culture, of course, regards such activities as morally wrong, because we hold as a fundamental principle that it is wrong to cause pain to sentient creatures for trivial reasons.  The Navajos do not hold that principle, so the cultural difference in this case is fundamental.
A proper discussion of ethical relativism must focus on Descriptive Relativism 2.  The question then becomes: if different cultures hold differing fundamental beliefs about right and wrong, good and evil, does this fact entail ethical relativism?
Descriptive relativism in either version is an empirical generalization, not an ethical theory.  Ethical relativism is a theory about moral truth.  Some versions of it use descriptive relativism as a basis for ethical relativism.
Using DR2 as the main premise, here is how the argument for ethical relativism goes:
Basic Argument:
Different cultures hold conflicting fundamental beliefs about right and wrong (DR2).
Therefore, all moral values are relative.
The conclusion can be stated in several ways:
a. Therefore, actions believed to be right in a given culture really are right, and those same acts, if thought to be wrong in another culture, really are wrong.
b. There are no grounds for choosing between conflicting moral beliefs.
c. There are no universal ethical principles, no ethical truths, no ethical knowledge.
d. It is wrong to judge another culture’s morals by our own standards.
Here is an interesting, although inconclusive, observation:  most relativists do not really believe their own theory.  Ask a relativist if she is really willing to accept that, if some culture believes that torturing children is morally permissible, then torturing children really is ok in that culture.  The response to my asking this question is usually dead silence.
Here are the more philosophical objections to ethical relativism (ER):
1. ER can't account for the fact that we reason and argue with one another about value issues. If there were not at least one ethical constant, we could not reason about values at all.
2.  ER can’t explain the many fundamental ethical similarities among various cultures, e.g. the universal prohibition against murder.
3. There is a lot of agreement among cultures about basic norms. To justify the sweeping conclusion "All values are relative," the Basic Argument would need far more fundamental disagreement than in fact exists.
4.  How to identify a culture?  (pluralistic societies, subcultures, minorities?)  How many members of a group are required to establish a cultural norm?  Is a motorcycle gang a culture?  Is the Muslim minority in Canada a culture?  Is there an authentic Canadian culture at all?  In a multicultural nation, whose culture gets to decide morality?
5.  Why should culture be the arbiter of moral truth?  Why not “All values are relative to the whims of the king?” or “All values are relative to the preferences of each individual (subjectivism)?”  Standard ER is unable to answer this question.
6.   The truly fatal flaw in ER is that the Basic Argument is invalid, a logical fallacy.  The premise is an empirical generalization, a statement of fact, whereas conclusion (a) is a statement about values.  It is a widely accepted axiom in philosophy that we cannot validly infer a conclusion about right and wrong from a premise that is merely factual.  For example, the fact that stealing is common does not entail that stealing is morally right.  Thus, the conclusion of the Basic Argument does not follow from DR2. 
   Conclusions (b) and (c) are also fallacious, because they are epistemological statements, which likewise cannot be inferred from an empirical premise. In the case of (c), the argument is analogous to reasoning that because people at various times in history have held contrary beliefs about the shape of the earth, there is therefore no truth about the shape of the earth. Similarly, the fact that the Navajo and most Americans disagree about the morality of Chicken Polo does not imply that there is no truth about whether Chicken Polo is right or wrong. Conclusion (d) is wrong for a different reason: it assumes some standard of right and wrong, thereby contradicting (b).
These arguments constitute a serious challenge to ethical relativism. But it gets worse. Consider the more radical attack by Julien Beillard in his essay "Moral Relativism Is Unintelligible" (Philosophy Now, July 15, 2013). He argues there that it makes no sense to say that morality is "relatively true."
Beillard begins with the usual critique of cultural relativism based on DR1. He then points out that mere disagreement of opinion - in any field at all - does not imply that there is no objective truth about the issues.
Then he makes the interesting argument that ethical relativism is self-defeating.  For, if the position is based on the fact of deep and abiding differences of opinion, there arises the problem that ethical relativism itself is an object of considerable deep disagreement among philosophers.  Does that mean that ethical relativism itself is not objectively true but only relatively true?  Seems so.
However, JB's principal aim is to go beyond showing that ER is false, implausible, or self-defeating.  He wants to show that it is unintelligible, i.e. that there is no concept of truth that can be used to frame the thesis that moral truth is relative to the standards or beliefs of a given society (or king or other individual).   Check out the article for more on this interesting approach.
Whether a consensus will develop around this approach remains to be seen, but it looks like another serious blow against ethical relativism.

Tuesday, 7 November 2017

Heap or No Heap: A Possible Solution
Philosopher Timothy Williamson begins his essay on vagueness with the following thought-experiment:
"Imagine a heap of sand. You carefully remove one grain. Is there still a heap? The obvious answer is: yes. Removing one grain doesn’t turn a heap into no heap. That principle can be applied again as you remove another grain, and then another… After each removal, there’s still a heap, according to the principle. But there were only finitely many grains to start with, so eventually you get down to a heap with just three grains, then a heap with just two grains, a heap with just one grain, and finally a heap with no grains at all. But that’s ridiculous. There must be something wrong with the principle. Sometimes, removing one grain does turn a heap into no heap. But that seems ridiculous too. How can one grain make so much difference? That ancient puzzle is called the sorites paradox, from the Greek word for ‘heap’." 
-  Timothy Williamson, Aeon, "On vagueness, or when is a heap of sand not a heap of sand?" *
Williamson is right: there is something wrong with the principle "removing one grain does not turn a heap into no heap."  One solution proposed by some contemporary philosophers is to replace traditional logic by something called 'fuzzy logic.'  Curious readers can consult Google for more information about fuzzy logic, if they wish, but TW thinks that approach doesn't work, and besides, there is nothing broken here that needs fixing.  He insists that traditional logic works just fine.  The problem confronting us in the sorites paradox, he writes, is vagueness and ...
"Vagueness isn’t a problem about logic; it’s a problem about knowledge. A statement can be true without your knowing that it is true. There really is a stage when you have a heap, you remove one grain, and you no longer have a heap. The trouble is that you have no way of recognising that stage when it arrives, so you don’t know at which point this happens."
I agree with TW that the common use of the word 'heap' is loose and vague, but he throws in the towel too soon.  Isn't it part of the job of philosophers to try to clarify troublesome terms so that they might better serve us in answering tough questions?  TW declines to do this and so leaves us with the puzzle unsolved on account of terminal vagueness.  I will argue here that there is a way of recognizing the stage when no-heap arrives and that the solution requires no exotic logical moves, just one additional concept.  Here we go.
As in many other discussions of a philosophical problem, it is legitimate here to ask for a definition of the term 'heap.'  At the outset it is vague indeed, and leaving it vague practically guarantees that no logical solution will be found.  But why should we accept the initial vagueness? 
Before offering a definition of 'heap,' I want to contrast it with another, not-so-vague term: holon. A holon (a term coined by Arthur Koestler) is any entity that is a whole and also a part of a larger whole.
Holons have a number of characteristics, but the crucial one for this discussion is that every holon has internal structural integrity or agency, that is, an internal force or principle that enables the holon to resist dissolution into its component parts. When a holon becomes part of a higher holon, the principle of functional organization of the lower is conferred or imposed by the higher holon. For example, a protein molecule is a holon with its own agency, and it is also part of a higher holon, a cell, which organizes the activity of the protein for its own benefit. If the cell moves around, the protein molecules and all the rest of the cell's component parts move with it; none of them can stay behind or go off on their own. For another example, an atom is a holon within which the strong nuclear force and the electromagnetic force keep the component parts - protons, neutrons, and electrons - together as a functioning unit. The higher-level holon imposes its own agency on the lower components. Thus, when the atom changes its location in space, the whole atom moves. None of the components are lost or left behind. (Nuclear fission is an exception.)
Now we can define 'heap' as any collection of holons whose agency is not superseded by the higher agency of the collection.  Each member is a holon, but the collection is not a higher holon; it has no agency of its own and hence no power to organize the activities of its members; no power of self-preservation.  So if a member holon disappears or moves to another location, the rest of the collection does not disappear or move with it.  Any such collection we can call a 'heap.'
This definition provides the basis for a solution to our puzzle.  Recall Williamson's principle:  "Removing one grain doesn’t turn a heap into no heap." By defining 'heap,' we have removed the vagueness from the principle.  Since the principle is no longer vague, there's a greater chance that it can be used to solve our puzzle.  Let's take a look.  
We begin with our pile of sand.  It is obviously a heap, but now we understand precisely what that means:  it is a collection of grains, each of which is a holon whose agency is not subordinated to the agency of a higher holon.  Next we apply the principle: removing a single grain of sand doesn't turn the heap into no heap.  The remainder is still a heap - a slightly smaller heap, but still a heap.  Remove another one and the heap is still a heap, not a no-heap.  Continue removing one grain at a time.  The heap becomes smaller and smaller.  At what point does it cease to be a heap?  In the original puzzle, there seemed to be no clear answer to the question; hence the paradox.  With our  new definition of 'heap,' there is a precise answer:  as grain after grain of sand is removed, we eventually arrive at just two grains.  Is it still a heap?  Yes!  It is still a collection of holons whose agency is not taken over by a higher holon.  It's a very small heap, but still a heap.
Now remove one more grain.  Do we still have a heap?  No, because there is no longer a collection.  No collection, no heap.  The remaining grain of sand is a holon, standing alone with its magnificent self-contained agency, no longer a member of a collection with which it was only loosely connected in the first place.  So the answer I propose to our puzzle is that a heap ceases to be a heap when the number of member holons has been reduced to one.  In other words, the boundary between heap and no-heap is the boundary between the last grain of sand and the second-to-last.
This solution seems sound, but it requires us to amend our guiding principle to read "removing one grain does not turn a heap into no heap unless there are only two grains in the heap."
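For readers who enjoy seeing a definition made mechanical, here is a minimal sketch in Python - my own illustration, not part of Williamson's discussion - encoding the amended principle that a heap is any collection of two or more holons lacking a higher agency of its own:

```python
# A toy model of the amended principle. The two-grain threshold encodes
# the definition proposed above; the function names are illustrative.

def is_heap(grains: int) -> bool:
    """A heap is a collection (two or more member holons) that is not
    itself a higher holon; a single grain is a holon, not a heap."""
    return grains >= 2

def remove_one_at_a_time(start: int) -> None:
    """Remove one grain at a time and report where heap turns into no-heap."""
    for n in range(start, -1, -1):
        label = "heap" if is_heap(n) else "no heap"
        marker = "  <- the boundary" if n == 1 else ""
        print(f"{n} grain(s): {label}{marker}")

remove_one_at_a_time(4)
# 4 grain(s): heap
# 3 grain(s): heap
# 2 grain(s): heap
# 1 grain(s): no heap  <- the boundary
# 0 grain(s): no heap
```

On this model there is no paradox: the predicate 'heap' has a sharp boundary, exactly where the proposed definition puts it.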
_____________
* Click here to read T. Williamson's essay.


Thursday, 21 September 2017

How Free Do You Need to Be?
Philosopher Daniel Dennett posed this question to someone who asked him whether humans have free will.  I borrowed it for a discussion of the topic at our recent Philosophy Night Live meeting.  The precise topic was "Why do we need ethics if we live in a deterministic world?"  The conversation focused mainly on the freedom/determinism issue, and we never got to the ethics part of the question.   More on that later. *
Meanwhile, let's take a closer look at the general issue and then at Dennett's interesting question. Determinism is simply the commonplace view that events in the world have causes; at least in theory, they can be explained by identifying the relevant events or conditions that preceded them. A traffic accident causing injury and property damage, for example, was the result of reckless driving by a drunken driver. But other conditions were necessarily in play as well: the car he was driving (no car, no accident); the driver's failure to use a safety belt, perhaps; the failure of an air bag; the unfortunate presence of another vehicle which was struck in the incident, etc. (No event has just a single cause.)
Determinism bothers many people, because it seems to rule out free will.  Most of the time they feel they are acting freely.  On the other hand, belief in free will seems to imply that some events, e.g. deliberate human choices, have no cause; they are exempt from the universal law of cause and effect.  But that seems absurd: a human act that had no cause would be simply random, unexplainable, and how would that be different from an act of insanity?  How to resolve this paradox?
One solution is offered by libertarians (not the political variety). Libertarianism is the view that, although most of our everyday actions are automatic or semi-conscious - the result of habit or physical reflexes, and therefore determined - at least those actions that result from careful, rational deliberation about the available alternatives (e.g. Which university should I attend?) are not determined by any pre-existing conditions. The self, libertarians insist, is the sole cause of the decision.
But who or what is the self, and how could anyone know for sure whether even their most carefully considered decisions were not the result of pre-existing conditions?  Modern depth psychology has exposed too many of our unconscious motives, biases, suppressed emotions, and other 'shadow' material for us to ever be very confident about how free any of our choices really are.  Not to mention the neuromaniacs who think everything humans do is determined by fired-up neurons in the brain.  Libertarianism is unconvincing. 
Let's be honest. In everyday life we are all determinists. If, after investigating a burglary at your home, the police told you the alleged burglary had no cause - it just happened, your stuff just vanished, no burglar, no other explanation, forget it, case closed - you would probably sue the department. Our common intuition is that everything that happens has a causal explanation, even if we never discover in a given case what the explanation is.
To be free or not  to be free - that is the question, a question that is as old as philosophy.  At the meeting, we briefly discussed a modern interpretation that seems to offer a way out.  The theory, favored by most philosophers today, is called compatibilism (or sometimes soft determinism).  The basic insight of this theory is that the traditional debate sets the bar for defining 'freedom' too high.  This was the point of philosopher Daniel Dennett's question, "How free do we need to be?"  How free, that is, to satisfy the demands of morality and legal accountability and our subjective feeling of being able to choose among real alternatives?  The answer, compatibilists think, is not the absolute freedom insisted on by libertarians which violates the principle of causality, but rather a type of freedom that exists when and only when certain conditions are present in a given situation.
Compatibilists are determinists. They agree that human actions, like all other natural events, are determined by pre-existing causes or conditions. In humans, one of the causal factors is, in some cases, a process of careful deliberation. Faced with a choice of A or B, we consider both for a while, reflecting on their relative merits. Eventually, our feelings about A win out, and that's what we end up choosing to do. We choose A because that's what we want to do. And that, says the compatibilist, is exactly what freedom is: the ability to choose and act according to our desires. Call this conditional or circumstantial freedom.
Thus, compatibilists disagree with libertarians about the meaning of 'freedom.'  They claim that 'absolute freedom' is a useless fiction.  Conditional freedom is the only kind we need for understanding our practical lives, and it is compatible with determinism.  To be conditionally free means you are in control of your choices.  In the moment of free decision, there is no external coercion and no internal compulsion (psychological disorder).  You choose A because you have rationally compared it to B, and A is what you want to do.  That is why you feel free in the moment of decision.  When people have this kind of control, we say they acted freely and are responsible for what they do.  But to be free in this sense does not mean your action is uncaused or generated out of nothing by a transcendent agent, self, or soul, as libertarians claim.  All decisions and actions are conditioned.  You could have chosen B instead of A, but only if one or more of the conditions had been different.
Perhaps the easiest way to grasp this point is to see that what ultimately trips the switch of a choice, the final factor in the process of making a decision, is the strongest feeling or desire that shows up at the end of reflection. "A or B? ... Hmmm ... not sure ... Ah! I have it - A feels right. That's what I'll do." Yes, you could have chosen B, but only if the strongest desire in the moment had been different. In the end our choices are caused by our desires, but desires don't appear out of nowhere like random quantum events; they have causes, too - some of them known to us (habits, aspects of our character), some of them unknown, buried in the subconscious mind. But all have a causal history; we can't create our desires out of nothing. And that fact implies determinism.
This should not bother us, however. Compatibilists say that conditional freedom is the only kind of freedom we need or care about. It is all we need to judge people as responsible or not responsible for what they do, and it satisfies the justice system as a criterion for determining guilt or innocence. Demanding more freedom than this seems metaphysically absurd; it's like demanding to be God, to be able to act ex nihilo. As philosopher John Dewey once said, "What men have esteemed and fought for in the name of liberty is varied and complex, but certainly it has never been a metaphysical freedom of will."
_____________

* The short answer is that regardless of how we decide the freedom/determinism issue, as social beings we need to know how to distinguish right from wrong actions, and that's what ethics is about.

Sunday, 9 July 2017

THE CLEVER SCHOOLBOY
A Philosophical Conundrum
Sometimes philosophers take a break from the heavy lifting required by metaphysics, epistemology, value theory, and other arcane inquiries, in order to have a little fun with a puzzle. Here is one that baffled a number of professional philosophers several decades back, until British philosopher Gilbert Ryle revealed in a journal article a solution that had long satisfied him. I offer my own version here for your entertainment and possible philosophical profit.
The setup
A math teacher announced to his class on a Friday, “There will be a surprise test next week.  By that I mean that no student, while walking to school on the day of the test, will be able to predict the test will be given on that day.”
A clever student at the back of the class raised his hand and said, “Sir, that’s impossible.  You can’t give a surprise test next week or any other week if you announce it beforehand.  Here’s the thing:  if you don’t give us the test by next Thursday, any student walking to school on Friday would know the test has to occur that day.  But then it wouldn’t be a surprise.  Therefore, your test can’t be given on Friday.  Now, knowing that Friday is out, if the test has not been given by Wednesday, then any student walking to school on Thursday would realize the test would have to be given on that day.  But then it wouldn’t be a surprise, so the test can’t be given on Thursday, either.  The same line of thinking applies in turn to Wednesday, Tuesday, and Monday.  There are no other possible test days.  Therefore, you can’t give us a surprise test next week at all.”
The teacher praised the student for his ingenious objection and dismissed the class. The following week, he gave the test on Tuesday, surprising, in the required sense, every member of the class.
The Conundrum:  Where did the student’s argument go wrong?
Time out
You may wish to close your laptop at this point and adjourn to a coffee shop to think about this and possibly solve it on your own.  However, be forewarned - it's not easy.  If you find yourself after a while tempted to hurl your coffee mug at a wall, take a breath, return to your computer, and let the following restore your peace of mind.
The solution
Here is the argument again, this time set out in step-by-step logical form for ease of analysis:
The teacher decrees:  "There will be a surprise test next week.  By 'surprise' I mean that no student walking to school on any of the 5 days will know that the test will be given on that day."
The clever student argues to the contrary:
1. (a) If the teacher doesn’t give the test by next Thursday (i.e. by the end of the day’s class), any student walking to school on Friday would know the test would have to be given on that day.  (b) But then it wouldn’t be a surprise, which contradicts the teacher's decree.  (c) Therefore, the test can't be given on Friday.
2. (a) Now, knowing that Friday is out (can't be a surprise test day), if the test has not been given by Wednesday, then any student walking to school on Thursday would realize the test would have to be given on that day.  (b) But then it wouldn’t be a surprise.  (c) Therefore, the test can’t be given on Thursday either.
3. (a) Next, knowing that Friday and Thursday are out, any student walking to school on Wednesday would know the test has to be given on that day.  (b) But then it wouldn't be a surprise.  (c) Therefore, the test can't be given on Wednesday.
4. (a) Next, knowing that Friday, Thursday, and Wednesday are out, any student walking to school on Tuesday would know that the test has to be given on that day.  (b) But then it wouldn't be a surprise.  (c) Therefore, the test can't be given on Tuesday.
5. (a) Finally, knowing that Friday, Thursday, Wednesday, and Tuesday are out, any student walking to school on Monday would know the test has to be given on that day.  (b) But then it wouldn't be a surprise.  (c) Therefore, the test can't be given on Monday.
6. There are no other possible test days in the week.
Conclusion:  Therefore, the teacher can’t give the students a surprise test next week at all.
Problem:  The teacher administered the test on Tuesday the following week, surprising, in the required sense, everyone in the class.  So we know something is wrong with the clever student's argument.  But what exactly? 
The student's reasoning consists of a chain of 5 deductive arguments, all of which have the same structure - premise a, premise b, conclusion c - only the names of the days are different in each argument.  After Step 1, each argument depends on the one before.  That means if the first argument is flawed, the entire chain collapses.  And that is exactly what happens.  Here's why:
A deductive argument can go wrong in two basic ways:
First, a deductive argument is invalid if the conclusion does not follow from the premises, even if the premises are true.
   All men are mortal.
   Some women are mortal.
   Therefore, some women are men.
[By contrast, of course, an argument is valid if its conclusion follows necessarily from the premises.  That is, if the premises are true, the conclusion has to be true.
   All golden eagles are raptors.
   All raptors are warm-blooded.  
   Therefore, all golden eagles are warm-blooded.]
Second, a deductive argument is unsound when one or more of the premises is false, even if the conclusion does follow from the premises. 
   All humans are mortal.
   Aphrodite is a human.  (False.  Aphrodite is a goddess, not a human.)
   Therefore, Aphrodite is mortal.
Of course, an argument may be both invalid and unsound, but that is not the case with the surprise test argument. It is valid but unsound.
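As an aside, the validity of the bracketed golden-eagle syllogism above can even be checked mechanically. Here is a sketch in the Lean proof assistant - an illustration I am adding, with predicate names of my own choosing:

```lean
-- If every golden eagle is a raptor, and every raptor is warm-blooded,
-- then every golden eagle is warm-blooded. Lean accepts the proof,
-- confirming that the conclusion follows from the premises.
example (Animal : Type) (Eagle Raptor WarmBlooded : Animal → Prop)
    (h1 : ∀ x, Eagle x → Raptor x)
    (h2 : ∀ x, Raptor x → WarmBlooded x) :
    ∀ x, Eagle x → WarmBlooded x :=
  fun x hx => h2 x (h1 x hx)
```

Note that Lean checks only the form of the argument; whether the premises are true - soundness - is another matter, and that is exactly the issue with the surprise test argument.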
Why valid?  If you look carefully at the student's argument at Step 1, you can see that if the premises are true, the conclusion is necessarily true.  That means it's valid.  The logic is perfect.  But is the argument sound?  No.  Consider the first premise:  if the test is not given by Thursday, then it must be given on Friday.  But that is true only if the student can be certain that there will be a test.  Can he be certain of that?
The student assumes, as a matter of fact, that there will be a test next week.  Is that assumption true?  Is the student entitled to make that assumption? …. What does he know for sure?  Only that the teacher said there will be a test.  Could he logically infer from that that there will be a test?
   The teacher said there will be a test next week.
   Therefore, there will be a test next week.
No.  An inference like this is sometimes warranted.  For example, there is a scientific law that tells us that if we heat a piece of copper, it will expand - always; we know of no exceptions.  So we can be certain that every time we heat our copper pan on the stove, it will expand.  Now, is there a scientific law that states 'Every time a teacher says there will be a test next week, the test always takes place'?  Of course not.  Perhaps the student has never experienced an occasion when a teacher decreed a test but then the test didn't happen.  Even so, does that justify his thinking that such an occasion can never happen?  Certainly not.
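To see how completely the student's chain of reasoning depends on this hidden premise, here is a small sketch in Python - my own illustration, with hypothetical names, not part of the original puzzle's apparatus - encoding the certainty assumption as an explicit flag:

```python
# The student's backward induction, with his hidden assumption - that a
# test is CERTAIN to occur on one of the five days - made explicit.

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

def days_ruled_out(test_is_certain: bool) -> list[str]:
    """Return the days the student can eliminate as possible test days."""
    if not test_is_certain:
        # Without certainty that a test will occur at all, a student
        # walking to school on Friday cannot KNOW the test must be given
        # that day, so step 1(a) fails and the chain never starts.
        return []
    ruled_out: list[str] = []
    # Work backwards: the latest day not yet ruled out would be forced
    # (hence no surprise), so it too gets ruled out, and so on.
    for day in reversed(DAYS):
        ruled_out.append(day)
    return ruled_out

print(days_ruled_out(True))   # all five days eliminated: 'no test possible'
print(days_ruled_out(False))  # []: nothing can be eliminated
```

With the flag set to True, every day falls and the student's conclusion follows; set to False, the very first elimination is blocked and the whole chain collapses.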
With his decree - "There will be a surprise test next week" - the teacher was not making a prediction based on scientific law. He was stating his intention to give a test next week. As the cliché has it, there is many a slip between cup and lip. Could the teacher guarantee that he would give the test next week? Of course not. He is not God. He can't control how the future will actually play out, despite his best intentions.
I needn't bore you here with a list of the multitude of circumstances that might intervene to prevent the teacher from giving his test: he falls ill and can't teach on Friday, a blizzard forces a total school closure, the dog eats his test paper, etc. He might even change his mind at the last minute and announce to the class on Friday morning, "There won't be a test today, kids. I looked it over last night and decided it's not as good as it should be. Over the weekend I will revise it and give you a surprise test next week."
It should be clear now that a test, announced as a surprise in the previous week, can be administered even on Friday. So, if the test has not been given by Thursday, the students would be well advised to study for the test anyway - just in case.
And what of our clever schoolboy?  We give him an A for logical brilliance but an F for not realizing the difference between an intention and a scientific prediction.

Thursday, 11 May 2017

Billionaires and Philanthropy
For those of you who are still interested in the conversation we had on this blog a few weeks ago about billionaires and economic justice, here are a few additional ideas. In a comment I added to my blogpost "How many billionaires should there be?" I wrote:
"My point is not to criticize Warren Buffett, but to argue that decisions about how to use the superfluous wealth of the super-rich should not be left up to them.  They should be social decisions."
Why do I think that? First, because the super-rich have used social capital to amass their vast private fortunes. The image of the self-made rich person is a myth. The wealth of millionaires and billionaires is created by the entire society (workers, civil servants, elected officials, health care systems, etc.). While inventors and other innovators should be rewarded proportionately for their contributions to wealth creation, the other contributors to the market process should have some say in how superfluous wealth is spent. As economist Richard D. Wolff says, "Wealth is not privately created [by heroic entrepreneurs acting alone] and then collectively appropriated [through taxation]; it is collectively created and privately appropriated" - or, as I would put it, stolen by means of favorable tax codes, tax avoidance, pro-business and anti-labor laws and regulations, subsidies, and other mechanisms used by the rich to prevent fair distribution of wealth to society as a whole.
Secondly, because, contrary to those who think billionaires and other rich folk are needed for their valuable contributions to charities and charitable projects of various kinds (a practice now becoming known as philanthrocapitalism), the super-rich cannot be counted on to make objective judgments about which human needs worldwide deserve high priority, or which projects make the best use of the money to better the lives of the disadvantaged. Not that they donate unwisely all the time, but all too often they operate from their own self-interest (the need to make themselves look good or to further their political agendas), from a capitalist frame of reference (money donated toward 'market solutions'), or simply to satisfy some personal preference (donating to an art museum). The article linked below expands on this critique of 'philanthrocapitalism.'
  
http://www.truth-out.org/news/item/34190-why-philanthropy-actually-hurts-rather-than-helps-some-of-the-world-s-worst-problems



Monday, 24 April 2017

Where Does Thinking Take Place?
If you would be so kind, please help me find my mind.    - Mose Allison
I often introduce a discussion about the mind-body problem by asking "Where does your thinking take place?" Nearly everyone answers, "In my brain." Such is the power of contemporary neuroscience with its razzle-dazzle imaging technology and brain-probing techniques. Then I reply, "Wait a minute. What about you? I thought it was you who does the thinking. When I ask what you think about this or that, it's you, the person, that answers, isn't it? Now you tell me it's your brain that does your thinking. How did you, the person who answered my question, drop out of the story?" People often look startled at this point, because they have not thought carefully about the relationship between self, mind, and brain. They don't realize the dramatic self-erasure that occurs when they embrace a materialistic theory of mind. Neuroscientists have popularized the notion that mental activities like thinking, imagining, feeling, etc., are just brain states, and science is God, so people assume it must be true.* However, this doctrine, known as physicalism or the physicalist theory of mind, is not science - there is no experiment that proves it. It is bad philosophy. Here is some good philosophy to help clean up the mess.
When you are thinking about something, there are at least three factors involved.  There is the agent of the thinking, that's you, a person.  Second, there is the process of thinking, moving from one idea to another, as when balancing your checkbook or writing a letter.  Third, there is what your thinking is about; thinking is always about something.  This 'aboutness'  is often called intentionality.
These three factors cannot be separated. They are necessary components of every act of thinking. Of course, a normally functioning brain is also necessary, along with the rest of the body it is attached to. However, to say that thinking takes place in the brain is to attempt to cram all three of the first group of necessary components into that three-pound lump of squishy meat inside your skull. So according to the standard view, you, your thinking, and what you are thinking about are all inside your brain. But, of course, you have no awareness of being surrounded by or sitting on top of your brain tissue. You are not actually aware of your brain at all. You are aware of your body and the contents of the room you are in, and you are aware of your thoughts, but your brain is not available for inspection. You probably don't know anything about it. You may be living in a Matrix, but you are definitely not living inside your brain.
Are the absurdities of this story starting to become rather obvious? First is the ridiculous notion that you, the person doing the thinking, are inside your own brain at the same time as you are out in the external, physical world, reading an essay with eyes that are precisely not in your brain. Philosopher Patricia Churchland goes further; she says, no, you are not in your brain; you are your brain. A person is just a brain in a skull (the rest of the nervous system is apparently just for carrying the head around). This kind of talk borders on lunacy.
Here's another way to see that thinking can't happen in the brain.  Let's talk about intentionality, the 'aboutness' of thought.  Thinking is always about something.  Let's assume a thinker, Sue, is thinking about the Pythagorean Theorem and let's suppose she is using a drawing like the one below to help her visualize the relationships.

[Drawing of a right triangle with squares on its sides, illustrating a² + b² = c²]
You might remember your geometry teacher pointing out that drawings like this are not real triangles and squares; they are representations. Lines in drawings have width and thickness; real triangles do not. Drawings are also never perfect, whereas real triangles have perfectly straight lines and perfectly accurate angles. Real triangles possess only two dimensions and no physical properties like mass or volume. Drawings are just visual representations of abstract geometric ideas. They are not found in nature, and they cannot be made physical by any human art. So when Sue is working on the Pythagorean theorem, is the object of her thinking - the real triangle - in her brain? Of course not. The brain is a physical object, so anything in it would also have to be physical. Plenty of neurons, blood, and water in there, but no triangles. Lots of electrical signals whizzing around, but no thinking.

This description applies to any kind of thinking, not just mathematics.  All thinking involves concepts, and concepts are non-physical.  Take justice, for example.  Can the meaning of 'justice' be found in the brain?  Certainly not.  Only neurons in there, no justice, no meanings. 
Here is my argument in brief: 1) 'mind' is not a thing; it's a collective noun we use for convenience to talk about a number of human activities that seem different from the operations of our organs and limbs; call them 'mental activities.' Thinking is one of those. 2) Any thinking operation requires at least four necessary elements: a person, an object, a process, and a brain. 3) None of the first three can be found in the brain. They are correlated with brain states but are not themselves brain states. It follows that the popular "thinking happens in the brain" theory is wrong. A functioning brain is the necessary physical component of any act of thinking, but it is not sufficient to explain the other components, which are not physical. Thinking does not take place in the brain.
"Where does it take place then?  If not in the brain, then where?"  Nowhere, I'm afraid.  The trouble lies with the question.  "Where do mental activities happen?" is based on the hidden assumption that everything has to be somewhere, that is, everything we consider to be real has to have simple location in space.  That's the fundamental assumption of all materialist philosophy and science.** But why should we believe it?  Where's the proof?  There is no proof, because there is no possible way of proving anything about everything.  The first principle of physical science is not itself a scientific hypothesis.  It's an orienting metaphysical assumption that has to be taken on blind faith if physical science is ever to get off the ground.  Such assumptions, however, can be falsified, and we have seen good reasons to think this one - that all reality is material - is false.
_____________
   * Novice philosophers might forget themselves in the embrace, but neuroscientists themselves do not.  They know that in order to discover the correlations between a mind and whatever is showing up on their MRI screens, they have to ask the experimental subject, the person who is having the experience, what it is they are experiencing. 
      ** For more on this, see my posts "Trouble With the Brain, Parts 1 and 2," October 2015.