Tuesday, 20 December 2011

Faith and Reason

Given the recent passing of Christopher Hitchens, the notorious atheist columnist, and given that we're fast approaching Yuletide (the pagan festival of the winter solstice), I thought it would be fun to write a piece on faith and reason.

I have often heard Christians argue that one must 'just have faith' and believe that God exists, and that Jesus was his son, sent to redeem us from our sins. Indeed, even the famous Danish existentialist philosopher, Søren Kierkegaard, argued that one must 'take a leap of faith' - and not be concerned with the fact that religious belief is indeed irrational.

But let's consider a case. Suppose that you are accused of a murder. Suppose that you go to court, and the prosecution argues that you are guilty, and that no evidence is needed. Imagine that the judge brings down his gavel and says "Guilty as charged". When you appeal the judgment, and plead your innocence on the basis that there's no incriminating evidence whatsoever, further imagine that the judge shrugs and responds "I don't need evidence, I have complete faith that you are guilty".

Now let's turn to the case of religion. If the above scenario is unacceptable to you, why, then, would you consider it acceptable to say "I just have faith" in the case of God?

The problem with 'just believing' is that if you are ignoring evidence, you are basing your belief purely on other people's opinions. Take the example of what happens when you meet someone from another religious background. Let's say, Judaism. Suppose you're a Christian. Now, both you and your Jewish friend believe in a divine power; you call him 'God', your Jewish friend calls him 'HaShem'. You both agree that your beliefs are taken on faith. But now suppose you are conversing one day about the details of your beliefs, and you encounter the issue of Jesus. Your Jewish friend says that Jesus may have existed, but was not God's son. You disagree, and accuse your friend of blasphemy. How is this debate to be decided? Since your views are based only on faith, rather than evidence, neither of you can prove his case. The best you can do is repeatedly appeal to the authority of your respective religious texts. But that won't work, because that authority, in turn, depends on how loudly you shout at each other.

Personally, I find it impossible to believe most things without evidence. Of course, I take certain things on authority. So, for example, if a professor of geophysics writes a book that says that millions of years ago, the continents were joined as one, I feel inclined to accept it, since he is a professor, and as such, has done a lot of research into the matter. This is an example of the appeal to authority. Officially, philosophers consider the appeal to authority to be somewhat fallacious, but in fact, when researchers 'cite' each other's work in academic journal articles, they're really appealing to the authority of the person they cite (and demonstrating respect for the originator of the cited idea). So it's not an appeal to authority that's the problem, per se. The problem comes in when the cited authority is dubious. So, for example, if I wrote a journal article about religion, and in it, I cited the opinion of my toddler, the reviewers would reject my article, since my toddler is not a credible or respected authority on matters religious.

This brings us to the question of why people 'just have faith'. Like a scientific researcher who believes what the professor of geophysics says about continental drift, a believer will accept the authority of his priest or Bible. Believers, then, are not so much taking their belief 'on blind faith' (which is a redundant phrase), but instead, they are taking their belief on authority. I said above that 'blind faith' is redundant. This is because to 'have faith' already means 'to trust' or 'to believe'. So a belief based on faith is really a belief based on another belief. Which says nothing about truth. It's all castles in the air.

So how does truth get established, then, if not on the authority of texts? What makes an authoritative source, authoritative? How do we know which books or persons' statements are legitimate, or true, or at the very least, well-attested-to? The answer is evidence. The more evidence a belief has to support it, the more probably true it is. There is a formula that philosophers and statisticians use to calculate this. It's known as [Bayes' Theorem](http://en.wikipedia.org/wiki/Bayes'_theorem). It goes something like this:

P(H|E&k) = P(E|H&k) x P(H|k) / P(E|k)

This reads: the probability of the hypothesis, given the evidence and some background knowledge, is equal to the probability of the evidence, given the hypothesis and the background knowledge, multiplied by the probability of the hypothesis alone, given the background knowledge, divided by the probability of the evidence, given the background knowledge.
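To make the arithmetic concrete, here is a minimal sketch in Python. The numbers are invented purely to illustrate how the formula behaves; they are not real probabilities of anything:

```python
# Bayes' Theorem: P(H|E&k) = P(E|H&k) x P(H|k) / P(E|k)
# All numbers below are invented, purely to show the mechanics of the formula.

def posterior(prior_h, likelihood_e_given_h, prob_e):
    """Probability of the hypothesis, given the evidence (and background knowledge)."""
    return likelihood_e_given_h * prior_h / prob_e

prior_h = 0.5        # P(H|k): how probable the hypothesis is before considering the evidence
likelihood = 0.01    # P(E|H&k): how probable the evidence is if the hypothesis were true
prob_e = 0.2         # P(E|k): how probable the evidence is regardless of the hypothesis

print(posterior(prior_h, likelihood, prob_e))  # 0.025 - the evidence has dragged H down from 0.5
```

The point to notice is that when the evidence is much less likely under the hypothesis than it is in general, the hypothesis comes out less probable than it started; when the evidence is more likely under the hypothesis, the hypothesis comes out more probable.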

In this case, the evidence is, say, fossils, plate tectonics, DNA mutations, speciation, and other scientific facts. The background knowledge would include things like the laws of science, causation, and so on. The hypotheses will be things like Christianity or Judaism or atheism. So, in this formula, we have to ask: what is the probability that Christianity is true, given the evidence, say, of plate tectonics and DNA mutation? The answer must be near-zero, since Christianity and the Bible mention none of these things. The same applies to any other religion. Thus, on this formula, Christianity and Judaism would both come out as equiprobable, since the physical scientific evidence fails to support either of them in any major way. (Whether the evidence supports atheism is a side-issue for now.) But the key point is this: you cannot decide between two religions on the basis of evidence, unless those religions adduce some evidence. So, where is the evidence? An old book does not count. There are many old books. I have an old Superman comic. That doesn't prove that Superman exists. We need proper evidence: scientific evidence. So, for example, if every molecule, on close inspection, bore the Hebrew words 'made by HaShem', we'd have strong evidence for Judaism.

In [another post on Viewshound](http://www.viewshound.com/religion/2011/6/religion-cant-have-its-cake-and-eat-it-too-part-3-), I argue that believers sometimes try to appeal to scientific evidence, such as the remains of Noah's Ark. There are, as I argued there, two problems with doing so. Firstly, evidence of one part of a story only raises the Bayesian probability of that story being true; it does not prove it. Secondly, if believers feel entitled to help themselves to scientific evidence (anything measurable or visible is scientific evidence), then they must also accept the scientific evidence against them. If this is right, then science is the arbiter of truth, not religion. Religion can only escape from this by appealing to blind faith. But blind faith is not really blind. It is based on the Bible, or other religious texts, which in themselves are not based on anything at all. You can't argue that the Bible's authority is based on God's, since faith in God is based on the Bible. Hence, to say that the Bible is supported by God, and that your faith is supported by the Bible, is circular. Since the Bible is putatively the proof of God's existence, it cannot in turn be propped up by God. No theory is self-supporting; that's circular.

But is religion even a scientific theory? I argue that it is. Religion makes predictions; it makes claims about how the world is. Moreover, if God made the universe, he must be able to have effects, and initiate causes, in our own segment of space-time. That means that God's effects and actions should be measurable, and detectable by science. If God exists, then evidence for him exists, and it can be weighed scientifically against the contradictory evidence. Saying that 'you just need faith' is pointless, because it's saying that your only piece of evidence is a self-supporting book. That's just not good enough. If I wrote in my diary that you murdered a chap called Jones, and Jones was indeed dead, my diary would not count as sufficient evidence in court that you were guilty. Any judge who took my repeated insistence as to your guilt as evidence of your guilt would be quite incompetent. We should take the same stance on religion. Mere repetition of slogans is not proof of anything.

The famous ['big lie'](http://en.wikipedia.org/wiki/Big_Lie) argument applies here. Just because someone repeats a slogan to you often, doesn't mean it's true. You should verify the facts with your own eyes. Go and see for yourself. Hold a fossil and think of Bayes' equation: is this fossil more likely to exist on the hypothesis of God, or is it more likely to exist on the hypothesis of evolution? Keep asking yourself that. And don't be tempted to add ad-hoc explanations, such as 'God made fossils to test our faith'. Ad-hoc explanations are explanations that don't follow logically from your premises ('God exists'). There's nothing in 'God exists' that entails that God would plant fossils to test your faith. There is, however, plenty in the theory of evolution that entails that fossils exist. That's how you test whether a theory is true. And religion is just one of many theories.

Tuesday, 15 November 2011

free-will debate - a summary

This debate amongst neuroscientists and commentators on the Internet needs to take account of the philosophical research on the matter. There are a variety of positions.

Quick summary.

1. Fatalism. The universe's determinism ensures that everything comes out with a predetermined character, come what may. This is fatalism because it doesn't matter what you do, it will come out the way it's "meant to". Fatalism, I believe, is a primitive form of hard determinism.

2. Hard determinism. Your actions have antecedent causes over which you have no control, e.g. your country of birth, your language, your initial character formation, your genetic predispositions to react to stimuli and threats in a certain way. This ensures you always behave a certain way - within character. You are not free because you did not choose to be the way you are. This is a form of incompatibilism which states that free-will is not compatible with determinism: so much the worse for free will, it says. Freud was this sort of determinist.

3. Soft determinism or Compatibilism. Free-will is compatible with determinism. This is the status quo in Philosophy. The view is this: you are free if you act in such a way as to get what you want, and you get what you want because you wanted it. The fact that it originates in your character, which was deterministically created, is irrelevant. We speak of persons as free in the case where they get what they want because they wanted it. A person who is 'under duress' or 'forced' is not free because they either get what they don't want, or can't get what they do want.

4. Libertarianism. This is the view that we live in a non-deterministic universe, and have free-will, and we have free-will because we're not determined. A form of incompatibilism which says free-will is incompatible with determinism, and so much the worse for determinism. It's usually argued from quantum mechanics. However, quantum mechanics' effects apply only at the subatomic level, e.g. to electron position and radioactive decay. Neither affects our brains in any meaningful way, since each neuron is made of vast numbers of atoms; the effects are statistically irrelevant. Moreover, even if quantum mechanics caused our wills, our wills would be RANDOM, rather than what we will being caused by what we want. In order to say that our actions are ours, they have to be causally linked to what we want. So the question of free-will for a libertarian is just how to get our wills free of antecedent causation, for which they rely on quantum mechanics (these days). In the past, e.g. Sartre, they relied on phenomenology - i.e. 'I just feel like I am not constrained by my past'. The trouble with libertarianism is it doesn't clearly link who I am with what I want and what I do. You need causal determinism to do that.

5. Neuropsychology - e.g. Benjamin Libet, 1985, in the journal Behavioral and Brain Sciences, demonstrates that decisions are non-conscious or preconscious, and happen before we are aware of them. This means either our free-will is fully unconscious, as Freud said, or we lack free-will, and really do act on instincts and desires, as Hume argued (A Treatise of Human Nature, p. 460, Penguin edition).

6. There is a debate about whether we need to be able to 'do otherwise' under the same circumstances in order to have free-will. Frankfurt (1969, "Alternative Possibilities and Moral Responsibility", Journal of Philosophy, vol. 66) gives an argument that we don't require alternative possibilities. But think about how often one says "If I were to live my life over again, I'd have done something different". But no: if you are who you are, and your life was the same up to that point in the second life, the circumstances leading to it would cause you to do the same thing, if determinism is true. If libertarianism is true, you could do otherwise, but then it wouldn't be YOU doing otherwise; it would be a quantum random antecedent cause.

7. Skepticism. Nietzsche argues that free-will is a Christian concept invented in order to make people accountable to God. Modern skeptics, such as Richard Double, argue that free-will is a nonsense concept. Double, R. (1991). The Non-Reality of Free Will. Oxford University Press.

My own view is irrelevant to this exposition. The important points to note are that a more nuanced version of this discussion is available in philosophy journals, and that one should be wary of simplistic expositions on the web.

Thursday, 20 October 2011

why the desktop PC is dead

I have seen a number of arguments on the Internet about whether the much-ballyhooed proclamation of the death of the PC has any merit. I believe it does. Although industry pundits have been proclaiming its death for ten years, things are really starting to look that way now. Of course, the situation is still very much like Mark Twain's ["The report of my death was an exaggeration"](http://www.twainquotes.com/Death.html). But I believe that with the [iPad](http://www.apple.com/ipad/), the PC's death is not far off.

Many of the arguments against this prediction focus on the minimalist feature-set of tablets. "There are no or insufficient USB ports", they may argue, or "there's no video or audio input jack", or my personal favourite: "I want to be able to install a custom graphics card". Or these detractors may focus on the software: "I rely on Microsoft Access" or "I want to render 3D animation", or - my complaint - "I need a UNIX shell". But these complaints are not the complaints of Joe Average, who just wants to check his email, type a simple letter, read a book, and browse the web. These are the complaints of computer nerds. They're in the minority.

Average Joes will be buying tablets and ditching their PCs - and they're the majority. The only reason the "PC is dying" prediction hasn't come true yet, is that the iPad is still relatively new and relatively expensive. Up until now, tablet PCs have been as complex as regular desktop PCs - without any benefits apart from portability. But portability is just one of the reasons tablets are worth considering. The most important point of the iPad, and why it has succeeded where previous tablets have failed, is its simplicity — which it owes to iOS, the stripped-down Mac OS X that powers the device.

PCs are not quite dead, but they soon will be. As soon as people notice that you can do spreadsheets on iOS using, say, [Apple's iWork suite](http://www.apple.com/iwork/), there will be no major reason to use a PC. A MacBook Air, for example, is just a tablet with a keyboard; it has no optical drive, and its hard drive is a solid-state flash drive. It contains no moving parts, just like the iPad. How is an iMac - the all-in-one screen device - not a tablet with a keyboard? Remove its optical drive and keyboard, and add a touch screen, and voila - the iMac is a tablet. As soon as Apple move the keyboard onto the Mac's screen, the Mac will no longer exist, and Apple will only be making tablets. The question will just be which model of tablet you want.

As for Windows-based PCs - in ten years they'll start copying Apple - as usual. And then PCs will disappear. Where is my evidence that this will happen? Well, Apple removed the floppy drive in 1997/1998. PC manufacturers have since stopped shipping floppy drives. Now watch: once Microsoft have an app store and online downloads of their software, the optical drive will disappear from generic PCs, too, just as it is disappearing from Apple's machines. The ultimate optical drive killer is online streaming video, or Apple's iTunes store, where you can buy movies and download them legally. Once you can do that, you really do not need an optical drive. So after the optical drive is gone, the only thing stopping a PC from being a tablet is whether it has a physical keyboard or a touchscreen keyboard.

As for hardware port limitations, the new iPads have video output - it's just a question of video input for camcorders. But even then, if Apple improves their built-in video cameras, there will be no need for that. Likewise, once Apple and Microsoft are shipping major games on their app stores, they will also need to ship high-end graphics hardware in their tablets. That will end the argument that tablets lack 'serious' gaming capacity.

I predict that within ten years, the only PCs left will be server boxes in corporate headquarters, research entities, and so on, and desktop machines in film studios' render farms. Apart from entities or people who need massive processing power, the future of the desktop PC is extinction.

Thursday, 6 October 2011

Steven P. Jobs - RIP

I remember that the last time I was actually shocked by the death of a celebrity was in 1997, when Princess Diana had her fatal car accident. Whatever people may say about her, at the time, I was saddened that the world had lost someone who was, at least on the surface, kind, benevolent, and a humanitarian, but more than that, who put an approachable face on the Aristocracy. Now with the recent passing of Steven P. Jobs, I felt the same: that the world had in fact suffered a loss.

But this time, for me, it was more personal. I first heard of Steve Jobs when I was 13 years old; I'd just used my first Apple Macintosh, and my brother and I were both computer nerds (we still are), who were avid followers of industry gossip. Steve Jobs had just been forced out of Apple, the company he co-founded, and stripped of his management role. Rumour had it that he had just been too difficult to work with, too demanding, too arrogant. He shrugged it off, and went on to build two new companies - NeXT Computer and later Pixar, which he bought from Lucasfilm. For those who don't know (yes, believe it or not, some people don't know) - Pixar makes those 3D cartoons like Toy Story.

Skip forward to 1997, again, and Apple had recently hired Jobs back; the company was in dire trouble. Their share price had dropped to below $20. Their products were beige, uninspiring boxes, their operating system couldn't multitask properly, and Microsoft looked ready to destroy them at any moment. People were predicting that the company would close very soon.

It didn't happen. Jobs' return was nothing short of the salvation of the company. It was followed by a string of successes: the first iMac, the iPod, the iTunes store, the mainstreaming of legal music downloads, the iPhone, and then the iPad.

It is not an exaggeration to say that Steve Jobs took computers, which in 1978 were unusable, and made them ever more usable. Even his own company's product, the Apple II, was clunky. He pushed for the Lisa - named after his daughter - and then the Macintosh. He caused an internal civil war at Apple - the Apple II team versus the Mac team. The Mac team won. The graphical interface was popularised. Microsoft Windows soon followed. Then, in creating the NeXT computer, he effectively created Apple's future at the same time. System 7 - the Mac operating system that was available at the time he was hired back into Apple - was old. It couldn't multitask properly and didn't have proper multiple-user support. It crashed a lot. In effect, it was much like Windows 95, just more user-friendly. He brought the NeXT-derived operating system, OpenStep, with him, and that formed the basis of the modern Mac OS - Mac OS X - the system I'm using right now. It also formed the basis of iOS - the operating system that would later power the iPhone and iPad devices.

Shortly after the iPad was released, I was muttering about how it wouldn't take off. Clearly, I lack Steve's understanding of human beings. People apparently _do_ want devices that are super-easy to use and understand. I'm a programmer. I need sophisticated capabilities. The iPad does appeal to me, but only for browsing the web or reading a book. I would still need a Mac OS X/NeXTStep/OpenStep underbelly and command line to do my work. But here's the point: Steve knew that people really don't want to understand how computers work - they just want them to work. That is what he created in the iPad. Since I have witnessed its increasing popularity, I have come to realise that the desktop PC is dead, and that tablets like the iPad are going to dominate computing for the next ten years at least.

So in effect, Steve Jobs has defined computing since 1978 - everyone else has merely followed his lead. The man was nothing short of a visionary genius. Today (or yesterday, depending on your time zone) is one of the saddest days in world history, because the question is now open as to whether Apple will continue on its winning streak. It went from a few dollars a share to well over $300, and that is after several stock splits. It's now more valuable than Exxon Mobil, or the combined might of Microsoft and Google. My guess is that Apple must lose some of its shine; it derives, and for the near future will continue to derive, much of its vision and focus from the personality cult that surrounded Steve. All we can hope is that somewhere in the ranks of Apple, someone else with his eye for design and usability will come forward. We can only wait and see.

Friday, 30 September 2011

Review of "God, No!" by Penn Jillette

So I bought Jillette's book from the iBookstore.

Let me start by saying a few things. I really like Penn Jillette. I like his attitude, his way of putting things, and his beliefs. Mostly. I have some reservations about his views on gun ownership, but that's a topic for another time.

However, his book disappointed me slightly. In retrospect, I suppose I "should have" guessed what it would be like, judging from his (and Teller's) TV show - Penn & Teller: Bullshit!

His book is pretty much like a series of ten of those shows. It's divided into ten sections, one for each of the Ten Commandments, which he substitutes with ten suggestions of his own. However, the chapters that follow each of the ten suggestions tend to digress quite far from the topic at hand. They are also full of sound bites and short witty quips - much like his TV show. There's very little that's novel - certainly for a reader hoping for insightful atheistic arguments.

Most of the book seems to be an autobiography - which is, I think, what his primary interest was - and most of it shows how, through some unexpected or arbitrary life experience, he came to arrive at atheism, or at another aspect of his current world-view. Actually, a good deal of it seemed to consist of him bragging about the bizarre sexcapades he experienced, and how 'hot' the woman in question was. The rest of the material seemed to consist of anecdotes from the stage-act industry (live performers of all kinds), and how he and his business partner Teller went through this or that experience. I found the material amusing, for the most part - especially where he met Prince Charles and referred to him as "Chuck". But I didn't find the book very strong on the philosophical front - most of the points he made were aphorisms that could have been collected onto a single page. And I didn't find that the chapters associated with those aphorisms really explained them. I guess he was free-associating.

I'd say the only thing in the book that I found memorable was his discussion of death. He reports that his family have a tradition whereby, when someone dies (and on each anniversary of the death), they buy blue balloons, release them into the sky, and watch them disappear, never to be seen again. I found that story touching, and a novel idea.

But apart from that, most of the book sounds and reads like one of his shows - brief, blunt, lots of swearing, naked women, interviews with the worst exemplars of a type, oversimplified arguments, etc. I guess it's written in the brief sound-bite style that most Americans expect from TV, and thus I am certain it will reach the bestseller lists, because its intended audience will love it. But I couldn't find anything to reference in there for my dissertation.

Sorry Penn. I still love your work.

Facebook is watching you - Who cares?

1. you're not paying for the use of FB, so they have to make money off you somehow.

2. they only track sites which have FB widgets on them; if a site has no FB like/track button, there's no way they can know you went to that site. A cookie is a flat piece of information; it is not a spy program. Cookies are what enable you, for example, to go to Google Search and see that it has set 'safesearch' to 'on', because Google gave you a cookie that says 'safesearch=on'. Cookies are just preference files for websites (see the short sketch after this list).

3. So what if they track which other sites you 'like' via an FB icon? All those sites are OK/legit.

4. If you're looking at dodgy sites, FB won't know about it, and even if they do, who cares? It's a machine, not a human. They don't have the staff to manually look at everyone's records and go 'oooooh - you looked at a porn site'.

5. As for getting junk mail because of FB - no. Not unless you set the FB privacy settings such that ANYONE can see your email address. If you set your email address to 'friends only', then NONE of your junk mail is because of FB. You get junk mail from posting your email address in clear text on a website, or from giving it to dodgy sites that on-sell it. Or, even worse, your email server was scanned with a name dictionary attack, and they found your address because it was something easy to guess.

6. FB use the information to do the advert targeting on the side panel. I like that. I prefer to have it offer me relevant adverts, rather than stuff I am not interested in.
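To illustrate point 2, here is a minimal Python sketch of what a cookie actually is: parsed, stored text, not a program that can run or spy. The 'safesearch' and 'theme' values are made up for illustration.

```python
from http.cookies import SimpleCookie

# A cookie is just named text that a site asks your browser to store.
# The values below are invented for illustration.
jar = SimpleCookie()
jar.load("safesearch=on; theme=dark")

for name, morsel in jar.items():
    print(name, "=", morsel.value)   # safesearch = on, theme = dark
```

Nothing in there can execute; it is a flat record of preferences that the same site can read back later.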

Friday, 16 September 2011

Why Evolution is True - response to criticisms

Hello everyone. Thanks for the responses. I generally don't respond to responses, but I feel that these below deserve some clarification. Thank you for taking the time to write responses and think about the material.

1. "Believers in what?" - you answered your own question. A Creator.

2. If you became a believer through reason rather than childhood brainwashing, that's OK. You're a sample of one, a statistical outlier. We'd need to do a statistical analysis of data to find out the correlation between belief and childhood brainwashing. I wager that we'll find a strong correlation - simply because, if belief were based on reason, there would be a completely even distribution of religions across the planet. E.g. the statistics would not show that the majority of Christian children belong to Christian parents. Look at it this way. The evidence and reasons in favour of belief favour a number of different religions in different ways. Each religion may have statistically the same chances as any other; there doesn't seem to be, from my indifferent point of view, a good reason to choose one or another. So, if that's true, then we'd expect to see a statistical smear of belief across areas, rather than strongly geographical areas defining belief. I.e. the English-speaking world is mostly Christian - why? Is it because Christianity is obviously true if you speak English, or is it because of historical brainwashing? I think the latter. Ditto Islam, Hinduism, etc. Of course people can individually choose to go for something different, e.g. English-speaking hippies in California following Hinduism of a sort - but that is an outlier. It also has nothing to do with my basic premise that the reason evolution irks believers is because it threatens religion and they've been largely brainwashed. Geography is the proof. Think about closed communities like the Amish, or how you can get Hassidic Jews in New York who don't know who Elvis is (cf. Penn Jillette). It's the same thing: the only explanation I can think of for the overwhelming correlation between belief, geography, language, and specific religious background is cultural brainwashing. I don't mean the term 'brainwashing' pejoratively. I'm using it as shorthand for 'deterministically led by social pressures and family bonds to believe that P, where P is any arbitrary non-verifiable cultural statement.' I just don't want to type that whole long spiel every time.

3. As for the claim that believers need not take Genesis literally (I think two people said this), I think it must be taken literally. If not, why do we take only Genesis symbolically, and the rest of the Bible literally? I mean, was King David symbolic, not really extant? Was Jesus symbolic, or was he real? Etc. By what criteria did you decide that Genesis alone - and in particular, the Garden of Eden story alone - is symbolic, but that Moses was real, etc. etc.? If the snake is symbolic, why is the fiery writing on the wall of 'mene mene tekel upharsin' not symbolic, something that never really happened that way? Clearly, if you look at the origins of the Genesis story (the fact that, as Bob points out below, there are two redacted and interspliced versions of the creation), Genesis has ancient origins in primeval myths. Primitive sheep herders took it literally, just as they took Moses, David and Jesus literally. The only reason apologists defend the Garden of Eden as 'symbolic' is because it is so obviously false, and a myth, that they're embarrassed by it. E.g. the light is made before the sun and the moon.

4. If you came to believe in God through reason, that's OK. You are in good company; there are many professors of religion who are believers. That doesn't mean they're right, or that your reasoning was correct. I often make mistakes in reasoning; the difference is I try to find them. I believe that a believer who 'reasoned' his or her way into belief has simply made an error in logic. E.g. I think the argument that the universe's existence entails that a person created it is false. It's a non sequitur, but theologians the world over are guilty of it. Too many assumptions are required for it to be true.

5. John. Thanks for your reply, but I think evolution might be fundamentally devastating to Christianity, not just a threat to it. Christianity's eschatology depends on the view that we're born sinners (hence the baptism ritual). Original Sin. It's in the Pauline epistles. If we reject the doctrine of Original Sin, then we're left with a Christianity that offers salvation only ONCE we've sinned. But now recall that the Catholics (to whom you refer) invented limbo to cope with the idea that putatively innocent babies would go to hell if they died before baptism. Of course, the Church recently retracted limbo. But it was standard church teaching for forever and a day. Why? Because babies are, in the Church's eyes, born with Original Sin. Furthermore, the idea that we have the capacity for knowledge of the difference between good and evil - viz. free-will - is due to the snake tempting Eve, in the story. Without the myth of the Garden of Eden, Christianity would lack the free-will defence for the problem of evil; it would be unable to say that people CHOOSE to sin and therefore DESERVE hell. Free-will is given to us by the snake, not by God - because free-will just means knowing what you're choosing or wanting to do, and whether it is right or wrong. Think of it this way: when someone is exculpated on grounds beyond their control, it means they had no choice, no free-will in the matter. The ultimate innocence and exculpation, freeing us of the burden of free-will, is then, of course, that we never had knowledge of our free-will at all. Hence, to punish us and give us hell, Christianity REQUIRES the Fall of Man; it requires it literally, and it requires that evolution be false.

6. Evolution is a religion? No, sorry, you'll have to re-read my definition above. Evolution means: you are born with a genetic difference. It enables you to survive. If you survive long enough, you reproduce, and the feature is passed on. I do not see how that remotely resembles religion. Religion is the organised practice of mystical beliefs, and the collection of such beliefs. At best, you could argue that the 'new atheists' like Dawkins display religious-like fervour in favour of evolution, but that doesn't mean that they worship evolution, or that they pray to it, or that they have certain rituals, such as eating Darwin-shaped biscuits, etc. Religion is ritual; evolution has nothing to do with ritual. Sorry.

Friday, 9 September 2011

Why Evolution is True

I’ve noticed a number of recent articles in the news about evolution, and as usual, the creation versus evolution debate emerges in the comments below each article. Aren’t we done yet?

Let me give you a brief anecdotal history. I first encountered the Theory of Evolution when I was 11 or 12 years old, in an encyclopaedia. I grew up in Apartheid South Africa. In the 1980s, South Africa was not just a semi-fascist state (I say “semi” because we did have elections). It was also a deeply religious state, with Christianity as the state religion. The education system was based on biblical beliefs. In fact, even the cover of a school science syllabus featured a bullet point stating that the purpose of science teaching was to draw the pupils’ attention to the marvels of God’s creation. So when I first saw that famous sequence of human ancestors, each standing more upright than the last, and each carrying a more advanced tool than the last, my first and only thought was “Oh, OK. That’s just obviously right. That makes perfect sense.” Genesis was out the window in an instant. It didn’t take any arguments; I did not have to hear Richard Dawkins speak. All I had to do was see that image sequence and it was blindingly clear. I remember quite clearly how my biology textbook at school skirted the issue by using the term ‘adaptation’.

So this brings up the question of why there is even any debate these days, about creation versus evolution. I can think of at least two answers to this. My first answer lies again in my own experience. My parents came from mixed religious backgrounds, and consequently I was never taken to religious training of any kind - neither Shul nor Sunday School. For this, I am extremely grateful. Because it is my experience that all of my peers who did go for religious training are, to this day, by and large, believers. I can only suspect a degree of brainwashing. The Catholic Jesuit motto is "Give me a child until he is seven and I will give you the man". Children under the age of 12, says Jean Piaget, are incapable of abstract reasoning. But religion is abstract reasoning _par excellence_; there is nothing more abstract. Even numbers, which are quite abstract, can be pointed to in the real world. Love can be felt. Religion? Maybe a religious experience can be felt, but certainly, God's done a good job of staying invisible. So how or why would intelligent adults continue to believe? It can only be because of training in childhood. It cannot be because the force of the evidence is on their side.

Here are two recent examples of evidence being on the side of evolution. Firstly, many believers satirise the Theory of Evolution as being about us evolving from fish which learned to walk on their fins, and which subsequently took to the land. They pour scorn on this idea. Yet examples of this taking place right under our noses exist. Consider the Mudskipper, or the [Pacific Leaping Blenny](http://news.nationalgeographic.com/news/2011/09/pictures/110901-walking-fish-pacific-leaping-blenny-evolution-animals/). Or consider the recent find of [Australopithecus sediba, in South Africa](http://lightyears.blogs.cnn.com/2011/09/08/ancient-fossils-question-human-family-tree/?on.cnn=2). I personally saw this find, and was asked to help present it to tourists last year, on a public holiday. It is quite a humbling experience to stand within one foot of something that is 1.95 million years old, and know that it may very well be your own direct personal ancestor.

Now, I realise that one of the common responses to things like the mudskipper or the leaping blenny is to say: well, why do they still exist if they're meant to be an ancestral form? Another version of this argument is to say: well, why do apes exist if we purportedly evolved from them? The answer to both forms of this question is that we evolved from neither mudskippers nor apes; they share a common ancestor with us. (Obviously, the mudskippers are far more distantly related.) The point of the blenny or mudskipper is that they show that even now, evolution is taking place. Probably, quite recently in their ancestry, one of their forebears developed the mutation that allowed its front fins to be powerful enough for it to crawl onto land. The same model is offered for the arrival of amphibians - newts, in particular. If you compare a tadpole and a newt to a leaping blenny, it is painfully obvious that they're very closely related. That doesn't mean, however, that they evolved from each other; rather, it means that they have a common ancestor. The same applies to apes. They can brachiate (swing from the trees), just as we can, due to our wrist architecture. They can walk upright. [Certain chimps even hunt with spears](http://www.reuters.com/article/2007/02/22/us-chimps-hunting-idUSN2244829320070222). They have social structures, binocular colour vision, are omnivorous, have sign language; the list goes on. Do you know, for example, that bats, order Chiroptera - "wing hands" - have five digits - fingers - that make up their wings? What about snakes with legs buried in the flesh of their backs? Or whales with the same? Is this 'by design', or an evolutionary atavism?

What is the more sensible explanation for these things? That they have similar, or in the case of us and chimps, near-identical DNA by chance? Or that each individual creature is the product of 'special creation'? That God sat and made each one to look remarkably like the other? Or that they are actually related by blood, as the saying goes? It strikes me that in science, the rule of Occam's Razor applies here: keep the explanation as simple as possible. A deity choosing to make billions of slightly different creatures for his own amusement, or billions of slightly different creatures evolving from each other? We have even mastered a form of evolution ourselves: we selectively breed dogs. Those which have desirable characteristics are kept for breeding; those which do not get neutered.

One of the common misunderstandings about evolution is that it is about how we descend from apes. That's not what evolution is about. Evolution merely says the following: if an organism has a mutation (a "birth defect" is an example of a harmful mutation), and if that mutation helps the organism survive, and it reproduces, its offspring will likely have that mutation too. So, think of how you have your 'mother's eyes' or your 'father's legs'. That's an example of evolution in action. The fact that your mother and father were sufficiently competent and attractive to mate entails that their offspring - you - would have inherited those traits that enabled them to mate in the first place. Which gives you a good chance, too. On the other hand, if your parents had had some prohibitive genetic trait, which had, perhaps, caused them to perish before mating, well, quite simply, you'd not exist, and that trait would not have been passed on. It is important to understand that this obvious truth - a truth so obvious that it is possibly a tautology - _is all that there is_ to the Theory of Evolution. The simian ancestry of man follows from this tautology.
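As an aside for the programmers: the tautology above is simple enough to simulate. Here is a toy Python sketch, with invented population size, survival rates and generation count, in which a heritable trait that slightly improves the odds of surviving to reproduce becomes steadily more common:

```python
import random

random.seed(1)  # make the toy run repeatable

# Each individual either carries a helpful mutation (True) or does not (False).
population = [False] * 90 + [True] * 10

for generation in range(20):
    # Survival: carriers survive to breed with probability 0.6, non-carriers with 0.4.
    survivors = [trait for trait in population
                 if random.random() < (0.6 if trait else 0.4)]
    # Reproduction: the next generation of 100 inherits each parent's trait.
    population = [random.choice(survivors) for _ in range(100)]

print(sum(population), "out of 100 now carry the mutation")
```

Run it a few times with different seeds and the mutation climbs towards fixation almost every time; nothing more than differential survival and inheritance is doing the work.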

Of course, the latest creationist explanatory model is “intelligent design”. The idea is that some things are too complex to have been evolved, and must have been designed by a designer. “What if you found a watch in a desert?” the usual objection goes. “It’s too complex to have appeared in the desert; it must have been made.” Well, unfortunately, _all_ examples of apparently intelligent design can be explained away by science. I won’t waste space going into it. Let’s ask, instead, about obvious cases of _un_intelligent design. Humans, for example, have an oesophagus and windpipe that share a common canal, permitting us to choke quite easily, unlike other animals. Intelligent design? I don’t think so. What about the above-mentioned atavisms - buried limb remnants? Part of the design, or evolutionary leftovers? What about the appendix? Or a urethra that goes through the prostate rather than around? None of these things seem to be particularly intelligent; they’re more like accidental features which have not been bad enough to make the species extinct. If they’re products of design, the Designer is pretty lousy at His job.

This brings us to the second reason that the 'creation versus evolution' debate continues to rage. Believers feel that evolution threatens religion. Many apologists for science argue that evolution does not threaten religion. These apologists are partly correct; it is possible to argue that evolution was set up by God and that it operates according to laws He created. There is a position known as 'deism', related to what is popularly called 'guided creation' or 'guided evolution'. The idea of deism is that God started the creation, set up the laws of physics and evolution, and then sat back and let life take its course. There are some problems with deism, however, not the least of which is that it has no scriptural support. The God of scripture is an intervener. He does not sit back.

But there is at least one substantial scriptural problem that the Theory of Evolution does pose - for Christianity, in particular. If evolution is true - which _all_ the empirical evidence points to - then the book of Genesis - particularly the Garden of Eden story - must be false. If there never was an Adam and Eve, then there never was a Fall of Man. If there never was a Fall of Man, we have never been born with Original Sin. If we are not born with Original Sin, then we are born innocent. In which case, to paraphrase Stephen Hawking, What room, then, for a Redeemer?

Thursday, 18 August 2011

Popular misconceptions about Science and Scientists

The popular press, and people in general, seem to misunderstand what scientists are, what they do, and what their goals are. This article clarifies these matters.

1. Scientists do not all work in laboratories, do not all wear lab coats, and they do not all perform wacky experiments with chemicals. The only scientists who work in laboratories are experimental physicists, experimental chemists, and experimental biologists. Sometimes they are assisted by lab technicians. Lab technicians are not full scientists; that is, they are usually low-level employees with training in cleaning bottles and other equipment. Mathematicians and theoretical physicists do not work in labs.

2. Sometimes, lab technicians are higher-level students paying off a bursary or study loan. In that case, they may be postgraduate students. A postgraduate student is one who has already got her undergraduate or bachelor’s degree in her area of study, and is now specialising. Her supervisor, or the person to whom she reports, will usually be a scientist or professor. A professor is a university employee who has published a lot of research papers and has been awarded the title ‘professor’. This is because he ‘professes’ to know his area well. Not all professors are scientists. You can get a professor of Engineering, for example; some Engineers do not consider themselves scientists; ‘scientists’ are more theoreticians, whereas Engineers are more about implementation. Most professors have PhD or doctorate degrees. Your general practitioner (GP) or medical doctor usually just has a Master’s degree - one level lower. There are different degrees in different countries, but the full list, in the order of amount of study involved, is: Diploma (in some countries), Bachelor’s Degree, Honours Degree (in some countries), Master’s Degree, Doctorate, Post-doctorate. A professor can have anything from a Master’s to a Post-doc; ‘professor’ is an honorary title given on the basis of research work. Not all lecturers at university are professors; often each department or subject area only has one full professor and several adjunct or associate professors.

3. A scientist is usually a university employee, and usually a lecturer. But some scientists work for parastatal entities or private research labs, pharmaceutical companies, weapons companies, or government-funded research labs. An example of this is CERN - the European Organisation for Nuclear Research, in Switzerland. They have a large underground particle accelerator spanning many kilometres, and study the particles that make up matter. They're scientists who work for an entity that is not a university.

4. The term 'scientist' usually applies to someone studying a science, rather than, say, economics, education, humanities, architecture, and other disciplines. Mathematics, chemistry, biology and physics are considered 'hard' sciences. 'Soft' sciences might include the social sciences, such as population studies, politics, etc. One might consider these to be sciences because they make use of statistics - a form of mathematics - in their research. Statistics is not a form of guesswork. It is a strict form of mathematics with strict formulae and rules.

Importantly: not all scientists work in medicine. You often see the remark online: "Why are scientists researching this useless thing when they could be finding a cure for cancer?" Well, that just means that the commenter doesn't understand that science has different research areas. It's like saying, "Why do biologists study living organisms when they could be designing rockets to get to Mars?". Scientists have specialist areas of study. They only study the area that interests them. Few of them are interested in medicine. There is a wide range of research areas. Here's a short list: Mathematics (applied and theoretical), Computer Science, Statistics, Physics (particle, Newtonian, astro, geo, relativistic, etc.), Chemistry (applied, biological, industrial, physical, inorganic, organic, etc.), Biology (botany, zoology, evolutionary, palaeo, human, micro), Medicine, Psychology (neuro, clinical, industrial, etc. - there is some debate about how much of psychology is a science), Sociology (debatable), Geography, Geology, Palaeontology, Archaeology, Anthropology, and so on. Only one of these deals with medicine.

5. Mathematics and the hard sciences differ in that mathematics is not experimental, and assumes the truth of its basic tenets, such as 1 + 1 = 2. The hard sciences - physics, biology and chemistry - experiment with observable evidence, and make up 'theories' to explain the evidence. So they use experiments and evidence, whereas mathematics does not. Mathematics has a number of branches, such as computational and applied mathematics, which overlap with Engineering, Physics, Computer Science and Astrophysics, and it also has the branch of Statistics. Some subject areas, therefore, are a bit fuzzy in terms of which field they fall under. So, for example, you can study computer circuit design in Computer Science as well as Electrical Engineering.

6. The process used in the hard sciences to 'discover' something or come up with a theory or law more or less goes like this. A scientist or layperson will make an observation. In other words, he or she will see something happen, or notice something that just exists, and want an explanation. The scientist will then use her existing theoretical knowledge to come up with a 'theory'. A theory is a rigorous, strict, mathematical model of what could explain the observation. The theory includes a predictive phase, in which it claims that if it is true, certain things that have not yet been observed will be observed under certain conditions. An experiment will be performed to test the theory. If the experiment succeeds, the scientist will repeat the experiment, and then write a paper on it, which will get reviewed. So, for example, we may want an explanation of what water is. A scientist will theorise that water is a combination of hydrogen and oxygen, in a 2:1 ratio, formed by heat. She will perform an experiment to test this theory. She will have a 'null hypothesis', which assumes that the theory is false, and she will have a 'control study', which tests something partly unrelated to see if it produces any water too. So, the null hypothesis will say that water is _not_ formed by combining hydrogen and oxygen. And the control will be something else, say, just keeping the hydrogen in a gas cylinder without heating it in the atmosphere, and seeing if water appears inside it anyway.
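To make the null hypothesis and control concrete, here is a toy Python sketch with invented measurements (grams of water collected in repeated runs). It is only meant to show the shape of the reasoning, not a real experiment:

```python
from scipy import stats

# Invented data: water collected (in grams) in five runs of each condition.
experiment = [1.8, 2.1, 1.9, 2.2, 2.0]   # hydrogen and oxygen ignited together
control    = [0.0, 0.1, 0.0, 0.0, 0.1]   # hydrogen left unheated in its cylinder

# The null hypothesis says the two conditions produce the same amount of water.
result = stats.ttest_ind(experiment, control)
print(result.pvalue)  # a tiny p-value here means the null hypothesis is rejected
```

If the experimental runs produce significantly more water than the control, the null hypothesis ('no water is formed by the reaction') is rejected and the theory survives the test.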

So, next time you hear about a scientific 'theory', please understand that it does not mean the same thing as the term 'theory' that we use in the phrase 'in theory, Man United ought to win'. It actually means something much, much stronger. In science, a 'theory' is a body of mathematically corroborated facts and/or predictions. It is called a 'theory' purely out of modesty. This is because any theory is still open to testing and verification, or further proof or disproof. But theories can _only_ be disproven by the proper scientific method, described above. A scientist's personal beliefs are irrelevant, and a layperson's opinions are irrelevant. You cannot merely 'disbelieve' in the theory of gravity; you're sticking to the ground quite tenaciously, and that is just a fact. The same applies to other theories; merely disbelieving them does not make the reality of their testability go away. In science, all theories which pass experimental tests are considered facts. Atomic theory, the Theory of Relativity, and the Theory of Evolution - these are all well-established scientific facts.

7. Once a theory has been verified by experiment, it is usually written up in a research paper called a 'journal article'. Scientists have their own magazines - called _journals_ - which describe the latest research that they have done. There are thousands of journals. They each specialise in a particular research area. Not all journals are in the hard sciences. For example, you can get journals of history, politics, and philosophy. A paper usually starts with a section called the 'abstract', which summarises the paper. When a scientist or other researcher submits a 'paper' to a journal, it undergoes a process known as _anonymous peer review_. That is, the people running the journal give the paper, with the author's name removed, to the peers or academic equals of the scientist submitting the paper. The author's name is removed to prevent the reviewers from being biased against the submitter, or in her favour. The reviewers or peers will then try to replicate or repeat her experiment, and/or they will check her mathematics and reasoning. They will also check that she has done her research properly, that is, read up on existing recent research on the topic in their journal and other journals. If she has performed her experiment properly, if her mathematics is correct, if she has 'cited' or referred to existing proper research, and if her conclusions follow from the premises of her argument, her paper will be accepted into the journal for publication, with her name now visible to all. Subscribers to the journal, that is, other scientists who buy the journal, will then have a chance to write responses, usually criticisms. The author will then be able to write responses or work on the queries that come along. Strictly speaking, anyone who writes a response to a journal article is welcome to submit it to the journal to see if their response is published. So, for example, if you truly believe that evolution is false - go ahead. Refute it in an academic journal. You're perfectly welcome to do so. There are no entrance requirements, other than that the paper be well-researched.

This is how science increases our knowledge. 

8. The more a paper is ‘cited’ or used as base research material in new research papers, the more the work of that paper is respected and ‘rated’. A highly-rated scientist is one who has produced a lot of cited research material. Such a scientist may get earmarked by her university for promotion to professor, and her theory may become accepted broadly as scientific fact.

9. Instrumentalism versus realism. Some scientists consider their theories to be physical facts. They also consider the entities that they describe in their theories - such as atoms, molecules, energy, waves, quarks, etc. - to be real things. These scientists are called ‘realists’. Other scientists, however, are only interested in whether a mathematical model or theory can generate good predictions, and, they are not concerned with whether their theoretical entities exist (i.e., they don’t care if atoms actually exist, as long as the theory works to make predictions). This type of scientist is called an ‘instrumentalist’, because they consider theories to be merely instrumental or useful, rather than fact. My experience of 23 years at a university tells me that most physicists are instrumentalists, but most chemists, biologists and astrophysicists are realists.

10. Funding. Not all scientists are funded by pharmaceutical companies looking for cures. Some are funded by government. They write a research proposal and a funding proposal, and a funding body, like the NRF in South Africa, or the government (DHET), or whatever, funds the research. Often the university itself funds it. So, for example, the university research office may slice up the funding pie between different departments; physics and medicine, say. This means that some research funding goes to physics, rather than just medicine; funding is split up from a budget for various projects, and not all of it goes to cures for cancer. But if there were no funding for anything but medicine, we'd not know anything about the rest of the world - e.g. how to make computers, how to make aircraft, etc. It is not the business of a layperson who doesn't understand science to say what aspect of science should or shouldn't be funded, or in what proportion. Many spinoffs come from funding apparently useless studies. So, for example, the moon landings resulted, indirectly, in Nike sneakers. Bulletproof vests were a result of someone tinkering with chemistry. Microwave ovens were an accidental side effect of a different experiment that wasn't looking for ways to heat things. So unexpected benefits can come from apparently useless research. Landing on a comet, more recently, has relevance for our knowledge, e.g. of the early solar system, and of whether astronomical bodies are worth mining commercially.

The only funding in science which is questionable is medical and weapons funding, because sometimes it has corporate interests behind it. There's no commercial funding for evolution, gravity, relativity, and other theories that have no obvious commercial motives. The way to check medical research for legitimacy is to look for it in journals and see if it has been replicated by people who are not funded by the pharma company. If it has been replicated by disinterested parties, then it's not a case of corrupt research which is trying to sell people "unnatural", "non-holistic" cures for diseases. Moreover, empirical evidence of the efficacy of "allopathic" medicine makes it perfectly clear that it's not merely corporate quackery. Most studies of homeopathic and related medical models show that they're no better than placebo. But this is a debate for another article. Homeopathy is also big business - also a multi-billion-dollar industry. So in terms of research being dubious because it makes money - no; 'all natural' medicine is a huge industry too. Money is also irrelevant to empirical results. If something works, and it's in a journal, and it's been replicated, that's all you need. Whether a wealthy funder funded it becomes irrelevant at that point, just as the greatness of Michelangelo's work is not disproven by the fact that he was funded by a very wealthy church.

Funding is awarded by applying to a funding body. You have to write a long preliminary proposal showing why your research should be funded, and the reason cannot be its commercial viability; it usually has to be the benefit to humanity.

This is how science actually works. 

Sunday, 14 August 2011

Scientists find cure for every virus and maybe ageing

According to a New Scientist article, a potential cure has been found for all the viruses they tested. They also now know what causes ageing.

According to [an article in New Scientist](http://www.newscientist.com/article/dn20788-experimental-drug-could-defeat-any-virus.html), Todd Rider and his colleagues at MIT have found a way to wipe out any virus - or at least - any virus they tested - including the common cold, flu and AIDS. This is ground-breaking news. Just as antibiotics are a silver bullet for bacterial infections, it seems like these scientists have found a silver bullet for viruses.

For the reader who is not sure of the different types of disease, allow me to briefly elaborate. There are four broad categories that tend to get lumped together: cancerous, viral, bacterial, and fungal.

Cancer is not, strictly speaking, an infection. Rather, in cancer, a cell copies itself incorrectly when dividing in the process we call "growing". The DNA is not correctly replicated, and a faulty copy is produced. This faulty copy, lacking the ability to shut itself down or stop dividing, then produces further faulty copies. The result is a tumour - a ball of rogue cells.

Fungal infections are, to put it as simply as possible, typically skin or membrane infections. Fungi are plant-like organisms; the type we are most familiar with are mushrooms, but ringworm and athlete's foot are also fungal. They spread by means of spores and microscopic thread-like filaments known as hyphae. They cause their harm by feeding on the surface of your skin or membranes. Most fungal infections can be cleared up by keeping the area dry and applying topical creams. As such, they are probably the least deadly of the various types of infections one can get, even if they can be very persistent. Not all fungi are harmful, however. Some fungi, for example, excrete penicillin as a waste byproduct, which is lethal to bacteria. Penicillin, as you probably know, was the first antibiotic.

Bacteria, on the other hand, are free-floating single-celled organisms - miniature animals, for want of a simpler way of explaining it. Just as your body is made of billions of cells working in harmony, bacteria are individual cells that work in isolation. Bacterial infections typically cause their harm by consuming nutrients your body needs and excreting waste products that are poisonous to you. Fortunately, bacteria are relatively large and generally cannot invade your body's cells. The body's immune system can typically deal with them by attaching 'antibodies' to the bacteria, marking them for destruction. Antibodies are proteins that the immune system manufactures in response to the initial invasion, and keeps in the bloodstream thereafter, ready to respond to a repeat appearance of the same bacterium. However, if a bacterium mutates - that is to say, _evolves_ - then the body will no longer have immunity to it, and this is where antibiotics come in handy.

Lastly, we get to viruses. Viruses are much trickier to deal with. Firstly, they're generally small enough to penetrate any cell in your body. Moreover, they're not even really alive - this is something of a debate in biology. In the biological sciences, a being is considered alive if it moves, takes in nutrients, excretes, and reproduces. Viruses do not take in nutrients or excrete, and they reproduce parasitically. In fact, viruses are more or less just genetic material (DNA or RNA) in a box. They lack the 'organelles', or energy-processing parts, that a bacterium has. So what viruses do is invade a cell and hijack its machinery to replicate the virus's genetic material instead. When the cell has made too many copies of the virus, it bursts open, releasing the new viruses, which go on to the next cell, and the process is repeated.

Up until now, doctors have dealt with viruses mainly by immunising us with vaccines. (We know there were recent fusses in the popular press about vaccines causing autism and ADHD, but that has since been debunked.) A vaccine is simply a serum containing dead or inactivated viruses. When the body encounters these viral proteins, it produces antibodies to attack them. Hence, at a later stage, when the live virus appears, the body can immediately attack and defeat it, because it has been immunised. This is the best we have been able to do thus far, and it has been successful: smallpox, for example, was driven to extinction by immunisation - originally using the related cowpox virus, which is where the name 'vaccine' comes from (vacca means 'cow'). But certain viruses that mutate rapidly have been impossible to stop with vaccines - the common cold, and HIV (the virus behind AIDS), for example.

But this is where we get to the amazing new trick that Todd Rider and his colleagues have discovered. When viruses replicate, they force the cell to generate long strands of double-stranded RNA - something healthy human cells do not normally produce. Rider's solution is to inject a compound which they call DRACO - double-stranded RNA-activated caspase oligomeriser. What the compound does is force the infected host cell to commit suicide before it finishes replicating the viruses, or while it is in the process. The result is that the host cell dies and releases only fragments of the virus. This is especially clever, because the body's immune system can then develop antibodies to those fragments as well. This means, ultimately, that DRACO can kill any virus it was tested against - by forcing the infected host cell to commit suicide.

The trouble, of course, is if you are heavily infected - if a large proportion of an organ's cells carry the virus, then you will lose the function of that organ when those cells die. The idea, then, would be to apply DRACO before the infection gets out of control.

Note, however, that this is very different to chemotherapy and radiotherapy. In chemotherapy, cell-killing drugs are introduced into the body; in radiotherapy, it is radiation. Both kill cells fairly indiscriminately, hitting rapidly dividing cells hardest. The idea is that if you kill enough cells, then since the cancer cells are a minority, they'll die off, leaving enough healthy cells for you to continue to survive. DRACO, on the other hand, is a precision-targeting system; it only attacks cells which have viruses in them, and causes them to commit suicide.

Now why, you may be wondering, would cells be able to commit suicide? The answer is to prevent DNA mutation, or even worse, cancer. When a cell reproduces itself or splits by means of mitosis, it makes a copy of itself using the DNA it contains in its nucleus. However, each time it copies itself, some of the DNA is damaged or frayed at the ends. Hence, at the end of each strand of DNA, there is a stretch of expendable 'junk' DNA that is safe to fray away, called a telomere, which means "end part". Once the telomeres wear out, the cell can no longer safely replicate without risk of becoming cancerous, so it commits suicide. All cells have the ability to commit suicide when they detect that their DNA is faulty. This is what happens, for example, when you get grey hair: the pigmentation cells which produce the hair colour have died off. They committed suicide, because they found that their telomeres - the ends or tips of their DNA - had finally worn out after repeated replication. Hence, you no longer produce pigment, and thus, your hair goes grey or white.
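To make the telomere idea concrete, here is a deliberately crude toy model in Python. This is purely my own illustration, not anything from the article, and the numbers are arbitrary: each division 'frays' a fixed amount of telomere, and the cell stops dividing once the buffer is exhausted.

```python
# Toy model of telomere shortening (illustrative only; numbers are arbitrary).

def divisions_until_senescence(telomere_length: int, loss_per_division: int) -> int:
    """Count how many times a cell can divide before its telomere buffer is used up."""
    divisions = 0
    while telomere_length >= loss_per_division:
        telomere_length -= loss_per_division  # a chunk frays off with each copy
        divisions += 1
    return divisions

# A longer telomere buffer allows more safe divisions - the intuition behind
# linking telomere length (and its preservation) to cellular lifespan.
print(divisions_until_senescence(telomere_length=10_000, loss_per_division=100))  # 100
print(divisions_until_senescence(telomere_length=5_000, loss_per_division=100))   # 50
```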

[Interestingly, if the telomeres could be preserved, we could potentially stave off ageing, since it is this kind of cell death that drives ageing.](http://www.newscientist.com/article/dn19780-dna-trick-throws-ageing-into-reverse.html) This may be part of why different species of animals have roughly characteristic lifespans: their telomeres are of a certain length.

We may just be heading into a future without viruses and without old age. Isn’t science incredible?

Saturday, 13 August 2011

Mac vs PC

Having been a long-time computer user - since 1983, in fact - I believe I have a fairly authoritative opinion on this traditional debate. Let me give a brief synopsis of my personal experience; this will perhaps explain why I still use a Mac.

I received a Commodore VIC-20 in 1983, and learnt to program in BASIC (a programming language). In 1986, I got a Commodore 64 - still the best-selling single model of computer ever made. Around that time, I encountered a Macintosh 512K that my mom had at work. I was astounded by how different it was to the Commodore; you didn't have to program it. It had an operating system. It booted up and was useful without any programming required. It also had a mouse, which was weird - I was used to joysticks and cursor keys. And then there was the black and white screen. My mom argued that colour was irrelevant since there were no colour printers in the business sector. True enough. The screen quality was, however, much higher than the Commodore's. I enjoyed the Mac. I found it very useful for creating documents, something I still do at a frenetic pace.

Around 1988 I joined my school's computer club and found that they had IBM 286es. I found them disappointing. Their screen resolution was lower than the Mac's, and even though they booted up an operating system - MS-DOS - it was virtually useless. You had to explicitly load BASIC and write your own programs, just like on the Commodore. Except, unlike the Commodore, the PC lacked colour and sound, whereas the Commodore had 16 colours and 3-channel fully synthesised sound. I decided then and there that the PC was vastly inferior, and stopped using it. The next year, a friend of mine showed me Windows 3. I was underwhelmed. It had no direct equivalent of the Mac Finder, which gave you a literal representation of the files on your disk; it was a glorified application launcher. And it was ugly. I shrugged, and thought that it would never be a competitor. Unfortunately, I was wrong. The corporate world bought it wholeheartedly, and Windows dominated the corporate sector by the early 90s.

But Windows was still rubbish. It relied on "token ring" networks and "Novell Netware", which were really crummy compared to Apple's server solutions using LocalTalk networks, which had existed since the mid-1980s. It was still low resolution, and still ugly. Colour matching to printout was appalling: black would come out dark green, beige would come out russet. The mouse was skippy. It still relied heavily on floppy disks. Filenames were limited to 8 characters. It still had to boot up DOS first. It couldn't multitask, whereas at least on the Mac you could run multiple programs, even if they didn't really cooperate (this is a pun - Apple claimed that it had "cooperative multitasking"). By this time, my brother had acquired a Commodore Amiga. This machine had 4096 colours and multichannel synthesised sound - much better than the Mac with its 256 colours and single-channel sound. The PC was out of its league: 16 colours and still no sound unless you bought the first "Sound Blaster". The Macs were starting to come out with CD-ROMs by default, and all had hard disks as standard. Not so with PCs and Amigas. But the Amiga had one thing: true multitasking. It could play music while you worked in a word processor. No other machine I had seen could do this.

In the mid-1990s I saw my first SGI machine - a UNIX machine with true multitasking, 65536 colours, proper sound, no floppy drive at all, a massive hard drive, memory protection, and multiple users. I was shocked. I was even more shocked when I discovered that UNIX dated back to 1969. I also discovered the Internet, and found out that it ran on UNIX. I was sold. I started to lose faith in the Mac. I was on a programmer's mailing list at Apple. I said: I want a UNIX that looks like the Mac and works like the Mac. I was kicked off the list for starting a flame war (an abusive series of exchanges). The Mac, I argued, crashed (it had no memory protection). Its multitasking was crummy. Colour wasn't great. Access to the Internet was possible but difficult. UNIX solved all of this. Windows, of course, was still rubbish. Access to the internet required expert training. Graphics were just getting to 256 colours. Sound wasn't bad on a Sound Blaster. But still no multitasking, memory protection, or multiple users. Then came Windows 95. I was annoyed. It was a blatant copy of the Mac, just inferior and more complex to administer. I ignored it, and it won the market.

Then in 1996-7 a miracle happened. Apple abandoned its attempt at a modern operating system - Copland - which was incompatible with applications from the older system. Instead, Apple bought NeXT, Steve Jobs' other company (he also owned Pixar), and with it came the NextStep/OpenStep operating system - and Steve Jobs himself. The company woke up and started to shine. Jobs put Jonathan Ive in charge of design. The iMac was released. Then the iPod. Apple was in the news. Then came Mac OS X - the remodelled NextStep that just looked like the Mac. I went back to that programmer's mailing list and said "I told you so". They had to eat humble pie. I got what I wanted: a UNIX that worked like the Mac. Proper memory protection, proper multitasking, proper server capabilities, a powerful command line that could do batch jobs easily, proper multiple users, proper internet capabilities, but best of all, the Mac user interface. I was ecstatic. I still am.

A few years later, Windows XP was released. I was annoyed again; the "XP" struck me as an obvious ripoff reference to Mac OS "X" (ten). But I had to give credit where credit was due. XP was based on Windows NT - a system with a journalling filesystem (crash-resistant, in plain English), and proper multiple users and multitasking. Not perfect, but good. I also noticed that it had session suspension - you could suspend your login, let someone else use the machine, and then go back and carry on later. The Mac didn't have this in early versions of Mac OS X. I felt envy for the first time. That was good, and the journalling filesystem was good. Shortly thereafter, Apple caught up and added those features. They were now officially ahead again, because they had the same or better features, but with greater user-friendliness.

Now we come to Vista and Windows 7. Everyone hated Vista, for reasons I cannot understand - all Windows seems horrid to me. Windows 7, however, seems quite usable. It doesn't quite have all the things I'd want, as a person accustomed to UNIX, but then, I'm a "power user" - someone who pushes computers to the limits of what they can do. Most people wouldn't notice something like that. At the moment, then, the war between Windows and Mac is characterised in terms of feature comparisons. I'm still fairly confident that the Mac has the lead, but the average Joe Soap user won't be able to tell.

There are some good arguments in favour of the Mac, still to this day.
1. It is less targeted by hackers and virus writers. In fact, because of its strict user-access-permissions policies - e.g. that you have to enter an administrator password to install software - it will remain hard to sabotage even if it becomes mainstream. And if you stick to Apple's App Store, you can be pretty sure you're not installing a "Trojan" (a malicious program disguised as a legitimate one).
2. The Mac is feature-equivalent or superior to a Windows machine.
3. The Mac can run a variety of virtualisation packages, such as Parallels Desktop and Oracle's VirtualBox, which let you run Windows if you need to.
4. Mac hardware is of a higher quality and design specification, but can be upgraded with humble standard PC components.
5. The operating system comes free with the machine.

If you take the cost of a hardware-equivalent PC, and add the cost of antivirus software and Windows, you will find that a Mac and a PC pretty much cost the same - except the Mac is much, much sexier. So I must still advocate the Mac.

The advantages to a PC are:
1. You can choose your own hardware components (good luck getting Windows to recognise them).
2. You have more native software, especially games. (But the Mac can run these under virtualisation anyway, though admittedly games run too slowly that way.)

From a usability and features point of view for the average user, the difference is negligible; you can decide on these points above. But I want to make a different argument.

I think that actually the war will not be decided by features, but by _lack of features_. Average Joe is scared of computers, and doesn't want to have to understand them. He knows his file is "In the Microsoft", he doesn't care about what drive it's on, which subfolder, in which directory, etc. He doesn't care. That's the point. And unfortunately for Microsoft, Apple understand this point very well. So with Mac OS X 10.7 Lion, they have begun a transition away from a traditional Windows/Icons/Mouse/Pulldown-menu (WIMP) environment, to the iOS environment seen on their iPads and iPhones. In other words, I think that really, desktop computers are dead. Tablets will win. Not just because they're smaller and more convenient to carry around. They will destroy laptops as well: Because they're human-usable.

In the 1990s, Apple had a set of guidelines for programmers called the Human Interface Guidelines. Your software was expected to be user-friendly and adhere to scientifically researched guidelines for non-confusing computer behaviour. The iPad and iPhone are very, very good at being straightforward and simple. But I bet you didn't know this: under the hood, an iPhone is a full UNIX machine. So is an iPad. They both run iOS, which is actually a stripped-down Mac OS X. Now, since Apple are evidently on the path to merging Mac OS X and iOS, to my mind, the decision is actually this: do you like the iPad or iPhone? If so, then get a Mac, because it works almost the same, and, in the future, _will_ work the same. Whilst a Windows PC will lag behind on the usability scales, and look more and more like a relic of the 1990s' complex graphical environments, the Mac will become what computers ought to be - a useful household appliance that manages all your data and entertainment.

Friday, 12 August 2011

Does philosophy still have a place in modern society?

I suppose the popular view of philosophy is that it is just "a load of out-of-date beard-stroking with no practical relevance," or that, in a best-case scenario, it's just got something to say about [the meaning of life](http://www.imdb.com/title/tt0085959/). But it's much more than that.

Let me start by giving a brief history of Philosophy for those who think that it's just a "way of thinking" - as in "our company's philosophy is to give good service".

The term "philosophy" comes from Greek, meaning love of wisdom. The ancient Greeks, from around 500 BC until their schools were closed by the Christians, dedicated their time to theorising. They dealt with all manner of topics - mathematics, physics, chemistry, biology, and general issues like politics, ethics, and truth. The style of their philosophy was formal and methodical. Once Western civilisation reached the "Enlightenment" period, the various topics that philosophy used to cover branched off as separate sciences. Indeed, even now, the physics professorship at some universities is still titled the "Chair of Natural Philosophy", because physics used to be called "natural philosophy". So historically speaking, philosophy is the parent of Western knowledge.

But is it relevant today? I argue that it is. Take Karl Marx, for example. An economist, you might argue, or a politician. But in reality, his work, Das Kapital, is primarily a work of speculative philosophy. It is only nowadays that we call his work 'sociology'; the term is a comparatively modern invention. Now consider Friedrich Nietzsche. Officially a professor of philology (the study of language and classical texts), he was a philosopher. But if we combine these two men, we see the history of the 20th century play out before us: Nietzsche was admired, for the wrong reasons, by Hitler; Marx by Stalin and Lenin. So the course of the 20th century, from World War II all the way to the fall of the Berlin Wall in 1989, was pretty much determined by the writings of two philosophers in the late 19th century.

If you don't believe this point, ask yourself: what was Jesus, if not an itinerant philosopher? His views on the world have determined the history of over half the world's nations for 2000 years. Of course philosophy is influential, and of course it is relevant. We cannot begin to understand or explain any social movement without first referring to the philosophers who penned its foundational beliefs. Consider Jean-Jacques Rousseau and the French Revolution. Consider Machiavelli and the Borgias of Renaissance Italy. Consider the Existentialists and Humanists, and modern socialism's concern for individuals' well-being. Consider Bentham and Mill, and the existence of modern democracy. The world would probably still be run by aristocrats if it weren't for them. Indeed, as the British Government mulls banning hoodies and Facebook because of the riots, it would do well to read Thomas Hobbes, who was quite in favour of such approaches.

But what about today, in the modern world, now that their job is putatively done? Well, let's think about it. What does modern philosophy offer? Before I begin, let me point out that there are at least four types of philosophy practised in the modern world, of which I am an expert in only one. Specifically, they are: Eastern, Continental, Poststructuralist, and Analytic. There was a Humanist or Existentialist branch as well, but it is now largely considered Continental. Eastern philosophy, broadly speaking, covers the Eastern religions and their take on the world. Continental philosophy is work done by European philosophers who are neither Poststructuralist nor Analytic. Poststructuralists, sometimes called Postmodernists, are a recent variety of philosopher, owing most of their pedigree to French writers like Lacan and Foucault. Their primary concern is not with truth but with power. As such, their critical work is of great importance in sociology, psychology, and politics. I am not an expert in any of these; my area is Analytic philosophy, which is largely an Anglo-German affair.

Analytic philosophy concerns itself with correct argument structure, truth-seeking, valid and sound arguments, formal logic, and similar things. This may sound rather dry, until I explain further. This is just its method. The Analytic method also considers itself mathematically rigorous or scientific. That is, it aims to use only strictly verifiable premises or basic ideas on which to build its arguments; it dips into scientific evidence, and uses strict Boolean or computer-style logic to get its answers. Naturally, this is an idealised characterisation of it, but its style is unmistakeable. If you read a piece that concerns itself primarily with the meaning of a phrase or word, and it goes into many thought experiments, examples of use, counter-examples, and so on, you're looking at an Analytic work.

Why, now, would Analytic philosophy be relevant? Well, because of the particular arguments that it deals with by means of its specific method. Analytic philosophy is divided into four official areas: Epistemology, Metaphysics, Ethics, and Aesthetics.

Epistemology covers what we know, and tries to define good argument structure and what counts as truth. As such, it gave us formal logic - including the Boolean logic that is the foundation of all computers. Furthermore, in its search for truth, Epistemology does not take refuge in blunt statements like "God just does exist" or "I just have faith". No, it insists on logic and evidence. It is the basis, ultimately, of the scientific method. Most epistemologists in the analytic tradition believe that there is an independent truth which humans can access. Some, however, known as relativists, do not believe this; they hold, rather, that truth is socially constructed. (Epistemologists generally consider Poststructuralists to be a species of relativist.) This is a bit of a religious war, so I will leave it there. The point is: truth matters. Because truth informs our beliefs, and our beliefs determine our actions. If you believe you can get away with a crime, you will likely commit it, for example. But do you know you will get away with it, or do you just believe it? And so on.
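To give a flavour of what 'valid argument structure' means in Boolean terms, here is a tiny, self-contained sketch in Python - my own illustration, not drawn from any particular logic text. An argument form is valid if, under every assignment of truth values on which all the premises are true, the conclusion is also true; modus ponens passes this test, while the classic fallacy of affirming the consequent fails it.

```python
# A brute-force validity checker over two propositional variables, p and q.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'a implies b' is false only when a is true and b is false."""
    return (not a) or b

def is_valid(premises, conclusion) -> bool:
    """Valid means: no assignment makes all premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(premise(p, q) for premise in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: from 'p implies q' and 'p', infer 'q' -> valid
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))  # True

# Affirming the consequent: from 'p implies q' and 'q', infer 'p' -> invalid
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))  # False
```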

But now we are entering the field of Ethics. This subdivision deals with what is good to do or not to do. There are many positions within ethics; let me enumerate just a few. Relativists claim that the good is socially constructed. So, for example, they would argue that the burqa ought not to be banned, because it is good for Muslims - though by the same logic they might be forced to defend Hitler. Absolutists, on the other hand, insist that doctrines like the Ten Commandments are the real good. Then you get pragmatists, such as William James, who argue that whatever it makes sense to do is good. Then there are consequentialists, such as J. S. Mill, who argue that we ought to do whatever produces the best consequences for the most people. This view, incidentally, is what really gave impetus to modern democracy. So it is not enough to say "The Bible says so". The Bible says you must stone witches to death, as well as your son, if he is disobedient. But we no longer live by those Biblical morals. Ethics provides us with the possibility of a better, more modern path to truly moral behaviour. It is of paramount importance in guiding us. Believe it or not, many modern ethical notions, such as "the right to privacy", originate in old debates among philosophers. What about software piracy - is that ethical? Music piracy? So what if politicians are corrupt? What about abortion? HIV/AIDS confidentiality? Ethics addresses these issues.

Now what about Metaphysics? This deals with what exists. So it deals with issues like what mathematics really is, what fundamental particles could really exist, and, importantly, whether God exists. In the current climate of religious violence, metaphysics has an incredibly important job in considering such a matter. Not that the vigorously devout will heed any answer from a philosopher - indeed, Colossians 2:8 says: "Beware lest any man spoil you through philosophy". But philosophy has much to say about God, and says it much more clearly than any religious text. Then there are the scientists at the LHC, who are trying to find all these various particles. They would do well to chat to a philosopher. They may find that they're wasting their time in a massive quest to understand something that can be answered more simply. Or they may not. They may find that the philosopher can clarify their theoretical constructs for them. What is space-time, exactly? What is a superstring, exactly? Does it exist? And if it exists, can we use it? Could there really be multiple universes? Philosophy addresses these questions.

Lastly, Aesthetics. This is of enormous sociological significance. Is pornography, for example, beautiful? Is heavy metal music or gangster rap beautiful? To whom? Why? Can any such things be justified? Is beauty absolute for all people, or is it relative to individuals? Is it relative to societies? Is a woman in a Burqa beautiful, or a fanatic? Is a woman in a bikini beautiful or a prostitute? Are Jackson Pollock's paintings - consisting of splatters of paint and nothing more - beautiful? Should you really pay millions for one? What about Picasso? Or Dali? Is Le Corbusier's crude concrete architectural style beautiful or an eyesore? Should city planners allow it? What about piercings or tattoos? Should they be permitted or are they ugly? Are anorexic fashion models beautiful or hideous? Should the body be exposed, concealed, reviled or worshipped? Is popular music rubbish, is classical music the only true music? Or is it dusty and irrelevant?

Philosophy has much to offer. We ignore it at our peril. It shapes our societies without us realising it. Studying it is like opening your eyes after being blind all your life.

Thursday, 11 August 2011

The ET theory of the origin of life irritates me.

I find this ET theory of life - the idea (sometimes called panspermia) that DNA or microbes arrived on an asteroid and then evolved here upon landing, a bit like the movie "Evolution" - completely pointless. Even if it's true, the theory does not answer any questions.

The fact of the matter is that at some stage you have to stop and say that life formed SOMEWHERE, and you have to say how. Explaining life on this planet as coming from another one just pushes the mystery back a step. It's as thick as saying "god did it", because that doesn't explain where 'god' came from. So with life: saying 'it came from another world' doesn't explain how or why it formed on that other world.

So just get it right! Explain how it could come about on this world, and the job is done!

Thursday, 4 August 2011

What do the stars hold for you?

Many people religiously consult the astrology section of their favourite newspaper, magazine or website, eagerly anticipating the good news that the stars hold for them. But do these 'predictions' amount to anything serious, or are they just a form of harmless entertainment?

Let's start with the first of the customary accusations levelled against astrology - that it's vague. If you consult your "reading" for today, and substitute the word "you" in the reading, say, for your mom's name, or your best friend's, you will probably find that the "reading" is largely accurate for them, too. Dawkins did a superficial experiment with a small sample of people - he took a reading, told people it was for their star sign, whereas it was in fact for another sign - and then asked them what they thought. Many of the people found it to be fairly accurate, except one person: the person whose sign it was. Surely if the reading were accurate, it would only ring true for the person whose sign it was?

But does astrology even pretend to be a form of prediction? Well, unfortunately, yes. The system was originally based on the observation that regular human events seemed to correlate with observed celestial events, and so when these celestial events recurred, the human events were expected to recur. But astrologers argue that there is more to the 'predictions' they make. A true astrologer argues that these 'predictions' are really indications of possibilities or potentials, likelihoods or probabilities. Astrology is more of a mapping system, which correlates stars and planets to personality types, tendencies, upbringing, the way you will probably behave in a relationship, your potential for earning money, and so on. One can look at it in the same way that one would look at a psychological profile. So persons with specific celestial mappings, which can be quite distinctive, could be said to have certain patterns recur in their lives. Astrology is not, moreover, just a simple matter of the star signs determining the personality. There are other aspects that have to be taken into account, such as the planets. A more accurate, true prediction or characterisation could only be drawn from consulting a birth chart. This alone, then, could give an indication of the kind of life you could expect, or the events that are likely to happen. Astrology is not, therefore, predictive in the sense that it tries to give precise descriptions of forthcoming events. Rather, it gives tendencies of your personality, and thus the kinds of things that are likely to happen to you. The predictions one sees in the newspapers are certainly not meant to be accurate, a true astrologer will argue, because they do not take all the factors into account, such as the relevant planets, your family, and so on. They are very broad, at best; they are more for entertainment purposes.

But how could astrology be accurate at all? What about a case of two people with the same star sign, who have radically different fates and personalities? My stepfathers shared a birthday, but you could not imagine two people with such different fates and personalities. How is that possible in the light of Astrology's claims? Well, the astrologer answers, this case would be one in which the ascendant planets were very relevant, and explained the difference. Suppose we accept that reply. But then what about the case of twins? Twins do not often share the same fate or personality. Yet they should always have identical fates, if astrology were true.

Suppose, now, that astrology admits that it has some predictive tendencies, given that it lavishly tells you in the newspapers what is going to transpire on any particular day. How accurate are these 'predictions'? In the scientific arena, we consider a prediction accurate only if it gives precise details. Scientists discard any theory that does not predict accurately. Remember, when you step on a plane, that you're putting yourself in the hands of [Bernoulli's principle](http://en.wikipedia.org/wiki/Bernoulli%27s_principle#Real-world_application). It predicts very accurately, statistically speaking. How accurate, by comparison, are astrology's predictions? When they say that you are going to "have difficulties with money today," why do they not say "you will lose exactly £10 out of your wallet at this exact address..." They ought to, if astrology were remotely a science.

Now, let's look at the method of astrology. Astrology is not, contrary to what most people assume, just a question of which constellation was in the sky at the time of your birth. It also involves the relative positioning of the planets and the Moon, as well as the stars and Sun, as we've mentioned. The time of birth has a big impact, not just the day of birth, and there are planetary alignments to consider as well. As such, astrology is a sophisticated system. But the important question is this: do astrologers use telescopes? If not, they cannot possibly obtain an accurate reading - because if they did use telescopes, they'd notice that the constellations they expect to see are not actually the dominant ones at that point in time. Since Ptolemy first devised our current system, the stars have shifted by about 23 degrees - nearly an entire star sign! If astrologers bothered to use telescopes, they'd have noticed this. But astrologers use charts, not telescopes. Usually the Earth is central on the chart, and the Sun is a mere planet that orbits the Earth. We last gave the geocentric model of the cosmos credence hundreds of years ago; we now know it to be false.

Third, let's think about the mechanism by which the stars could influence us. Traditionally, no specific energy or causal mechanism is stipulated as the reason for the correlations between star signs and personality or fate; they are merely observed correlations lacking an explanation. So how do the stars and planets influence us? Could it be by means of light? Well, that won't work, because the planets are so dim that the light coming from them will have less influence on us than an LED on your computer screen. That's right - if you're pregnant, and light is the way that the stars influence us, then you're messing with your unborn baby's future by sitting near any artificial light source.

If, however, it's not light, then maybe it's gravity. Well, anyone who's done Physics will recognise the equation F = G·M1·M2 / r^2, which gives the force of gravity between two objects. Let's take an example: the influence of Jupiter. I don't want to prejudice this by using, say, a star, because the stars are much further away, and therefore their influence is smaller still. The mass of Jupiter is about 1.8986 × 10^27 kg - roughly two billion billion billion kilograms. Let's say a newborn's mass is 3 kg (and G is a very small constant, about 6.674 × 10^-11). The distance between the baby and Jupiter is several hundred billion metres, depending on where the two planets are in their orbits. If we do the calculation, it gives us a force on the order of 4 × 10^-7 newtons. For distant stars, it's much worse, since they're thousands to millions of times further away. Now, just so that you understand how weak this force is: the weight of a 1 lb mass here on Earth is about 4.45 N. The pull Jupiter exerts on us here on Earth, therefore, is roughly ten million times less than the weight of a 1 lb mass. There is no chance that that could mess with your fate; the gravitational field of your mother probably has more influence.
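For anyone who wants to check the arithmetic, here is a small back-of-envelope script in Python. It is my own illustration: the constants are rounded and the Earth-Jupiter distance is a mid-range assumption, so expect only order-of-magnitude accuracy.

```python
# Back-of-envelope check of the Jupiter-versus-newborn figure above.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_jupiter = 1.8986e27  # mass of Jupiter, kg
m_baby = 3.0           # assumed mass of a newborn, kg
r = 9.3e11             # assumed mid-range Earth-Jupiter distance, m

# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
f_jupiter = G * m_jupiter * m_baby / r**2

# Weight of a 1 lb (0.4536 kg) mass at the Earth's surface, for comparison
f_one_pound = 0.4536 * 9.81

print(f"Pull of Jupiter on the baby: {f_jupiter:.2e} N")    # roughly 4e-07 N
print(f"Weight of 1 lb on Earth:     {f_one_pound:.2f} N")  # roughly 4.45 N
print(f"Ratio: about 1 to {f_one_pound / f_jupiter:,.0f}")  # roughly 1 to 10 million
```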

But perhaps the stars exert their influence on us by means of a yet-unknown force. Perhaps there's a mysterious force in the universe - let's call it the Force - that influences us. And let's say that the Force is much stronger than gravity, and can therefore reach us from these stars and planets and influence us. Surely, if the Force is strong enough to reach us from planets that are billions of miles away, and strong enough to exert a fatalistic or deterministic influence on our minds and bodies, it could be detected and measured? Surely, by now, we'd have noticed strange things happening, and developed a way to measure it? We can measure very small things indeed - the force of gravity, the fields of particles, and so on. So can we not suppose that something as powerful as this Force would set off alarms in laboratories the world over? And surely the Earth itself would also carry this Force, and completely overrule the other planets purely on the basis of its proximity? You can't debate this one: all forces diminish in strength over distance, as the formula above illustrates. This means that the Earth must be millions of times more dominant or ascendant than anything else in everyone's birth chart.

It has been found that there are seasonal effects on personality (google this), but remember that unlike the stars, seasons vary between hemispheres; so even if astrological predictions about personality worked for the northern hemisphere, they’d be completely opposite for the southern.

Lastly, why should the stars at the _time of birth_ be relevant? Does it not make sense to suppose that the measurement should be from the time of _conception_? Logically, the stars and planets cannot exert a fatalistic influence on an already-existing being. Is this not the key reason why we talk of someone being a Scorpio or an Aries - because they were influenced by those constellations _at the time of their birth_? But that doesn't make sense. Think about it. If a person, who has already existed for nine months, can be influenced by stars just because he or she happens to emerge out of a warm damp container at that time, then, anyone who gets out of a heated swimming pool is at risk of having their destiny seriously messed with by prevailing constellations at the time. If astrology were true, the Force would influence us at conception, not birth. But an advocate of astrology may have an answer here. Perhaps it is dated from the time of birth because this is the point in time in which forces and events start to come into play in your life, because you are no longer in the safety of the womb. This answer would be a good one if it wasn't well-known that babies are influenced in utero by what the mother does, environmental sounds, and so on. Moreover, how would some distant stars and planets just happen to "know" when you emerged into the world and therefore that they must now start influencing you? Surely they're emanating their Force all the time, regardless of whether you've been born or not? Their influence must start at conception, not birth. The sign at your birth is irrelevant.

I must conclude that astrology is nonsense. But why should I spoil people's fun? For a number of reasons. Firstly, there's the self-fulfilling prophecy problem. It is possible that people consulting an astrological reading might subconsciously _act it out_. Someone might read, for example, that they're going to get very bad news that day, and go about the whole day unconsciously doing stupid things because they're so stressed about what the 'bad thing' might turn out to be. Secondly, astrology is part of a superstitious world-view, one that doesn't connect observed facts to theories by an explanatory causal mechanism. Astrology offers no causal link or explanation at all for why "Scorpios" are "belligerent" or "Taureans" are "stubborn". This world-view can cause harm. Think of how astrology encourages stereotyping and unfair treatment - especially when it comes to dating ("Oh, I only date Sagittarians; I'm incompatible with Leos"). Imagine if a newspaper wrote articles generalising about a race or nation of people: that paper would be sued for racism. So why is it OK to typecast and stereotype people on the basis of a completely unscientific, unexplained system like astrology? Some people defend astrology as a kind of predecessor of Psychology - a kind of theory of personality. But Psychology bases its theories on the observed behaviour of persons. It does not _prescribe_ behaviour _to_ persons on the basis of their birthdate. ("Oh, you're a Taurus, so you're stubborn" - no free choice in the matter at all.)

I think it's written in the stars that astrology's days are numbered.

Sunday, 31 July 2011

Should the British Aristocracy be Abolished?

I recently saw a debate on Facebook about how Prince Charles supports homeopathy. He was referred to by the rather unflattering term 'snake oil salesman', and this was followed by a mostly American series of comments about how the aristocracy should be abolished, and thanks being given to God for having freed America from them.

I found this perfectly ironic. Let's draw some comparisons, shall we?
| Feature | British Aristocracy | Hollywood Celebrities | Mythical/Historical Heroes |
|---|---|---|---|
| Irrational public interest in their lives | Y | Y | Y |
| Are a source of entertainment, plays, movies, etc. | Y | Y | Y |
| Draw tourists to locations associated with them | Y | Y | Y |
| Serve as exemplars - or at least, in their own minds! | Y | Y | Y |
| Have, did or do express opinions publicly... | Y | Y | Y |
| ... often on matters in which they are not qualified, such as health, politics, and global warming | Y | Y | Y |
| Are praised and adulated for mediocre achievements (Oh look! They're going shopping! They bought Gucci!) | Y | Y | Y |
| Are very widely admired and emulated | N | Y | Y |
| Bring money into the country, more than they ultimately extract in their fees | Y | Y | Y |
| Are an elite group, difficult to penetrate | Y | Y | Y |
| Are overpaid | Y | Y | N |
| Can wield political power | Y | Y (Reagan, Schwarzenegger) | Y |

The chief difference between the aristocracy and Hollywood stars, then, is that celebrities are somehow not laughed at when they support homeopathy and other eccentric fads. Nor are they booed off the stage when they wax lyrical about global warming as if they had a PhD in meteorology. So why should the aristocracy be ridiculed? They're probably better qualified than celebrities to make pronouncements on random topics, because unlike celebrities, they're required to be educated at the best schools in the UK. So why are they not emulated or admired for their eccentricities? Well, because, apparently, these days it's not "cool" to admire a well-educated prince, but it is cool to admire illiterate former gangsters who talk in poorly-written rhymes, or smug actors with an acting degree at best. I find it quite ironic that in Ancient Rome, actors were viewed as one step above prostitutes. Yet now they're royalty. How the world has changed!

From this I must conclude that Americans, and our ancestors with their forms of hero worship, are no different from the British in their rather crass obsession with the personal lives and goings-on of the aristocracy. I must also conclude that the aristocracy are largely beneficial, as without them, Britain would lack a good portion of its tourism industry, just as America accrues a lot of wealth via movie sales. They do have political power, but only on paper. I do not foresee any of the royals attempting to make any significant political moves or decisions; could you imagine the outcry if the royals overtly came out in support of some strong political position? It's unthinkable in modern Britain, just as Britain is unthinkable without its strong historical links. That is what the aristocracy provide: a link to the past that gives Britain its unique character. Without them, it would just be another modern European state; a place with some old buildings, but otherwise nothing much to write home about. The fact that some of the ceremonies - not to mention the bloodlines - are about 1000 years old is what draws the hordes of tourists, coming to touch the past and, in many cases, the place their ancestors came from. Britain, ultimately, is the motherland of the English-speaking world, whether the Americans admit it or not.

List of useful mac apps

This is mostly a note to self so that I can share with others when they ask which apps I recommend. Android File Transfer.app (*) get file...