Monday, 20 June 2011

Correct English


On this page I list common errors people make in their usage of English, and provide the corrections. You'll notice that I can't make up my mind whether English is capitalised or not. I have the same problem with "earth." Anyone want to correct me?

PS If you don't think learning to spell is important, please consider the below:


Apostrophe, possessive and plural forms. The apostrophe is used to indicate a missing letter which was historically present. Hence, the word "don't" contains an apostrophe because it is a contraction of "do not": the apostrophe replaces the O. Other examples are "let's", which is short for "let us", and "he's", which is short for "he has" or "he is". E.g., "He's got a car" means "he has got a car". Similarly, in the past, i.e., about 1000 years ago, English used -as for the plural and -es for the possessive (genitive). Since the loss of the vowel in these suffixes, we now use the apostrophe to denote the lost E of the genitive -es. E.g., "John's apple's red colour", meaning the red colour of the apple of John. If John has more than one apple, it would be "apples" (without an apostrophe) to indicate the presence of more than one apple. However, if we still want to talk about the red colour of John's apples, we add an apostrophe after the plural S to indicate that we're not writing the genitive -es, but we know it should be there. So it would be: John's apples' red colour. (American style guides often retain a second S after singular nouns that already end in S, e.g., "James's book"; a regular plural still takes only the apostrophe.)

Now let's look at personal pronouns (he, she, it, and so on). Since the pronouns have had possessive forms for a long time, we do not indicate the possessive form on the pronouns with an apostrophe. So, "He owns the ball, the ball is his" is correct, but "He owns the ball, the ball is he's" is wrong. "He's" does not mean "that which belongs to him," it means "he is." It is a contraction. The same applies to "It's" versus "Its." "Its" means "that which belongs to it," whereas "it's" means "it is." So "That is its problem" is correct, but "that is it's problem" is incorrect, because it means "that is it is problem," which makes no sense.

The plural, apart from cases like mouse/mice, goose/geese and foot/feet, is always just "-s", without an apostrophe. So "The five barbecue's" is wrong, because it means "that which belongs to the barbecue." Some cases are debatable, e.g., the plural of abbreviations such as CD or DVD. Is the correct plural CD's and DVD's, or CDs and DVDs? Since the apostrophe indicates the possessive form, I lean toward the view that CDs and DVDs is correct. However, if the abbreviation is in lowercase it looks odd: cds and dvds, so in that case I think an apostrophe is acceptable, because it signifies the missing letters of the abbreviation; so, in the case of "CD's," the apostrophe signifies the missing "-isk" letters. But if that were the case then we should have "C'D's", as we have "Don't." Of course, the old convention with abbreviations was to use dots, so it would then be C.D.s, which is ugly.

There is a similar problem with dates: "In the 1940s, there was a dictator"—I have indeed seen "In the 1940's, there was a dictator," but I suspect it's wrong. If in doubt, substitute "of" and see if it makes sense—because "of" is genitive like the apostrophe-s. "In the 1940 of there was a dictator." This doesn't make sense, so we can argue that "1940s" is preferable. If it were written in full, we'd not have "nineteen-fortie's," so it shouldn't take an apostrophe.

Lastly, because the apostrophe is used to abridge words by removing a letter, such as the A of "are" in "you're", "how're" and "they're", people consequently fail to hear the word "are" in these phrases at all. As a result of this lack of understanding of basic grammar, they leave the word "are" out and write things that are grammatically incorrect, such as "how you?" instead of "how are you?", or "they big" instead of "they are big". We'll see more symptoms of this lack of understanding later on.

Question mark. This is often left out when it should be present. All questions take a question mark. So, "where are you going" is not a question unless you put in the question mark.

Colon, semicolon and em-dash. These are used to signify pauses, pending an explanation or elaboration. A colon (:) is followed by a list of items being enumerated, or by an explanation. A semicolon (;) is a pause with an elaboration forthcoming, e.g., "The weather was sweltering; it was like being in a steam bath." The colon, by contrast, is for lists, e.g., "The items are as follows: bananas, apples, oranges," or for really big pauses or explanations, e.g.: "Behold: The Lord is Come." List items are normally separated with commas or new lines (semicolons are conventionally used only when the items themselves contain commas). The em-dash (the wide dash that connects phrases) is also used like a semicolon, e.g., "The weather was sweltering—it was like being in a steam bath." It can also replace parentheses (), e.g., "The large man (who, we might add, was tall)" can also be written: "The large man—who, we might add, was tall—."

Hyphen. The hyphen is only used to join words, e.g., anti-social. Using a minus sign is incorrect, as is using an m-dash. Words that have been joined for a long time, such as "everyday," do not need a hyphen.

Punctuation mark names. 

() Parentheses (pa-ren-thee-sees). Singular: parenthesis. 
[] Brackets. 
{} Braces. 
: Colon. 
; Semi-colon. 
£ Pound. 
& Ampersand. A cursive Latin "Et." (meaning "and"). 
^ Caret. 
# Hash. Not "pound." It just happens to reside on the same key on a keyboard. Means "number." 
The British use "No.", from Latin "Numero." Similar abbreviations are Wm (William), Thos (Thomas), and Bros (Brothers). 

Spelling and the History of Languages

Most people can't spell, which is something I find mysterious. Granted, English consists of a whole lot of merged languages, primarily Germanic and Latin, but once you know the etymology of a word or of its parts, you know which spelling convention to use. So, for example, Latin-derived words do not often use double letters to signify the length of a vowel sound. Latin-derived words are also often pronounced as they are written, except for the suffix -tion, which is pronounced "shun." This is because many Latin words came into English via Old Norman French, and the French tend to drop or slur most sounds.

As a general rule, longer words are Latin and therefore spelt more or less as they're pronounced, whereas shorter words are Germanic and therefore older, and have therefore gone through more spelling changes that don't correspond to their pronunciation changes. As an example, consider "rough" and "through." 1000 years ago they were pronounced rooch and throoch, where -ch is the guttural ch heard in the Scots word "loch." For those of you who have not heard this sound, you pronounce it as follows: position your vocal apparatus as if you were going to say a K, but instead of saying the K, exhale heavily while keeping your tongue in that position. It should sound like you're coughing or clearing your throat; a bit like the H in the word "Ahem." In Middle English, this guttural sound weakened to a heavy H, which then diverged into an F (as in "rough") and a silent H (as in "through"), the forms heard in our modern pronunciations. It is for reasons like this that English spelling and pronunciation, especially of older words, seems erratic. It's not; you just have to understand a bit about history.

A useful thing to understand with spelling is that some words are composites. So, for example, "aggression" and "agglomeration" are spelt with two Gs because the Latin actually has the prefix ad-, meaning "towards." The ad- was assimilated (ad-similated) into the subsequent letters (sub-: below, seque-: to follow, -ent: -ing; thus "subsequent" means "following under"). Another example of a similar phenomenon is "thunder," which originally didn't have a D; the D was added later. Similarly, consider words like "knives," "gnome," etc., where we no longer pronounce the stop consonant (K/G, P/B and T/D are stops), because it has been assimilated into the subsequent nasal consonant. The K, G, etc. is still there in the spelling for historical reasons, to retain a link to the original; so, e.g., Knave, Knight, Night (English); Knabe, Knecht, Nacht (German); Knife (English); Kniv (Swedish); Gnosis, Psyche (English); Gnosis, Psuche (Greek).

Then there's the famous rule of "I before E except after C," e.g., "Ceiling" but "Brief." This rule works fairly well, but there are exceptions, and you just have to make note of them, such as "seize".

The letter C is generally pronounced as an S only in Latin-derived words, and only before a front vowel (E, I, Y). So: cyan, ceiling, accelerate. Since Latin doesn't take double letters for a short vowel (the E in accelerate is short), one can guess, when spelling the word, that there'd be only one L; if it were a Germanic word, there'd be two Ls. Why then is the first C pronounced K (i.e., why is it pronounced "Aksellerayt")? Well, the first C comes before another C, so it is a K; the second C comes before an E, so it is an S. Some scholars believe that Latin was pronounced like Italian, in which case it would originally have been a "ch" sound, i.e., "ach-chelerate." In Germanic words in English, the C is pronounced K or CH, for example: church, child, kick, and so on. To make sure you don't mispronounce it, a K is used where it might be ambiguous, hence: King, kin, kind, ken. Old English originally spelt these words with a C as well, but this was changed to a K when more Latin words entered the language and brought the S pronunciation with them. If you understand these pronunciations, or you're familiar with them, then you ought to know exactly which spelling to use. Basically, Germanic favours K, Latin favours C. Remember:

  • If it's a long word or "fancy" word, it's probably Latin, and if you're saying an S before a front vowel (E, I, Y), then it's probably a C, e.g., cede.
  • If it's an S-sound before a front vowel in a basic word, e.g., seed, then it is probably an S.
  • If you're saying a K and it's before a front vowel and it's a fancy word, e.g., accelerate, then it's probably a C as well.
  • If it's a K-sound before a front vowel in a simple word, e.g., kite, then it's Germanic, in which case it's a K.
  • If the consonant comes before a back vowel (A, O, U), then spell it as it's said (K is K, S is S), except:
  • If the word is fancy (Latin) and the K-sound is before a back vowel (A, O, U), then it's a C.
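The bullet points above amount to a small decision procedure, so here is a toy sketch of them in Python. The function name and the "fancy" flag are my own inventions for illustration: "fancy" stands in for "long/Latinate word," a judgement the speller must make, since code cannot detect it.

```python
FRONT_VOWELS = set("eiy")

def guess_letter(sound: str, next_vowel: str, fancy: bool) -> str:
    """Guess whether an S-sound or K-sound is spelt S, C or K."""
    front = next_vowel in FRONT_VOWELS
    if sound == "s":
        # S before a front vowel in a fancy word is probably a C (cede);
        # otherwise spell it as said (seed).
        return "c" if (fancy and front) else "s"
    if sound == "k":
        # Fancy (Latin) words favour C (accelerate, custody);
        # simple (Germanic) words favour K (kite, kin).
        return "c" if fancy else "k"
    raise ValueError("only S- and K-sounds are covered")
```

So guess_letter("s", "e", True) suggests a C, as in "cede", while guess_letter("k", "i", False) suggests a K, as in "kite". It is only a rule of thumb, exactly as the bullets are.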

The same logic applies to G. If you hear a hard G sound, as in "go," it's almost always spelt with a G (the H in "ghost" and "aghast" is an oddity; both come from Old English "gast," meaning ghost). Before E, I or Y, however, G is a J-sound, with a few exceptions, most significantly "get" and "give"; these are old Germanic/Old English words, so they kept their older pronunciation. So, if you understand the rules above for C, use them for G as well. Incidentally, Italian has a similar convention: Ce, Ci are pronounced "che" and "chi," but Che, Chi are pronounced "ke" and "ki," respectively. Italian is just more consistent about this because it is a purer language with fewer foreign words. So if you hear a word with a "J" sound in it, the spelling generally follows Latin rules: if it derives from a Latin word with an I, such as justice (ius), Jupiter (iupiter) or eject (iacta), it has a J. If it derives from a Latin word with a G, then it takes a G, as in general, gentle, genuflect, etc.

The latter pronunciation of the G derives from Italian convention, and the former (I becomes J) derives from French. Think also of Spanish, where J is pronounced like the German soft guttural Ch, e.g., Jose, pronounced "hozey." The I before the vowel (Latin: Iosephus) becomes a Y (Yosef), which becomes an H or a guttural; hence also the swishing sound of our J. Most pronunciation changes are due to slurring and regional accents. Once an accent is written down formally as a difference, you get a new dialect or language emerging. So, for example, the Scots say "hoos," "loos" and "moos" for "house," "louse" and "mouse," as the English did 1000 years ago and as the Swedes still do. The Germans and Southern English, however, underwent a vowel shift in their accents, hence the Germans and Southern English say "house" (Haus), "louse" (Laus), "mouse" (Maus), which are pronounced the same in both languages. So the rules for G/J are:

  • G sound: always a G (sometimes with an H after it before a front vowel to keep it hard, e.g. Ghent; and there's the exception Ghost... without the H it would read "gosst").
  • J sound followed by a front vowel (E, I, Y): Assume it's a G at the beginning of a word, except "jet." Examples: genuflect, gentle, Genevieve, gender, German, agitate. Otherwise, spell with J if the Latin word has an I (eject vs iacta, as in alea iacta est).
  • J sound followed by a back vowel (A, O, U): spell with J always, except "margarine." (You have to memorise that one). Examples: Joke, Jack, John, Japan, January, Jupiter.

Lastly, as we saw above, there's a simple rule that long vowels are followed by single consonants, e.g. cattle (short A), Kate (long A), rattle (short A), rate (long A), and so on. The exception is short words which do not end in E, such as "cat" and "rat." The -e on the end of "Kate" and "rate" signifies that the vowel is long, whereas if we leave it off, the vowel is short. This only applies to Germanic words; Latin and Greek words, like "apostrophe," don't have double consonants (we don't write "apposstrophe"), simply because they weren't written with double consonants in their original language. English tries to retain the original spellings of words as far as possible, hence the use of Ph in Greek-derived words to represent the Greek letter Phi.
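As a rough illustration of the doubling cue for Germanic-style words, here is a sketch of it in Python. The function and its regex are my own crude assumptions: it only looks at the first vowel of a simple word, and it ignores Latin and Greek words entirely, exactly as the rule above says one must.

```python
import re

def vowel_length(word: str) -> str:
    """Guess 'short' or 'long' for the first vowel of a Germanic-style
    word, using the consonant-doubling and silent-E cues."""
    m = re.match(r"[^aeiou]*[aeiou]([^aeiou]+)(.*)", word.lower())
    if not m:
        return "unknown"
    consonants, rest = m.group(1), m.group(2)
    if len(consonants) >= 2:   # cattle, rattle: doubled consonant -> short
        return "short"
    if rest.endswith("e"):     # Kate, rate: single consonant + silent E -> long
        return "long"
    return "short"             # cat, rat: short word, no final E -> short
```

So vowel_length("cattle") gives "short" and vowel_length("Kate") gives "long", matching the examples in the rule.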

If you understand these rules above, you cannot make a spelling mistake ("Spelling" is Germanic so it has a double L to signify that the E is short).

Short vowels in English 

"a" in "cat"
"e" in "net"
"i" in "thin" or "tilling"
"o" in "not"
"o" in "nor"
"u" in "but"
"oo" in "foot"

Schwa, the blank vowel, occurs in almost every English word. It's transcribed in the International Phonetic Alphabet as an upside-down e: ə. Almost all vowels which are not the emphasised vowel in a word are pronounced as a schwa. This is the "a" in "sofa" or the "e" in "the". For example, if we write schwa as an apostrophe, a word like "religious" is pronounced "r'l-IH-j'ss" (only the i in -lig- is emphasised; the rest are schwas).

Long vowels 

"a" in "Kate"
"a" in "Father"
"e" in "meet"
"ai" in "air"
"i" in "machine"
"i" in "tiling" or "high"
"o" in "note"
"u" in "cute"
"oo" in "root"

These two lists should help in judging whether a letter takes a double consonant after it or not. Incidentally, some English long vowels, especially A, I and O, are diphthongs: literally two vowels pronounced together. So long A is actually short-A + Y, long I is actually short-English-U + Y, and long O is actually schwa + W. W and Y are also not really vowels or consonants; they're semivowels. Y derives from I and W from U; hence the Welsh word "cwm" (coom), where the W serves as a vowel.

Common diphthongs

Diphthongs (double vowels) and certain combined consonants indicate the origins of a word. E.g. Th- (the, then, that, this, their) almost always indicates an ancient Germanic origin (except Thames, which is Celtic). Sh- (should, ship, show) likewise. Ch- (child, church, chin) likewise. Other combinations give away that a word is loaned from elsewhere, e.g. Ph-, Ps- is Greek; Bh is Gaelic; -ng is Germanic; -ant, -ent, -tion is Latin or French; and so on. This means you can guess the spelling conventions.

aa — Usually in foreign words. Scandinavian languages pronounce this "aw," as in "law"; however, it also appears in some Biblical names like Canaan and Baal, where it's pronounced approximately "ay," as in "day." South African English, under the influence of Afrikaans, also uses "ah," as in "father."

ae — Usually found in Latin words, pronounced "ee", however, in the original Latin it was "i" as in "high". So we pronounce Caesar as "See-zer" but the Romans originally said Kaizer. As in the football club. The spelling Kaiser is the German spelling of Caesar. Also seen in Russian as Czar or Tsar.

ai — eh, as in "air", "hair", etc. This pronunciation often indicates a French word (except if the word is basic like hair, fare, fair, etc, where it was originally spelt -aeger in Old English, and pronounced "ayer" like in "layer" ).

au — usually "o" as in "nor" or "aw" as in "law"; however, Americans say "ah" as in "father". 

ea — "ee" as in "meet". Almost all these words are native Old English (meat, beat, eat, heat).

ee — "ee" as in "meet", however, where it occurs in French loanwords it is "ay", e.g. Fiancee (fee-ahn-say).

ei — "ee" as in "ceiling"

eo — "ee-oh" as in "neo".

eu — "ew" as in "new". Usually indicates a Greek word, in which case the native Greek pronunciation is Eff.

ie — "ee" as in "meet"

ii — "ee-eye" as in "radii" (ray-dee-eye). Always indicates a Latin plural, e.g. radii, genii.

oa — "oh" as in "boat".  Almost all these words are native Old English (oak, loan, goat).

oe — "ee" as in "foetus". Indicates a Greek or Latin loanword.

oi — "oy" as in "boy". There are some exceptions where it indicates a Greek loanword, in which case it might be an "ee", however, I can't think of an example offhand; I'll update this page when I do.

ou — "ow" as in "now" or "uh" as in "nut", but "oo" in French and Greek loanwords, e.g. Noumenal (noo-men-al, from Greek Noos, the mind).

ui — "oo-ih," as in "suicide," or "oo," as in "sluice." Indicates a Latin loanword.


Miscellaneous common errors

Abbreviations of "are." A lot of people don't know that you need to put the "are" after words like "they," in sentences like "they are coming." In casual speech, most people pronounce "they are coming" as "they coming," but in fact there is a subtle "are" being pronounced, hence the correct form is "they're coming." This error is particularly noticeable in the later British Empire, i.e., English-speaking places other than America, Ireland and Scotland, because in the 1800s the English stopped pronouncing the Rs on the ends of their words. Hence "car" is pronounced "kah," whereas the Americans and Scots say "karrrr." The same thing happened to "are," which the latter peoples pronounce "arrrr" but which British-RP and Southern Hemisphere speakers pronounce "ah." Hence "they are coming" becomes "theyah coming," which becomes "theya coming," hence the lazy "they coming." But it's wrong: it's written "they're" and pronounced, at worst, "theya." Another common error is "your" for "you are." The correct version is "you're." "Your" means "that which belongs to you," so saying "your so annoying" means "that which belongs to you so annoying," which doesn't make sense. The Southern Hemisphere pronunciation of "you're" vs "your" is "year" (often transcribed "yer" when mocking, e.g., a Cockney accent) vs "yaw." So "yaw book" (your book) vs "yer right" (you are right).

Everyday vs every day. "Everyday" means "mundane," "boring," "normal," "common or garden." Why? Because if something is "everyday" it is something you are likely to see every day. "Every day" (two words) means "on each day." So, the following is incorrect: "We will help you everyday." This means, literally, "We will help you boring." Similarly, "This is a very every day kind of thing" is also incorrect; it would mean "This is a very on-each-day kind of thing." We join "every" and "day" when we mean "boring," to signify that we don't mean "on each day."

"Would of" instead of "would have." This error arises from people saying the contraction "would've" and not realising it means "would have." If it meant "would of," it would be written "would'f." Furthermore, "would of" means nothing: "would of gone there" would mean something like "of-gone there," and how do you "of-go" anywhere? It is meaningless. The "have" is what marks the past, as in "have gone there."

"ECT" instead of "Etc." Etc is a Latin abbreviation, for "et cetera," meaning "and various others." This is why it is Etc. The ECT spelling arises from a pronunciation error where some people pronounce "et cetera" as "eksetera." The correct pronunciation is "et setera".

Then vs than. Some people who do not pronounce English correctly do not understand that "then" and "than" sound different. "A" in English is usually pronounced "ey," as in "day," or it is pronounced "ae"; in other words, make your mouth the shape it takes when you say the "a" in "father," then say "eh." The result should be the RP (received pronunciation) "a," as in "cat." This is a different sound from the "e" in "net." You can only make the then/than mistake if you're not pronouncing them correctly. This is particularly true outside the British Isles. "Then" means "consequently," whereas "than" means "by comparison to." So, for example, "If he is bigger than her" means "If he is bigger by comparison to her." If we substitute "then," the meaning of the sentence is ruined, because "If he is bigger then her" literally means "If he is bigger consequently her," which is meaningless. Similarly, "If we go to the shops than we buy something" is also meaningless. It should be "then," as in: "If we go to the shops THEN we buy something," meaning, "If we go to the shops, consequently, we buy something."

Calling a letter an alphabet. A single character, such as "A," or "B," or "C," is called a "letter" or "a letter of the alphabet," or, amongst computer experts, a "character." "The Alphabet" refers only to the WHOLE SERIES of letters, from A to Z. When you talk about "An alphabet" you mean a series of characters used by a language to designate sounds, such as, "the English Alphabet," "the Cyrillic Russian Alphabet," etc. Calling the letter "A" an "alphabet" is as wrong as calling a cow a herd, or a car a traffic, or a house a city. The word "letter" means one of two things: either a single alphabetic character, or, a piece of correspondence, usually on paper. "Alphabet" comes from the Greek "alpha beta," the names of the two first letters of the Greek alphabet. Ultimately these names came from Semitic (Aleph, Beth—meaning an ox and a house, because that's what they originally were—drawings of an ox's head and a house).

Double-negatives. A double negative is a positive. Saying "I don't know nothing" means "I do know something." The phrase "I don't know nothing" is not correct English.

"Alot." The phrase "a lot" is two words. "Lot" means a "batch," so "a lot" means "a batch." "Allot"—one word—means "allocate, give or award." So if you say "I like you allot" you're actually saying "I like you award," which doesn't make sense.

The "They" vs "He/She" Debate

It is quite common these days to use "they" as a neutral pronoun to avoid the implicit sexism in using "he."

So, for example, these days we regard sentences such as this as sexist: "If the candidate wishes, he may apply for the job," because it implies that only men will be candidates. So many people now use "they" instead, e.g., "If the candidate wishes, they may apply for the job." This is slightly incorrect, because people then proceed to use plural forms to agree with "they," since "they" properly refers to a group of people. So we end up with odd things like "If the candidate wishes, they may bring their ID books to prove their identity," which is broken: "ID books" (plural) agrees with the plural "they," even though only one candidate, with one ID book and one identity, is meant. The currently recognised, politically neutral convention is the awkward "he/she" construct, e.g., "If the candidate wishes, he/she may..."

The use of "they" as the neutral singular personal pronoun (as opposed to "it," which is the neutral impersonal) is, however, becoming more popular and prevalent, and I fully expect it to be regarded as correct within twenty years. I have seen it in published books and heard it in films. Don't be too horrified; German uses "Sie" as the respectful second-person singular pronoun, but it also means "they." French similarly has tu and vous. So I suspect it is inevitable that English will eventually use "they" as a singular too: the personal version of "it." Try to avoid it for now, until Oxford declares it to be correct.


-tion, -sion and -shion

A lot of people can't tell when to use -tion, -sion or -shion. The rule is simple: if the word's stem ends in D or S, use -sion, e.g., tension (tense/tend), lesion (laedere in Latin), abrasion (abrade). If the word's stem ends in something else, e.g., N or T, use -tion, e.g., retention (retain), invention (invent), discretion (discreet). The only exceptions are fashion and cushion, which take -shion.
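The rule is mechanical enough to write down as code. Here is a minimal sketch; the function name is my own, the stems passed to it are illustrative, and a real derivation would of course need to know the Latin stem of the word.

```python
def noun_suffix(stem: str) -> str:
    """Choose -sion or -tion from the final letter of the stem.
    (The only -shion words, fashion and cushion, are memorised exceptions.)"""
    # Stems ending in D or S take -sion: tend -> tension, recess -> recession.
    if stem[-1] in "ds":
        return "sion"
    # Anything else (N, T, ...) takes -tion: invent -> invention.
    return "tion"
```

So noun_suffix("tend") gives "sion" (tension), and noun_suffix("invent") gives "tion" (invention).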


Discreet vs Discrete

These two words have the same origin in Latin but mean different things. Discreet means polite or unobtrusive, whereas discrete means separate and distinct. Remember it this way: in "discrete," the two Es are separated by a T, so they're separate and distinct; whereas one should eat DISCREETLY.

Obscure Plurals and words that are spelt the same

For the most part, English uses -s for the plural (house, houses), but it has some obscure plurals. My favourite is words ending in -is, such as thesis, analysis, and so on. They take -es as the plural: theses, analyses. What's particularly interesting about "analyses" is that it can mean "does analyse" (pronounced ANal-izez) or "more than one analysis" (anallis-SEEZ); both are spelt "analyses." It's a bit like "Polish" (the nationality) and "polish" (the cleaning agent). Remember: polish (the cleaning agent) is spelt with one L because it comes from Latin (polire), which doesn't do the double-consonant thing for short vowels (i.e., "pollish"); Polish (the nationality) is spelt with one L because the O is long (Poh-lish), and the Poles themselves call their country Polska (one L).

Other obscure plurals are words in -us (from Latin), e.g., octopus and virus, which can take -es or -i as the plural: octopuses, octopi, viruses, viri. The plural "viri" is something of a joke, because the correct accepted form is "viruses," but people love talking about "virii" on the internet. However, "virii" would only be the plural if the word were "virius"; since it's "virus" (no second I), its plural could only be "viri." Stick to "viruses."

Strong plurals, where the vowel changes, are rare. I think the exhaustive list is: goose/geese, foot/feet, tooth/teeth, mouse/mice, louse/lice, woman/women, man/men, brother/brethren, child/children, ox/oxen, die/dice. (Strictly, brethren, children and oxen are old -en plurals rather than vowel changes.) If you can think of any more, let me know.
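Since these irregulars simply have to be memorised, a spell-checker would treat them as a lookup table. Here is a sketch of that idea; the fall-through rules at the end are a simplification of mine, not a complete account of English pluralisation.

```python
# Irregular plurals from the list above; everything else falls through
# to simplified default rules.
IRREGULAR = {
    "goose": "geese", "foot": "feet", "tooth": "teeth",
    "brother": "brethren", "mouse": "mice", "woman": "women",
    "man": "men", "child": "children", "ox": "oxen", "die": "dice",
}

def pluralise(noun: str) -> str:
    word = noun.lower()
    if word in IRREGULAR:
        return IRREGULAR[word]
    if word.endswith("is"):                    # thesis -> theses
        return word[:-2] + "es"
    if word.endswith(("s", "x", "ch", "sh")):  # virus -> viruses
        return word + "es"
    return word + "s"
```

So pluralise("mouse") gives "mice," pluralise("analysis") gives "analyses," and pluralise("virus") gives "viruses" — the table wins, then the -is rule, then the defaults.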

Old past tenses

The past tense of "wend" is "went". So what is the past tense of "go"? We use "went" these days, but "went" really means "wended". The original past tense of "go" was yede/yode. The reason is that in Old English, the letter yogh (shaped like a 3, or a lowercase g) was used instead of a G, and was understood to mean either a G or a Y according to context. It survives in Shakespeare as "clept" or "yclept", past forms of Old English "clipian" (past tense "clipode"), to be named or called. The German equivalent is "heissen" - Ich heisse John - I am named John. Now, why do I say that the yogh survives in "yclept"? Because in Old English, some past participles were formed with Ge-, as in modern Germanic languages. So "yclept" would have been "Geclipode" in Old English (pronounced roughly: yiclipoder). Similarly, go's past tense was "gede", pronounced "yedduh". This became "yede" in Early Modern English.

Here's an excerpt from Le Morte d'Arthur by Thomas Malory, written in the 1400s. It has both yode and yede in it:

AND soon as Sir Launcelot came within the abbey yard, the daughter of King Bagdemagus heard a great horse go on the pavement. And she then arose and yede unto a window, and there she saw Sir Launcelot, and anon she made men fast to take his horse from him and let lead him into a stable, and himself was led into a fair chamber, and unarmed him, and the lady sent him a long gown, and anon she came herself. And then she made Launcelot passing good cheer, and she said he was the knight in the world was most welcome to her. Then in all haste she sent for her father Bagdemagus that was within twelve mile of that Abbey, and afore even he came, with a fair fellowship of knights with him. And when the king was alighted off his horse he yode straight unto Sir Launcelot's chamber


How to Write a Novel

This article covers some basic tips on writing a good novel by sticking to well-known and accepted conventions. Breaking the conventions outlined here is something that only the very brave or famous can afford to do.

These rules of thumb are pretty much standard across modern novels. If you are sceptical of any of them, pick up a famous recent novel, read it, and you'll see that they hold.


Ensure that you use the right tense. Most novels are written in the past tense, third person: "Smith went to the shop." Some authors use first person: "I went to the shop." This is more unusual these days; novels from the 19th century and earlier used it a lot, but it's very rare nowadays and it sounds odd. Some authors, extremely rarely indeed, use first-person present tense: "I am going to the shop." Avoid this. Another thing to be aware of with tense is that speech and thought in the novel are technically present tense for the actors; they don't think "I went to the shop" or say "Smith - went to the shop". Speech and thought are present tense in a novel: "I will go to the shop," or "I am going to the shop", or, "Smith - go to the shop!". This also means that when a person in the novel is thinking or talking about the past, you have to indicate this with "would/should/could have", and "have had" or "had had": "Smith had wanted to go to the shop, but couldn't." "I should have had the courage to go to the shop". Or even more importantly: "He had had enough. He had tried so hard to get it right, but couldn't." If no one is speaking or thinking, just use the straight past tense.

Chapter Length

Chapters should be of roughly equal length: more than 10 and fewer than 40 pages each. Take your book and divide its number of pages by 20; that's roughly how many chapters you should have. Chapters of wildly unequal length make reading harder, because the reader starts to wonder when the chapter is going to end so that he or she can go to sleep.


Characters are developed by describing their thoughts and actions. You do not need to apply adjectives to them to create an impression of their character; indeed, it's considered amateurish to write something like "Smith was a scoundrel." It's better to use descriptions of behaviour, body language and psychology: "Smith leered at Mary, whilst contemplating how much money she might have in her purse." That kind of thing.

You should have a clear protagonist (hero) and antagonist. The antagonist or "bad guy" does not have to be blatantly obvious or present from the outset. It is more interesting to let the bad guy reveal himself gradually through clues. A common device in modern novels is to trick the reader into believing a character is bad, and then reveal that it's actually someone else who has been causing all the trouble.

The ending or resolution of the story is called the denouement (pronounced, roughly, day-new-mong). It's recommended that you have one: stories that are left hanging frustrate the reader, and characters or storylines that are left unresolved come to seem pointless. If you want cliff-hangers, put them at the end of each chapter except the last.

Avoid indirect speech, i.e., narrated descriptions of conversations. Rather use direct speech exchanges; they are useful for displaying and developing characters. E.g.: "Smith then said to Jones that he didn't really like him as a person" - versus - "Jones! You imbecile! Why do you always have to insult me in front of Mary? What's your problem?"

Multiple heads. Each scene has one protagonist from whose perspective it is narrated, and the chief protagonist of your novel should be the perspective character in the most scenes. So you shouldn't psychologise about two different people in one scene, e.g.: "Mary was wondering what John was thinking." "I don't think she likes me, John thought." Choose whose head you're inside, and use only that, e.g.: "Mary was wondering what John was thinking. She looked at John's face. It seemed to her that he was unsure about something." You mustn't confuse your reader about whom they're supposed to sympathise with and focus on. Similarly, you can't write "they thought", as this implies that the narrator can read everyone's minds at once; you have to write "they agreed", which implies that they said it out loud. Also, separate scenes in which you've switched from one protagonist's head into another's with three asterisks, with a blank line above and below the asterisks, like so:

*         *         *

Diction and Style

Only write as you speak if you are writing the direct speech of one of your characters inside quotation marks. In descriptive paragraphs, do not use casual or spoken style.

Break your sentences into shorter sentences, especially in action scenes. Use commas. The word "however", for example, is almost always surrounded by commas. Use one clause (section of meaning) per sentence.

Avoid redundancies, like "sat himself down", "stood up", "shrugged his shoulders", "thought to himself", "five days' time", etc. Replace these with "sat", "stood" or "rose", "shrugged", "thought", "five days". Why? Because you can only sit down, you can only stand up, you can only shrug your own shoulders, you can only think to yourself, and five days are necessarily a span of time.
"Got/Get". Avoid this word altogether in narration; it's casual speech. "He got confused" should be "He became confused". "He got into the boat" should be "He boarded". And so on. It's fine in direct speech, e.g. "GET OUT!"

Do not imitate accents, e.g. '"Zey are comeeng," Jacques said.' Rather just try to capture the style of the language through its unique structure. E.g. '"The enemy, they are coming, my friends. It is the life!", Jacques mumbled.'

Avoid -ing on verbs. Replace "he was walking" with "he walked". It's shorter and snappier.
Avoid adverbs and adjectives if you can replace the noun with one that implies both. For example: "he walked rapidly" - swap with - "he strode" or "he jogged." "He beat the enemy severely" - replace with "He smashed the enemy". Etc. It makes the pace seem faster. Obviously, in slow scenes, you use more adverbs and adjectives.

Repetition: avoid using the same word on the same page more than once, especially nouns, adverbs and adjectives. I've noticed many instances in which informal writers use the same word over and over. Unless the word's unavoidable, like "the" or "an" or "but", or unless it's technical, like "bandwidth", use a different word. So, "She was happy, she had no idea how it was possible to be so happy" is repetitive. Rather try "cheerful, cheery, merry, joyful, jovial, jolly, jocular, gleeful, carefree, untroubled, delighted, smiling, beaming, grinning, in good spirits, in a good mood, lighthearted, pleased, contented, content, satisfied, gratified, buoyant, radiant, sunny, blithe, joyous, beatific; thrilled, elated, exhilarated, ecstatic, blissful, euphoric, overjoyed, exultant, rapturous, in seventh heaven, on cloud nine, walking on air, jumping for joy, jubilant". English is not short of synonyms.

It's not conventional to end sentences on prepositions. ("Can I come with?") - Prepositions being things like "in", "on", "around", "at", etc.

Passive voice/active voice. Passive voice: the man was hit by the ball. Active voice: The ball hit the man. Rather use active voice in a novel.

Check that each sentence has a verb. -ing words don't count as verbs on their own. "Walking there", for example, isn't a sentence; it has to be "He was walking there", where "was" is the verb.
Hyphens. Quite a lot of words are hyphenated in English. Double-barrelled, for example. It's hard to explain the rule. Effectively, if two words have been together for a long time, they gradually acquire a hyphen, and then become one word. So, in Old English, tomorrow was "to morgen". In early modern English, it was "to-morrow". Nowadays, it's just "tomorrow" - one word. I think the easiest way to explain it is to say: put a hyphen or combine the words if the two words that make up a term make it into a new term by being together. So, a red coat is a coat that is red, but a redcoat is a British soldier. And so on.

Write everything in full; do not use digits, e.g., "1st", "1 AM", and so on. Rather: "First," "one in the morning". Only use digits for very large numbers, like a house number in a long street in an address: "Mary lived at 1024 Smith Street".


Sentence run-on. Keep one clause per sentence. Otherwise it sounds rambling. E.g., "The sun was shining and the birds were singing and Smith went for a jog and it was a good jog and it was hard for him to think that just the day before he had been stuck in an office." Rather have this: "The sun was shining. The birds were singing. Smith went for a jog. It was good. It was hard for him to think that just the day before, he had been stuck in an office." Note how it's been split into individual clauses. It makes it easier to follow and read. You may ramble only in direct speech where the character or person speaking is prone to rambling.

Storyline Structure

Gone are the days when you could have a straight, linear story with only one protagonist while everyone else is incidental or peripheral. You have to have different scenes which feature different characters, and the protagonist (hero) must not be in every scene. You should also have scenes which show the world view and experiences of the antagonist, to give him (or it) some depth of character.

It's a common device nowadays to have multiple unrelated scenes with unrelated characters who eventually come into conflict or cooperation through chance. The idea is that initially, the reader can't tell which character is important and what is ultimately going to happen to them. If you start with a particular character and always show the world from their point of view, the reader knows immediately that this person is the protagonist and is effectively immortal, since they cannot disappear from a story told exclusively from their point of view. A recent example is the Sookie Stackhouse series, upon which the TV series "True Blood" is based. There can be no fear or suspense in such a novel, because the reader knows that no real harm can come to the character narrating in the first person. This is one of the disadvantages of the first-person narrative point of view. From a third-person point of view, all characters are equal.

You can, if you like, tell the story asynchronously, that is, start by revealing scenes from later in time, and then lead back to them through subsequent chapters, to show how the starting scene came about. This is a very common device in modern films.

Make sure that you draw up a timeline spreadsheet. Each row is a chapter. Each column is a character, and each cell records what happens to that character in that chapter. You should do this to avoid anachronisms: say an event happens to character A ostensibly at the same time as an event that happens to character B, but reference is made to a simultaneous event involving character C which is impossible, because it actually happened before the event involving character A. To avoid this, simply make a spreadsheet with a row labelled "Chapter 1", and columns for Character A, Character B and Character C. Under each character's column, write a word or two about what happens to that character in that chapter, if anything. Then repeat for Chapter 2, etc. That way, your story cannot become unintentionally anachronistic.
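For the technically inclined, the same timeline table can be kept as a tiny script instead of a spreadsheet. This is only an illustrative sketch; the chapter names and events are made up:

```python
# A timeline table: one row per chapter, one column per character.
# All chapter names and events here are invented for illustration.
timeline = {
    "Chapter 1": {"A": "arrives in town", "B": "receives a letter", "C": "-"},
    "Chapter 2": {"A": "meets B", "B": "meets A", "C": "leaves for war"},
}

characters = ["A", "B", "C"]

# Print the grid so simultaneous events line up and anachronisms stand out.
print("Chapter".ljust(12) + "".join(c.ljust(18) for c in characters))
for chapter, events in timeline.items():
    print(chapter.ljust(12) + "".join(events[c].ljust(18) for c in characters))
```

Reading across a row shows everything that happens "at the same time" in that chapter, which is exactly the check the spreadsheet performs.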


Most stories rely on certain facts. Some stories twist the facts, and some aim to be revisionist. That's OK. The most important thing in writing a story is to make sure your facts are correct. This is especially important in historical or political novels. So, you can't have James Bond going to kill a Communist general in Soviet Russia in the year 2010. And if you want your story to be about that, you can't have Bond driving a BMW Z4, since Communism fell before the Z4 came out. You get the picture. The only time you can introduce anachronisms is in a re-imagined revisionist history, such as the film "Inglourious Basterds", or some kind of graphic novel or steampunk story whose time and geographical setting are unclear. Introducing an anachronism deliberately is only OK if that anachronism is the entire point of the story, e.g., Jurassic Park, The Land That Time Forgot, and so on.

Sex, Violence and Cursing/Swearing

You may want to include some sex and/or violence (verbal or otherwise) in your novel. Take the following into account. Firstly, is the novel intended for an audience of adults only? What are the odds of a child reading the novel? If you do not intend your novel for adults only, you should probably not have explicit sex or violence in the novel. If, however, it is a romance novel or a horror story, then by all means, put the explicit material in. If it is a story intended for adults, and it's not specifically intended as a horror story, then balance the violence with sex; violence-only doesn't make for a good story. And vice versa.

Publishing your Novel

The hardest thing with a novel is getting it seen; I call this the "wood for the trees" problem. Unfortunately, the easiest way to get your book read is to publish it for free online. If, however, you want to make money off it and have it widely read, you probably still need to persuade a traditional paper publisher to take it on. This means that it really has to be good and have something unique. If you mainly want to publish it for a few friends to read, then consider online publishers like Lulu. This does not mean that you can't succeed through an online publisher; there are many success stories.

Saturday, 11 June 2011

Religion wants its cake and wants to eat it too - part 3 of 3

Many believers, when challenged on the authenticity of the Bible, resort to scientific proof of various stories; Noah's Ark, for example. The story of Noah's Ark is perhaps one of the most vivid in the entire Bible. In fact, it has few competitors: the Jesus story, Adam and Eve, Joshua bringing down Jericho, or perhaps David and Goliath. So it's not surprising that it keeps turning up in debates. The argument is usually about how some Christian has managed to replicate (or find) the Ark, which the atheists usually dispute. So here are two recent cases.

A Dutchman, Johan Huibers, has just built a replica of Noah's Ark and plans to sail it down the Thames. His reasons are religious: if he can do it, Noah could have. And then last year, a group of explorers claimed to have found yet another possible resting place, nay indeed, the remains, of the Ark.

Let me start by accepting the evidence is real (albeit perhaps not its interpretation). There is indeed an Ark-shaped and Ark-sized anomaly on Mount Ararat that we've been aware of for a while now. It's entirely stone and seems to show remains of bolts. But if you know anything about fossilisation and the conditions it requires (dampness, quick burial, a few million years for the minerals to leach in and replace the organic compounds), then the stone Ark can't be the real thing, because it would not be fossilised. It isn't buried, and it's not damp; Mt. Ararat is quite dry apart from snow. Moreover, if fossilisation could happen in 6000 years, then we'd see evidence of partially fossilised Romans. Which we don't. But the latest find, cited above, which seems more promising, is _wooden_, not fossilised. However, it has the anomaly of straw. Straw would definitely have decayed after 6000 years! A believer would have to explain these problems: how the first site of the Ark came to be fossilised so quickly, or, in the second case, why the straw has survived 6000 years. The discussion has to be undertaken in a scientific arena. Once physical evidence has been brought to the table, it is the province of scientific investigation. It ceases to be "a matter of faith".

But it seems as if believers want to use science when it confirms the Bible, and want to reject science when it disconfirms the Bible. They want to have their cake and eat it. But I think that either science is OK to use all the time, or it is never OK to use. Think about how believers appeal to Intelligent Design arguments from microbiology as proof of the existence of God, but how they ignore scientific anomalies in the Bible, such as light being created before the Sun and Moon. 

I'm not saying that Noah's Ark is necessarily a complete myth, or that physical evidence doesn't count. I'm pretty sure the Noah story did have an element of truth in it; after all, it appears in the Epic of Gilgamesh from as many as 1400 years before Genesis was written. The point is, finding the Ark is proof of the Epic of Gilgamesh, or at most, proof that the editors of the Bible copied down some accurate information. It is not proof of the whole book.

Evidence in favour of some element of a story isn't proof of the whole story. Imagine I write a novel about JFK in which he has a secret advisor, say an old school teacher. The secret advisor tells him to go on a parade in an open car. It turns out the secret advisor was in on the plot to kill him. Did that secret advisor exist? No, he's a figment of my imagination. So just because JFK existed and went on a parade in an open car, it doesn't mean that my secret advisor existed. Ditto God. A book may contain some facts about some real characters and also mention other characters; the fact that some of those characters are real does not prove that all of them are. In ancient times, people didn't distinguish between narrative history and narrative fiction. So even if we found the Ark, it would not be proof of the truth of the Bible. For if the Ark were proof of the Bible as a whole, then JFK's existence would prove my novel as a whole.

But is scientific evidence necessarily anti-religious? Last year, the Origins Museum at the University of the Witwatersrand in Johannesburg put Australopithecus sediba, the latest fossil find, on display. I personally saw it close up. It is a partial skeleton, with a complete skull, 1.95 million years old. I realise that some believers doubt radiometric dating. That's because they aren't aware of how it works, and that radiocarbon dating in particular isn't even used for hominid specimens this old. The mathematics used to calculate the age of a fossil is very basic - junior high-school level. It's a ratio measure of the amount of remaining, naturally occurring radioactive matter in the rock. We know how long the radioactive material takes to decay, so when we know how much is left in the rock, we can tell how old the rock is. Since the fossil is embedded in the rock, we know the age of the fossil. But Sediba, or any other hominid fossil, is not proof that God does not exist; it is merely proof that Genesis is a fairy tale. Just as finding Noah's Ark can't prove that God exists by verifying Genesis, finding Sediba does not prove that God does not exist by refuting Genesis.
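The "junior high-school" arithmetic in question can be written out in a few lines. This is a minimal sketch assuming only the standard exponential-decay law; the isotope and numbers are just examples:

```python
import math

def age_from_decay(fraction_left, half_life_years):
    """Age of a sample, given the fraction of the original radioactive
    material still present and the isotope's half-life in years."""
    # After n half-lives, (1/2)**n remains, so n = log2(1 / fraction_left).
    return half_life_years * math.log(1 / fraction_left, 2)

# If half the material remains, the sample is exactly one half-life old.
print(age_from_decay(0.5, 5730))   # carbon-14's half-life -> 5730.0 years
print(age_from_decay(0.25, 5730))  # two half-lives -> 11460.0 years
```

The same formula works for any radiometric method; only the isotope and its half-life change.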

The implication of empirical data confirming a theory is a statistical one. The more Bible stories are verified, the higher the probability that the Bible is largely true. The more the stories are refuted, the lower the probability. This is simple mathematics. But the key point remains: if you are going to use science and mathematics to verify Bible stories, you must stand prepared to answer scientific doubts and questions about the Bible with scientific counter-arguments. Building an Ark to see if it is possible is performing a scientific experiment. Finding an archaeological site and claiming it is the Ark is postulating a scientific theory. In both cases, if science replies with a refutation, you are not allowed to whip out the "I take it on faith" answer. If you can't reply to a putative scientific refutation, you should also not be using scientific confirmation to suit your purposes. Either you accept the higher authority of the scientific method, or you don't. The ball is in the believers' court.
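The "simple mathematics" here is essentially Bayesian updating on odds. A toy sketch, with entirely made-up likelihood ratios, shows the mechanic: each confirmation multiplies the odds up, each refutation multiplies them down:

```python
def update(prior, likelihood_ratio):
    """One Bayesian update: convert probability to odds, scale by the
    likelihood ratio of the new evidence, convert back to probability."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.5                 # start agnostic
p = update(p, 2.0)      # a story is verified: probability rises to 2/3
p = update(p, 0.5)      # a story is refuted: probability falls back
print(round(p, 3))      # -> 0.5
```

The likelihood ratios (2.0, 0.5) are invented for illustration; the point is only that evidence must be allowed to push the probability in both directions.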

Atheism is Not a Religion - part 2 of 3

Some people seem to think that atheism, especially Dawkins' New Atheism, is a form of religion, because it has a strong belief at its core: that God does not exist. Furthermore, proponents of this idea argue, the New Atheists display all the fervour of a Baptist at a revival, or a Muslim extremist dedicating himself to jihad. The New Atheists have figures that they almost worship - Dennett, Darwin and Dawkins, Hitchens, Harris and Hawking. New Atheism, they charge, functions like a religion, because it:
a) has favoured texts, such as Origin of Species and The God Delusion (Bible);
b) has chief advocates, such as Dawkins, Harris et al. (Pope);
c) brings people together with a common belief and purpose (Church);
d) attacks opposing beliefs (persecution).
Although I can see the similarity, let's not muck with English. New Atheism just isn't a religion. It's a sociological movement, like suffragettism, the hippie movement, communism, fascism, democracy, egalitarianism, and so on. It is a closer analogue to a political movement. By that standard, you may as well claim that Communism is a religion, or that the French Revolution was a religion, because all four points above are true of both. Indeed, even the American Revolution would count as a religion on these grounds: it had favoured texts - the Declaration of Independence, amongst others; it had its popes - Jefferson, Washington, et al.; it brought people together, at a Tea Party and ultimately a war; and it attacked opposing beliefs - King George's claim to a Divine Right of dominion over those territories.
As for atheism being a belief, that involves mistaking the etymology of the word for its definition. An analogous mistake would be assuming that Islam, which means submission, refers to submission before anything and everything. The fundamental core of atheism is not a belief in the non-existence of God, but rather a practice of empirical observation and testing, and of believing only what is observable, testable, verifiable, etc.
George H. Smith says that an atheist is "a person who does not believe in the existence of God", rather than one who believes God does not exist: since an atheist need make no claims about God, it is up to the believer to prove her case.
Religion requires more than fanaticism; they're not synonyms. Religion also requires belief in a superior non-physical power (God/Karma/Tao) which is credited with meting out cosmic justice, creating the cosmos and intervening in it, plus rituals and sacraments, sacred texts, places of common worship, priests, a mythos, and unquestioning faith. Using the word "religion" to describe, for example, sports fanaticism, is really just a _metaphor_. Taking it literally is as silly as taking it literally when you say that "such and such politician is a snake". We know perfectly well that politicians are not literally _squamata_, even though they share many characteristics (deceptiveness, soulless eyes, etc.). _All_ the required characteristics have to be present. So, atheism has in common with religion only fanaticism - and that is true only of Dawkins' New Atheism; it's not true of, say, David Hume, whom some suspect of having been a closet atheist.

Most importantly, there's nothing in atheism that requires faith; its precise point is that it requires you to take _nothing_ on faith. This ridiculous claim - that atheism is a faith - is made by the devout in their ignorant discussions of atheism, and it completely misunderstands the point. They think that being excited about a point of view constitutes a religion, because in their minds, there's nothing more to truth than being excited and clapping one's hands.
What we're really witnessing is the throes of a new Reformation. We're entering a phase in which the conflict is Belief versus Nonbelief. And if Marx is right, the competition between thesis and antithesis will produce a new synthesis. What that synthesis will be, who knows? Maybe esoterica? Maybe pantheism? Maybe, perhaps, hopefully - unabashed Reason?

Atheism versus Theism - the new Antitheses of Cultural Revolution - Part 1 of 3


Hundreds of years ago, a man named Gutenberg invented the printing press. He used it to run off copies of the first printed book - the Bible. Yet in doing so, he inadvertently opened the way for doubt, heresy and, ultimately, atheism. I believe it is not inaccurate to say that Gutenberg single-handedly started the cultural religious disputes that we see in the West to this day, because he made possible the large-scale dissemination of information. Once the printing press had been invented, it was feasible for Protestants to mass-produce their books, brochures and leaflets, which not only defied the omnipotent Catholic Church, but made people aware of other points of view. Gutenberg transmuted religion from a monopoly into a veritable cottage industry.

Now take your mind forward a few centuries to 1859. By this stage, science was flourishing. We had geology, biology, palaeontology, physics, astronomy and chemistry. All of these disciplines were starting to raise doubts. Furthermore, the rise of nation states, and their rejection of Church authority in affairs of government and criminal prosecution, had enabled men of science to publish their views freely, with little fear of being burned as heretics. With the growth of the sciences, cracks started to appear in the well-trodden road of unshakeable literalist belief. Genesis, particularly - the book that provided the foundation of Christian eschatology - was under threat. We don't need a redeemer, or eternal life, or forgiveness, if Genesis is nonsense. Unfortunately for the Church, Darwin's book, published in 1859, makes Genesis look, at best, like a poor metaphor, and at worst, like an ignorant Bronze Age shepherd's attempt at cosmology. But without a printing press to circulate Darwin's book, and the countless journal articles and research papers that lay behind it, we'd still be burning scientists at the stake.

Now we come to the 21st century. Thanks to the Gutenberg of our own time - Tim Berners-Lee, the CERN scientist who created the World Wide Web - we have proliferation and availability of information on a scale that would have driven the medieval Church to distraction. No-one in the vicinity of a computer, these days, has an excuse to be ignorant. Yet religion persists. This is not surprising - we have evidence of Neanderthals, 30-40,000 years ago, burying their dead with religious rituals. That clearly shows that religion is either seriously built into us, like our fascination with fire, or that we have a very, very old habit to break, if break it we must.

The Internet now tweets and blogs madly with debate raging back and forth about whether God exists. He has, over time, been somewhat _pared down_. The God of the Old Testament, compiled about 2500 years ago, was a blustering interventionist, an old man with a beard, who appeared to people in person (walking in the Garden of Eden, or talking to Moses from a burning bush). He created everything and orchestrated everything. But over the course of the centuries, as science advanced and explained more and more of nature, God's role diminished. The Deists emerged - people such as George Washington and David Hume - men who felt that nature could be explained in its own terms, but that maybe God at least started it. Now, however, we have the top physicist on earth - Stephen Hawking - denying that there's "room" for a Creator at all. God seems to have no further role to play. Some argue that He is the foundation of morals, but many dispute this, so let's not go into that debate here. The point is, like the Protestant/Catholic wars that raged after Gutenberg, we are now seeing atheist/theist wars raging after Tim Berners-Lee.

We are at the start of a new cultural revolution which will define the next social direction that the West takes. Just as Protestantism broke the stranglehold of the Holy See on all principalities of the Medieval period, and led, ultimately, to the rise of independent nation-states, perhaps, just perhaps, a major break with religion will finally remove the last great fear that we have - of “other” people with “false” beliefs - and lead us to an international world, finally at peace, with nothing to kill each other about.

Thursday, 9 June 2011

God is not real for you and Atheism is not a belief

God can't be whatever you want him to be, because that leads to relativism. If God is purple for you and pink for me, there is no answer as to what God really is. If God is merely in our heads and has no independent reality, then God is imaginary. If God is independent of us, and exists apart from us, he must have all his properties independently of us. In that case, what he is for each of us is irrelevant - or worse, false - since only his independent properties could be universally true of him. Either God has independent real existence with real properties, or he is imaginary. That is the implication of our differing experiences of him.

As for atheism being a belief, that involves mistaking the etymology of the word for its definition. An analogous mistake would be assuming that Islam, which means submission, refers to submission before anything and everything. Atheism is, of course, a term coined by Christians as a pejorative. The fundamental core of atheism is not a belief in the non-existence of something, but rather a practice of empirical observation and testing, and of believing only what is observable, testable, verifiable, etc.

George H. Smith says that an atheist is "a person who does not believe in the existence of God", rather than one who believes God does not exist: since an atheist need make no claims about God, it is up to the believer to prove her case.

Should America fund Restructuring in the Middle East? - Part 3 of 3


It has recently been argued by Julie Taylor that there is a case to be made for America funding restructuring in the Middle East - especially in the Arab Spring states. (The Arab Spring being the term for the civil uprisings in many Muslim states this year.) The arguments in favour are obvious: it will engender goodwill towards America, who has, thus far, been painted as everything pernicious from "The Great Satan" all the way through to "The Crusader". The argument against, of course, is the budget deficit. America has already spent a huge fortune on the Middle East - most notably on wrecking it and depositing large armies in its terrain. The USA, the argument goes, should now focus on spending its tax dollars at home: helping the poor, for example, or on health care (God of the Republicans, forbid!).

Here's my two cents' worth. Why is America running a huge deficit over a Middle-Eastern war anyway? Is it not because, in the first instance, America needs to defend herself against an enemy which she ostensibly created in the first place? Up until World War I, America's official policy was non-interventionism - the Monroe Doctrine. In practice, America deviated from this, intervening in the Philippines and Panama - but these were exceptions, not the rule. The non-interventionist or isolationist sentiments persisted well beyond the 19th century. After helping in World War I, America refused to join the League of Nations or get involved in the Treaty of Versailles. She kept her distance again until Hitler arrived. In part, the US response to Hitler may have been due to the Stock Market crash. As we see now, with the "look to your own house first" sentiments from the American Left, the same applied in 1940 - people were concerned that the US Government was wasting time and money on foreign adventures. But as it turned out, America discovered that war was a profitable business. War was useful for fixing a collapsed economy, when there was nothing else to export but military equipment. This Roosevelt promoted in 1941, when he sold arms to the Allies. Effectively, this tied America's economy to Allied victory, and hence bound the US to assist her economic allies. The official excuse, of course, was that America stood for liberty, and Hitler and his allies for Fascism, and that America could not tolerate that. As we know, Pearl Harbour provided the required pretext. How is this different to today? Not at all; just swap out the names of the enemy countries involved. And so entered the new era of _America - World Police_.

Skim forward about six decades, to some falling skyscrapers, and we see the results of American interventionism. It is my belief that if America hadn’t meddled so much in Muslim politics - siding first with Saddam and then against him, for example, or not establishing military bases all around the Middle East - those skyscrapers may still have been standing. But could America have pursued a policy of non-intervention in the Middle East? It’s not clear. The threat of nuclear proliferation, or the Muslim nations possibly allying with Soviet Russia (and hence again, subsequent nuclear proliferation), made this impossible. The need for assured access to a primary supply of oil, in addition to the nuclear threat, made it unavoidable. America needed to have forces within striking distance of the Soviets. Interventionism had to happen. 9/11, in a manner of speaking, was an inevitable outcome of the Cold War and geography. Witness, as proof, how America helped the Taliban against the Soviets, but promptly exterminated the Taliban after 9/11.

Soviet Russia has, in the interim, collapsed. Forty-five years of paranoia about a great enemy, threatening the American way of life, is gone. What has happened since? Chaos has ensued. All the strategies developed by the US on the basis of the assumption of a single mega-enemy are no longer relevant. Our modern world is more complex. The biggest threat is no longer an ICBM to be met with a laser-armed satellite; it is now a single anonymous man with a dirty bomb in his backpack. The approach, of a military presence situated in specific strategic points, is not really relevant anymore - it just makes you look like an occupying army. A change in strategy is called for.

Let’s think about some of America’s putative successes: South Korea, Japan and Germany. If America had washed its hands of these nations after dealing with the conflict, what would have happened? Would they be the prosperous first-world democracies that they are today? Look at what happened to East Germany under Soviet rule. When the Berlin Wall fell, it was discovered that East Germany, socially, economically and technologically, was far behind the West. This could only be a result of the two different governmental strategies. I must conclude that the best thing that could happen to the states that have been involved in the Arab Spring, is that they could ask America for restructuring support - and get it.

Poverty breeds ignorance and anger. All the states with the highest rates of infant mortality, low life expectancy, poor governance, corruption, civil strife and violence, are those which have low GDPs and high levels of illiteracy. The worst terrorists come from theocracies - states run by priests with a vested interest in keeping people ignorant. The most human rights violations, per capita, occur in poverty-stricken states such as those in Central Africa. It is therefore imperative, if America really wants to stem the tide of resentment and anger against her "Bad Cop" foreign policy thus far, that she put her money where her mouth is - and help rebuild a geopolitical area which she has, in part, contributed towards breaking down. It's analogous to the socialist policies in South Africa. The more poor, starving people there are, the more crime there will be. Increase the social support for the poor, and you'll reduce the crime levels.

Obviously, there are some key differences in the Arab Spring cases. Firstly, none of the Arab Spring states were under attack by the US, or even threatened. Secondly, the movement to demand democracy was, to use business-speak, a grassroots initiative; it was not imposed by the US. Thirdly, none of the Arab Spring states posed an immediate threat to the West. But there is nonetheless a strong argument for offering help: it would create a perception that the US is not an imperialist aggressor, give credibility to the claim that the primary concern of the US is the "spread of democracy", and give the US a chance to play "Good Cop".

Remember: the only reason America has to spend so much money on a military presence in the Middle East is _because_ these states are unstable, and in many cases impoverished theocracies or dictatorships. If they, too, could be helped to flourish, as Germany, Japan and South Korea were, perhaps America could then realistically think about a much-ballyhooed “exit strategy”. But not until then. Until then, these states pose a risk of terror cells, threats to the oil supply, and human rights abuses. Of course, remaining in occupation will bring resentment with it; but if the occupation were sweetened with restructuring benefits, that resentment might be lessened. West Germany was, in 1945, a military enemy of the US, occupied by a US military presence. By 1989, it was a friendly US ally. The same could happen in the Middle East. Can you imagine a future in which you can tour the Middle East without fear of terrorists? It’s possible. America just has to finish the job she started.

Monday, 6 June 2011

Are Muslims experiencing a Holocaust? - Part 2 of 3

The West, it is often charged, is fomenting a kind of Islamophobia against practitioners of the Muslim religion. Witness the rants of the BNP in the UK, as a simple example, or the suppression of Islamic dress in France. Switzerland, furthermore, (banned further building of minarets)[] - the towers whence the Muslim faithful are called to prayer. America and her allies occupy Muslim lands, and have waged war against Islamic nations, causing terrible losses. Muslims, in their own view, seem to be undergoing an experience similar to that endured by the Jews in 1930s Europe. Like the Jews, Muslims are accused of being traitors - loyal only to their religion - and plotters aiming to overthrow the world order. Just as the Jews were falsely accused of manipulating the financial systems to further their goals, Muslims are falsely accused, by the likes of the BNP, of (plotting to make us all into Dhimmis - Islamic second-class citizens)[]. The Muslims of today experience racially-motivated hate attacks at the hands of skinheads in the UK and Germany, just as the Jews experienced physical violence at the hands of Hitler's brownshirts and, ultimately, systematic extermination at the hands of the SS.
This comparison - whether accurate or not - was made late last year by (Job Cohen)[], a Dutch (and Jewish) Labour politician. Certainly, in the Netherlands, the right wing has made some inroads into parliament. But this is unsurprising after the murder of (Theo van Gogh)[], great-grandson of Vincent van Gogh's brother Theo, for producing a film about the oppression of Muslim women with Ayaan Hirsi Ali, his collaborator and an apostate from Islam. His corpse had a warning about Jihad pinned to it with a knife. Van Gogh's work included a fictional portrayal of the assassination of (Pim Fortuyn)[], an academic and politician known for his negative views on Islam, who was himself ultimately assassinated.

But let's stop and reflect on how accurate the comparison is - between the Holocaust and a present-day Muslim experience of life in a Western nation. Are the BNP, or any other right-wing xenophobic party, presently in control of any European nation? No. Have Muslims actively been attacked by official representatives of any sitting political party? No. These are some crucial differences between the Holocaust and the present difficulties in which European Muslims find themselves. A further, more obvious difference is that Europe and the West are not systematically exterminating Muslims. One might argue, of course, that the 150 000 or so casualties of the Iraq war could be taken to represent martyrs to Islam, killed by Crusader invaders from the West. But they are not: they died ostensibly in defence of Saddam Hussein's fascist, secular Baath party, not Islam. At any rate - and not to diminish the terrible suffering of the Iraqi people - their suffering, thus far, is about _one-fortieth_ that of the Jews under Hitler. No Iraqis were used for scientific experiments, or burnt in ovens. Furthermore, the West has not attacked any of its own Muslim citizens. Indeed, many concessions have been made to them. For example, in the famous Danish Cartoon incident, many Western newspapers and governments supported the Muslim demands for censorship. Most Western nations have not prohibited Islamic dress. If this were Europe in the 1930s, the governments themselves would be the ones drawing the cartoons and pinning crescent-moon-shaped "Das Muslim" labels on people. So I don't think the comparison quite holds.

In fact, quite the opposite may be happening. Yet again, the Jews may ultimately be the victims and scapegoats. Consider the following. For 2000 years, the Jews have wandered the earth, persecuted and killed wherever they went. Even in England, where they were perhaps best received, the Jews were persecuted during the Middle Ages. They last had a homeland in 70 AD, when the Romans crushed their revolt and dispersed them. After that, they traversed, and were persecuted in, all the nations of Europe. They were burnt by the Inquisition, under the auspices of the Catholic Church; attacked by the armed forces of the Tsar of Russia; then by the Communists; then by Hitler. Yet even today, the Muslim world begrudges them a homeland, demanding the destruction of the state of Israel, and applying to anyone who supports it the pejorative term "Zionist" - a term also used by latter-day Nazis.

Granted, the modern state of Israel was established by force, and is still retained by force. But where else can a Jew go if he has no home elsewhere? Jews in Sweden and Holland report that they are now so outnumbered by Muslims that they see no alternative but to (flee to Israel)[]. Their claim is that, because of Western tolerance of Muslims and the large influx of Muslims into tolerant Western nations, Jews are again being scapegoated - this time for the oppression of the Palestinians. As if European Jews had any say or vote in Israel's actions. It's as irrational as blaming a European Muslim for the actions of Al Qaeda.

Israel is, admittedly, the world's only official, still-standing Apartheid state. Palestinians must live on that side of the wall, Jews on this side. To get from one side to the other, you have to show your pass documents. This is no different from Apartheid South Africa - even down to the detail of building settler towns on formerly Palestinian-owned land. Furthermore, the Israeli state cooperated and shared nuclear technology with the South African Apartheid state. This is one of the great ironies of history, since some members of the Apartheid apparatus openly espoused Nazism. Even today, the AWB - the Afrikaner Resistance Movement, South Africa's remaining racist party - flies a three-legged swastika on a Nazi-style banner.

But like the Apartheid state, Israel's violent acts are claimed to be acts of "self defence" against a perceived, and indeed self-created, "enemy". These actions derive from a fear of annihilation. Just as the Apartheid government fed off the fear of the "Swart Gevaar" - the Black Danger - Israel insists on the doctrine that if the Muslim world could eradicate Israel, it would. Unfortunately for Muslim sympathisers, Muslim leaders - most notably President Ahmadinejad - _do_ regularly call for Israel's annihilation. And as long as the Muslim world persists in this call, Israel will persist in violence towards the Muslims within its borders, whom it perceives as inner threats. Like the ANC bombers of the 1980s in South Africa, Palestinians still plot attacks and do kill unarmed citizens. Unsurprisingly, they are met with murderous responses from the Israeli state. One man's freedom fighter is another man's terrorist; I recall clearly how the ANC were called "terrorists" in the 1980s.

But where does all this intolerance come from? From the absolute certainty harboured by every devout believer of the world's three monotheistic religions: the view that _your_ God is not real, and mine _is_, and that my God has ordered me to kill you. "Oh, that's only the extremists", some argue. "It's not representative of religious people generally." Unfortunately, that is just a comforting myth. The psychology of mass hysteria - that people act irrationally in large groups, and that individuals will do whatever the group demands - is well established. Couple that with direct injunctions to kill unbelievers (Surah 9:5, 9:73, 47:4; Deuteronomy 13:6-15, 17:3-5; Psalm 139:21-22; Leviticus 24:16) and you have a recipe for war. The cure for the violence in the Middle East is secularism and material prosperity. As long as people are poor and uneducated, they will be prey for the violent rantings and exhortations of theocrats. And as long as theocrats rule, there can be no peace in the Middle East.

Friday, 3 June 2011

Dr Death Dies

After serving eight years in prison, (Jack Kevorkian has died of a thrombosis at the age of 83)[]. For those of you who don’t remember, he’s the doctor who had a special injection machine that helped people to commit suicide when they discovered that they were terminally ill. He was sentenced to jail for second-degree murder. My first thought when I saw the headlines was that it’s a pity he didn’t have the integrity to commit suicide himself using his own machine. At least that would have been consistent. He was, after all, pretty old and very sick.

Here’s the part I want to debate: helping people to commit suicide. Is it OK? Under what conditions? Nietzsche, the infamous German philosopher from the late 19th Century, felt that suicide, or, as he put it, ‘dying at the right time’, was quite honourable. He admired the Ancient Greeks for exiting at their chosen moment. In fact, he even said: “The thought of suicide is a powerful solace: by means of it one gets through many a bad night.”

I can see three separate issues here. One: ought a person to be allowed to commit suicide? Do they have that right? Two: ought a person to be allowed to assist someone in the process of suicide? Ought it to be legal? Three: is suicide immoral?

Firstly, I will tackle the question of morality. To me, this is fairly cut and dried. Clearly, if suicide is a form of murder - many languages, lacking a separate word, literally call it “self-murder” - then suicide is wrong. Indeed, the Bible seems to prohibit it with the Commandment “Thou shalt not kill”, and 1 Corinthians 6:19-20 (KJV) says that your body is the Lord’s Temple. But there is no clear injunction against suicide - nothing as clear as the injunctions concerning sex, for example. From a modern moral standpoint, however, people are held to have an inviolable right to life, and therefore any suicide is wrong on a modern interpretation, too.

I don’t know if this is true, however. If I think about the Utilitarian conception of morality, for example, which takes “the Good” to be the sum of the various possible outcomes, it seems to me that there may be circumstances under which suicide is better, and therefore the right course of action. Think, for example, of a mass murderer suffering remorse. Would it be better that he lived? Or think of someone captured by a malevolent foreign army, who is certain to be tortured to death. Suppose he has a cyanide pill with him, and if he takes it, he will not be tortured into revealing military secrets. By his suicide, he will spare himself hours of agony, and spare his nation’s army defeat and thousands of others their deaths. A Utilitarian would say his suicide would be good. But perhaps it’s not so much _good_ as the lesser of two evils. Now take the example of a person in a persistent vegetative state, on life support, or someone dying of incurable terminal cancer. Should they be allowed to commit suicide? My intuition says “yes”.

Next, let’s ask whether someone has the right to commit suicide. Let’s think about a common occurrence in our own society. We euthanise our terminally ill or extremely senescent pets. We deem it immoral to allow the animal to continue to suffer with cancer or kidney failure, or to drag itself around on its forelegs with its effluence matting its fur. We give our pets an opportunity to die with dignity, while they are still relatively comfortable. Why not humans? Animals, I argue, have as much a right to life as humans. I do not see them, generally speaking, as lesser beings, disposable like garbage. Why then, when we have the (R/A)SPCA to prevent rights abuses towards animals, do we not acknowledge that maybe the human Right to Life is circumscribed by people’s choices? Surely the right to life entails something about quality of life? I know that if I were in a persistent vegetative state, I’d not want to persist. Remember the huge fuss about (Terri Schiavo)[]?

Life is not just about biological functions. A plant has a circulatory system, takes in food, processes it, grows new cells, and so on. Yet we do not hesitate to kill plants, without the slightest regard for their “feelings” or “right to life”. This is why it’s called a “vegetative” state. What really makes a person a person is their personality - their ability to interact with others, their conscious mind. If that is completely gone, and is irreparable, what is the point?

Obviously, many people who commit suicide are not in a persistent vegetative state. Indeed, being in that state precludes you from committing suicide, since you can’t make any decisions at all, and suicide is a decision. Most people who commit suicide do so for irrational reasons - having been jilted, fired, or divorced - or even for ludicrous reasons, such as wanting to have their souls taken up by the (Hale-Bopp comet)[]. I think that these cases ought to be excluded by law. This is not hard to work out. A depressed teenager has no right to commit suicide - they have a right to psychological counselling. A person dying of terminal cancer, in my view, does have a right to suicide.

But what about the family, who love the dying person? Surely their rights count too? Surely they have a right not to be put through the anguish of the person’s death? Well, the fact is, if the person is in a vegetative state, or diagnosed with terminal cancer, then the person is either effectively dead or shortly going to be. Insisting that they drag their existence on for another month, six months, or a year is really just postponing the inevitable. It just prolongs the suffering. If these “loving” people are truly loving, and not actually just selfish, they’d want the best for the suffering person - which is an end to their suffering. Just as with a suffering dog or cat, they should allow the person they love to choose to end that suffering early.

And what about the argument that it’s cowardly? Well, we’re all going to die. In my view, clinging to life as long as possible is more cowardly than boldly going to face one’s own end; the latter takes far more courage. Sure, the person fears the potentially lengthy period of suffering that lies ahead, but what benefit, what accolade, will they gain by enduring it? Admiration? Unlikely - more likely people’s pity and condescension.

Lastly, this brings us to the question of assisted suicide. This is the tricky part. Is the person who helps in the suicide effectively a murderer, or an accomplice to murder? Let’s take the case of Armin Meiwes, which I’ve referred to before. His lover volunteered to be eaten. The German courts initially ruled that Meiwes was therefore only guilty of manslaughter, since his lover had volunteered (on retrial, he was convicted of murder). The same logic might apply here: the accomplice in the suicide would be guilty of manslaughter. But it also depends on how the accomplice operates. For example, in Kevorkian’s case, he provided the chemicals. I think that if he pressed the button that delivered the chemicals, then he probably was guilty of murder, or at least manslaughter. If, on the other hand, he merely provided the chemicals and the mechanism, and handed the trigger to the person wishing to commit suicide, then he was at most an accomplice in manslaughter, legally speaking.

The point is that governments need to make a decision on these two kinds of case: a terminally ill conscious person, and a person in a persistent vegetative state. Both need simple, clear legislation; firstly, to circumscribe the conditions under which euthanasia is legally permissible, and secondly, to specify which methods are legally permissible and to exonerate the person performing the euthanasia. It will have to work in a similar way to an abortion clinic. The medical professional will need to be protected not only from legal ramifications, but also from religious persecution. And there will need to be provision for medical professionals to refuse, on grounds of conscience or religion, to perform the procedure in any particular case.