Global Gulag Media Forum

MASS MIND CONTROL + SOCIAL ENGINEERING + PROPAGANDA => Mind Control => Topic started by: Brocke on Aug 31, 2012, 04:29:49 pm

Title: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:29:49 pm
Example 1

You can't even tie your shoes properly...

How to tie your shoes


Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:32:22 pm

Example 2

All those carrots you ate didn't improve your eyesight.


Carrots have long been touted for their efficacy in improving eyesight, and generations of kids have been admonished to not leave them on their plates lest they end up needing glasses. But are carrots the sight-boosters popular wisdom asserts them to be? And if not, where did this belief begin?

While carrots are a good source of vitamin A (which is important for healthy eyesight, skin, growth, and resisting infection), eating them won't improve vision. The purported link between carrots and markedly acute vision is a matter of lore, not of science. And it's lore of the deliberately manufactured type.

In World War II, Britain's air ministry spread the word that a diet of these vegetables helped pilots see Nazi bombers attacking at night. That was a lie intended to cover the real matter of what was underpinning the Royal Air Force's successes: Airborne Interception Radar, also known as AI. The secret new system pinpointed some enemy bombers before they reached the English Channel.


British Intelligence didn't want the Germans to find out about the superior new technology helping protect the nation, so they created a rumor to afford a somewhat plausible-sounding explanation for the sudden increase in bombers being shot down. News stories began appearing in the British press about extraordinary personnel manning the defenses, including Flight Lieutenant John Cunningham, an RAF pilot dubbed "Cat's Eyes" on the basis of his exceptional night vision that allowed him to spot his prey in the dark. Cunningham's abilities were chalked up to his love of carrots. Further stories claimed RAF pilots were being fed goodly amounts of this root vegetable to foster similar abilities in them.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:34:10 pm

Example 3

The first American slaves were white.


Most of Australia's "convicts" were shipped into servitude for such "crimes" as stealing seven yards of lace, cutting trees on an aristocrat's estate or poaching sheep to feed a starving family.

The arrogant disregard for the holocaust visited upon the poor and working class Whites of Britain by the aristocracy continues in our time because the history of that epoch has been almost completely extirpated from our collective memory.

When White servitude is acknowledged as having existed in America, it is almost always termed temporary "indentured servitude" or part of the convict trade, which, after the Revolution of 1776, centered on Australia instead of America. The "convicts" transported to America under the 1723 Waltham Act perhaps numbered 100,000.

The indentured servants who served a tidy little period of 4 to 7 years polishing the master's silver and china and then took their place in colonial high society were a minuscule fraction of the great unsung hundreds of thousands of White slaves who were worked to death in this country from the early 17th century onward.

Up to one-half of all the arrivals in the American colonies were White slaves, and they were America's first slaves. These Whites were slaves for life, long before Blacks ever were. This slavery was even hereditary: White children born to White slaves were enslaved too.

Whites were auctioned on the block with children sold and separated from their parents and wives sold and separated from their husbands. Free Black property owners strutted the streets of northern and southern American cities while White slaves were worked to death in the sugar mills of Barbados and Jamaica and the plantations of Virginia.

The Establishment has created the misnomer of "indentured servitude" to explain away and minimize the fact of White slavery. But bound Whites in early America called themselves slaves. Nine-tenths of the White slavery in America was conducted without indentures of any kind but according to the so-called "custom of the country," which was lifetime slavery administered by the White slave merchants themselves.

In George Sandys' laws for Virginia, Whites were enslaved "forever." The service of Whites bound to Berkeley's Hundred was deemed "perpetual." These accounts have been policed out of the much-touted "standard reference works" such as Abbott Emerson Smith's laughable whitewash, Colonists in Bondage.

I challenge any researcher to study 17th-century colonial America, sifting the documents, the jargon and the statutes on both sides of the Atlantic; they will discover that White slavery was a far more extensive operation than Black enslavement. It is when we come to the 18th century that one begins to encounter more "servitude" on the basis of a contract of indenture. But even in that period there was kidnapping of Anglo-Saxons into slavery as well as convict slavery.

In 1855, Frederick Law Olmsted, the landscape architect who designed New York's Central Park, was in Alabama on a pleasure trip and saw bales of cotton being thrown from a considerable height into a cargo ship's hold. The men tossing the bales somewhat recklessly into the hold were Negroes; the men in the hold were Irish.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:43:34 pm
Example 4

The Great Depression did not start in 1929.

Actually, the Great Depression did not begin in 1929. The stock market crash of 1929 was a significant event, but it was not the cause of the Great Depression. Market charts from the period show that the market had in fact begun a robust recovery, signaling growth in earnings and a return to growth in the economy as a whole. What really sent the market reeling was the passage of the Smoot–Hawley Tariff in June of 1930. In an attempt to help the American worker, the government made the horrible mistake of driving unemployment higher by starting a trade war with its trading partners. The price of goods and services rose while global demand fell in response to the higher prices of American goods.

What was a political promise made by Hoover to gain worker votes turned out to be a horrible job killer.


Smoot–Hawley Tariff Act

An Act To provide revenue, to regulate commerce with foreign countries, to encourage the industries of the United States, to protect American labor, and for other purposes.

The Tariff Act of 1930, otherwise known as the Smoot–Hawley Tariff (P.L. 71-361), was an act sponsored by United States Senator Reed Smoot and Representative Willis C. Hawley, and signed into law on June 17, 1930, that raised U.S. tariffs on over 20,000 imported goods to record levels.

The overall level of tariffs under the act was the second-highest in US history, exceeded by a small margin only by the Tariff of 1828. The ensuing retaliatory tariffs by U.S. trading partners reduced American exports and imports by more than half.

Some economists have opined that the tariffs contributed to the severity of the Great Depression.

The Smoot Hawley Tariff Act - WHRHS

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:45:59 pm
Example 5

We can only account for about 5% of the universe


There are all sorts of theories and fancy terms for what scientists think the universe is made of. But the fact is they just can't account for the roughly 95% of the matter and energy that should make up the universe.

It could be said that if scientists can only account for 5% of the physical universe, then they really have only 5% knowledge of everything.

Perhaps we are only smart enough to explain that much of what we think exists around us.

BBC: Abandoning Dark Matter Theory? Most of Our Universe is Missing

Part 1 - Most of our Universe is Missing - BBC Horizon

Part 2 - Most of our Universe is Missing - BBC Horizon

Part 3 - Most of our Universe is Missing - BBC Horizon

Part 4 - Most of our Universe is Missing - BBC Horizon

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 04:58:01 pm
Example 6

The year is 1715, not 2012

The early Middle Ages (614-911 A.D.) never happened.


Do we live in the 18th century?

A few fringe professors have caused rumblings with their controversial claim that three hundred years of human history have been entirely made up. But why do they believe in phantom time, and how do they think it happened?

Phantom Time Hypothesis and other methodologies of empire building... by Vox.. State University of New York

The Phantom Time Hypothesis
Written by Alan Bellows on 04 October 2006

When Dr. Hans-Ulrich Niemitz introduces his paper on the “phantom time hypothesis,” he kindly asks his readers to be patient, benevolent, and open to radically new ideas, because his claims are highly unconventional. This is because his paper is suggesting three difficult-to-believe propositions: 1) Hundreds of years ago, our calendar was polluted with 297 years which never occurred; 2) this is not the year 2005, but rather 1708; and 3) The purveyors of this hypothesis are not crackpots.

The Phantom Time Hypothesis suggests that the early Middle Ages (614-911 A.D.) never happened, but were added to the calendar long ago either by accident, by misinterpretation of documents, or by deliberate falsification by calendar conspirators. This would mean that all artifacts ascribed to those three centuries belong to other periods, and that all events thought to have occurred during that same period occurred at other times, or are outright fabrications. For instance, a man named Heribert Illig (pictured), one of the leading proponents of the theory, believes that Charlemagne was a fictional character. But what evidence is this outlandish theory based upon?

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 05:02:42 pm

Example 7

The 'Peace Symbol' is actually an ancient sign for despair and death.

The peace symbol (also called the "broken cross," "crow's foot," "witch's foot," "Nero Cross," "sign of the 'broken Jew,'" and the "symbol of the 'anti-Christ'") is actually a cross with the arms broken - DOWNWARD. It also signifies the "gesture of despair," and the "death of man."

"The Germanic tribes who used it attributed strange and mystical properties to the sign. Such a 'rune', in reverse is said to have been used by 'black magicians' in incantations and condemnations....To this very day the inverted broken cross - identical to the socialists' 'peace' symbol - is known in Germany as a 'todesrune', or death rune. Not only was it ordered by Hitler's National Socialists that it must appear on German death notices, but it was part of the official inscription prescribed for the gravestones of Nazi officers of the dread SS. The symbol suited Nazi emphasis on pagan mysticism."
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 05:05:46 pm
Example 8

The Pledge of Allegiance was written by a socialist and inspired the Nazi salute.


Pledge of Allegiance, Francis Bellamy, Edward Bellamy

The Pledge of Allegiance was the origin of the stiff-armed salute adopted later by the National Socialist German Workers Party (Nazis). For more information, visit the site that archives the discoveries of the noted historian Dr. Rex Curry, author of the book "Pledge of Allegiance Secrets."

Whitest Kids U'Know: Pledge of Allegiance

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 05:08:10 pm

Example 9

Heliocentrism was suggested as early as the second century BC


Plutarch (c. 45-125) reports that Seleucus of Seleucia (born c. 190 BC) was championing the heliocentric system and teaching it as an established fact in the second century BC (Seleucia was an important Greek city in Mesopotamia, on the west bank of the Tigris River). At that same time, however, Hipparchus of Rhodes (190-120 BC) reverted to the geocentric belief and was instrumental in killing the heliocentric idea altogether [cf. Thomas Little Heath (1861-1940)].

The idea was strongly suppressed by the Church for centuries. Reviving it took more than a little courage on the part of Copernicus and his early followers.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 05:27:41 pm

Example 10

The Electric Sun/Electric Universe vs The Nuclear Star/Gravitational Universe theory


The Electric Universe

The Electric Universe theory highlights the importance of electricity throughout the Universe. It is based on the recognition of existing natural electrical phenomena (e.g. lightning, St Elmo's Fire) and the known properties of plasmas (ionized "gases"), which make up 99.999% of the visible universe and react strongly to electromagnetic fields.

In this day and age there is no longer any doubt that electrical effects in plasmas play an important role in the phenomena we observe on the Sun. The major properties of the "Electric Sun (ES) model" are as follows:

 The Electric Universe theory Wiki
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 05:48:40 pm

Example 11

You have more than five senses. You have somewhere between 9 and 21 senses.


There is no firm agreement among neurologists as to the number of senses because of differing definitions of what constitutes a sense. One definition states that an exteroceptive sense is a faculty by which outside stimuli are perceived. The traditional five senses are sight, hearing, touch, smell and taste, a classification attributed to Aristotle. Humans are considered to have at least five additional senses that include: nociception (pain); equilibrioception (balance); proprioception and kinaesthesia (joint motion and acceleration); sense of time; thermoception (temperature differences); and possibly an additional weak magnetoception (direction), and six more if interoceptive senses (see other internal senses below) are also considered.

One commonly recognized categorisation for human senses is as follows:

This categorisation has been criticized as too restrictive, however, as it does not include categories for accepted senses such as the sense of time and sense of pain.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:00:42 pm

Example 12

Einstein did not invent the "Theory of Relativity".


Though Einstein is the scientist most frequently associated with the theory of relativity (he is responsible for the theories of special relativity and general relativity), several thinkers contributed to its formulation.

The first known person to theorize about relativity was Galileo, who articulated the first "relativity principle" in the seventeenth century. In formulating his relativity principle, Galileo removed the distinction between stationary and moving observers, arguing that people on earth cannot tell if they are really at rest or if they are moving with the rotation of the earth each day. To demonstrate this, Galileo used the example of a cannonball falling from the top of a ship's mast. He noted that the cannonball will land at the base of the mast whether the ship is moving steadily through the ocean or at rest in a dock. Even if they observe the falling ball, people on the ship cannot distinguish their state of rest from the ship's motion by observing anything that takes place within the "reference frame" of the ship. To determine whether the ship is at rest or moving at a steady speed, they must observe the ship relative to its surrounding environment.
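Galileo's argument can be checked with a few lines of arithmetic. The sketch below (illustrative numbers, not from the source) applies the Galilean transform x' = x - v*t to show that the ball lands at the mast base in both the dock frame and the ship frame:

```python
import math

# Galileo's ship: in the dock frame, a ball dropped from the mast top
# keeps the ship's horizontal velocity, so it tracks the moving mast
# all the way down. The Galilean transform x' = x - v*t maps dock
# coordinates into ship coordinates; the ball-to-mast separation is
# identical in both frames, so no on-board observation reveals motion.

g = 9.81   # gravitational acceleration, m/s^2
h = 20.0   # mast height, m (illustrative number)
v = 5.0    # ship's constant speed, m/s (illustrative number)

t = math.sqrt(2 * h / g)   # time for the ball to fall the mast height

# Dock-frame horizontal positions at the moment of landing
# (both ball and mast base start at x = 0 when the ball is released):
ball_dock = v * t   # ball carried forward at the ship's speed
mast_dock = v * t   # mast base has moved the same distance

# Galilean transform into the ship frame: x' = x - v*t
ball_ship = ball_dock - v * t
mast_ship = mast_dock - v * t

print(ball_dock - mast_dock)  # 0.0 - lands at the mast base (dock frame)
print(ball_ship - mast_ship)  # 0.0 - same result in the ship frame
```

The separation between ball and mast is the same number in every frame related by a Galilean transform, which is exactly why the observers on deck learn nothing about the ship's motion.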

QI: Who created the theory of relativity?
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:02:30 pm

Example 13

People of the Middle Ages knew the earth was round.

The Flat Earth Myth


The 'flat earth' myth was concocted and popularized by Washington Irving and a French scholar, and the 'flat error' was propagated by Darwinist historians, who compared the denial of Darwin's theory to Columbus's struggle for acceptance by his scholastic religious contemporaries.

Neither Christopher Columbus, nor his contemporaries, believed the earth was flat. Yet this curious illusion persists today, firmly established with the help of the media, textbooks, teachers--even noted historians.

We all know that Christopher Columbus encountered stiff resistance to his idea of sailing west to try to reach the East Indies. Many of us have laboured under the impression that people were concerned he would sail off the edge of the Earth, which was widely believed to be flat. History is thought to have vindicated Columbus against those filled with the Christian superstition of a flat Earth who held on to old-fashioned beliefs. A minority of people are even under the impression that Galileo's trial centred on this subject rather than on whether the Earth orbited the sun.

It comes as some surprise, therefore, to find that Columbus was wrong and his critics were right - not because the world is actually flat after all, but because at the time everyone knew it was a globe; the argument was about how big it was. The idea that the uncouth people of the Middle Ages thought the Earth was flat is an example of the myth that has been propagated since the nineteenth century to give us a quite unfair view of this vibrant and exciting period.

So what was Columbus's mistake? The disagreement between him and his critics was over the size of the world - not an easy thing to measure. The story of this controversy can be traced back to the ancient Greeks and the various ways their writings were transmitted to the West.

The invention of the flat Earth myth can be laid at the feet of Washington Irving, who included it in his historical novel on Columbus, and the wider idea that everyone in the Middle Ages was deluded has been widely accepted ever since.

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:04:28 pm

Example 14

Louis Pasteur did more harm than good. Germs do not cause disease, the terrain or condition of the host is what causes disease.

The germ – or microbial – theory of disease was popularized by Louis Pasteur (1822-1895), the inventor of pasteurization. This theory says that there are fixed, external germs (or microbes) which invade the body and cause a variety of separate, definable diseases. In order to get well, you need to identify and then kill whatever germ made you sick. The tools generally employed are drugs, surgery, radiation and chemotherapy. Prevention includes the use of vaccines as well as drugs, which - theoretically at least - work by keeping germs at bay.

Just prior to the time that Pasteur began promoting the “monomorphic” germ theory, a contemporary by the name of Claude Bernard (1813-1878) was developing the theory that the body's ability to heal was dependent on its general condition or internal environment. Thus disease occurred only when the terrain or internal environment of the body became favorable to disease.

An extremely brilliant contemporary of Claude Bernard's was Antoine Bechamp (1816-1908).  Bechamp had degrees in physics, chemistry and biology. In addition he was a medical doctor and a university professor. Bechamp built upon and extended Bernard's idea, developing his own theory of health and disease which revolved around the concept of “pleomorphism.”

Through meticulous research and in contrast to Pasteur's subsequent, misinformed promotion of "monomorphic" or single-formed, fixed state microbes (or germs), Bechamp had discovered tiny organisms (or microorganisms) he called “microzyma” which were “pleomorphic” or “many-formed.” (Pleo = many and morph = form.) Interestingly, these microzyma were found to be present in all things whether living or dead, and they persist even when the host has died. Many were impervious to heat as well.

Bechamp’s microzyma, including specific bacteria, could take on a number of  forms during the host’s life cycle and these forms depended (as Bernard contended)  primarily on the chemistry of their environment, or the biological terrain, or to put it a third way, the condition of the host. In other words there is no single cause of disease. Instead disease results when microzyma change form, function and toxicity according to the terrain of the host.   Bad bacteria, viruses and fungi are merely the forms assumed by the microzymas when there is a condition or terrain that favors disease and these "bad" microzyma themselves give off toxic byproducts, further contributing to a weakened terrain.

This is how Bechamp himself put it in his last book The Third Element of the Blood: ". . . the microzyma, whatever its origin, is a ferment; it is organized, it is living, capable of multiplying, of becoming diseased and of communicating disease. . . All mycrozyma are ferments of the same order - that is to say, they are organisms, able to produce alcohol, acetic acid, lactic acid and butyric acid.  . . In a state of health the microzymas of the organism act harmoniously, and our life is, in every meaning of the word, a regular fermentation. In a state of disease, the microzymas do not act harmoniously, and the fermentation is disturbed; the mycrozymas have either changed their function or are placed in an abnormal situation by some modification of the medium. . ."

Through his research, Bechamp showed that the essence of life is a “fermentation” process of digestion, assimilation, disassimilation and excretion. Interruptions in any of these functions would result in a lack of energy, full-blown disease or even death. Rather than causing disease, Bechamp showed that harmful microzyma - which Pasteur took to be external germs attacking a host - actually arise when the body's normal metabolic processes - or "fermentations" - are disturbed.

Thus, according to Bernard, Bechamp and their successors, disease occurs to a large extent as a function of biology and as a result of the changes that take place when metabolic processes are thrown off. Germs become symptoms that stimulate the occurrence of more symptoms - which ultimately culminate in disease. A body thus weakened also naturally becomes vulnerable to external harmful microzyma - or, if you prefer, pleomorphic germs. So, our bodies are in effect mini-ecosystems, or biological terrains, in which nutritional status, level of toxicity and pH or acid/alkaline balance play key roles.

For these and other reasons Bechamp argued strenuously against vaccines, asserting that "The most serious disorders may be provoked by the injection of living organisms into the blood."  Untold numbers of researchers have agreed with him. Nonetheless Pasteur and his like-minded contemporary Robert Koch - both being shameless self-promoters - easily won the propaganda war favoring the widespread use of vaccines - which then made boatloads of money for everyone associated.  In fact, according to researcher E. Douglas Hume, if it had not been for mass acceptance of vaccines, the germ theory might very well have died a quiet death.

Decades after Pasteur's death researchers tried to expose the fact that Pasteur liberally "borrowed", plagiarized and misinterpreted the work of others, especially that of Bechamp - which is how Pasteur came up with the “germ” theory. The efforts of these researchers unfortunately have had little effect on the practice of medicine or the way we think about disease.

Instead, as Dr. Bernard Jensen and Mark Anderson assert in their book Empty Harvest, "The germ theory is still believed to be the central cause of disease because around it exists a colossal supportive infrastructure of commercial interests that built multi-billion-dollar industries based upon this theory. To the scientific satisfaction of many in the health field, it has long been disproven as the primary cause of disease. Germs are, rather, an effect of disease."

Interestingly, and to this day, the whole theory of microzymas and how they operate has never been disproved by opposing research. To the contrary, decades of research - beginning with Pasteur himself - have only served to bolster the microzyma theory. Not only does the germ theory remain unsubstantiated today, but Pasteur himself recanted it in his private journal, writing the famous words which were revealed many decades after his death:

“It is not the germ that causes disease but the terrain in which the germ is found.”

For more information see:

The third element of the blood
Antoine Béchamp, Montague Richard Leverson, 1912


the 1935 books Bechamp or Pasteur?  and Pasteur Exposed, both by E. Douglas Hume.

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:08:01 pm

Example 15

Technology hasn't made music sound better. It is just louder.


The loudness war Video Example

The Loudness War Analyzed
Recorded music doesn’t sound as good as it used to. Recordings sound muddy, clipped and lack punch. This is due to the ‘loudness war’ that has been taking place in recording studios. To make a track stand out from the rest of the pack, recording engineers have been turning up the volume on recorded music. Louder tracks grab the listener’s attention, and in this crowded music market, attention is important. And thus the loudness war - engineers must turn up the volume on their tracks lest the track sound wimpy when compared to all of the other loud tracks. However, there’s a downside to all this volume. Our music is compressed. The louds are loud and the softs are loud, with little difference. The result is that our music seems strained, there is little emotional range, and listening to loud all the time becomes tedious and tiring.
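The trade-off described above can be illustrated numerically. The sketch below (a toy example, not from the source) runs a signal with a quiet verse and a loud chorus through a crude 4:1 compressor, re-normalizes the peak, and compares average loudness (RMS): the compressed version is louder overall, but the gap between soft and loud passages shrinks.

```python
import math

# Toy illustration of the loudness war: squash a signal's dynamic
# range, then normalize it back to the same peak level. The average
# loudness (RMS) goes up, but the quiet and loud sections converge.

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A "song": a quiet verse followed by a loud chorus (440 Hz tone,
# 8000 Hz sample rate, amplitudes 0.1 and 1.0).
verse  = [0.1 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
chorus = [1.0 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
original = verse + chorus

def compress(s, threshold=0.2, ratio=4.0):
    """Crude per-sample compressor: above the threshold, the signal's
    excess is reduced by the given ratio (4:1 here)."""
    a = abs(s)
    if a <= threshold:
        return s
    return math.copysign(threshold + (a - threshold) / ratio, s)

compressed = [compress(s) for s in original]
# "Make it louder": re-normalize so the compressed version peaks at 1.0.
peak = max(abs(s) for s in compressed)
compressed = [s / peak for s in compressed]

print(f"original RMS:   {rms(original):.3f}")    # quieter on average
print(f"compressed RMS: {rms(compressed):.3f}")  # louder, less dynamics
```

Real mastering chains use attack/release envelopes and look-ahead limiting rather than this per-sample curve, but the loudness-versus-dynamics trade-off is the same.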

The Loudness Wars: Why Music Sounds Worse
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:19:04 pm

Example 16

The mnemonic rule "I before E except after C" is wrong 21 times more often than it is correct.


There are more words in the English language spelled E before I than I before E! There are 923 words that have the letter combination CIE in them.
There are 21 times as many words that break the rule as follow it.

beige, cleidoic, codeine, conscience, deify, deity, deign,
 dreidel, eider, eight, either, feign, feint, feisty,
 foreign, forfeit, freight, gleization, gneiss, greige,
 greisen, heifer, heigh-ho, height, heinous, heir, heist,
 leitmotiv, neigh, neighbor, neither, peignoir, prescient,
 rein, science, seiche, seidel, seine, seismic, seize, sheik,
 society, sovereign, surfeit, teiid, veil, vein, weight,
 weir, weird
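The claim can be sanity-checked mechanically. The sketch below (an illustration, not from the source) classifies a handful of the words above against the rule; confirming the 923 and 21-to-1 figures would require running it over a full dictionary word list.

```python
# Classify words against the mnemonic "i before e, except after c".
# A word breaks the rule if it contains "cie", or an "ei" that is
# not immediately preceded by "c".

def breaks_rule(word):
    w = word.lower()
    for i in range(len(w) - 1):
        pair = w[i:i + 2]
        prev = w[i - 1] if i > 0 else ""
        if pair == "ie" and prev == "c":   # "cie", e.g. "science"
            return True
        if pair == "ei" and prev != "c":   # bare "ei", e.g. "weird"
            return True
    return False

# A handful of the words listed above, plus two that follow the rule.
words = ["beige", "codeine", "science", "believe", "receive",
         "weird", "height", "foreign", "seize", "conscience"]
violations = [w for w in words if breaks_rule(w)]
print(violations)   # 8 of the 10 sample words break the rule
```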

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:21:27 pm

Example 17

Being drunk is no excuse...

Alcohol may not cause promiscuity, violence and anti-social behaviour! Alcohol impairs our reaction times, muscle control, co-ordination, short-term memory, perceptual field, cognitive abilities and ability to speak clearly. But it does not cause us selectively to break specific social rules.


Viewpoint: Is the alcohol message all wrong?

BBC Radio
11 October 2011 Last updated at 23:54 GMT

Many people think heavy drinking causes promiscuity, violence and anti-social behaviour. That's not necessarily true, argues Kate Fox.

I am a social anthropologist, but what I do is not the traditional intrepid sort of anthropology where you go and study strange tribes in places with mud huts and monsoons and malaria.

I really don't see why anthropologists feel they have to travel to unpronounceable corners of the world in order to study strange tribal cultures with bizarre beliefs and mysterious customs, when in fact the weirdest and most puzzling tribe of all is right here on our doorstep. I am of course talking about my own native culture - the British.

And if you want examples of bizarre beliefs and weird customs, you need look no further than our attitude to drinking and our drinking habits. Pick up any newspaper and you will read that we are a nation of loutish binge-drinkers - that we drink too much, too young, too fast - and that it makes us violent, promiscuous, anti-social and generally obnoxious.

Clearly, we Brits do have a bit of a problem with alcohol, but why?

The problem is that we Brits believe that alcohol has magical powers - that it causes us to shed our inhibitions and become aggressive, promiscuous, disorderly and even violent.

But we are wrong.

In high doses, alcohol impairs our reaction times, muscle control, co-ordination, short-term memory, perceptual field, cognitive abilities and ability to speak clearly. But it does not cause us selectively to break specific social rules. It does not cause us to say, "Oi, what you lookin' at?" and start punching each other. Nor does it cause us to say, "Hey babe, fancy a shag?" and start groping each other.

The effects of alcohol on behaviour are determined by cultural rules and norms, not by the chemical actions of ethanol.

There is enormous cross-cultural variation in the way people behave when they drink alcohol. There are some societies (such as the UK, the US, Australia and parts of Scandinavia) that anthropologists call "ambivalent" drinking-cultures, where drinking is associated with disinhibition, aggression, promiscuity, violence and anti-social behaviour.

There are other societies (such as Latin and Mediterranean cultures in particular, but in fact the vast majority of cultures), where drinking is not associated with these undesirable behaviours - cultures where alcohol is just a morally neutral, normal, integral part of ordinary, everyday life - about on a par with, say, coffee or tea. These are known as "integrated" drinking cultures.

This variation cannot be attributed to different levels of consumption - most integrated drinking cultures have significantly higher per-capita alcohol consumption than the ambivalent drinking cultures.

Instead the variation is clearly related to different cultural beliefs about alcohol, different expectations about the effects of alcohol, and different social rules about drunken comportment.
(Image: youth drinking Buckfast tonic wine - in the UK, heavy drinking is associated with a range of stereotypes)

This basic fact has been proved time and again, not just in qualitative cross-cultural research, but also in carefully controlled scientific experiments - double-blind, placebos and all. To put it very simply, the experiments show that when people think they are drinking alcohol, they behave according to their cultural beliefs about the behavioural effects of alcohol.

The British and other ambivalent drinking cultures believe that alcohol is a disinhibitor, and specifically that it makes people amorous or aggressive, so when in these experiments we are given what we think are alcoholic drinks - but are in fact non-alcoholic "placebos" - we shed our inhibitions.

We become more outspoken, more physically demonstrative, more flirtatious, and, given enough provocation, some (young males in particular) become aggressive. Quite specifically, those who most strongly believe that alcohol causes aggression are the most likely to become aggressive when they think that they have consumed alcohol.

Our beliefs about the effects of alcohol act as self-fulfilling prophecies - if you firmly believe and expect that booze will make you aggressive, then it will do exactly that. In fact, you will be able to get roaring drunk on a non-alcoholic placebo.

And our erroneous beliefs provide the perfect excuse for anti-social behaviour. If alcohol "causes" bad behaviour, then you are not responsible for your bad behaviour. You can blame the booze - "it was the drink talking", "I was not myself" and so on.

More: (

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:24:27 pm

Example 18

Hitler did not "snub" Jesse Owens at the 1936 Olympics.


Who really snubbed 1936 Olympic champion Jesse Owens?

Although many people know better, the myth of Hitler's snub of Jesse Owens at the 1936 Olympics in Berlin is persistent. But that's not where the Olympic myths end. The alleged snub is not even the most important of several Berlin Olympics misconceptions that need correcting.

In his day, Ohio State track star James Cleveland (“J.C.,” hence “Jesse”) Owens (1913-1980) was as famous and admired as Carl Lewis, Tiger Woods, or Michael Jordan are today. (1996 Olympic champ Carl Lewis has been called the “second Jesse Owens.”) But there are significant differences between then and now. Because of racial discrimination in his native land, Jesse Owens was not able to enjoy anything close to the huge financial benefits that African American athletes can expect today. When Owens came home from his success in Nazi Germany, he faced barriers that he would not face today. After the ticker-tape parades, he received no Hollywood offers, no endorsement contracts, and no ad deals. His face didn't appear on cereal boxes. Three years after his victories in Berlin, a failed business deal forced Owens to declare bankruptcy. He made a modest living from his own sports promotions, including racing against a thoroughbred horse. After moving to Chicago in 1949, he started a successful public relations firm. Owens was also a popular jazz disc jockey for many years in Chicago.

The fact that there were American athletes competing in the 1936 Olympics at all is still considered by many to be a blotch on the history of the U.S. Olympic Committee. Germany's open discrimination against Jews and other “non-Aryans” was already public knowledge when many Americans opposed U.S. participation in the “Nazi Olympics.” Opponents of U.S. participation included the American ambassadors to Germany and Austria. But those who warned that Hitler and the Nazis would use the 1936 Olympic Games in Berlin for propaganda purposes lost the battle to have the U.S. boycott the Berlin Olympiad.

Which brings us to another Olympic myth. It is often stated that Jesse Owens' four gold medals humiliated Hitler by proving to the world that Nazi claims of Aryan superiority were a lie. But Hitler and the Nazis were far from unhappy with the Olympic results. Not only did Germany win far more medals than any other country at the 1936 Olympics, but the Nazis had pulled off the huge public relations coup that Olympic opponents had predicted, casting Germany and the Nazis, falsely, in a positive light. In the long run, Owens' victories turned out to be only a minor embarrassment for Nazi Germany.

But Jesse Owens' reception by the German public and the spectators in the Olympic stadium was warm. There were German cheers of “Yesseh Oh-vens” or just “Oh-vens” from the crowd. Owens was a true celebrity in Berlin, mobbed by autograph seekers to the point that he complained about all the attention. He later claimed that his reception in Berlin was greater than any other he had ever experienced, and he was quite popular even before the Olympics.

The Snub Myth
Hitler did shun a black American athlete at the 1936 Games, but it wasn't Jesse Owens. On the first day of the Olympics, just before Cornelius Johnson, an African American athlete who won the first gold medal for the U.S. that day, was to receive his award, Hitler left the stadium early. (The Nazis later claimed it was a previously scheduled departure.) Prior to his departure, Hitler had received a number of winners, but Olympic officials informed the German leader that in the future he must receive all of the winners or none at all. After the first day, he opted to acknowledge none. Jesse Owens had his victories on the second day, when Hitler was no longer in attendance. Would Hitler have snubbed Owens if he had been in the stadium on day two? Perhaps. But since he wasn't there, he didn't.

Ironically, the real snub of Owens came from his own president. Even after ticker-tape parades for Owens in New York City and Cleveland, President Franklin D. Roosevelt never publicly acknowledged Owens' achievements (gold in the 100 meter, 200 meter, 400 meter relay, and long jump). Owens was never invited to the White House and never even received a letter of congratulations from the president. Almost two decades passed before another American president, Dwight D. Eisenhower, honored Owens by naming him “Ambassador of Sports” — in 1955. (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:26:28 pm

Example 19

There is no such color as pink


There is no pink light (

You've probably seen light that looks pink, but where does this colour come from? Different wavelengths of visible light correspond to colours of the rainbow - and pink isn't one of them. In our latest One-Minute Physics video, animator Henry Reich takes us through the mysterious make-up of pink light.

There’s No Such Thing as the Color Pink

Want to have your brain blown for a few minutes today? Dip your head in some physics, and realize that there's no such thing as pink. Scientifically speaking, that is: it's just something our brain makes up.

MinutePhysics puts it in predictably concise terms: all colors correspond to wavelengths of light. But there's no wavelength in there for pink! Instead, it's a combination of neural trickery—our brains strip green out of the spectrum to fill in for pink. Brains! (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:29:39 pm

Example 20

Statistics are used to trick you.


Go figure: Why nothing is really news at all

By Michael Blastland

GO FIGURE - Seeing stats in a different way

Seen the news today? It's all about what happens. In his final Go Figure column, Michael Blastland wants to know about what didn't.

Say it's reported that candy floss doubles your risk of dying suddenly. Sounds bad.

Now flip this risk around so that it's expressed as the chance of nothing happening.

The first way of looking at it is a 100% increase in risk.

The second might mean a fall in your chance of nothing happening of 0.0001%.

This is because the actual daily risk of sudden death from accident, violence or poisoning and the like is about one in a million. Double it and you get two in a million. That's your 100%.

Meanwhile, the chance of being OK might fall from 999,999 in a million, to 999,998 in a million. Suddenly, it doesn't sound so bad.
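The two framings describe exactly the same change, which is easy to check. A minimal sketch in Python, using the article's own figures (a baseline daily risk of sudden death of about one in a million, then doubled):

```python
# The same risk change framed two ways, using the article's figures:
# a baseline daily risk of sudden death of about one in a million, doubled.
baseline = 1 / 1_000_000   # one in a million
doubled = 2 / 1_000_000    # two in a million

# Framing 1: relative increase in the risk of dying
relative_increase = (doubled - baseline) / baseline * 100
print(f"Relative increase in risk: {relative_increase:.0f}%")

# Framing 2: absolute fall in the chance that nothing happens
p_ok_before = 1 - baseline   # 999,999 in a million
p_ok_after = 1 - doubled     # 999,998 in a million
absolute_fall = (p_ok_before - p_ok_after) * 100
print(f"Fall in chance of being OK: {absolute_fall:.4f} percentage points")
```

Same event, but "a 100% increase in risk" and "a 0.0001 percentage-point fall in the chance of being OK" land very differently on the reader.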

This habit of seeing risk only in terms of what happens, or might happen, rather than what doesn't, is actually a bias. Sometimes what doesn't happen is as important a way of seeing the world as what does.

In a sketch by comedians Mitchell and Webb, a filmmaker (Mitchell) is interviewed about his oeuvre after a clip from his latest work - Sometimes Fires Go Out. In the film, a small fire in the kitchen - yep, you got it - goes out. That's it.

That Mitchell and Webb Look - Sometimes Fires Go Out (
The interviewer (Webb) says the film has been reviewed as "unrelentingly real", "a devastatingly faithful rendition of how life is" and "dull, dull, unbearably dull".

He introduces another: "The Man who had a Cough and it's just a Cough and he's Fine." Two Victorian lovers meet on the station platform. The man, spluttering, looks more pallid and doomed with each encounter.

"It's just a cough," he says, stoically. Except that it is - just a cough. In the last scene, he's dandy. It is one of the finest comic sketches about probability you'll ever see. But then, not much competition.

Stories are about what happens - they're not about what doesn't. Anton Chekhov said: "If in Act I you have a pistol hanging on the wall, then it must fire in the last act." If nothing happens in a story, it's a joke. But the boring truth in real life is that, usually, nothing happens. Usually, the gun isn't fired.

Likewise, a cough is not a statistically significant event, but if a man coughs in an episode of the hospital drama Casualty, it's a triple heart bypass.

Jerker Denrell teaches at Oxford Business School. He describes hearing a presentation about the attributes of top entrepreneurs. Writing in the Harvard Business Review, he said the argument went as follows: "All of these leaders shared two key traits, which accounted for their success: They persisted, often despite initial failures, and they were able to persuade others to join them."

The only trouble was, said Denrell, these selfsame traits are necessarily the hallmark of spectacularly unsuccessful entrepreneurs.

The difference is that the successful ones are still around and they're the ones we look to for examples. The ones for whom success didn't happen have gone - and are often ignored.

Denrell wrote that some studies have shown a failure rate of 50% of all new businesses during their first three to five years. After rapid growth in the US tyre industry, for example, the number of firms peaked in 1922 at 274. By 1936, more than 80% were gone. That is, usually, boringly, big success doesn't happen.

"Anyone studying the industry in the 1930s," said Denrell, "would have been able to observe just a very small sample of the population that had originally entered."

If you want to study success, you have to pay as much attention to those for whom it didn't happen as to those for whom it did to see if your explanations for success are exclusive to success. But business bestsellers are not known for stories of those who never made it.

One last example. In his book Picturing the Uncertain World, Howard Wainer describes the apparent success of small schools, bringing massive charitable funding to the cause of making schools smaller.

And it's true that small schools were more often at the top of the leagues than you'd expect. About 12% of the top 50 schools for maths scores came from the smallest 3% of schools overall. The only problem was that small schools were also more often at the bottom of the leagues than you'd expect - about 18% of the bottom 50.

Wainer's explanation for the small schools phenomenon was that the ability of children in small schools just bounces around a lot more from year to year because it's a smaller sample.
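Wainer's point - that small samples simply bounce around more - is easy to demonstrate with a minimal simulation sketch (the school sizes and score distribution here are invented for illustration). Every pupil is drawn from the same distribution, yet small schools crowd both ends of the league table:

```python
import random

random.seed(42)

def school_mean(n):
    # Every pupil's score comes from the same distribution (mean 100, sd 15),
    # so any difference between school means is pure sampling noise.
    return sum(random.gauss(100, 15) for _ in range(n)) / n

# 500 schools, each randomly either small (30 pupils) or large (300 pupils)
schools = [(size, school_mean(size))
           for size in (random.choice([30, 300]) for _ in range(500))]

# League table: sort schools by mean score
schools.sort(key=lambda s: s[1])
bottom50, top50 = schools[:50], schools[-50:]

small_top = sum(1 for size, _ in top50 if size == 30)
small_bottom = sum(1 for size, _ in bottom50 if size == 30)
print(f"Small schools in top 50: {small_top}, in bottom 50: {small_bottom}")
```

The small schools dominate both the top and the bottom of the table, purely because a mean over 30 pupils is far noisier than a mean over 300.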

But if you're looking for the keys to success, maybe you don't look at the bottom. Lack of success might strike you as a non-event, if it strikes you at all.

This is the last Go Figure. It's about to become a regular non-event. Hearty thanks to all who've followed us.

More: (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:35:55 pm

Example 21

Popular explanations of how wings (aerodynamic lift) work are often erroneous and scientifically unsound


The paper airplane guy (Bernoulli's principle comment at 7:50) (

It is amazing that today, almost 100 years after the first flight of the Wright Flyer, groups of engineers, scientists, pilots, and others can gather together and have a spirited debate on how an airplane wing generates lift. Various explanations are put forth, and the debate centers on which explanation is the most fundamental.
— John D. Anderson, Curator of Aerodynamics at the National Air and Space Museum

Other alternative explanations for the generation of lift
Many other alternative explanations for the generation of lift by an airfoil have been put forward, of which a few are presented here. Most of them are intended to explain the phenomenon of lift to a general audience. Although the explanations may share features in common with the explanation above, additional assumptions and simplifications may be introduced. This can reduce the validity of an alternative explanation to a limited sub-class of lift generating conditions, or might not allow a quantitative analysis. Several theories introduce assumptions which proved to be wrong, like the equal transit-time theory.

An explanation of lift frequently encountered in basic or popular sources is the equal transit-time theory. Equal transit-time states that because of the longer path of the upper surface of an airfoil, the air going over the top must go faster in order to catch up with the air flowing around the bottom, i.e. the parcels of air that are divided at the leading edge and travel above and below an airfoil must rejoin when they reach the trailing edge. Bernoulli's Principle is then cited to conclude that since the air moves faster on the top of the wing the air pressure must be lower. This pressure difference pushes the wing up.

However, equal transit time is not accurate and the fact that this is not generally the case can be readily observed. Although it is true that the air moving over the top of a wing generating lift does move faster, there is no requirement for equal transit time. In fact the air moving over the top of an airfoil generating lift is always moving much faster than the equal transit theory would imply.

The assertion that the air must arrive simultaneously at the trailing edge is sometimes referred to as the "Equal Transit-Time Fallacy".

Note that while this theory depends on Bernoulli's principle, the fact that this theory has been discredited does not imply that Bernoulli's principle is incorrect. (
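One quick way to see that equal transit time cannot be the whole story is simply to run the numbers. A back-of-envelope sketch (all figures are assumed, illustrative values for a small light aircraft): if the upper surface path is only about 1.5% longer, the Bernoulli pressure difference it implies falls far short of the lift actually needed.

```python
# Back-of-envelope test of the equal transit-time story.
# All numbers are assumed, illustrative values for a small light aircraft.
rho = 1.225           # sea-level air density, kg/m^3
v = 55.0              # cruise speed, m/s (about 200 km/h)
path_ratio = 1.015    # assume the upper surface path is 1.5% longer
wing_area = 16.0      # wing area, m^2
weight = 1100 * 9.81  # aircraft weight, N (about 1,100 kg)

# Equal transit time would force the upper-surface air to be just 1.5% faster
v_upper = v * path_ratio

# Bernoulli pressure difference implied by that speed difference
dp = 0.5 * rho * (v_upper**2 - v**2)
lift = dp * wing_area

print(f"Lift implied by equal transit time: {lift:.0f} N")
print(f"Lift actually needed to fly level:  {weight:.0f} N")
```

Under these assumptions the implied lift comes out at well under a tenth of what the aircraft needs, which is why the measured upper-surface flow must be much faster than equal transit time allows.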

Popular explanations of how wings work are often erroneous and scientifically unsound


Bernoulli's principle
Wrong explanations may be given by well-meaning teachers and others, but false teaching may sometimes be just for convenience. Many years ago, the famous aerodynamicist Theodore von Kármán instructed his assistant: "When you are talking to technically illiterate people you must resort to the plausible falsehood instead of the difficult truth." (From Stories of a 20th Century Life, ISBN 0-915760-04-5, by W.R. Sears, former assistant to von Kármán.) Falsehood, whether intentional or not, is still being taught.


The most popular theory of wing operation, which we may call Hump Theory, because it requires a wing to have a more convex upper surface as compared to the lower, is easily shown to be false. Hump theory is based on an assumption of equal transit times, that air passage over a curved upper wing surface must occur in the same length of time as air passage below where the surface is more flat, and hence of a shorter path length. In order to have the same transit time, flow at the longer path upper surface must be of greater velocity than that at the lower surface. Thus, in accordance with Bernoulli's law, it is reasoned that upper surface pressure must then be less than at the lower surface, thereby producing upward lift. Equal transit time is sometimes illustrated by representing bits of passing flow above and below an airfoil or wing as shown here:

Although Bernoulli's law is sound and well proven, this popular explanation, world-wide, of wing operation is false. Upper surface flow is indeed faster than the lower, so much so that upper surface transit time is normally less than the lower, as indicated here:


Although the assumption of equal transit time is wrong and has no basis in known physics, it can be found in books from otherwise reputable publishers such as National Geographic, Macmillan and others in this country and abroad. College-level teaching of aerodynamicists and aeronautical engineers does not include equal transit time, which cannot survive mathematical investigation.

The fallacy of equal transit time can be deduced from consideration of a flat plate, which will indeed produce lift, as anyone who has handled a sheet of plywood in the wind can testify. (

A fluid flowing past the surface of a body exerts surface force on it. Lift is any component of this force that is perpendicular to the oncoming flow direction. It contrasts with the drag force, which is the component of the surface force parallel to the flow direction. If the fluid is air, the force is called an aerodynamic force.

Lift is commonly associated with the wing of a fixed-wing aircraft, although lift is also generated by propellers; kites; helicopter rotors; rudders, sails and keels on sailboats; hydrofoils; wings on auto racing cars; wind turbines and other streamlined objects. While the common meaning of the word "lift" assumes that lift opposes gravity, lift in its technical sense can be in any direction since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is flying straight and level (cruise) most of the lift opposes gravity. However, when an aircraft is climbing, descending, or banking in a turn, for example, the lift is tilted with respect to the vertical. Lift may also be entirely downwards in some aerobatic manoeuvres, or on the wing on a racing car. In this last case, the term downforce is often used. Lift may also be horizontal, for instance on a sail on a sailboat.

An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. Non-streamlined objects such as bluff bodies and plates (not parallel to the flow) may also generate lift when moving relative to the fluid, but will have a higher drag coefficient dominated by pressure drag.

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:42:06 pm

Example 22

Primates such as gorillas and chimpanzees cannot learn to "speak" in sign language.


The Reality Behind Koko & Signing Apes - Pt1 (

The Reality Behind Koko & Signing Apes - Pt2 (

Dr. Robert Sapolsky of Stanford University discussing the bizarre story of Dr. Francine 'Penny' Patterson and Koko the 'signing' gorilla.

Are gorillas using sign language really communicating with humans?


A couple obvious problems present themselves when one looks into this talking-ape business. The first, as you suggest, is that interpretation of the gorilla's conversation, if such it be, is left to the handler, who generally sees any improbable concatenation of signs as deeply meaningful. During the 1998 on-line chat you saw bits of in Harper's (the whole thing is at (, for example, Koko, without being prompted or questioned, made the sign for nipple, which Francine Patterson, her trainer, interpreted as a rhyme for "people." (Patterson further claimed that this was a reference to the chat session's audience.) Even if you buy the idea that gorillas, who cannot speak, grasp the concept of rhyme, this sounds like wishful thinking. Similar examples abound: "lips" is supposedly Koko's word for woman, "foot" her word for man. Koko made a lot of signs, and sometimes expressed desires or other thoughts, but nothing in the transcript suggests a sustained conversation, even of the simple sort you might have with a toddler.

That brings us to the second problem. What constitutes language use? In 1979 Herbert Terrace of Columbia University published a skeptical account of his efforts to teach American Sign Language to a chimpanzee named Nim Chimpsky. Nim accomplished the elementary linguistic task of connecting a sign to a meaning, and could be taught to string signs together to express simple thoughts such as "give orange me give eat." But in Terrace's view Nim could not form new ideas by linking signs in ways he hadn't been taught--he didn't grasp syntax, in other words, arguably the essence of language. (A dog, after all, may understand that bringing his leash to his owner is a sign that he wants to go out, but nobody sees that as evidence of language use.)

Terrace's work was a major blow to talking-ape proponents. But their case started looking stronger in 1990, when researcher Emily Sue Savage-Rumbaugh of Georgia State University presented evidence of language development in a bonobo chimp named Kanzi. One of the more telling complaints made about gorillas like Koko who communicated via sign language was that they often babbled, producing long, apparently meaningless strings of signs. Their handlers would then pluck a few lucky hits from the noise and declare that communication had occurred. Savage-Rumbaugh got around this problem by teaching Kanzi to point to printed symbols on a keyboard, a less ambiguous approach. She claimed that the ape demonstrated a rough grasp of grammar using this system. What's more, when presented with 653 sentences making requests using novel word combinations, Kanzi responded correctly 72 percent of the time--supposedly comparable to what a human child can do at two and a half years old.

Today, from what I can tell, scientific opinion is divided along disciplinary lines. Many researchers who work primarily with animals accept or at least are receptive to the idea that apes can be taught a rudimentary form of language. Linguists, on the other hand, dismiss the whole thing as nonsense. Personally I'm happy to concede that the boundary between animal and human communication isn't as sharply drawn as we once thought. Animals (not just primates--check out Alex the talking African gray parrot sometime) can use language in limited ways. They can respond to simple questions on a narrow range of subjects; they can express basic thoughts and desires. I'll even buy the possibility that some are capable of employing elementary syntax. However, all this strikes me as the equivalent of teaching a computer to beat people at chess--a neat trick, but not one that challenges fundamental notions about human vs nonhuman abilities. I've seen nothing to persuade me that animals can use language as we do, that is, as a primary tool with which to acquire and transmit knowledge. I won't say such a thing is impossible. But in light of the muddled state of the debate so far, the first task is to decide what would constitute a fair test. (

Does Koko the Gorilla pass the Turing test? (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:44:16 pm

Example 23

It's the FAR side of the moon, not the DARK side.


Myth: The side of the moon that is facing away from the Earth is in permanent darkness, hence the name.

Reality: With the exception of the Pink Floyd album of the same name, the idea of the “dark side of the moon” is totally erroneous.

Of course, that doesn’t actually stop film makers representing it as dark.

The reality is that the term "dark side" only really refers to our understanding of the nature of the moon. In space, the "far side", as it should really be known, gets equal if not greater solar rays upon its dusty grey surface. (

Far side of the Moon
"Both the near and far sides receive (on average) almost equal amounts of light from the Sun. However, the term "dark side of the moon" is commonly used poetically to refer to the far side." (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:50:00 pm

Example 24

Flying is NOT the safest form of transport


The air transport industry will almost always choose per-km-based statistics, which are optimal for them, as most fatalities occur on landing and take-off, while the distances flown are large.

The one thing that stands out is that, whichever way you look at it, motorcycles are disastrously the most dangerous form of transport. Bus and rail are the safest forms of transport by any measure. And while road traffic injuries are the leading cause of injury-related deaths worldwide, the sheer volume of road travel masks this in the statistics. (

Statistically, the rank of transport mode is as follows (per passenger hour):

Per passenger kilometer (which is sometimes used), the ranking is as follows:

I tend to prefer to measure my life by hours, not kilometers.


Now, don't get me wrong. Flying is very safe. But is it the safest, as is often claimed?

Statistics on the subject are based upon the number of deaths per year per transport. On the face of it, air travel does appear to be relatively safer than other options.

In 2004, 347 people died worldwide due to air traffic accidents, while in the UK alone 3,221 died due to road accidents - which on the face of it makes air travel at least ten times safer (not even taking into account all the other road deaths in the world).

However, as things work out, it's never so simple. For example, the number of UK flights in the same period was 3.5 million, while car journeys can be estimated to be in the region of 22 billion. So the reality is that car travel is actually safer per journey than air travel, and by some margin.

Killed in an airline flight - 0.0001369863013698630136986301369863%
Killed by car - 0.00000001826484018264840182648401826484% (
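The per-journey arithmetic can be reproduced from the figures quoted above. (Note the article mixes worldwide air deaths with UK flight counts, and its printed percentages appear to rest on further unstated assumptions, so the absolute values should be taken loosely - but the per-journey comparison still points the same way.)

```python
# Per-journey risk, recomputed from the figures quoted in the article (2004):
# 347 air-travel deaths against ~3.5 million UK flights, and 3,221 UK road
# deaths against an estimated ~22 billion car journeys.
air_deaths, flights = 347, 3.5e6
car_deaths, journeys = 3221, 22e9

p_air = air_deaths / flights    # risk per flight
p_car = car_deaths / journeys   # risk per car journey

print(f"Risk per flight:      {p_air:.2e}")
print(f"Risk per car journey: {p_car:.2e}")
print(f"Per journey, car travel is roughly {p_air / p_car:.0f}x safer")
```

On these figures a single car journey carries a far smaller risk than a single flight, even though per kilometre the ranking reverses.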
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:55:46 pm

Example 25

Seamen do not actually give women and children priority when ships are sinking.

'The Sinking Ship' by Mark Bryan (

Women and children aren’t saved first

August 1, 2012 - 07:15
By: Hanne Jakobsen

It’s a myth that seamen give women and children priority when ships are sinking. Women have much less chance of surviving and children are even worse off.

When the Titanic sank in 1912, women and children were allowed first on board the lifeboats. It’s one of very few instances in history where this chivalrous maritime norm was actually practiced.

The century-old story of the Titanic is well known:

When it struck an iceberg, the men on board the ocean liner gave women and children priority access to the lifeboats. As a consequence, the odds of survival for women were three times higher than for men.

The notion that women and children should be evacuated first has proliferated in popular culture since the Titanic sank and now it’s seen as a common maritime social norm. It’s called the unwritten law of the sea and such chivalry is regarded as a trait among mariners.

Swedish researchers now say this is hogwash.

In an evaluation of 18 major disasters at sea from around 1800 to today, they found that women have only half the chance of surviving that men do. The odds for children to survive are even slimmer.

Men had twice as good odds
The fates of more than 18,000 people at sea were decided in the 18 shipwrecks analyzed by the researchers in this study.

Economists Mikael Elinder and Oscar Erixson at Uppsala University in Sweden have looked at several factors, including the odds of survival for crew members compared to those of the passengers, whether the captain's behaviour has any impact on the results and whether the ratio of women to men on board has any significance.

In general there is little indication that this form of chivalry is a nautical norm:

Women had a bit less than an 18-percent chance of surviving these calamities at sea, whereas men had twice that – nearly 35 percent.

In fact, women fared better than men in just two of the shipwrecks: the Titanic in 1912, and aboard the HMS Birkenhead, which went down in 1852.

The latter, a British troopship, was the shipwreck that established the protocol of women and children first, but on this steam frigate the women comprised just a little over one percent of the passengers.

“We were really surprised by these findings. We’d expected the norms to apply,” says Elinder.
The crew before the passengers

Elinder and Erixson started their study of the myth by investigating the Estonia tragedy in 1994. The sinking of the ferry Estonia, which sailed between Tallinn and Stockholm, was Europe’s biggest sea disaster since World War II. Only 137 of the 989 people on board survived.

Of these 137, only 26 were women.

“In this particular wreck, men had a four-fold chance of survival. Estonia was also one of the disasters in which the crew had better odds than the passengers,” says Elinder.

According to him, some countries have regulations, not just a norm, specifying that the crew must help the passengers safely to lifeboats and life rafts before boarding these themselves. But in reality this rule is often overlooked, the report states:

In nine of the 18 accidents the crew had an advantage over the passengers. The crew are usually alerted to the accident earlier and are better acquainted with the ship and more accustomed to the sea. So the odds are on their side, unless they choose to help passengers evacuate the ship first.
The Estonia accident was a terrible disaster, killing 852 people. Out of the survivors, only 20 percent were women, and researchers think this indicates the 'women and children first' rule wasn’t practiced on board. (Photo: Wikimedia Creative Commons)

No differences in survival rates between passengers and crew were found in the other nine accidents. But the passengers had no advantage, as would be expected if they were given priority.
Worst for the children

The figures showed that the children involved in such disasters are worst off. Elinder and Erixson didn’t have all the data for children, but information was available about how many children were on board for nine of the ships that went down. These indicate the survival odds for children:

“The children had about a 15-percent chance of surviving a sinking, and that’s the lowest rate of all,” says Elinder.

“We can only speculate on the reasons, but it coincides well with the picture that the most vulnerable victims perish the most," he says. "If each person only thinks about saving his or her own skin, it’s natural for children to fare the worst.”

Aid from women’s liberation

Other factors the researchers included were the share of female passengers on board, how long the voyage had been before the disaster struck and of course how many passengers the ship had in total. None of these had any impact on relative survival rates.

However, the researchers chalked up one positive trend:

The odds for women have improved in the post-WW II years – the rates of survival between the sexes have evened out some, even though men still have the advantage.

Women’s liberation can take credit – women have generally become more capable of saving themselves. Two factors that help in this context are girls getting more swimming instruction and changes in female clothing styles.

It comes as no surprise that it’s easier to swim in jeans than in heavy skirts, copious undergarments and corsets.
The command evens out natural disadvantages

“These are rather depressing results," says Elinder. "Nevertheless it’s better to know what the situation really is instead of sustaining and believing a myth. This can help people to better figure out what to do when disaster strikes.”

An obvious question arises in a more modern and gender-equal world:

Is there really a need to rescue women before men today? Isn’t that discriminatory?

Elinder thinks the standing orders to save women and children first should remain in effect. Men are usually stronger than women; they have physical and mental capabilities that increase their odds of survival when a ship is going down.

For instance they are better at scrambling out of chaotic and clogged corridors after a ship capsizes. An aggressiveness fuelled by testosterone can help them fight their way to the deck and to better places in line when it’s 'every man for himself'.

“If you give the command to save women and children first there is still no guarantee they will survive more than men would. But it can give the two sexes nearly equal chances,” says Elinder.

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Aug 31, 2012, 06:57:20 pm

Example 26

DNA evidence is proof of an individual's identity. (matches? we don't need no stinkin' matches!)


A crime lab's findings raise doubts about the reliability of genetic profiles. The bureau pushes back.
July 20, 2008 | Jason Felch and Maura Dolan | Times Staff Writers

State crime lab analyst Kathryn Troyer was running tests on Arizona's DNA database when she stumbled across two felons with remarkably similar genetic profiles. The men matched at nine of the 13 locations on chromosomes, or loci, commonly used to distinguish people. The FBI estimated the odds of unrelated people sharing those genetic markers to be as remote as 1 in 113 billion. But the mug shots of the two felons suggested that they were not related: One was black, the other white.

In the years after her 2001 discovery, Troyer found dozens of similar matches -- each seeming to defy astronomical odds.

As word spread, these findings by a little-known lab worker raised questions about the accuracy of the FBI's DNA statistics and ignited a legal fight over whether the nation's genetic databases ought to be opened to wider scrutiny. The FBI laboratory, which administers the national DNA database system, tried to stop distribution of Troyer's results and began an aggressive behind-the-scenes campaign to block similar searches elsewhere, even those ordered by courts, a Times investigation found. At stake is the credibility of the compelling odds often cited in DNA cases, which can suggest an all but certain link between a suspect and a crime scene.

When DNA from such clues as blood or skin cells matches a suspect's genetic profile, it can seal his fate with a jury, even in the absence of other evidence. As questions arise about the reliability of ballistic, bite-mark and even fingerprint analysis, genetic evidence has emerged as the forensic gold standard, often portrayed in courtrooms as unassailable. But DNA "matches" are not always what they appear to be. Although a person's genetic makeup is unique, his genetic profile -- just a tiny sliver of the full genome -- may not be. Siblings often share genetic markers at several locations, and even unrelated people can share some by coincidence.

No one knows precisely how rare DNA profiles are. The odds presented in court are the FBI's best estimates.

The Arizona search was, in effect, the first test of those estimates in a large state database, and the results were surprising, even to some experts.

Lawyers seek searches

Defense attorneys seized on the Arizona discoveries as evidence that genetic profiles match more often than the official statistics imply -- and are far from unique, as the FBI has sometimes suggested. Now, lawyers around the country are asking for searches of their own state databases. Several scientists and legal experts want to test the accuracy of official statistics using the 6 million profiles in CODIS, the national system that includes most state and local databases.

"DNA is terrific and nobody doubts it, but because it is so powerful, any chinks in its armor ought to be made as salient and clear as possible so jurors will not be overwhelmed by the seeming certainty of it," said David Faigman, a professor at UC Hastings College of the Law, who specializes in scientific evidence.

FBI officials argue that, under their interpretation of federal law, use of CODIS is limited to criminal justice agencies. In their view, defense attorneys are allowed access to information about their specific cases, not the databases in general.

Bureau officials say critics have exaggerated or misunderstood the implications of Troyer's discoveries. Indeed, experts generally agree that most -- but not all -- of the Arizona matches were to be expected statistically because of the unusual way Troyer searched for them. In a typical criminal case, investigators look for matches to a specific profile. But the Arizona search looked for any matches among all the thousands of profiles in the database, greatly increasing the odds of finding them.

As a result, Thomas Callaghan, head of the FBI's CODIS unit, has dismissed Troyer's findings as "misleading" and "meaningless."

He urged authorities in several states to object to Arizona-style searches, advising them to tell courts that the probes could violate the privacy of convicted offenders, tie up crucial databases and even lead the FBI to expel offending states from CODIS -- a penalty that could cripple states' ability to solve crimes.

In one case, Callaghan advised state officials to raise the risk of expulsion with a judge but told the officials that expulsion was unlikely to actually happen, according to a record of the conversation filed in court. In an interview with The Times, Callaghan denied any effort to mislead the court.

The FBI's arguments have persuaded courts in California and other states to block the searches. But in at least two states, judges overruled the objections. The resulting searches found nearly 1,000 more pairs that matched at nine or more loci.

"I can appreciate why the FBI is worried about this," said David Kaye, an expert on science and the law at Arizona State University and former member of a national committee that studied forensic DNA. But "people's lives do ride on this evidence," he said. "It has got to be explained."

Concerned about errors

From her first discovery in 2001, Troyer and her colleagues in the Arizona Department of Public Safety's Phoenix DNA lab were intrigued.

At the time, many states looked at only nine or fewer loci when searching for suspects. (States now commonly attempt to compare 13 loci, and they may be able to search for more in the future. But even now, in many cases, fewer than 13 loci are discernible from crime scene evidence because of contamination or because of degradation over time.) Based on Troyer's results, she and her colleagues believed that a nine-locus match could point investigators to the wrong person.

"We felt it was interesting and just wanted people to understand it could happen," said Troyer, who initially declined to be interviewed, then cautiously discussed her findings by telephone, with her bosses on the line. "If you're going to search at nine loci, you need to be aware of what it means," said Todd Griffith, director of the Phoenix lab. "It's not necessarily absolutely the guy."

Troyer made a simple poster for a national conference of DNA analysts. It showed photos of the white man and the younger black man next to their remarkably similar genetic profiles. Some who saw the poster said they had seen similar matches in their own labs. Bruce Budowle, an FBI scientist who specializes in forensic DNA, told colleagues of Troyer that such coincidental matches were to be expected. Three years later, Bicka Barlow, a San Francisco defense attorney, came across a description of Troyer's poster on the Internet.

Its implications became clear as she prepared to defend a client accused of a 20-year-old rape and murder. A database search had found a nine-locus match between his DNA profile and semen found in the victim's body. Based on FBI estimates, the prosecutor said the odds of a coincidental match were as remote as 1 in 108 trillion.

Recalling the Arizona discovery, Barlow wondered if there might be similar coincidental matches in California's database -- the world's third-largest, with 360,000 DNA profiles at the time. The attorney called Troyer in Phoenix to learn more. Troyer seemed eager to talk about her discovery, which still had her puzzled, Barlow recalled. The analyst told Barlow she had searched the growing Arizona database since the conference and found more pairs of profiles matching at nine and even 10 loci.

Encouraged, Barlow subpoenaed a new search of the Arizona database. Among about 65,000 felons, there were 122 pairs that matched at nine of 13 loci. Twenty pairs matched at 10 loci. One matched at 11 and one at 12, though both later proved to belong to relatives. Barlow was stunned. At the time, such matches were almost unheard of.

That same year, Fred Bieber, a Harvard professor and expert in forensic DNA, testified in an unrelated criminal case that just once had he seen a pair of profiles matching at nine of 13 markers, and they belonged to brothers. He had heard of a 10-locus match between two men, but it was the result of incest -- a man whose father was also his older brother. Indeed, since 2000, the FBI has treated certain rare DNA profiles as essentially unique -- attributable to a single individual "to a reasonable degree of scientific certainty."

Other crime labs have adopted the policy, and some no longer tell jurors there is even a possibility of a coincidental match.

Soon after Barlow received the results, Callaghan, the head of the FBI's DNA database unit, reprimanded Troyer's lab in Phoenix, saying it should have sought the permission of the FBI before complying with the court's order in the San Francisco case. Asked later whether Callaghan had threatened her lab, Troyer said in court, "I wouldn't say it's been threatened, but we have been reminded."

Dwight Adams, director of the FBI lab at the time, faxed Griffith, Troyer's boss, a letter saying the Arizona state lab was "under review" for releasing the search results. "While we understand that the Arizona Department of Public Safety, acting in good faith, complied with a proper judicial court order in the release of the nine-loci search of your offender DNA records, this release of DNA data was not authorized," Adams wrote, asking Arizona to take "appropriate corrective action." Arizona officials obtained a court order to prevent Barlow from sharing the results with anyone else.

But it was too late. After a judge found the Arizona results to be irrelevant in Barlow's case, the defense attorney e-mailed them to a network of her colleagues and DNA experts around the country. Soon, defense lawyers in other states were seeking what came to be known as "Arizona searches."

'Don't panic'

For years, DNA's strength in the courtroom has been the brute power of its numbers. It's hard to argue with odds like 1 in 100 billion. Troyer's discovery threatened to turn the tables on prosecutors. At first blush, the Arizona matches appeared to contradict those statistics and the popular notion that DNA profiles, like DNA, were essentially unique. Law enforcement experts scrambled to explain.

Three months after the court-ordered search in Arizona, Steven Myers, a senior DNA analyst at the California Department of Justice, gave a presentation to the Assn. of California Crime Lab Analysts. It was titled "Don't Panic" -- a hint at the alarm Troyer's discovery had set off. Many of the Arizona matches were predictable, Myers said, given the type of search Troyer had conducted. In a database search for a criminal case, a crime scene sample would have been compared to every profile in the database -- about 65,000 comparisons. But Troyer compared all 65,000 profiles in Arizona's database to each other, resulting in about 2 billion comparisons. Each comparison made it more likely she would find a match.

When this "database effect" was considered, about 100 of the 144 matches Troyer had found were to be expected statistically, Myers found. Troyer's search also looked for matches at any of 13 genetic locations, while in a real criminal case the analyst would look for a particular profile -- making a match far less likely.
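The arithmetic behind this "database effect" is easy to reproduce. The sketch below is an illustration only: the database size comes from the article, but the per-pair match probability is an assumed value chosen for demonstration, not an FBI statistic.

```python
from math import comb

n_profiles = 65_000  # size of the Arizona database at the time of the search

# A single-profile search compares one crime-scene profile against the database.
single_search_comparisons = n_profiles

# Troyer's search compared every profile against every other profile.
all_pairs_comparisons = comb(n_profiles, 2)  # n * (n - 1) / 2

print(f"single-profile search: {single_search_comparisons:,} comparisons")
print(f"all-pairs search:      {all_pairs_comparisons:,} comparisons")
# The all-pairs figure is roughly 2.1 billion, the order of magnitude
# quoted in the article.

# If each pair matched at nine or more loci independently with probability p
# (an assumed, illustrative value), the expected number of matching pairs
# scales linearly with the number of comparisons made.
p = 5e-8
expected_matches = all_pairs_comparisons * p
print(f"expected matching pairs at p = {p}: {expected_matches:.0f}")
```

With the assumed p, roughly a hundred matching pairs would be expected by chance alone, which is the flavor of Myers' point: the search style, not flawed statistics, can account for most of the Arizona matches.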

Further, any nonmatching markers would immediately rule out a suspect. In the case of the black and white men who matched at nine loci, the four loci that differed -- if available from crime scene evidence -- would have ensured that the wrong man was not implicated. The presence of relatives in the database could also account for some of Troyer's findings, the FBI and other experts say. Whether that's the case would require cumbersome research because the databases don't contain identifying information, they say.

Flaws in assumptions?

Some scientists are not satisfied by these explanations. They wonder whether Troyer's findings signal flaws in the complex assumptions that underlie the FBI's rarity estimates.

In the 1990s, FBI scientists estimated the rarity of each genetic marker by extrapolating from sample populations of a few hundred people from various ethnic or racial groups. The estimates for each marker are multiplied across all 13 loci to come up with a rarity estimate for the entire profile. These estimates make assumptions about how populations mate and whether genetic markers are independent of each other. They also don't account for relatives. Bruce Weir, a statistician at the University of Washington who has studied the issue, said these assumptions should be tested empirically in the national database system. "Instead of saying we predict there will be a match, let's open it up and look," Weir said.
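The multiplication the article describes (the "product rule") can be sketched in a few lines. The per-locus frequencies below are made-up illustrative values, not real FBI population data; note that the final number rests entirely on the independence assumption the critics quoted above want tested.

```python
# Hypothetical probability of matching at each of 13 loci (illustrative only).
per_locus_match_freq = [0.05, 0.08, 0.10, 0.06, 0.07, 0.09, 0.05,
                        0.08, 0.06, 0.10, 0.07, 0.09, 0.05]

# The product rule treats the loci as statistically independent, so the
# full-profile frequency is simply the product of the per-locus frequencies.
profile_freq = 1.0
for f in per_locus_match_freq:
    profile_freq *= f

print(f"estimated profile frequency: 1 in {1 / profile_freq:,.0f}")
```

Even modest per-locus frequencies multiply out to a figure in the hundreds of trillions, which is how the astronomical courtroom odds arise; if the loci are not truly independent, the real rarity can be substantially lower.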

Some experts predict that given the rapid growth of CODIS, such a search would produce one or more examples of unrelated people who are identical at all 13 loci. Such a discovery was once unimaginable.

'Dire consequences'

In January 2006, not long after Barlow distributed the results of the court-ordered search in Arizona, the FBI sent out a nationwide alert to crime labs warning of similar defense requests. Soon after, the bureau's arguments against the searches were being made in courtrooms around the country.

In California, Michael Chamberlain, a state Department of Justice official, persuaded judges that such a search could have "dire consequences" -- violating the privacy of convicted offenders, shutting down the database for days and risking the state's expulsion from the FBI's national DNA system. All this for a search whose results would be irrelevant and misleading to jurors, Chamberlain argued. When similar arguments were made in an Arizona case, the judge ruled that the search would be "nothing more than an interesting deep sea fishing expedition."

But in Illinois and Maryland, courts ordered the searches to proceed, despite opposition from the FBI and state officials at every turn. In July 2006, after Chicago-area defense attorneys sought a database search on behalf of a murder suspect, the FBI's Callaghan held a telephone conference with Illinois crime lab officials. The topic was "how to fight this," according to lab officials' summary of the conversation, which later became part of the court record. Callaghan suggested they tell the judge that Illinois could be disconnected from the national database system, the summary shows. Callaghan then told the lab officials "it would in fact be unlikely that IL would be disconnected," according to the summary. In an interview, Callaghan disputed he said that.

"I didn't say it was unlikely to happen," he said. "I was asked specifically, what's the likelihood here? I said, I don't know, but it takes a lot for a state to be cut off from the national database."

A week later, the judge ordered the search. Lawyers for the lab then took the matter to the Illinois Supreme Court, arguing in part that Illinois could lose its access to the federal DNA database. The high court refused to block the search.

The result: 903 pairs of profiles matching at nine or more loci in a database of about 220,000. State officials obtained a court order to prevent distribution of the results. The Times obtained them from a scientist who works closely with the FBI.

A 'unilateral decision'

A similar fight occurred in a death penalty case in Maryland during the summer and fall of 2006. The prosecutor saw a DNA match between a baseball cap dropped at the crime scene and the suspect as so definitive that he didn't plan to tell the jury about the chance of a coincidental match, records show. Seeking to cast doubt on the evidence, the defense persuaded the judge to order an "Arizona search" of the Maryland database. The state did not comply.

After the defense filed a contempt-of-court motion, Michelle Groves, the state's DNA administrator, argued in court and in an affidavit that, based on conversations with Callaghan at the FBI, she believed the request was burdensome and possibly illegal. According to Groves, Callaghan had told her that complying with the court order could lead Maryland to be disconnected from CODIS -- a result Groves' lawyer said would be "catastrophic." Groves' affidavit was edited by FBI officials and the technology contractor that designed CODIS, court records show. Before submitting the affidavit, Groves wrote the group an e-mail saying, "Let's see if this will work," the records show.

It didn't. After the judge, Steven Platt, rejected her arguments, Groves returned to court, saying the search was too risky. FBI officials had now warned her that it could corrupt the entire state database, something they would not help fix, she told the court. Platt reaffirmed his earlier order, decrying Callaghan's "unilateral" decision to block the search.

"The court will not accept the notion that the extent of a person's due process rights hinges solely on whether some employee of the FBI chooses to authorize the use of the [database] software," Platt wrote.

The search went ahead in January 2007. The system did not go down, nor was Maryland expelled from the national database system. In a database of fewer than 30,000 profiles, 32 pairs matched at nine or more loci. Three of those pairs were "perfect" matches, identical at 13 out of 13 loci. Experts say they most likely are duplicates or belong to identical twins or brothers. It's also possible that one of the matches is between unrelated people -- defying odds as remote as 1 in 1 quadrillion.

Maryland officials never did the research to find out.

Matching profiles

As databases grow, so do the chances of finding a coincidental match. Three states have searched their DNA databases for pairs of profiles that have nine or more genetic markers in common. The more profiles in the database, the more matches were found.

Maryland: 33 matches in a database of 20,000 profiles
Arizona: 144 matches in a database of 65,000 profiles
Illinois: 903 matches in a database of 230,000 profiles

California: State database has more than 1 million profiles. Several search requests have been denied.

FBI: The national DNA database, maintained by the FBI, has almost 6 million DNA profiles. It has never been searched for such coincidental matches.

Birthday paradox

Experts use an analogy called the birthday paradox to explain that the way you search for a DNA profile can dramatically affect your chances of finding a match. In some circumstances, matches are far more likely than many people think.

Imagine you're at a party with 99 other guests. If you randomly pull aside one of them, the odds he or she will share your day and month of birth are 1 in 365.

But the probability that anyone at the party shares your birthday is far higher: about 1 in 4. When you compare your birthday with 99 other people's, each comparison makes a match more likely. (The math: Multiply the odds of 1/365 by 99, the number of comparisons, to get the approximate probability.)

For the same reason, the odds that anyone at the party shares a birthday with anyone else are higher still. In fact, it's almost a certainty. As everyone looked for a match with everyone else, they made 4,950 comparisons. (The math: Multiply 100 people by the 99 other guests they compare themselves with, then divide by two because people who compare with each other count as a single comparison.)

How many people need to be at the party for it to be likely that two guests share a birthday?

The answer may surprise you: just 23.
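The party numbers above can be checked directly. This is standard probability arithmetic (365 equally likely birthdays, leap years ignored), not anything specific to the article's sources:

```python
def prob_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

# Someone among 99 other guests shares *your* birthday:
p_mine = 1 - (364 / 365) ** 99
print(f"shares your birthday: {p_mine:.3f}")  # close to 1 in 4

# Any two of the 100 guests share a birthday -- near certainty:
print(f"any shared birthday, 100 guests: {prob_shared_birthday(100):.6f}")

# The crossover point: 23 guests is the smallest party where a
# shared birthday is more likely than not.
print(f"23 guests: {prob_shared_birthday(23):.3f}")
print(f"22 guests: {prob_shared_birthday(22):.3f}")
```

The exact probability of someone sharing your birthday is about 0.24 (the article's 1-in-4 is an approximation), and the 23-guest threshold checks out: 23 people give just over a 50% chance, 22 just under.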

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 07:56:20 pm

Example 27

There’s no such thing as a fish: our casual word "fish" has no direct scientific equivalent, and what most people call fish form a broad and diverse group. The term is vague and has no real place in the biological literature. We use the word ‘fish’ to refer to a number of different branches of the animal kingdom rather than the single branch originally intended to be known as fish, so in a way the word has lost its meaning.


We often group species together in a superficial way and it is only by looking at the genome of a species that we can begin to fully understand evolutionary lineage. Placing different plants or animals under an umbrella term can make life easier but it makes no sense on a genetic level to say that all things that live in the sea are fish, just as it makes no sense to say that all things that can fly are birds.

A lifetime study of sea creatures led the esteemed biologist and paleontologist Stephen Jay Gould to conclude that there is, in fact, no such thing as a fish. He explained that the term had no biological meaning and was an over-simplification that grouped aquatic creatures together.

As evolutionary biologist Richard Dawkins explains in his book The Greatest Show on Earth, “trout and tuna are closer cousins to humans than they are to sharks, but we call them all ‘fish’.”

QI XL S08E03

There’s no such thing as a fish

New Scientist 2 Jan 1986
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 07:59:58 pm
Example 28

Chastity belts were used in the Middle Ages to prevent the crusading knights' ladies from indulging in sexual indiscretions.


The idea of a crusader clapping his wife in a chastity belt and galloping off to war with the key round his neck is a nineteenth-century fantasy designed to titillate readers. There is very little evidence for the use of chastity belts in the Middle Ages at all. The first known drawing of one occurs in the fifteenth century.

Konrad Kyeser’s Bellifortis was a book on contemporary military equipment written long after the crusades had finished. It includes an illustration of the “hard iron breeches” worn by Florentine women. In the diagram, the key is clearly visible—which suggests that it was the lady and not the knight who controlled access to the device, to protect herself against the unwanted attentions of Florentine bucks. In museum collections, most “medieval” chastity belts have now been shown to be of dubious authenticity and removed from display.

As with “medieval” torture equipment, it appears that most of it was manufactured in Germany in the nineteenth century to satisfy the curiosity of “specialist” collectors.

The nineteenth century also witnessed an upturn in sales of new chastity belts—but these were not for women. Victorian medical theory was of the opinion that masturbation was harmful to health. Boys who could not be trusted to keep their hands to themselves were forced to wear these improving steel underpants. But the real boom in sales has come in the last fifty years, as adult shops take advantage of the thriving bondage market.

There are more chastity belts around today than there ever were in the Middle Ages. Paradoxically, they exist to stimulate sex, not to prevent it.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 08:12:37 pm
Example 29

Seatbelts in automobiles save lives.


Since 1963 the federal government has spent billions of dollars to persuade, and force, the American people to wear seatbelts in automobiles. It has done this without any research, without any basis in fact, without any evidence that wearing a seatbelt improves a person’s chance of survival in an automobile accident. Indeed, research has shown that the opposite is true.1


After motorists were first required to use seatbelts, reports began to come in from emergency rooms that people were being killed by seatbelts. Instead of heeding these reports and repealing the seatbelt laws, Congress put forward the theory that it was merely a mistake in seatbelt design, and ordered the addition of shoulder belts. This resulted in an increase in seatbelt fatalities, as motorists were now being killed by their shoulder belts as well as by their lap belts.1


When it became clear that seatbelts were not effective in preventing fatalities in head-on collisions the National Highway Traffic Safety Administration (NHTSA), in line with its continuing mandate to promote seatbelts, put forward the theory that they would prevent people from being killed in roll-overs. Quite apart from the fact that most people couldn’t roll their cars over if they tried, it turned out that most people who were killed in roll-overs were killed by being crushed when the roof caved in. The best chance of survival in such a case is to duck down, jump clear or be thrown clear, all of which are prevented by a seatbelt. The effect of seatbelts in roll-overs was thus to increase, not decrease, the number of fatalities.

The first seatbelt law was passed in the United States in 1963. It merely required that new cars made after 1964 be equipped with seat belts; there was no requirement that people actually use them. When it was first suggested to Henry Ford II that he put seatbelts in new Ford cars, his response was, “That’s the craziest thing I ever heard”. During the hearings held both before and after the passage of these laws, experts from the automobile industry repeatedly warned the members of Congress that putting seatbelts in cars was not a good idea. Congress chose to ignore these warnings.

The case for seatbelts in automobiles was based on five false assumptions, something Congress could easily have discovered before passing this legislation had they asked the experts or, indeed, had they merely listened, for the experts did try to tell them. They not only did not ask, they turned a deaf ear when they were told. As a result, thousands have died.1

The five false assumptions were these:

1. Most people who are killed in automobile accidents are killed in head-on collisions. In fact, according to the government’s own data, fewer than 2 percent of all collisions are head-on collisions and fewer than 14 percent of all fatal collisions are head-on collisions.

2. People are killed in head-on collisions by being thrown through the windshield. In fact, according to the latest available government data, of the 36,281 vehicle occupants who were killed in 2001 (the last year for which the government listed head-on collisions as a separate category) only 145 were “thrown through the windshield”.

3. Vehicle occupants would be saved if they were prevented from being thrown through the windshield by wearing a seatbelt. In fact, if the force on the occupant is sufficiently great to throw him through the windshield, the injury inflicted on the wearer by the seatbelt itself would be enough to kill him.

4. The passenger compartment is never safe in fatal collisions. In fact, the overwhelming majority of motorists who are killed in fatal collisions are killed by being crushed to death when the passenger compartment is caved in. The seatbelt acts like an anchor, holding the occupant in place while he is being crushed to death.

5. The seatbelt itself will not injure the wearer. In fact, in a head-on collision as low as 30 miles per hour with one foot of crush, the seatbelt will exert a force on the wearer of 30 times his body weight, i.e., enough to kill him. The fact that the seatbelt itself might injure the wearer never occurred to them. The seatbelt proponents had never heard of Newton's second law of motion!

The key question, what is the effect of seatbelts on people in other types of accidents, was not considered.1
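The "30 times his body weight" figure in point 5 does follow from ordinary kinematics under a constant-deceleration assumption; whether that force level is injurious or fatal is the article's claim, not a consequence of the arithmetic. A sketch of the calculation:

```python
# Constant-deceleration stop: v^2 = 2 * a * d  =>  a = v^2 / (2 * d)
mph_to_ms = 0.44704  # metres per second per mph
ft_to_m = 0.3048     # metres per foot
g = 9.81             # standard gravity, m/s^2

v = 30 * mph_to_ms   # 30 mph impact speed
d = 1 * ft_to_m      # one foot of crush distance

a = v ** 2 / (2 * d)  # average deceleration during the stop
g_force = a / g       # deceleration in multiples of body weight

print(f"deceleration: {a:.0f} m/s^2 = {g_force:.1f} g")
```

A 30 mph stop over one foot works out to roughly 30 g, matching the article's figure; the belt spreads that load over the stopping distance rather than concentrating it in a sudden impact, which is the standard counter-argument the article does not address.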

Seat-belt laws have also failed to reduce highway fatalities in the numbers promised by supporters to get such laws passed. According to the National Highway Traffic Safety Administration, there were 51,093 highway fatalities in 1979. Five years later, in 1984, the year before seat-belt laws began to pass, there were 44,257 fatalities. That is a net decrease of 6,836 deaths in five years, which represents a 13.4 percent decline with no seat-belt laws and only voluntary seat-belt use. In 1999, there were 41,611 fatalities. That is a net decrease of 2,646 deaths, a 6 percent decrease over 15 years of rigid seat-belt law enforcement, with some states claiming 80 percent seat-belt use. If the passage of seat-belt laws did anything, it slowed the downward trend in highway fatalities started years before the passage of such laws.2

Fatalities per year comparisons. Figure 15-1 shows the change in the simplest measure of safety performance, total traffic deaths per year. While fatalities in the 23 year period declined in the US by 16.2%, declines of 46.0%, 49.9%, and 51.1% occurred in Britain, Canada, and Australia (Table 15-1). In the prior 1960-1978 period the comparison countries did not systematically outperform the US. On the contrary, fatalities in Canada and Australia increased by 65% and 50% (compared to a 38% increase in the US), but in GB decreased by 2%.

The number of traffic deaths that would have occurred in the US in 2002 if US fatalities had declined by the same percentages as in the comparison countries from 1979-2002 are shown in Table 15-2. If the US total had declined by 46.0%, as it did in Great Britain, then US fatalities in 2002 would have been 27,598 instead of the 42,815 that occurred. (All derivations are based on calculations including more decimal places than shown in tables). By matching the British decline, 15,217 fewer Americans would have been killed in 2002. The corresponding fatality reductions for matching Canadian and Australian performance are 17,229 and 17,837.2
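The percentage claims in the two paragraphs above can be reproduced from the quoted raw totals. Small differences from the published figures come from rounding (the 46.0% British decline is used exactly as quoted):

```python
# NHTSA annual fatality totals quoted in the text
f1979, f1984, f1999, f2002 = 51_093, 44_257, 41_611, 42_815

# Decline before seat-belt laws (1979-1984) and after (1984-1999)
pre_law_decline = (f1979 - f1984) / f1979 * 100   # about 13.4 percent
post_law_decline = (f1984 - f1999) / f1984 * 100  # about 6 percent
print(f"1979-84: -{pre_law_decline:.1f}%, 1984-99: -{post_law_decline:.1f}%")

# Counterfactual: US fatalities in 2002 had they fallen 46.0 percent
# from the 1979 total, as in Great Britain (per Evans, Table 15-2)
counterfactual_2002 = f1979 * (1 - 0.460)
lives_saved = f2002 - counterfactual_2002
print(f"counterfactual 2002 total: {counterfactual_2002:,.0f}")
print(f"difference from actual:    {lives_saved:,.0f}")
```

The computed counterfactual lands within a few deaths of Evans' 27,598 (he carried more decimal places), and the 13.4% and 6% declines check out against the NHTSA totals as quoted.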



2. The Fraud of Seat-Belt Laws (

3. Traffic Safety by Leonard Evans. Chapter 15 "The dramatic failure of US safety policy" (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 08:18:09 pm
Example 30

DDT (dichlorodiphenyltrichloroethane) is a poison that causes cancer and is devastating to wildlife.

Michael Crichton on DDT


        "In fact, DDT prevents cancer. "DDT in the diet has repeatedly been shown to enhance the production of hepatic enzymes in mammals and birds.
        Those enzymes inhibit tumors and cancers in humans as well as wildlife."

        "The search for an effective substitute for DDT continues to fail 30 years after the Ruckelshaus ban.
        The search for a treatment for malaria continues to fail; the mutations of the malaria parasite soon make a drug ineffective. The search for a malaria vaccine continues to fail."

The chemical compound that has saved more human lives than any other in history, DDT, was banned by order of one man, the head of the U.S. Environmental Protection Agency (EPA). Public pressure was generated by one popular book and sustained by faulty or fraudulent research. Widely believed claims of carcinogenicity, toxicity to birds, anti-androgenic properties, and prolonged environmental persistence are false or grossly exaggerated. The worldwide effect of the U.S. ban has been millions of preventable deaths.

In World War I, prior to the discovery of the insecticidal potential of DDT, typhus killed more servicemen than bullets. In World War II, typhus was no problem. The world has marveled at the effectiveness of DDT in fighting malaria, yellow fever, dengue, sleeping sickness, plague, encephalitis, West Nile Virus, and other diseases transmitted by mosquitoes, fleas, and lice.

Today, the greatest killer and disabler is malaria, which kills a person every 30 seconds. By the 1960s, DDT had brought malaria near to extinction. "To only a few chemicals does man owe as great a debt as to DDT. In little more than two decades, DDT has prevented 500 million human deaths, due to malaria, that otherwise would have been inevitable," said the National Academy of Sciences.

        Unable to find harm to human health, DDT opponents turned to bird health, alleging a decline of bald eagles and other birds of prey, which they associated with heavy DDT usage.
        Rachel Carson led the accusation. It has been repeated so often and so passionately that the public is still convinced of it.

But the handwriting was on the wall when William Ruckelshaus, administrator of the Environmental Protection Agency, in an address to the Audubon Society in Milwaukee in 1971, clearly stated his position:

"As you know, many mass uses of DDT have already been prohibited, including all uses around the home. Certainly we'll all feel better when the persistent compounds can be phased out in favor of biological controls. But awaiting this millennium does not permit the luxury of dodging the harsh decisions of today.

Rachel Carson began the countrywide assault on DDT with her 1962 book, Silent Spring. Carson made errors, some designed to scare, about DDT and synthetic pesticides. "For the first time in the history of the world, every human being is now subjected to contact with dangerous chemicals, from the moment of conception to death," she intoned.

"This is nonsense," commented pesticide specialists Bruce N. Ames and Thomas H. Jukes of the University of California at Berkeley. (Ames is a professor of biochemistry and molecular biology, world renowned. Jukes, who died a few years ago, was a professor of biophysics and a leader in the defense of DDT.) "Every chemical is dangerous if the concentration is too high. Moreover, 99.9 percent of the chemicals humans ingest are natural... produced by plants to kill off predators," Ames and Jukes wrote in Reason in 1993.

Carson, not very scrupulous, implied that the renowned Albert Schweitzer agreed with her on DDT by dedicating Silent Spring "to Dr. Albert Schweitzer, who said 'Man has lost the capacity to foresee and forestall. He will end by destroying the earth.'" Professor Edwards doubted the implication. He got a copy of Schweitzer's autobiography. Dr. Schweitzer was referring to atomic warfare. Professor Edwards found on page 262, "How much labor and waste of time these wicked insects do cause, but a ray of hope, in the use of DDT, is now held out to us."

But Miss Carson's skillful writing was enough to direct a new-born environmental industry looking for a hot issue into a feverish campaign against DDT. "Rachel Carson set the style for environmentalism. Exaggeration and omission of pertinent contradictory evidence are acceptable for the holy cause," wrote Professors Ames and Jukes.


DDT, Fraud, and Tragedy

"DDT: A Case Study in Scientific Fraud" by the late J. Gordon Edwards, Professor Emeritus of Entomology at San Jose State University in San Jose, California.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 08:21:28 pm
Example 31

Edison invented the light bulb


In 1840, the British astronomer and chemist Warren de la Rue enclosed a platinum coil in a vacuum tube and passed an electric current through it, thus creating the world’s first light bulb – a full 40 years before Edison was issued a patent for one.

Actually, historians list up to 22 inventors of the incandescent lamp before Thomas Edison, starting with Sir Humphry Davy in the early 19th century.

But in 1878, Edison challenged himself and his workers to produce a commercially viable, longer-lasting light bulb, building on the work of inventors before him. In October 1879, having created an extremely high vacuum inside a bulb and used a carbon filament, he filed a US patent for the first practical high-resistance lamp capable of burning for hundreds of hours.

So while he didn’t actually invent the lightbulb, he did produce the first version that was practical for everyday use.


some added truth...

The Phoebus cartel

VIDEO: The Lightbulb Conspiracy (

In the early 1900s, the goal was to make the light bulb last as long as possible. Edison's lamp lasted 1,500 hours, and in the 1920s manufacturers advertised lamps sporting a 2,500-hour life. Then the leading lamp manufacturers came up with the idea that it might be more profitable if the bulbs were made less durable.

In 1924, the Phoebus cartel was created in order to control global lamp production; it tied in manufacturers all over the world, dividing the various continents between them. In the documentary, historian Helmut Höge shows the original cartel document, which states: “The average life of lamps may not be guaranteed, advertised or published as more than 1 000 hours.” The cartel pressured its members to develop a more fragile incandescent bulb that would remain within the established 1,000-hour rule. Osram tested lamp life, and all manufacturers that did not keep to the lower standard were heavily fined. Bulb life was thereby reduced to the required 1,000 hours.

The film claims that there are patents on incandescent light bulbs with 100,000-hour lifetimes, but they never went into production – except Adolphe Chaillet's bulb at the Livermore Fire Department in California, which has burned continuously since 1901. In 1981, the East German company Narva created a long-life lamp and showed it at an international light fair. Nobody was interested. (It was later accepted as a special ‘long-life’ lamp but was never a commercial hit.)

Wikipedia states that the Phoebus cartel included Osram, Philips, Tungsram, Compagnie des Lampes, Associated Electrical Industries, ELIN, International General Electric, and the GE Overseas Group. “They owned shares in the Swiss corporation proportional to their lamp sales.”

    “The Phoebus Cartel divided the world’s lamp markets into three categories:

       1. home territories, the home country of individual manufacturers
       2. British overseas territories, under control of Associated Electrical Industries, Osram, Philips, and Tungsram
       3. common territory, the rest of the world

In 1921 a precursor organisation was founded by Osram, the Internationale Glühlampen Preisvereinigung. When Philips and other manufacturers were entering the American market, General Electric reacted by setting up the International General Electric Company in Paris. Both organisations were involved in trading patents and adjusting market penetration. Increasing international competition led to negotiations between all major companies to control and restrict their respective activities in order not to interfere in each other’s spheres.”

According to the documentary, the cartel officially never existed (even though its memorandum remains in archives). Its strategy has been to keep renaming itself, and it still exists in one form or another. The film mentions the International Energy Cartel, but that seems to be more about controlling world energy production than about light bulbs specifically.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 09:56:43 pm

You are living in the past


We persistently have the illusion that it is the present. Just like we have the illusion of a single vision instead of 32, or of one body instead of 7. We are living approximations, and every single thing we do or sense is an approximation. It's a wonderful illusion though.


Watch David Eagleman discuss time:

Flash Lag Effect demonstration:

Previous Vsauce videos about time and perception:

Why does time feel faster as we age?

Video and the FRAME RATE of the eye:

Stopped Clock Illusion:

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 10:04:16 pm
Example 33 (yes, yes, I know...)

The City of London is not the city named London.


The Great City of London, known for its historical landmarks, modern skyscrapers, ancient markets and famous bridges.  It's arguably the financial capital of the world and home to over eleven thousand people.

Wait, what?  Eleven... thousand?

That's right: but the City of London is a different place from London -- though London is also known for its historical landmarks, modern skyscrapers, ancient markets, famous bridges and is home to the government, royal family and seven million people.

But, if you look at a map of London crafted by a careful cartographer, that map will have a one-square-mile hole near the middle -- it's here that the City of London lives inside of the city named London.
Despite these confusingly close names, the two Londons have separate city halls and elect separate mayors, who collect separate taxes to fund separate police who enforce separate laws.

The Mayor of the City of London has a fancy title 'The Right Honourable the Lord Mayor of London' to match his fancy outfit.   He also gets to ride in a golden carriage and work in a Guildhall while the mayor of London has to wear a suit, ride a bike and work in an office building.

The City of London also has its own flag and its own crest which is awesome and makes London's lack of either twice as sad.

To top it off, the City of London gets to act more like one of the countries in the UK than just an oddly located city -- for, uniquely, the corporation that runs the City of London is older than the United Kingdom by several hundred years.

So how did the UK end up with two Londons, one inside of the other?  Because: Romans.

2,000 years ago they came to Great Britain, killed a bunch of druids, founded a trading post on the River Thames, and named it Londinium. Being Romans, they got to work doing what Romans do: enforcing laws, increasing trade, building temples, public baths, roads, bridges, and a wall to defend their work.

And it's this wall which is why the current City of London exists -- for though the Romans came and the Romans went and kingdoms rose and kingdoms fell, the wall endured protecting the city within.  And The City, governing itself and trading with the world, grew rich.

A thousand years after the Romans (yet still a thousand years ago), when William the Conqueror came to Great Britain to conquer everything and begin modern British history, he found the City of London, with its sturdy walls, more challenging to defeat than farmers on open fields.

So he agreed to recognize the rights and privileges City of Londoners were used to, in return for them recognizing him as the new King.

Though after the negotiation, William quickly built towers around the City of London which were just as much about protecting William from the locals within as defending against the Vikings from without.

This started a thousand-year long tradition whereby Monarchs always reconfirmed that 'yes' the City of London is a special, unique place best left to its own business, while simultaneously distrusting it.

Many a monarch thought the City of London was too powerful and rich. And one even built a new capital city nearby, named Westminster, to compete with the City of London and, hopefully, suck power and wealth away from it. This was the start of the second London.

As the centuries passed, Westminster grew and merged with nearby towns eventually surrounding the walled-in, and still separate City of London.  But, people began to call the whole urban collection 'London' and the name became official when Parliament joined towns together under a single municipal government with a mayor.

But, the mayor of London still doesn't have power over the tiny City of London which has rules and traditions like nowhere else in the country and possibly the world.

For example, the ruling monarch doesn't just enter the City of London on a whim, but instead asks for permission from the Lord Mayor at a ceremony. While it's not required by law, the ceremony is unusual, to say the least.

The City of London also has a representative in Parliament, The Remembrancer, whose job it is to protect the City's special rights.

Because of this, laws passed by Parliament sometimes don't apply to the City of London: most notably voting reforms, which we'll discuss next time. But if you're curious, unlike anywhere else in the UK, elections in the City of London involve medieval guilds and modern companies.

Finally, the City of London also owns and operates land and buildings far outside its border, making it quite wealthy.

Once you start looking for The City's crest you'll find it in lots of places, but most notably on Tower Bridge which, while being in London, is operated by the City of London.
These crests everywhere, combined with the City of London's age, wealth, and quasi-independent status, make it an irresistible temptation for conspiracy nuts. Add in the oldest Masonic temple and it's not long before the crazy part of the Internet is yelling about secret societies controlling the world via the finance industry from inside the city-state of London. (And don't forget the reptilian alien Queen who's really behind it all.)

But conspiracy theories aside, the City of London is not an independent nation like the Vatican, no matter what you might read on the Internet; rather, it's a unique place in the United Kingdom with a long and complicated history.

The wall that began all this 2,000 years ago is now mostly gone -- so the border between London and its secret inner city isn't so obvious.   Though, next time you're in London, if you come across a small dragon on the street, he still guards the entrance to the city in a city in a country in a country.

The (Secret) City of London, Part 1: History (

The (Secret) City of London, Part 2: Government (
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 10:20:46 pm
Example 34

The Left-Brain Right-Brain Myth


Everyone knows the popular myths about the two brain hemispheres: The right brain is artistic, musical, spatial, intuitive, and holistic; the left brain is linear, rational, analytical, and linguistic. There is some truth in these labels. But, not surprisingly, they are mostly oversimplifications of tendencies, not fixed rules.

When asked to address some of the conceptions about hemispheric differences, split-brain expert Michael Corballis, author of The Lopsided Ape, gave assessments very much along these lines.

On the subject of creativity and language-two skills often polarized as examples of right and left brain thinking-Corballis said, “I don’t see any good evidence that the right hemisphere is more creative than the left. Language itself is highly creative-every sentence you construct is a new creation-and one could make a case for supposing that the left hemisphere is really the creative one.” He goes on, “But I think artistic creativity is likely to invoke more right-hemisphere capacities, simply because of the right- hemisphere bias for spatial skills. And there are aspects of language, such as prosody, and perhaps pragmatic aspects such as an understanding of metaphor or sarcasm, that may be more right than left hemispheric. So it’s always a question of balance.”

“Quite simply,” writes Michael Gazzaniga, a former student of split-brain pioneer Roger Sperry, “all brains are not organized the same way.” Like with everything else human, genes collide with environment and the result is not a predictable thing.

In short, our reduction of the sides of the brain to the seats of this or that skill or quality misses the point entirely. “On the whole,” said Corballis, “I think it would be better for educationalists and therapists to forget about the hemispheres and concentrate on the skills themselves. The hemispheres are convenient pegs on which to hang our prejudices.”


Why the Left-Brain Right-Brain Myth Will Probably Never Die
The myth has become a powerful metaphor, but it's one we should challenge
Published on June 27, 2012 by Christian Jarrett, Ph.D in Brain Myths

Psychology for Designers - Left Brain / Right Brain Myth

“Left Brain” “Right Brain”: The Mind in Two
Gerald Gabriel | July 27, 2008

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Jun 28, 2013, 10:35:40 pm
Example 35

Placebo Buttons


Our lives are filled with lying buttons! We're talking those pesky crosswalk buttons, close door elevator buttons and more! Why are they around, and why do we always hit them expecting magic? Anthony explores this hidden world of lies and deceit!


The Door Close button is there mostly to give passengers the illusion of control. In elevators built since the early '90s, the button is only enabled in emergency situations, with a key held by an authority.

According to a 2008 article in The New Yorker, close buttons don't close the elevator doors in many elevators built in the United States since the 1990s. In some elevators the button is there for workers and emergency personnel to use, and it only works with a key. The key-only setting isn't always active, though, as the blog Design with Intent asserts. Each elevator is different. In some, the emergency function requires a long press several seconds longer than the average user attempts.

Non-functioning mechanisms like this that motivate you to fool yourself are called placebo buttons, and they’re everywhere.

Computers and timers now control the lights at many intersections, but at one time little buttons at crosswalks allowed people to trigger the signal change. Those buttons are mostly all disabled now, but the task of replacing or removing all of them was so great that most cities just left them up. You still press them, though, because the light eventually changes.

In a 2010 investigation by ABC News, only one functioning crosswalk button could be found across Austin, Texas; Gainesville, Fla.; and Syracuse, N.Y.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Nov 30, 2013, 03:32:49 pm

Example 36

Laws vs Rules vs Theories

The Variability of Fundamental Constants

 Do physical constants fluctuate?

As the name implies, the so-called physical constants are supposed to be changeless. They are believed to reflect an underlying constancy of nature. In this chapter I discuss how the values of the fundamental physical constants have in fact changed over the last few decades, and suggest how the nature of these changes can be investigated further.

There are many constants listed in handbooks of physics and chemistry, such as the melting points and boiling points of thousands of chemicals, going on for hundreds of pages: for instance, the boiling point of ethyl alcohol is 78.5°C at standard temperature and pressure; its freezing point is -117.3°C. But some constants are more fundamental than others. The following list gives the eight most generally regarded as truly fundamental.

The Fundamental Constants

Fundamental quantity               Symbol
Velocity of light                  c
Elementary charge                  e
Mass of the electron               m_e
Mass of the proton                 m_p
Avogadro constant                  N_A
Planck's constant                  h
Universal gravitational constant   G
Boltzmann's constant               k

All these constants are expressed in terms of units; for example, the velocity of light is expressed in terms of meters per second. If the units change, so will the constants. And units are arbitrary, dependent on definitions that may change from time to time: the meter, for instance, was originally defined in 1790 by a decree of the French National Assembly as one ten-millionth of the quadrant of the earth's meridian passing through Paris. The entire metric system was based upon the meter and imposed by law. But the original measurements of the earth's circumference were found to be in error. The meter was then defined, in 1799, in terms of a standard bar kept in France under official supervision. In 1960 the meter was redefined in terms of the wavelength of light emitted by krypton atoms; and in 1983 it was redefined again in terms of the speed of light itself, as the length of the path traveled by light in 1/299,792,458 of a second.
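The point that a constant's numeric value depends on the units chosen can be sketched in a couple of lines (an illustrative sketch; since the 1983 definition, c in meters per second is exact by construction, and the mile conversion uses the standard international mile of 1,609.344 m):

```python
# Speed of light: exact in m/s by the 1983 redefinition of the meter
c_m_per_s = 299_792_458

# The same physical quantity, different numbers in different units
c_km_per_s = c_m_per_s / 1000           # about 299,792.458 km/s
c_miles_per_s = c_m_per_s / 1609.344    # about 186,282 miles/s

print(c_km_per_s, c_miles_per_s)
```

Nothing about light changed between these three lines; only the unit definitions did, which is the sense in which a redefinition of the meter changes the "constant."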

As well as any changes due to changing units, the official values of the fundamental constants vary from time to time as new measurements are made. They are continually adjusted by experts and international commissions. Old values are replaced by new ones, based on the latest 'best values' obtained in laboratories around the world. Below, I consider four examples: the gravitational constant (G); the speed of light; Planck's constant; and also the fine structure constant α, which is derived from the charge on the electron, the velocity of light, and Planck's constant.

The 'best' values are already the result of considerable selection. First, experimenters tend to reject unexpected data on the grounds that they must be errors. Second, after the most deviant measurements have been weeded out, variations within a given laboratory are smoothed out by averaging the values obtained at different times, and the final value is then subjected to a series of somewhat arbitrary corrections. Finally, the results from different laboratories around the world are selected, adjusted, and averaged to arrive at the latest official value.
Faith in eternal truths
 In practice, then, the values of the constants change. But in theory they are supposed to be changeless. The conflict between theory and empirical reality is usually brushed aside without discussion, because all variations are assumed to be due to experimental errors, and the latest values are assumed to be the best.

But what if the constants really change? What if the underlying nature of nature changes? Before this subject can even be discussed, it is necessary to think about one of the most fundamental assumptions of science as we know it: faith in the uniformity of nature. For the committed believer, these questions are nonsensical. Constants must be constant.

Most constants have been measured only in this small region of the universe for a few decades, and the actual measurements have varied erratically. The idea that all constants are the same everywhere and always is not an extrapolation from the data. If it were an extrapolation, it would be outrageous. The values of the constants as actually measured on earth have changed considerably over the last fifty years. To assume they had not changed for fifteen billion years anywhere in the universe goes far beyond the meager evidence. The fact that this assumption is so little questioned, so readily taken for granted, shows the strength of scientific faith in eternal truths.

According to the traditional creed of science, everything is governed by fixed laws and eternal constants. The laws of nature are the same in all times and at all places. In fact they transcend space and time. They are more like eternal Ideas--in the sense of Platonic philosophy--than evolving things. They are not made of matter, energy, fields, space, or time; they are not made of anything. In short, they are immaterial and non-physical. Like Platonic Ideas they underlie all phenomena as their hidden reason or logos, transcending space and time.

Of course, everyone agrees that the laws of nature as formulated by scientists change from time to time, as old theories are partially or completely superseded by new ones. For example, Newton's theory of gravitation, depending on forces acting at a distance in absolute time and space, was replaced by Einstein's theory of the gravitational field consisting of curvatures of space-time itself. But both Newton and Einstein shared the Platonic faith that underlying the changing theories of natural science there are true eternal laws, universal and immutable. And neither challenged the constancy of constants: indeed both gave great prestige to this assumption, Newton through his introduction of the universal gravitational constant, and Einstein through treating the speed of light as absolute. In modern relativity theory, c is a mathematical constant, a parameter relating the units used for time to the units used for space; its value is fixed by definition. The question as to whether the speed of light actually differs from c, although theoretically conceivable, seems of peripheral interest.

For the founding fathers of modern science, such as Copernicus, Kepler, Galileo, Descartes, and Newton, the laws of nature were changeless Ideas in the divine mind. God was a mathematician. The discovery of the mathematical laws of nature was a direct insight into the eternal Mind of God. Similar sentiments have been echoed by physicists ever since.

Until the 1960s, the universe of orthodox physics was still eternal. But evidence for the expansion of the universe has been accumulating for several decades, and the discovery of the cosmic microwave background radiation in 1965 finally triggered off a great cosmological revolution. The Big Bang theory took over. Instead of an eternal machine-like universe, gradually running down toward thermodynamic heat death, the picture was now one of a growing, developing, evolutionary cosmos. And if there was a birth of the cosmos, an initial 'singularity', as physicists put it, then once again age-old questions arise. Where and what did everything come from? Why is the universe as it is? In addition, a new question arises. If all nature evolves, why should the laws of nature not evolve as well? If laws are immanent in evolving nature, then the laws should evolve too.

Today these questions are usually discussed in terms of the anthropic cosmological principle, as follows: Out of the many possible universes, only one with the constants set at the values found today could have given rise to a world with life as we know it and allowed the emergence of intelligent cosmologists capable of discussing it. If the values of the constants had been different, there would have been no stars, nor atoms, nor planets, nor people. Even if the constants were only slightly different, we would not be here. For example, with just a small change in the relative strengths of the nuclear and electromagnetic forces there could be no carbon atoms, and hence no carbon-based forms of life such as ourselves. 'The Holy Grail of modern physics is to explain why these numerical constants . . . have the particular numerical values they do.'

Some physicists incline toward a kind of neo-Deism, with a mathematical creator-God who fine-tuned the constants in the first place, selecting from many possible universes the one in which we can evolve. Others prefer to leave God out of it. One way of avoiding the need for a mathematical mind to fix the constants of nature is to suppose that our universe arose from a froth of possible universes. The primordial bubble that gave rise to our universe was one of many. But our universe has to have the constants it does by the very fact that we are here. Somehow our presence imposes a selection. There may be innumerable alien and uninhabitable universes quite unknown to us, but this is the only one we can know.

This kind of speculation has been carried even further by Lee Smolin, who has proposed a kind of cosmic Darwinism. Through black holes, baby universes may be budded off from pre-existing ones and take on a life of their own. Some of these might have slight mutations in the values of their constants and hence evolve differently. Only those that form stars can form black holes and hence have babies. So by a principle of cosmic fecundity, only universes like ours would reproduce, and there may be many more or less similar habitable universes. But this very speculative theory still does not explain why any universes should exist in the first place, nor what determines the laws that govern them, nor what maintains, carries, or remembers the mutant constants in any particular universe.

Notice that all these metaphysical speculations, extravagant though they seem, are thoroughly conventional in that they take for granted both eternal laws and constant constants, at least within a given universe. These well-established assumptions make the constancy of constants seem like an assured truth. Their changelessness is an act of faith. ...If measurements show variations in the constants, as they often do, then the variations are dismissed as experimental errors; the latest figure is the best available approximation to the 'true' value of the constant.

Some variations may well be due to errors, and such errors decrease as instruments and methods of measurement improve. All kinds of measurements have inherent limitations on their accuracy. But not all the variations in the measured values of the constants need necessarily be due to error, or to the limitations of the apparatus used. Some may be real. In an evolving universe, it is conceivable that the constants evolve along with nature. They might even vary cyclically, if not chaotically.
Theories of changing constants
 Several physicists, among them Arthur Eddington and Paul Dirac, have speculated that at least some of the 'fundamental constants' may change with time. In particular, Dirac proposed that the universal gravitational constant, G, may be decreasing with time: the gravitational force weakening as the universe expands. But those who make such speculations are usually quick to avow that they are not challenging the idea of eternal laws; they are merely proposing that eternal laws govern the variation of the constants.

The proposal that the laws themselves evolve is more radical. The philosopher Alfred North Whitehead pointed out that if we drop the old idea of Platonic laws imposed on nature, and think instead of laws being immanent in nature, then they must evolve along with nature:

Since the laws of nature depend on the individual characters of the things constituting nature, as the things change, then consequently the laws will change. Thus the modern evolutionary view of the physical universe should conceive of the laws of nature as evolving concurrently with the things constituting the environment. Thus the conception of the Universe as evolving subject to fixed eternal laws should be abandoned.

I prefer to drop the metaphor of 'law' altogether, with its outmoded image of God as a kind of law-giving emperor, as well as an omnipotent and universal law-enforcement agency. Instead, I have suggested that the regularities of nature may be more like habits. According to the hypothesis of morphic resonance, a kind of cumulative memory is inherent in nature. Rather than being governed by an eternal mathematical mind, nature is shaped by habits, subject to natural selection. And some habits are more fundamental than others; for example, the habits of hydrogen atoms are very ancient and widespread, found throughout the universe, while the habits of hyenas are not. Gravitational and electromagnetic fields, atoms, galaxies and stars are governed by archaic habits, dating back to the earliest periods in the history of the universe. From this point of view the 'fundamental constants' are quantitative aspects of deep-seated habits. They may have changed at first, but as they became increasingly fixed through repetition, the constants may have settled down to more or less stable values. In this respect the habit hypothesis agrees with the conventional assumption of constancy, though for very different reasons.

Even if speculations about the evolution of constants are set aside, there are at least two more reasons why constants may vary. First, they may depend on the astronomical environment, changing as the solar system moves within the galaxy, or as the galaxy moves away from other galaxies. And second, the constants may oscillate or fluctuate. They may even fluctuate in a seemingly chaotic manner. Modern chaos theory has enabled us to recognize that chaotic behavior, as opposed to old-style determinism, is normal in most realms of nature. So far the 'constants' have survived unchallenged from an earlier era of physics: the vestiges of a lingering Platonism. But what if they, too, vary chaotically?
The variability of the universal gravitational constant
In spite of the central importance of the universal gravitational constant, it is the least well defined of all the fundamental constants. Attempts to pin it down to many places of decimals have failed; the measurements are just too variable. The editor of the scientific journal Nature has described the fact that G still remains uncertain to about one part in 5,000 as 'a blot on the face of physics'. Indeed, in recent years the uncertainty has been so great that the existence of entirely new forces has been postulated to explain gravitational anomalies.

In the early 1980s, Frank Stacey and his colleagues measured G in deep mines and boreholes in Australia. Their value was about 1 percent higher than currently accepted. For example, in one set of measurements in the Hilton mine in Queensland the value of G was found to be 6.734 ± 0.002, as opposed to the currently accepted value of 6.672 ± 0.003 (both in units of 10^-11 N m^2 kg^-2). The Australian results were repeatable and consistent, but no one took much notice until 1986. In that year Ephraim Fischbach, at the University of Washington, Seattle, sent shock waves around the world of science by claiming that laboratory tests also showed a slight deviation from Newton's law of gravity, consistent with the Australian results. Fischbach proposed the existence of a hitherto unknown repulsive force, the so-called fifth force (the four known forces being the strong and weak nuclear forces, the electromagnetic force, and the gravitational force).
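As a quick check of the quoted discrepancy, the deviation of the Australian mine value from the then-accepted figure can be computed directly. A minimal sketch (the variable names are mine; the values are the ones quoted above, in units of 10^-11 N m^2 kg^-2):

```python
# The Hilton mine value of G versus the then-accepted laboratory figure,
# both in units of 10^-11 N m^2 kg^-2 as quoted in the text.
G_MINE = 6.734      # Stacey et al., Hilton mine, Queensland
G_ACCEPTED = 6.672  # accepted laboratory value at the time

def fractional_deviation(measured, accepted):
    """Relative deviation of a measured value from the accepted one."""
    return (measured - accepted) / accepted

print(f"{fractional_deviation(G_MINE, G_ACCEPTED):.2%}")  # 0.93%, i.e. roughly 1 percent
```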

The possible existence of a fifth force is not particularly relevant to possible changes in G with time. But the very fact that the question of an extra force affecting gravitation could even be raised and seriously considered in the late twentieth century serves to emphasize how imprecise the characterization of gravity remains more than three centuries after the publication of Newton's Principia.

The suggestion by Paul Dirac and other theoretical physicists that G may be decreasing as the universe expands has been taken quite seriously by some metrologists. However, the change proposed by Dirac was very small, about 5 parts in 10^11 per year. This is way below the limits of detection using conventional methods of measuring G on Earth. The 'best' results in the last twenty years differ from each other by more than 5 parts in 10^4. In other words, the change Dirac was suggesting is some ten million times smaller than the differences between recent 'best' values.

In order to test Dirac's hypothesis, a variety of indirect methods have been tried. Some depend on geological evidence, such as the slopes of fossil sand dunes, from which the gravitational forces at the time they were formed can be calculated; others depend on records of eclipses over the last 3,000 years; others on modern astronomical methods.

The problem with all these indirect lines of evidence is that they depend on a complex tissue of theoretical assumptions, including the constancy of the other constants of nature. They are persuasive only within the framework of the present paradigm. That is to say that if one assumes the correctness of modern cosmological theories, themselves presupposing the constancy of G, the data are internally consistent, provided that all actual variations from experiment to experiment, or method to method, are assumed to be a result of error.
The fall in the speed of light from 1928 to 1945
According to Einstein's theory of relativity, the speed of light in a vacuum is invariant: it is an absolute constant. Much of modern physics is based on that assumption. There is therefore a strong theoretical prejudice against raising the question of possible changes in the velocity of light. In any case, the question is now officially closed. Since 1972 the speed of light has been fixed by definition. The value is defined as 299,792.458 ± 0.0012 kilometers per second.

As in the case of the universal gravitational constant, early measurements of c differed considerably from the present official value. For example, the determination by Römer in 1676 was about 30 percent lower, and that by Fizeau in 1849 about 5 percent higher.

In 1929, Birge published his review of all the evidence available up to 1927 and came to the conclusion that the best value for the velocity of light was 299,796 ± 4 km/s. He pointed out that the probable error was far less than in any of the other constants, and concluded that 'the present value of c is entirely satisfactory, and can be considered as more or less permanently established.' However, even as he was writing, considerably lower values of c were being found, and by 1934 it was suggested by Gheury de Bray that the data pointed to a cyclic variation in the velocity of light.
 From around 1928 to 1945, the velocity of light appeared to be about 20 km/s lower than before and after this period. The 'best' values, found by the leading investigators using a variety of techniques, were in impressively close agreement with each other, and the available data were combined and adjusted by Birge in 1941 and Dorsey in 1945.

In the late 1940s the speed of light went up again. Not surprisingly, there was some turbulence at first as the old value was overthrown. The new value was about 20 km/s higher, close to that prevailing in 1927. A new consensus developed. How long this consensus would have lasted if based on continuing measurements is a matter for speculation. In practice, further disagreement was prevented by fixing the speed of light in 1972 by definition.

How can the lower velocity from 1928 to 1945 be explained? If it was simply a matter of experimental error, why did the results of different investigators and different methods agree so well? And why were the estimated errors so low?

One possibility is that the velocity of light really does fluctuate from time to time. Perhaps it really did drop for nearly twenty years. But this is not a possibility that has been seriously considered by researchers in the field, except for de Bray. So strong is the assumption that it must be fixed that the empirical data have to be explained away. This remarkable episode in the history of the speed of light is now generally attributed to the psychology of metrologists:

The tendency for experiments in a given epoch to agree with one another has been described by the delicate phrase 'intellectual phase locking.' Most metrologists are very conscious of the possible existence of such effects; indeed ever-helpful colleagues delight in pointing them out! ... Aside from the discovery of mistakes, the near completion of the experiment brings more frequent and stimulating discussion with interested colleagues and the preliminaries to writing up the work add fresh perspective. All of these circumstances combine to prevent what was intended to be 'the final result' from being so in practice, and consequently the accusation that one is most likely to stop worrying about corrections when the value is closest to other results is easy to make and difficult to refute.

But if changes in the values of constants in the past are attributed to the experimenters' psychology, then, as other eminent metrologists have observed, 'this raises a disconcerting question: How do we know that this psychological factor is not equally important today?' In the case of the velocity of light, however, this question is now academic. Not only is the velocity fixed by definition, but the very units in which velocity is measured, distance and time, are defined in terms of light itself.

The second used to be defined as 1/86,400 of a mean solar day, but it is now defined in terms of the frequency of light emitted by a particular kind of excitation of caesium-133 atoms. A second is 9,192,631,770 times the period of vibration of the light. Meanwhile, since 1983 the meter has been defined in terms of the velocity of light, itself fixed by definition.

As Brian Petley has pointed out, it is conceivable that:
 (i) the velocity of light might change with time, or (ii) have a directional dependence in space, or (iii) be affected by the motion of the Earth about the Sun, or motion within our galaxy or some other reference frame.

Nevertheless, if such changes really happened, we would be blind to them. We are now shut up within an artificial system where such changes are not only impossible by definition, but would be undetectable in practice because of the way the units are defined. Any change in the speed of light would change the units themselves in such a way that the velocity in kilometers per second remained exactly the same.
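The circularity described in this paragraph can be illustrated with a toy calculation (my own sketch, not anything from the text): since the metre is defined as the distance light travels in 1/299,792,458 of a second, any hypothetical change in light's 'true' speed rescales the metre itself, so the measured figure can never move.

```python
# Toy model: the metre is defined as the distance light travels in
# 1/299,792,458 s, so a change in the 'true' speed of light rescales the
# metre and the measured speed (in metres per second) never changes.

C_DEFINED = 299_792_458  # m/s, the value fixed by definition

def measured_speed(true_speed):
    """Speed of light as measured with a light-defined metre."""
    metre = true_speed / C_DEFINED  # distance light covers in 1/C_DEFINED s
    return true_speed / metre       # always C_DEFINED, whatever true_speed is

# Whether light 'really' runs at the defined value, slower, or faster,
# the measurement comes out the same (up to floating-point rounding):
for hypothetical in (299_792_458, 299_772_458, 300_000_000):
    assert abs(measured_speed(hypothetical) - C_DEFINED) < 1e-6
```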
The rise of Planck's constant
Planck's constant, h, is a fundamental feature of quantum physics and relates the frequency of a radiation, ν, to its quantum of energy, E, according to the formula E = hν. It has the dimensions of action (energy × time).

We are often told that quantum theory is brilliantly successful and amazingly accurate. For example: 'The laws that have been found to describe the quantum world. . . are the most accurate and precise tools we have ever found for the successful description and prediction of the workings of Nature. In some cases the agreement between the theory's predictions and what we measure are good to better than one part in a billion.'

I heard and read such statements so often that I used to assume that Planck's constant must be known with tremendous accuracy to many places of decimals. This seems to be the case if one looks it up in a scientific handbook--so long as one does not also look at previous editions. In fact its official value has changed over the years, showing a marked tendency to increase.

The biggest change occurred between 1929 and 1941, when it went up by more than 1 percent. This increase was largely due to a substantial change in the value of the charge on the electron, e. Experimental measurements of Planck's constant do not give direct answers, but also involve the charge on the electron and/or the mass of the electron. If either or both of these other constants change, then so does Planck's constant.

Millikan's work on the charge on the electron turned out to be one of the roots of the trouble. Even though other researchers found substantially higher values, they tended to be disregarded. 'Millikan's great renown and authority brought about the opinion that the question of the magnitude of e had practically got its definitive answer.' For some twenty years Millikan's value prevailed, but evidence went on building up that e was higher. As Richard Feynman has expressed it:

It's interesting to look at the history of measurements of the charge on the electron after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan's, the next one's a little bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number that is higher. Why didn't they discover that the new number was higher right away? It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they would look for and find a number closer to Millikan's value when they didn't look so far. And so they eliminated the numbers that were too far off, and did other things like that.

In the late 1930s, the discrepancies could no longer be ignored, but Millikan's high-prestige value could not simply be abandoned either; instead it was corrected by using a new value for the viscosity of air, an important variable in his oil-drop technique, bringing it into alignment with the new results. In the early 1940s, even higher values of e led to a further upward revision of the official figure. Sure enough, reasons were found to correct Millikan's value yet again, raising it to agree with the new value. Every time e increased, so Planck's constant had to be raised as well.

Interestingly, Planck's constant continued to creep upwards from the 1950s to the 1970s. Each of these increases exceeded the estimated error in the previously accepted value. The latest value shows a slight decline.

Planck's Constant from 1951 to 1988 (Review Values)
Author               Date   h (× 10^-34 joule seconds)
Bearden and Watts    1951   6.623 63 ± 0.000 16
Cohen et al.         1955   6.625 17 ± 0.000 23
Condon               1963   6.625 60 ± 0.000 17
Cohen and Taylor     1973   6.626 176 ± 0.000 036
                     1988   6.626 075 5 ± 0.000 004 0


Several attempts have been made to look for changes in Planck's constant by studying the light from quasars and stars assumed to be very distant on the basis of the red shift in their spectra. The idea was that if Planck's constant has changed, the properties of the light emitted billions of years ago should be different from more recent light. Little difference was found, leading to the seemingly impressive conclusion that h varies by less than 5 parts in 10^13 per year. But critics of such experiments have pointed out that these constancies are inevitable, since the calculations depend on the implicit assumption that h is constant; the reasoning is circular. (Strictly speaking, the starting assumption is that the product hc is constant; but since c is constant by definition, this amounts to assuming the constancy of h.)

 Fluctuations in the fine-structure constant
 One of the problems of looking for changes in a fundamental constant is that if changes are found in the constant, then it is difficult to know whether it is the constant itself that is changing, or the units in which it is measured. However, some of the constants are dimensionless, expressed as pure numbers, and hence the question of changes in units does not arise. One example is the ratio of the mass of the proton to the mass of the electron. Another is the fine-structure constant. For this reason, some metrologists have emphasized that 'secular changes in physical "constants" should be formulated in terms of such numbers.'

Accordingly, in this section I look at the evidence for changes in the fine-structure constant, α, formed from the charge on the electron, e, Planck's constant, h, the velocity of light, c, and the permittivity of free space, ε₀, according to the formula α = e^2 / (2 ε₀ h c). It gives a measure of the strength of electromagnetic interactions, and is sometimes expressed as its reciprocal, approximately 1/137. This constant is treated by some theoretical physicists as one of the key cosmic numbers that a Theory of Everything should be able to explain.
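Plugging modern CODATA values into this formula reproduces the familiar result. A minimal sketch (the numerical inputs are my addition; the text quotes no numbers here):

```python
# Computing alpha = e^2 / (2 * eps0 * h * c) from modern CODATA values.
E_CHARGE = 1.602176634e-19   # charge on the electron, in coulombs
H_PLANCK = 6.62607015e-34    # Planck's constant, in joule seconds
C_LIGHT  = 299792458.0       # speed of light, in metres per second
EPS0     = 8.8541878128e-12  # permittivity of free space, in farads per metre

alpha = E_CHARGE**2 / (2 * EPS0 * H_PLANCK * C_LIGHT)
print(alpha)      # about 7.297 x 10^-3
print(1 / alpha)  # about 137.036, the familiar reciprocal
```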

Between 1929 and 1941 the fine-structure constant increased by about 0.2 percent, from 7.283 × 10^-3 to 7.2976 × 10^-3. This change was largely attributable to the increased value for the charge on the electron, partly offset by the fall in the speed of light, both of which I have already discussed. As in the case of the other constants, there was a scatter of results from different investigators, and the 'best' values were combined and adjusted from time to time by reviewers. As in the case of the other constants, the changes were generally larger than would be expected on the basis of the estimated errors. For example, the increase from 1951 to 1963 was twelve times greater than the estimated error in 1951 (expressed as the standard deviation); the increase from 1963 to 1973 was nearly five times the estimated error in 1963.

The Fine-Structure Constant From 1951 to 1973
Author               Date   α (× 10^-3)
Bearden and Watts    1951   7.296 953 ± 0.000 028
Condon               1963   7.297 200 ± 0.000 033
Cohen and Taylor     1973   7.297 350 6 ± 0.000 006 0
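The point that revisions exceeded the quoted errors can be checked directly from these review values. A sketch (the exact multipliers depend on which review figures are compared, but both shifts come out well beyond the quoted uncertainties):

```python
# Review values of the fine-structure constant (in units of 10^-3) with their
# quoted standard errors. The question: how far did each revision move,
# measured in the previous review's error bars?
VALUES = {
    1951: (7.296953, 0.000028),    # Bearden and Watts
    1963: (7.297200, 0.000033),    # Condon
    1973: (7.2973506, 0.0000060),  # Cohen and Taylor
}

def shift_in_sigmas(year_a, year_b):
    """Size of the revision between two reviews, in units of year_a's standard error."""
    (value_a, sigma_a), (value_b, _) = VALUES[year_a], VALUES[year_b]
    return abs(value_b - value_a) / sigma_a

print(shift_in_sigmas(1951, 1963))  # about 8.8 standard errors
print(shift_in_sigmas(1963, 1973))  # about 4.6 standard errors
```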


Several cosmologists have speculated that the fine-structure constant might vary with the age of the universe, and attempts have been made to check this possibility by analyzing the light from stars and quasars, assuming that their distance is proportional to the red-shift of their light. The results suggest that there has been little or no change in the constant. But as with all other attempts to infer the constancy of constants from astronomical observations, many assumptions have to be made, including the constancy of other constants, the correctness of current cosmological theories, and the validity of red-shifts as indicators of distance. All of these assumptions have been and are still being questioned by dissident cosmologists.
Do constants really change?
 As we have seen with the four examples above, the empirical data from laboratory experiments reveal all sorts of variations as time goes on. Similar variations are found in the values of the other fundamental constants. These do not trouble true believers in constancy, because they can always be explained in terms of experimental error of one kind or another. Because of continual improvements in techniques, the greatest faith is always placed in the latest measurements, and if they differ from previous ones, the older ones are automatically discredited (except when the older ones are endowed with a high prestige, as in the case of Millikan's measurement of e). Also, at any given time, there is a tendency for metrologists to overestimate the accuracy of contemporary measurements, as shown by the way that later measurements often differ from earlier ones by amounts greater than the estimated error. Alternatively, if metrologists are estimating their errors correctly, then the changes in the values of the constants show that the constants really are fluctuating. The clearest example is the fall in the speed of light from 1928 to 1945. Was there a real change in the course of nature, or was it due to a collective delusion among metrologists?

So far there have been only two main theories about the fundamental constants. First, they are truly constant, and all variations in the empirical data are due to errors of one kind or another. As science progresses, these errors are reduced. With ever-increasing precision we come closer and closer to the constants' true values. This is the conventional view. Second, several theoretical physicists have speculated that one or more of the constants may vary in some smooth and regular manner with the age of the universe, or over astronomical distances. Various tests of these ideas using astronomical observations seem to have ruled out such changes. But these tests beg the question. They are founded on the assumptions that they set out to prove: that constants are constant, and that present-day cosmology is correct in all essentials.

There has been little consideration of the third possibility, which is the one I am exploring here, namely the possibility that constants may fluctuate, within limits, around average values which themselves remain fairly constant. The idea of changeless laws and constants is the last survivor from the era of classical physics in which a regular and (in principle) totally predictable mathematical order was supposed to prevail at all times and in all places. In reality, we find nothing of the kind in the course of human affairs, in the biological realm, in the weather, or even in the heavens. The chaos revolution has revealed that this perfect order was a beguiling illusion. Most of the natural world is inherently chaotic.

The fluctuating values of the fundamental constants in experimental measurements seem just as compatible with small but real changes in their values, as they are with a perfect constancy obscured by experimental errors. I now propose a simple way of distinguishing between these possibilities. I concentrate on the gravitational constant, because this is the most variable. But the same principles could be applied to any of the other constants too.

An experiment to detect possible fluctuations in the universal gravitational constant
 The principle is simple. At present, when measurements are made in a particular laboratory, the final value is based on an average of a series of individual measurements, and any unexplained variations between these measurements are attributed to random errors. Clearly, if there were real underlying fluctuations, either owing to changes in the earth's environment or to inherently chaotic fluctuations in the constant itself, these would be ironed out by the statistical procedures, showing up simply as random errors. As long as these measurements were confined to a single laboratory, there would be no way of distinguishing between these possibilities.

What I propose is a series of measurements of the universal gravitational constant to be made at regular intervals--say monthly--at several different laboratories all over the world, using the best available methods. Then, over a period of years, these measurements would be compared. If there were underlying fluctuations in the value of G, for whatever reason, these would show up at the various locations. In other words, the 'errors' might show a correlation--the values might tend to be high in some months and low in others. In this way, underlying patterns of variation could be detected that could not be dismissed as random error.
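The proposed analysis can be sketched in a few lines of simulation (entirely my own illustration; the number of labs, the time span, and the noise levels are hypothetical): if monthly G measurements at several labs share a real underlying fluctuation, their 'random errors' correlate across sites, whereas pure random error would give correlations near zero.

```python
# Simulation of the proposed test: each lab's monthly G deviations are a
# shared underlying fluctuation plus lab-specific random error. If the
# fluctuation is real, the labs' deviations correlate with each other.
import random
import statistics

random.seed(1)
MONTHS, LABS = 120, 4
shared = [random.gauss(0, 1.0) for _ in range(MONTHS)]  # the 'real' fluctuation in G

def lab_series(noise_sd):
    """One lab's monthly deviations: shared signal plus local measurement noise."""
    return [s + random.gauss(0, noise_sd) for s in shared]

series = [lab_series(noise_sd=1.0) for _ in range(LABS)]

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    norm = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / norm

r = correlation(series[0], series[1])
print(f"cross-lab correlation: {r:.2f}")  # clearly positive: the shared fluctuation shows up
```

With equal signal and noise variances the expected cross-lab correlation is about 0.5; drop the shared signal and it collapses toward zero, which is exactly the signature the proposed comparison would look for.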

It would then be necessary to look for other explanations that did not involve a change in G, including possible changes in the units of measurement. How these inquiries would turn out is impossible to foresee. The important thing is to start looking for correlated fluctuations. And precisely because fluctuations are being looked for, there is more chance of finding them. By contrast, the current theoretical paradigm leads to a sustained effort by everyone concerned to iron out variations, because constants are assumed to be truly constant.

Unlike the other experiments proposed in this book, this one would involve a fairly large-scale international effort. Even so, the budget would not need to be huge if it took place in established laboratories already equipped to make such measurements. And it is even possible that it could be done by students. Several inexpensive methods for determining G have been described, based on the classical method of Cavendish using a torsion balance, and an improved student method has recently been developed which is accurate to 0.1 percent.

One of the advantages of the continual improvement in precision of metrological techniques is that it should become increasingly feasible to detect small changes in the constants. For example, a far greater accuracy in measurements of G should be possible when experiments can be done in spacecraft and satellites, and appropriate techniques are already being proposed and discussed. Here is an area where a big question really would need big science.

But there is in fact one way that this research could be done on a very low budget to start with: by examining the existing raw data for measurements of G at various laboratories over the last few decades. This would require the cooperation of the scientists concerned, because raw data are kept in scientists' notebooks and laboratory files, and many scientists are reluctant to allow others access to these private records. But given this cooperation, there may already be enough data to look for worldwide fluctuations in the value of G.

The implications of fluctuating fundamental constants would be enormous. The course of nature could no longer be imagined as blandly uniform; we would recognize that there are fluctuations at the very heart of physical reality. And if different fundamental constants varied at different rates, these changes would create differing qualities of time, not unlike those envisaged by astrology, but with a more radical basis."
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 02, 2014, 06:13:37 am
Example 37

Global Warming

Man-made Global Warming, or 'Climate Change', is a complete and deliberate lie.

It is impossible to watch the videos below and maintain any belief that there is man-made global warming. Do you know someone who believes in Global Warming whom you want to wake up? If you can convince them to watch these interviews, they will change their mind.

The 50 to 1 Project

What is the TRUE cost of climate change?  Is stopping it early really the cheapest plan in the long run?  50 to 1 explores the costs of stopping climate change vs adapting to it as and if required, and uncovers a simple truth: it's 50 times more expensive to try to STOP climate change than it is to simply ADAPT to it.

50 to 1 Project Interviews

Full length interview with Joanne Nova
Topher interviews Joanne Nova, a veteran science communicator and regular commentator on the ABC and many other places. Joanne speaks of her own journey and how she went from being a ‘veteran believer’ in Global Warming to being the high-profile skeptic she is today.

Full length interview with David Evans
Topher interviews David Evans, former modeler for the Australian Greenhouse Office, now prominent skeptic. He explains the reasons for his change of mind and why he's become so vocal on the issue.

Full length interview with Anthony Watts
Topher interviews Anthony Watts, former weatherman and passionate believer in global warming, now world-famous skeptic responsible for the 'surface stations' project, which has found serious issues with the global temperature measuring network, and a key figure in the 'Climategate' scandal.

Full length interview with Christopher Essex
Topher interviews Christopher Essex, Professor of Applied Mathematics, who promptly ‘flips the checker board’ with questions about the very validity of such a thing as ‘Global temperature’.

Full length interview with Donna Laframboise
Topher interviews Donna Laframboise, former journalist turned investigative author. Donna has critiqued the Intergovernmental Panel on Climate Change’s claims about itself, its authors and its peer review process, and found them very VERY wanting…

Full length interview with Marc Morano
Topher interviews Marc Morano, accused 'criminal against humanity' and alleged 'central cell of the climate denial machine', and gets an insider's look into the politics and collateral damage caused by clumsy political responses to fears about climate change.

Full length interview with Fred Singer
Topher interviews Fred Singer, atmospheric and space physicist and long time hero of the environmental movement, and finds out why he founded the NON Governmental Panel on Climate Change and why he’s taken a high profile stand against the Intergovernmental Panel on Climate Change.

Full length interview with Henry Ergas
Topher interviews Henry Ergas, a high profile Australian economist with a lot to say about carbon taxes and emissions trading schemes, and discovers some of the underlying reasons why politicians love carbon taxes and emissions trading schemes and why these ‘markets’ always seem to fail.
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 03:56:55 pm
Example 38

Modern man lives longer - the average age fallacy


The idea that our ancestors routinely died young has no basis in scientific fact!

Human Lifespans Nearly Constant for 2,000 Years

August 21, 2009

The Centers for Disease Control and Prevention, often the harbinger of bad news about E. coli outbreaks and swine flu, recently had some good news: The life expectancy of Americans is higher than ever, at almost 78.

Discussions about life expectancy often involve how it has improved over time. According to the National Center for Health Statistics, life expectancy for men in 1907 was 45.6 years; by 1957 it rose to 66.4; in 2007 it reached 75.5. Unlike the most recent increase in life expectancy (which was attributable largely to a decline in half of the leading causes of death including heart disease, homicide, and influenza), the increase in life expectancy between 1907 and 2007 was largely due to a decreasing infant mortality rate, which was 9.99 percent in 1907; 2.63 percent in 1957; and 0.68 percent in 2007.

But the inclusion of infant mortality rates in calculating life expectancy creates the mistaken impression that earlier generations died at a young age; Americans were not dying en masse at the age of 46 in 1907. The fact is that the maximum human lifespan — a concept often confused with "life expectancy" — has remained more or less the same for thousands of years. The idea that our ancestors routinely died young (say, at age 40) has no basis in scientific fact.
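The arithmetic behind this fallacy is easy to sketch with a toy cohort (illustrative numbers, not historical data): if everyone dies either in infancy or at a fixed adult age, 'life expectancy at birth' is driven almost entirely by the infant mortality rate.

```python
# Toy cohort: deaths occur either in infancy (age 0) or at a fixed adult age.
# Mean age at death then depends only on the infant mortality rate.
def life_expectancy(infant_mortality, adult_age_at_death):
    """Mean age at death for the toy cohort described above."""
    return (1 - infant_mortality) * adult_age_at_death

# The 1907 and 2007 infant mortality rates quoted above, with the same
# fixed adult lifespan of 70 in both cases:
print(life_expectancy(0.0999, 70))  # about 63, dragged down by infant deaths
print(life_expectancy(0.0068, 70))  # about 69.5, nearly the full adult lifespan
```

Nobody in this toy population dies in middle age, yet the 'life expectancy' of the high-infant-mortality cohort is a dozen years lower, which is exactly the confusion between life expectancy and lifespan that the passage describes.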

Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 04:05:34 pm
Example 39

Science as a Public Good - Science as preached and practiced by the established higher educational institutions, corporate-funded think tanks, and commercial businesses produces scientific data that is biased at best and deliberately false at worst.

The questionable credibility of the Peer Review System

Experimental error and lack of reproducibility have dogged scientific research for decades. Of even greater concern are proliferating cases of outright fraud.

Medicine and the social sciences are particularly prone to bias, because the observer (presumably a white-coated scientist) cannot so easily be completely removed from his or her subject.

Double-blind tests (where neither the tester nor the subject knows for sure whether the test is real or just a control) are now required for many experiments and trials in both fields.

The Myth of Science as a Public Good (by Terence Kealey)

Vice Chancellor of the University of Buckingham (Britain's only independent university), Terence Kealey is a vocal critic of government funding of science. His first book, 'The Economic Laws of Scientific Research,' argues that state funding of science is neither necessary nor beneficial, a thesis that he developed in his recently published analysis of the causes of scientific progress, 'Sex, Science and Profits.' In it, he makes the stronger claim that not only is government funding not beneficial, but in fact it measurably obstructs scientific progress, whilst presenting an alternative, methodologically individualist understanding of 'invisible colleges' within which science resembles a private, not a public, good.

Recorded at Christ Church, University of Oxford, on 22nd May 2009.


For Science's Gatekeepers, a Credibility Gap (2006)
Recent disclosures of fraudulent or flawed studies in medical and scientific journals have called into question as never before the merits of their peer-review system.

Impartial judgment by the "gatekeepers" of science: fallibility and accountability in the peer review process.
Hojat M, Gonnella JS, Caelleigh AS.

Misconduct in science communication and the role of editors as science gatekeepers

Science Journal Pulls 60 Papers in Peer-Review Fraud

Report finds massive fraud at Dutch universities

Scientific fraud, sloppy science – yes, they happen

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 05:16:37 pm
Example 40

The Black [Bubonic] Plague (1334 - 1348) was probably not a bacterial plague. It was most likely a hemorrhagic plague.


Researchers Argue 'Black Death' Was Due To Ebola, Not Bubonic Plague

A new book titled Biology of Plagues: Evidence from Historical Populations, argues that the "Black Death" may not have been caused by the bubonic plague, as history textbooks would suggest, but rather, an Ebola-like [hemorrhagic] virus.

The authors, Christopher Duncan and Susan Scott of the University of Liverpool, claim that the bubonic plague could not have spread across Europe at the rate at which the Black Death did.

Duncan says, "If you look at the way it spreads, it was spreading at a rate of around 30 miles in two to three days. Bubonic plague moves at a pace of around 100 yards a year."

Duncan and Scott also analyzed the symptoms described in historical texts. Autopsy reports detail the internal organs of victims having dissolved, along with the appearance of black liquid. The liquefaction of internal organs is a trademark of the Ebola virus and causes its victims tremendous pain.

The oozing lymph nodes that so notoriously accompanied the Black Death could also be symptomatic of an Ebola-like virus. In both cases, hemorrhagic fevers come on fast and cause blood vessels to burst underneath the skin. This is what brings out the welts, or "buboes" as they were called during the time of the Black Death.

The authors also noted that efforts to quarantine the Black Death were successful - something that would not have been possible had the disease been transmitted by rats, as history has suggested, since rats do not observe quarantines.

But not everyone is convinced. Ann Carmichael, a historian and expert on the Black Plague, says, "It is problematic to assimilate evidence over four centuries and draw conclusive theories. We must look at it on a plague-by-plague basis."


Molecular Clues Hint at What Really Caused the Black Death

The Black Death arrived in London in the fall of 1348, and although the worst passed in less than a year, the disease took a catastrophic toll. An emergency cemetery in East Smithfield received more than 200 bodies a day between the following February and April, in addition to bodies buried in other graveyards, according to a report from the time.

The disease that killed Londoners buried in East Smithfield and at least one in three Europeans within a few years' time is commonly believed to be bubonic plague, a bacterial infection marked by painful, feverish, swollen lymph nodes, called buboes. Plague is still with us in many parts of the world, although now antibiotics can halt its course. [Pictures of A Killer: A Plague Gallery]

But did this disease really cause the Black Death? The story behind this near-apocalypse in 14th century Europe is not clear-cut, since what we know about modern plague in many ways does not match with what we know about the Black Death. And if plague isn't responsible for the Black Death, scientists wonder what could've caused the sweeping massacre and whether that killer is still lurking somewhere.

Now, a new study using bone and teeth taken from East Smithfield adds to mounting evidence exhumed from Black Death graves and tantalizes skeptics with hints at the true nature of the disease that wiped out more than a third of Europeans 650 years ago.

This team of researchers approached the topic with open minds when they began looking for genetic evidence of the killer.

"Essentially by looking at the literature on the Black Death there were several candidates for what could have been the cause," said Sharon DeWitte, one of the researchers who is now an assistant professor of anthropology at the University of South Carolina.

Their first suspect: Yersinia pestis, the bacterium that causes modern plague, including bubonic plague.

The speed of plague


In 1894, Alexander Yersin and another scientist separately identified Y. pestis during an epidemic in Hong Kong. Years later the bacterium was given his name. Yersin also connected his discovery to the pestilence that swept Europe during the Black Death, an association that has stuck.

One problem, however, is that compared to the wildfire-like spread of the Black Death, the modern plague moves more leisurely. The modern plague pandemic began in the Yunnan Province of China in the mid-19th century, then spread to Hong Kong and then via ship, to India, where it exacted the heaviest toll, and to San Francisco in 1899, among many other places.

The disease that caused the Black Death is believed to have traveled much quicker, arriving in Europe from Asia in 1347, after the Golden Horde, a Mongol army, catapulted plague-infected bodies into a Genoese settlement near the Black Sea. The disease traveled with the Italian traders and later appeared in Sicily, according to Samuel Cohn, a professor of medieval history at the University of Glasgow and author of "The Black Death Transformed: Disease and Culture in Early Renaissance Europe" (Bloomsbury USA, 2003).

By about 1352, roughly five years after arriving in Europe, not only had it spread across the continent, but the worst of the disease had already run its course.

This wave of devastation becomes particularly surprising considering the complicated and time-consuming process by which plague has been thought to spread. You can't catch bubonic plague from another person; instead, the process involves two classic villains: rats and fleas.

Once a flea bites a rat infected with plague, the pathogen Y. pestis grows in its gut. After about two weeks, the bacteria block the valve that opens into the flea's stomach. The starving flea then bites its host, by now probably a new, healthy rat or a person, more aggressively in an attempt to feed. All the while, the flea tries to clear out the bacterial obstruction and so regurgitates the pathogen onto the bite wounds, according to Ken Gage, chief of flea-borne disease activity with the U.S. Centers for Disease Control and Prevention.

The bulk of cases during the modern plague pandemic are believed to have been spread by rats and their fleas, according to Gage. The last rat-borne plague epidemic in the U.S. occurred in 1925; wild rodents have since become the primary source for infections. However, rat-associated outbreaks continue to occur in developing countries, according to the CDC.

Fast, furious and unfamiliar


Not only has the disease slowed down, it also seems to have become more restrained. The Black Death wiped out at least 30 percent of Europe's population at the time. But the peak of the modern pandemic, in India, killed less than 2 percent of the population, DeWitte has calculated from census data.

The list of discrepancies goes on: There is evidence the Black Death spread directly between humans — no rats and their fleas involved — and to areas where rats and their fleas didn't even live. In fact, archaeological and documentary evidence indicates rats were scarce during the mid-14th century.

What's more, bubonic plague doubters point out, deaths during the Black Death appear to have followed a different seasonal cycle than plague deaths in modern times. Some also point to discrepancies in the symptoms.

Alternative theories


With the plague's role called into question, other theories have been offered to fill the gap.

"There is a lot of evidence that suggests that Yersinia pestis may not have been the causative agent for the Black Death, and it was likely something else, and something else that is out there right now," said Brian Bossak, an environmental health scientist at Georgia Southern University.

He is among those who suspect a hemorrhagic virus — which causes bleeding and fever, like Ebola — swept through 14th-century Europe. The high lethality, rapid transmission and periodic resurgences seen in the Black Death are characteristic of a virus, according to Bossak, who frames this as a question in urgent need of resolution.

"Who knows if it won't happen again," he said. "It seems like every so often some disease comes out of nowhere."

Two other proponents of the virus theory, Susan Scott and Christopher Duncan of the University of Liverpool in the United Kingdom, have pointed to a possible genetic legacy left by a viral Black Death: a mutation, known as CCR5-delta32, found among Europeans, particularly those in the north. This mutation confers resistance against HIV, another virus, but does not prevent plague. It's possible that by passing over those with this mutation, the Black Death selected for this change in the genetic code, making it more common among Europeans, they argue.

To at least some degree, an alternative form of plague, pneumonic plague, offers a solution. While bubonic is the most common form of plague, plague can also infect the lungs, causing high fever, cough, bloody sputum and chills. This infection can spread person-to-person, and without antibiotic treatment it is nearly 100 percent fatal. Outbreaks have occurred in modern times, and it can develop as a result of a bubonic infection. But, it is unclear how much of a role it played in the Black Death — some evidence suggests it is not as contagious as commonly thought.

Rats and fleas


The Black Death just doesn't appear to have behaved the way the typical, modern rat-associated plague does, according to Gage, the flea expert. Even so, he says he is convinced that bubonic plague was responsible.

A group of French researchers found another possible insect carrier for the Black Death: lice. They were able to transmit fatal plague infections from sick rabbits to healthy ones via human body lice that fed on the rabbits. Substitute humans for bunnies, and this scenario offers a simpler, more cold-climate-friendly explanation than the conventional rat-flea model.

But fleas aren't out of the picture yet. Gage and his colleagues have found that many species of flea — including the Oriental rat flea, a widespread and important spreader of plague — can begin transmitting the infection much sooner than thought, before the bacterium blocks off its stomach. This supports the idea that a species of human-inhabiting fleas, whose guts the bacterium can't block well, could have spread the infection from person to person in areas without rats, Gage said. [10 Deadly Diseases That Hopped Across Species]

Plague isn't picky about its warm-blooded victims; it can infect almost any mammal, although some, like humans, cats and rats, become severely ill when infected, according to Gage. The lack of records of massive rat die-offs during the Black Death also calls into question the role rats may have played then.

CSI: Black Death


Plague kills quickly and does not leave marks on the remains that archaeologists are digging up centuries later. But in recent years scientists have begun searching for molecular clues in the remains of the dead, including DNA left by the killer bacterium.

While a number of studies have turned up positive results from graves believed to hold European plague victims, the results haven't always been clear-cut. For instance, a 2004 study of remains in five burial sites, including East Smithfield, was unable to find any evidence of the bacteria.

Looking for evidence of the genetic traces of a pathogen within 650-year-old bones is a challenging proposition, according to Hendrik Poinar, an evolutionary geneticist at McMaster University who worked with DeWitte, then at the University of Albany, on the most recent study. After so many years in the ground, the DNA is damaged and present only in tiny fragments, and, what's more, each sample contains only a minuscule amount of the pathogen — the rest belongs to the person and interlopers like soil bacteria, fungi, insects, even animals.

"You have to come up with a way to pull out the things of interest," Poinar said. So, after screening to detect the presence of Yersinia pestis in the 109 samples from the cemetery in East Smithfield, his lab employed a sort of sensitive fishing technique, using tiny segments of DNA that matched up with segments from a ring of DNA, called a plasmid, found in the bacterium.

Once they had retrieved this DNA, they assembled the full plasmid and compared it with modern versions of the bug. They found this plasmid matched many of the modern versions. They also sequenced a short section of DNA from the bacterium's nucleus and revealed three small changes unseen in the modern strains.

The results prove that a variant of Yersinia pestis infected the victims of the Black Death, the authors write in a recent issue of the journal Proceedings of the National Academy of Sciences.

Same bug, different disease?


This finding comes about a year after another genetic study, led by Stephanie Haensch of Johannes Gutenberg University in Germany, found evidence of two previously unknown strains of Yersinia pestis in the remains of European victims, and hints at a solution that could allow both sides to be right.

"People have always assumed the two diseases were the same," said Cohn, the medieval historian, referring to modern plague and the Black Death. "Even if it is the same pathogen, the diseases are very different."

Bossak, who has questioned the role of plague in the Black Death, agrees.

"This new (study) seems to support these earlier claims, and reinforces the notion that what we know of the epidemiology in modern Y. pestis plague may not fit the Black Death, perhaps because these ancient strains of Y. pestis are no longer present (assuming Y. pestis was indeed the causative agent)," he wrote in an email.

However, Poinar is more cautious. Although they had hoped to find changes that explained why the pathogen might have become less aggressive over the centuries, none have turned up so far. In fact, it's too early to say the changes detected represent any significant difference between the modern and ancient versions of the bacterium, according to him.

"We need the entire genome to say anything about this," Poinar wrote in an email, "and that is for future work."


Definition: hemorrhagic plague (The hemorrhagic form of bubonic plague.)

What caused the Black Death?
Duncan CJ, Scott S.

New Theories Link Black Death to Ebola-Like Virus
By MARK DERR New York Times

How the Black Death Worked
by Molly Edmonds

Did Yersinia pestis really cause Black Plague? Part 1: Objections to Y. pestis causation

On the trail of the Black Death
By Peter Lavelle

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 05:19:27 pm
Example 41

The world map we commonly use is distorted, incorrect and 500 years old.


The Mercator projection (above in blue) is a cylindrical map projection presented by the Flemish geographer and cartographer Gerardus Mercator in 1569. It became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines or loxodromes, as straight segments which conserve the angles with the meridians. While the linear scale is equal in all directions around any point, thus preserving the angles and the shapes of small objects (which makes the projection conformal), the Mercator projection distorts the size and shape of large objects, as the scale increases from the Equator to the poles, where it becomes infinite.


The Gall–Peters projection (above in green), named after James Gall and Arno Peters, is one specialization of a configurable equal-area map projection known as the equal-area cylindric or cylindrical equal-area projection. It achieved considerable notoriety in the late 20th century as the centerpiece of a controversy surrounding the political implications of map design.
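The trade-off between the two projections can be made concrete with their y-coordinate formulas. Below is a minimal Python sketch on a unit sphere; the choice of 70°N as "roughly Greenland's latitude" is an illustrative approximation, not a figure from the articles above.

```python
import math

def mercator_y(lat_deg, R=1.0):
    """Mercator y-coordinate: R * ln(tan(pi/4 + lat/2)). Diverges toward the poles."""
    phi = math.radians(lat_deg)
    return R * math.log(math.tan(math.pi / 4 + phi / 2))

def mercator_area_scale(lat_deg):
    """Factor by which Mercator inflates areas at a given latitude: sec^2(lat)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

def gall_peters_y(lat_deg, R=1.0):
    """Gall-Peters y-coordinate: R * sqrt(2) * sin(lat). Areas stay true everywhere."""
    return R * math.sqrt(2) * math.sin(math.radians(lat_deg))

# At the Equator, Mercator is area-true; around 70 deg N (roughly Greenland),
# it inflates areas about 8.5x — which is why Greenland can look comparable
# to Africa on a classroom wall map despite being about 1/14th its size.
print(round(mercator_area_scale(0), 1))
print(round(mercator_area_scale(70), 1))
```

The sketch shows why the distortion is structural rather than a drafting error: Mercator's area scale grows as sec²(latitude), while the equal-area y-coordinate is bounded by R·√2.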


The Peters Projection World Map is one of the most stimulating, and controversial, images of the world. When this map was first introduced by historian and cartographer Dr. Arno Peters at a press conference in Germany in 1974, it generated a firestorm of debate. The first English version of the map was published in 1983, and it continues to have passionate fans as well as staunch detractors.

The earth is round. The challenge of any world map is to represent a round earth on a flat surface. There are literally thousands of map projections. Each has certain strengths and corresponding weaknesses. Choosing among them is an exercise in values clarification: you have to decide what's important to you. That is generally determined by the way you intend to use the map. The Peters Projection is an area accurate map.

Maps based on the projection are promoted by UNESCO, and they are also widely used by British schools.





Mexico - larger than Alaska by 100,000 square miles
Africa - 14 times larger than Greenland
South America - double the size of Europe
Germany - located in the northernmost quarter of the earth


We have been misled by a flawed world map for 500 years

What does the world really look like?

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 05:21:27 pm
Example 42

Myth: There are fewer slaves now than in the past.

Truth: There Are More Slaves Today Than at Any Time in Human History.


Although slavery is illegal in every country in the modern world, it still exists, and even on the narrowest definition of slavery it's likely that there are far more slaves now than there were victims of the Atlantic slave trade.

The last country to abolish slavery was the African state of Mauritania, where a 1981 presidential decree abolished the practice; however, no criminal laws were passed to enforce the ban. In August 2007 Mauritania's parliament passed legislation making the practice of slavery punishable by up to 10 years in prison.

One hundred forty-three years after passage of the 13th Amendment to the U.S. Constitution and 60 years after Article 4 of the U.N.'s Universal Declaration of Human Rights banned slavery and the slave trade worldwide, there are more slaves than at any time in human history...27 million!

Today’s slavery focuses on big profits and cheap lives. It is not about owning people like before, but about using them as completely disposable tools for making money.

During the four years that Benjamin Skinner researched modern-day slavery, he posed as a buyer at illegal brothels on several continents, interviewed convicted human traffickers in a Romanian prison and endured giardia, malaria, dengue and a bad motorcycle accident.

But Skinner is most haunted by his experience in a brothel in Bucharest, Romania, where he was offered a young woman with Down syndrome in exchange for a used car.

The institution of slavery is as old as civilization. Many nations and empires were built by the toil and suffering of slaves.

But what kinds of people were enslaved, and why? In ancient civilizations, slaves were usually war captives. The victors in battle might enslave the losers rather than killing them. Over time, people have found other reasons to justify slavery. Slaves were usually considered somehow different than their owners. They might belong to a different race, religion, nationality, or ethnic background. By focusing on such differences, slave owners felt they could deny basic human rights to their slaves.

And despite many efforts to end slavery, it still exists today. Some 27 million people worldwide are enslaved or work as forced laborers. That's more people than at any other point in the history of the world.

Cheap, Disposable People

An average slave in the American South in 1850 cost the equivalent of $40,000 in today’s money; today a slave costs an average of $90.

In 1850 it was difficult to capture a slave and then transport them to the US. Today, millions of economically and socially vulnerable people around the world are potential slaves.

This “supply” makes slaves today cheaper than they have ever been. Since they are so cheap, slaves today are not considered a major investment worth maintaining. If slaves get sick, are injured, outlive their usefulness, or become troublesome to the slaveholder, they are dumped or killed. For most slaveholders, actually legally ‘owning’ the slave is an inconvenience, since they already exert total control over the individual's labor and profits. Who needs a legal document that could at some point be used against the slaveholder? Today the slaveholder cares more about these high profits than whether the holder and slave are of different ethnic backgrounds; in New Slavery, profit trumps skin color. Finally, new slavery is directly connected to the global economy. As in the past, most slaves are forced to work in agriculture, mining, and prostitution. From these sectors, their exploited labor flows into the global economy, and into our lives.


BBC Modern slavery

There Are More Slaves Today Than at Any Time in Human History

Understanding Slavery


Modern Slavery: The Secret World of 27 Million People
Publisher: Oneworld Publications; 1 edition (June 1, 2009), ISBN-10: 185168641X, ISBN-13: 978-1851686414

Slavery Affects 27 Million Lives Today: Legal Abolition vs. Effective Emancipation

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 05:58:06 pm
Example 43

Myth: Too much sun exposure causes skin cancer

Truth: The Sun Has Never Been Directly Linked to a single Skin Cancer Case


Whether we have fair skin or dark skin, thinking about how much sun exposure we get is a good idea because we don’t want to burn or get heat stroke. But another side effect that often is brought up with sun exposure is skin cancer. Does the sun cause skin cancer?

The sun is the giver of life on this planet; there is no doubting that. But is it dangerous to our health? What may be surprising right off the top is that, as of this writing, the sun has never been shown to be the sole cause of skin cancer. No medical or governmental report on skin cancer patients has identified the sun as the main cause. There have been suggestions that it may play a role, but the results are flimsy due to the lack of evidence.

This myth was created and brought forward by the sunscreen industry, dermatologists and the cancer industry. As with many other mainstream medical conditions, people who do get skin cancers are unfortunately just thrown into the category of having got it from sun exposure, without anyone ever really looking at why they got it. The medical field has not examined this closely for various diseases, because it often requires much time and testing with specific patients to really determine the causes.

We hear it everywhere that if you go out in the sun without sunscreen you have a higher risk of getting skin cancer because of harmful UV rays. To put it more accurately: you do not get skin cancer from the sun; you get skin cancer because you are not properly nourished and are often exposed to other chemicals that are linked to cancer. The National Academy of Sciences published a review stating that the Omega 6:3 ratio is the key to preventing skin cancers.

“Epidemiological, experimental, and mechanistic data implicate omega-6 fat as stimulators and long-chain omega-3 fats as inhibitors of development and progression of a range of human cancers, including melanoma.”

Interestingly, when we look at products we use daily that contain known cancer-causing agents, we find that the likelihood our skin cancers are linked to these products is much higher than the likelihood of their being linked to the sun. However, these products are overlooked and the immediate blame is put on the sun. The result is a matter of public belief based on nothing but opinion and propaganda created amongst several industries.

Here is a list of some popular products used daily that are linked to causing skin cancer:

Processed foods
Pharmaceutical drugs & prescriptions
Most shampoos & conditioners
Many health & beauty products (creams, lotions, makeup)
Tap water (chlorine, fluoride)
Most deodorant & cologne
Ink (computers, newspapers, magazines, fliers)
Cleaning supplies
Chemical fertilizers (used on most commercial produce)
Pesticides & herbicides (found on non-organic foods)
Hair dye
Oil products
Most plastics
Fluorescent light bulbs (energy-efficient bulbs)

I think that we often forget to look at the obvious when we look at things in this world. When we step back and think about it, how can the one thing that gives life to everything on this planet be so harmful to us? Yet here we are as a community stating that every case of skin cancer was most likely caused by sun overexposure, while every day we use chemicals that have a much higher chance of creating skin cancer within the body.

We have been heavily trained to use protective products on our skin when we go out in the sun. We all know these products as sunscreens. The fact is, these “protective” agents are actually accelerating the creation of cancer within our bodies because they are toxic chemicals.

Researchers at the Environmental Working Group, a Washington-based nonprofit, released a report confirming nearly half of the 500 most popular sunscreen products actually increase the speed at which malignant cells develop and spread skin cancer because they contain vitamin A and its derivatives, retinol and retinyl palmitate. These substances have been known to be cancer causing and toxic for years by the FDA but they simply have not taken any action in notifying the public of the dangers.

We must also look at the vitamin D factor here. We all have been told the sun provides vitamin D for us as it is absorbed through the skin. This is true. However, if you are wearing sunscreen, your body is unable to take in any vitamin D. How many times have you gone out in the sun to get some fresh air and an intake of vitamin D and put on sunscreen? For many people, it is probably most of the time. This process increases the risk of cancer not just because a cancer-causing chemical is being applied to the skin, but because vitamin D, when levels are properly maintained in the body, is responsible for the decrease of 4 out of 5 cancers of every kind, as well as many other diseases.

So if you are the type of person who spends a lot of time in the sun and wants to avoid burning the skin, you have many simple options that will not cause you any harm. First off, nourish your body and skin with the proper vitamins and antioxidants. If you are deficient in antioxidants and vitamin B, you are much more likely to get a sunburn, as your skin is not healthy enough to take in the UV rays properly. Those with darker skin have skin that acts as a built-in sunscreen, which means they will have a much harder time getting burnt. When out in the sun, limit your exposure to smaller increments to build up your skin's health. Use sunscreen only as a last resort when out in the sun for a long time. When shopping for sunscreens, be sure to read the labels and avoid buying sunscreens loaded with toxic chemicals. Look out for oxybenzone and retinyl palmitate. It may be tough to find, but a trip to a natural health store can often do the trick. Look for sunscreens that contain zinc and titanium minerals as opposed to the toxic chemicals listed above. Only use sunscreens when absolutely necessary!


Melanoma epidemic: a midsummer night’s dream?
N.J. Levell, C.C. Beattie, S. Shuster and D.C. Greenberg

Article first published online: 9 JUN 2009
DOI: 10.1111/j.1365-2133.2009.09299.x

Medical Records State: The Sun Has Never Been Directly Linked to Skin Cancer Case December 18, 2011 by Joe Martino

Sunlight alone does not cause skin cancer: The truth you've never been told
Article -
Video -

Sunlight Does Not Cause Skin Cancer - Dr. Michael Teplitsky MD

Sunscreen Causes, Not Prevents Skin Cancer

Myth: Sunlight Causes Skin Cancer

Scientists Blow The Lid on Cancer & Sunscreen Myth

Don't let the phoney melanoma scare keep you out of the sun


Published on Jun 25, 2016
Dr. Group joins the Show to talk about the dangers of toxic Sunscreens.

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: Brocke on Oct 09, 2017, 06:03:28 pm
Example 44

Common knowledge: There are 4 blood types - A, B, AB, and O - with Rh positive/negative subtypes

Truth: Actually there are more than 30 different blood group systems, and over 100 minor blood subtypes beyond the main ABO and Rh systems


The number of blood types depends on the blood group system you use.

There are a number of different blood group systems, with the International Society of Blood Transfusion recognizing up to 30 major group systems. The two main blood group systems are the ABO antigens and the RhD (Rhesus) antigens. Most antigens are protein molecules situated on the surface of the red blood cells, and it is with these two antigen systems that blood types are most commonly classified.

Researchers at the University of Vermont have discovered two new proteins on red blood cells that confirm the testable existence of two new blood types. It's an important discovery, one that'll greatly reduce the risk of incompatible blood transfusions among tens of thousands of people. But what we were more struck by in this press release was the fact that these two new blood types - named Junior and Langereis - bring the total number of recognized blood types up to 32. 32!

Turns out there's much more than just A, B, AB, and O: there are now 28 other, rarer types, often named after the person in whom they were discovered. These rarer types are identified by the presence of a particular group of antigens (substances that tell your immune system to send out antibodies), and many, like the Kell and MNS blood types, can actually be concurrent with more common blood types like A or O.

Your blood is typed, or classified, according to the presence or absence of certain markers (antigens) found on red blood cells and in the plasma that allow your body to recognize blood as its own.

The ABO system consists of A, B, AB, and O blood types. People with type A have antibodies in the blood against type B. People with type B have antibodies in the blood against type A. People with AB have no anti-A or anti-B antibodies. People with type O have both anti-A and anti-B antibodies. People with type AB blood are called universal recipients, because they can receive any of the ABO types. People with type O blood are called universal donors, because their blood can be given to people with any of the ABO types.
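The antigen/antibody rules in the paragraph above reduce to one test: donor red cells must carry no antigen the recipient has antibodies against. A minimal Python sketch of that ABO logic (Rh and the 100+ minor subtypes are deliberately ignored here):

```python
# Antigens carried on red blood cells for each ABO type; a recipient makes
# antibodies against exactly the A/B antigens their own cells lack.
ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

def abo_compatible(donor, recipient):
    """True if the donor's red cells carry no antigen the recipient attacks,
    i.e. the donor's antigen set is a subset of the recipient's."""
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(all(abo_compatible("O", r) for r in ANTIGENS))   # O: universal donor -> True
print(all(abo_compatible(d, "AB") for d in ANTIGENS))  # AB: universal recipient -> True
print(abo_compatible("A", "B"))                        # anti-A antibodies -> False
```

The subset test makes the "universal donor/recipient" claims above fall out automatically: type O carries no antigens, so it is a subset of everything; type AB carries both, so everything is a subset of it.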

There are over 100 other blood subtypes. Most have little or no effect on blood transfusions, but a few of them may be the main causes of mild transfusion reactions. Mild transfusion reactions are frightening, but they are rarely life-threatening when treated quickly.

Mild hemolytic transfusion reactions can happen when there is a mismatch of one of the more than 100 minor blood types. Most of the time, these reactions to the minor blood types are less serious than a mismatch of the ABO or Rh blood types.

Last Edit by Palmerston
Title: Re: Everything You Know Is Wrong - I was social-engineered
Post by: EvadingGrid on Dec 24, 2017, 05:20:59 am
Just a quick edutainment video from a history prof on World War 2

Bump - Peter Kuznick: Three False Myths Americans Believe

Regis Tremblay
Published on May 22, 2016

American University professor, Peter Kuznick, destroys the WWII myths that have underscored US foreign policy since 1945. Peter co-authored The Untold History of the United States with Oliver Stone. The book and a ten part film series are available from Amazon.

Last Edit by Humphrey