The Shallows

November 10, 2010

In his book The Shallows, Nicholas Carr writes about how the internet – and indeed how every new form of media before it – destroys our society. According to Carr, the most significant evidence of this is a loss of focus: once exposed to the internet and its ‘instant gratification’ model, any other form of media seems lacking in comparison. He suggests neuroplasticity as the mechanism for what he calls a fundamental change in how we think: exposure to the internet quite literally changes how our brain works – how it seeks and processes information, and how it reacts to information delivered in other ways. He cites a few examples, which I’ll get into at some point, but it seems to me that his only real complaint is a loss of focus while reading longer texts. If nothing else, I have to say that I will gladly take a slight loss of focus in exchange for the vast amount of information the internet offers.

He does complain about the organization of webpages. He specifically says that most people, when reading a webpage, skim through instead of reading all of the text. This is intentional to some degree: webpages are often optimized for efficiency. Their structure helps a reader extract pertinent information more quickly – that is, someone who uses the internet can learn to recognize the structure of certain information. This is hardly a new concept, although it is one that has grown because of the internet and computers. As an example, look at the financial data in a print newspaper: columns of numbers arranged in a way that lets someone used to that format extract information from them. The goal is efficiency: getting more information in less time.

Consider then the people who work on the internet – the technical infrastructure engineers and programmers who make this medium work. When such people need information, they have certain preferences as to how it is organized[0]. Terse is almost always preferable to verbose, since idle chatter communicates no actual information; on the other hand, an initial report of a problem in a computer program should be complete enough to fully describe the problem. This preference for efficient communication seems to have bled over into the public side of the medium as well: many users of the internet can assess the value of a webpage in only a few moments, just as a programmer can assess whether a bug report contains any useful information before deciding whether to read on. Structure is an important part of this. Imagine if Google, rather than ranking search results and formatting each with a link, a short summary, and a url, simply pasted the full text of each relevant webpage in front of the user. The information would still be there, but the format – the structure – would be gone. Even though the same information is present, there is too much raw text, and efficiency decreases.

I find no problem with this attitude of terseness; it is a natural consequence of the fact that the web gives us access to so much more information than we used to have. In order to wade through all this data – or, if we are programmers, through all the bug reports from our users – we must impose some structure on it.

Of course, now that we have this structure for data on the internet, we will miss it when we go back to print media – but that’s OK. Print media is still usable because we don’t need that structure there: the amount of information is easily manageable.

If, however, we accept that the internet’s availability is limiting people’s ability to read books, then perhaps, rather than seeing the shift as the loss of an old medium, we can see the internet as an improvement: people didn’t know what they were missing until the internet condensed information for them, and now that we have it, people are less inclined to use inferior sources. I certainly look at it this way: why pull out an encyclopedia or dictionary when a quick Google search will give you the answer? Books are inherently ‘old’ – the information in them is instantly outdated – and have no built-in search engine or cross-references: they’re an inferior way of gathering information. The only exceptions I can think of are textbooks, which are useful because they are both well-organized and thorough, and fiction, which doesn’t need to be organized or searchable, because extracting information isn’t the point.

Some people will argue that reading a book doesn’t give the instant gratification that the internet does. I’d have to counter that it does, if it’s a good book. They’re right that reading a book takes time, but that isn’t inherently bad, nor is it why I prefer the internet. It’s about efficiency, and the two exceptions I noted above still stand: textbooks and fiction. In those cases the internet’s improvements are less necessary: textbooks are well-organized and complete, so search engines and cross-links add little, and fiction is just fiction – you’re not reading it to gain information, so efficiency is irrelevant.

Speaking of books – what about the professors who say they can’t get their students to read books? What’s the cause of that change? Well, is there a change at all? Students not wanting to do their classwork is nothing new. When given a reading or research assignment, I don’t filter on whether it’s web-based or print, I filter on content. When I’m looking at my list of things to do, I don’t put off reading the Iliad because it’s a book; I do so because it’s uninteresting. It would be equally uninteresting if I were reading the Illustrated Guide to the Trojan War online.

That’s not to say that I don’t care at all about media; it’s just that the medium is less important than the content. If I’m looking for some specific content, I might prefer to get it from a certain medium, but the medium is secondary.

There are also people who say “I wish I could read books”, as though they’re actually unable to. It really isn’t hard: you sit down and read. If you’re having trouble concentrating, and you just can’t manage to sit down and read, don’t be so quick to blame the internet. Stress, lack of sleep, or stimulants could be just as much to blame, or perhaps you just don’t like that particular book. But there’s no way the internet has made you incapable of reading.

Writers complain that the internet forces new metrics on them: how many pageviews an article gets becomes a measure of quality, at least to publishers, and that metric tends to favor flashy, provocative articles over longer, duller ones, even though the latter might strictly be more complete and better journalism. This is a valid critique of the medium of the internet, and perhaps the only such one that was brought up in class last week – although the fact remains that more pageviews mean more ad revenue. It may be unpleasant, but it’s good business.

In class someone brought up the abbreviation TL;DR. Standing for Too Long; Didn’t Read, it often appears on web forums in response to overly verbose comments. It’s true that the standard of verbosity differs between the internet as a whole and print media. I would argue that this is not an effect of the medium but of how people use it: the internet enables conversations much faster than letter-writing and much more trivial than phone calls or face-to-face meetings, so it is only natural that terseness is preferred. This isn’t a bad thing, though: it enables a wider exchange of information. The other good reason to prefer terseness is that the average post on the internet is probably from someone you’ve never met, and you may not be willing to read an extremely verbose piece by someone unfamiliar – though once you know their qualifications, you might seek out their longer work. My point is that in the span of conversation and the exchange of ideas, there is a time and place for verbosity, eloquence, or terseness.

As a final point, I question the data used to support the claim that the average user looks at a web page for only 17 seconds. For one, this means that we spend an average of seventeen seconds per page, not that we are incapable of ever focusing for eighteen. Still, this value seems unlikely, and I am forced to question it. Is it referring to full web pages, or to individual web requests? If the latter – which includes images, ads, style sheets, scripts, live content, and so on – a single load of Facebook can generate fifty separate requests, and if the measurement was done on users in bulk, it would be very hard for an automated system to tell the difference between the two. Does the figure include login pages? Junk email? Popup ads? Is page load time included? Does the data come from some specific website (in which case it could easily be biased) or from some sample of the population observed in a lab (being in a lab would probably affect your browsing habits)? Or was it just a number made up by some alarmist pseudoscientific futurist?
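To make the ambiguity concrete, here’s a toy sketch (the log timestamps and the five-second clustering threshold are entirely my own invention): the same browsing session yields a very different ‘average time per page’ depending on whether you count raw requests or cluster them into page views.

```python
# Hypothetical request timestamps (seconds) for one short browsing session.
request_times = [0.0, 0.1, 0.15, 0.3, 0.4,   # page 1: html, css, js, ads...
                 25.0, 25.2, 25.3,           # page 2
                 61.0, 61.1, 61.4, 61.5]     # page 3

def count_page_views(times, gap=5.0):
    """Cluster requests separated by less than `gap` seconds into one view."""
    views = 1
    for prev, cur in zip(times, times[1:]):
        if cur - prev >= gap:
            views += 1
    return views

views = count_page_views(request_times)
print(f"{len(request_times)} requests, but only {views} page views")
print(f"naive 'time per page': {request_times[-1] / len(request_times):.1f} s")
print(f"clustered:             {request_times[-1] / views:.1f} s")
```

Twelve requests over about a minute looks like five seconds ‘per page’ if you count naively, and twenty seconds if you cluster – which is exactly the kind of gap that statistic glosses over.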

I don’t know, but I for one am glad we have the net. Perhaps I’m just naturally wary of people who criticise new technology, or perhaps my long exposure to the net keeps me from seeing how things have changed, but I fail to see the problems that Carr does. Maybe I also don’t function the way he expects: the term ‘interrupt-driven life’ comes to mind. It’s a term my technical director uses to describe a reactive attitude rather than a proactive one. He prefers to respond to ‘interrupts’ – like emails and phone calls – on his own time rather than immediately. Carr rather assumes that internet use requires you to be interrupt-driven – browsing the web until you get a Facebook notification, then immediately responding to it. These notifications become the streams that Carr mentions, streams of short messages. I am guilty of this when it comes to phone calls, which I prefer to answer immediately, and I can see how behaving like this on the web could easily cut focus – that’s what interrupt-driven means. That term, by the way, comes from computer science, where it describes a system that responds to events as they arrive rather than polling for them on its own schedule.

[0] Eric S. Raymond, author of The Cathedral and the Bazaar and a well-recognized programming guru, wrote a piece on “how to ask questions the smart way”, which exemplifies the preference of computer programmers for terse and efficient communication. That document is located at http://catb.org/~esr/faqs/smart-questions.html

Categories: Essays, School

Epidemiology

October 27, 2010

Gary Taubes, in an article for the New York Times Magazine[0], talks about the science of epidemiology. He starts off by discussing hormone replacement therapy for women. The idea behind HRT was to ‘cure’ aging. In hindsight, it seems obvious that doing so is futile, especially given the harmful effects that we have more recently discovered. Taubes seems to think there was some sort of problem with the initial medical recommendations toward HRT, since we (clearly) didn’t fully understand the effects at the time and probably don’t now, but his bigger problem is that women were advised to continue taking HRT after menopause. He says this is because of how new scientific discoveries are announced: when the media hears about some new medical discovery, despite a lack of review, they push it right out to the public as medical advice. Physicians then proceed to recommend the new treatment, and the FDA doesn’t stop them, because in this case HRT was already thought of as a good idea – it was just being used differently. When more people took it, doctors started to notice certain side effects, so then no one took it, until a paper was published saying that the benefits outweigh the risks. That kind of trade-off is true of most of medicine. When ER doctors need an emergency blood transfusion and don’t know the patient’s blood type, they go for O negative: the benefit of getting blood quickly outweighs the chance of wasting the more universal blood. When doctors have a patient in immediate danger, they medicate rather than pausing to ask about potential allergies, because the benefit of saving the patient outweighs the rather slim risk of an allergy, and an allergic reaction can be treated if it occurs. Even medications like Tylenol carry a certain risk – but it is such a good painkiller that we use it, even though the effective dose is near the toxic dose. These are considered acceptable risks, and rightly so, by modern medicine, so the current medical opinion of HRT is at least internally consistent. The author’s complaint is rather with the changing state of medical consensus: the classic “you were wrong before, how can we know you’re right now”. I think we’ve discussed this in the past in this class, so we’ll move on.

The author goes on to discuss the merits of preventative medicine. He draws the distinction between observational studies and controlled experiments, and notes how an observational conclusion can very quickly be spun into medical fact, and it’s not until the conclusion of a controlled experiment that we learn there is no correlation. This explains the flip from a potentially good treatment back to a treatment with no positive gain: the previously observed correlation turns out not to be a sign of any actual causation. Epidemiologists seek to find similar links between behaviors or treatments and undesired side effects. He goes on to complain that these controlled experiments are rarely performed because they are rarely funded, and lists a few cases where they were unsuccessful. I have a few problems with that.

For one, no one is making Mr. Taubes take every supplement and perform every practice which is correlated with better health. The fact is that strong correlations can exist between better health and slightly detrimental practices, if those practices are thought to be healthy, due to a combination of the ‘health-nut’ population and the chronically unhealthy population. The health-nuts might follow some extreme number of practices thought to be healthy and might have a lower risk of heart disease, while the unhealthy population will be the opposite. If a study looks only at a specific remedy and ignores other factors, such as healthy eating, then the positive benefits of healthy eating may overshadow a negative effect of the remedy in question, leading to an incorrect result. Researchers can try to correct for this statistically, but it is rather difficult to assign a numerical value to how healthily someone eats, and harder still to expect a large population in an observational study to report it consistently. This sounds like support for his point, but it really isn’t: my point is that it is perhaps better to let the observational researchers do their studies, as they give experimental researchers some useful leads, and simply ignore the health advice in their output. You’re allowed to do that, you know – to just not take the health advice you find in some observational study, just as you don’t necessarily look at the health practices of your friends and try to emulate them. While it might feel good and cathartic to write a nine-page article in the Times Magazine condemning observational studies, it’s really not necessary. And statistically speaking, it is better, at least marginally, to take medical advice from observational researchers than from whatever anecdotal advice we get from friends – although most people tend to base their lives on the latter.
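Here’s a quick simulation of the confounding I’m describing (the populations and effect sizes are made up purely for illustration): a supplement that is slightly harmful on its own still looks beneficial in a naive comparison, because the people who take it are mostly the ones who also eat well.

```python
import random

# Made-up populations and effect sizes, purely for illustration.
random.seed(42)

def simulate_person():
    health_nut = random.random() < 0.5
    takes_supplement = random.random() < (0.8 if health_nut else 0.1)
    # Hypothetical "health score": eating well (which the health-nuts do)
    # helps a lot; the supplement itself is slightly HARMFUL.
    score = random.gauss(0.0, 1.0)
    score += 2.0 if health_nut else 0.0
    score -= 0.3 if takes_supplement else 0.0
    return takes_supplement, score

people = [simulate_person() for _ in range(100_000)]
takers = [s for took, s in people if took]
abstainers = [s for took, s in people if not took]

mean = lambda xs: sum(xs) / len(xs)
# Naive comparison: takers look HEALTHIER, because the confounder (diet)
# swamps the supplement's real, negative effect.
print(f"supplement takers: {mean(takers):+.2f}")
print(f"non-takers:        {mean(abstainers):+.2f}")
```

The takers come out ahead even though the supplement is, by construction, bad for you – which is exactly why an observational correlation is a lead, not a conclusion.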

Also, it’s worth arguing that what he is talking about is not the entirety of the field of epidemiology. Epidemiology is not solely preventative medicine; it also seeks to analyse the causes and spread of disease, which is something it does quite well. See, I think most doctors would agree that epidemiology is merely statistical and cannot provide any actual proof – it can only indicate certain probabilities of correlation and causation. But when you’re in the middle of an epidemic – what the field is actually named for – epidemiologists are pretty good at determining its source and estimating its spread.[1] They do it every year to determine the optimal vaccination for flu, to name a single instance. Epidemiology was the field that gave us our very first understanding of disease, and it served us well: epidemiologists said to stay away from the sick people, and that worked.

Another pretty good success case for epidemiology is certain dietary laws in religion. Many kosher laws fit this category: meat is only considered acceptable if it comes from a certain list of animals and was slaughtered in a particular way by a qualified person (perhaps preventing people from slaughtering animals on their own when they are unqualified to determine their safety, and perhaps also promoting a certain cleanliness). Fish, on the other hand, are generally permissible: many Jewish communities were and are located on the water, so people were able to fish on their own rather than relying on bought food or deliveries, and a fish is generally fully consumed in a single meal – both ensure freshness. The same is true of halal, which applies to Islam: both forbid pork (and so do the Scottish) and frown upon blood or carrion, both limit slaughter to a specific process which ensures some measure of cleanliness and control, and both forbid eating animals found dead. Some Catholics don’t eat meat on Fridays, perhaps because near the end of the week the deliveries of food are beginning to age and become less safe – fish is permitted again because it tended to be acquired locally – and dairy was forbidden unless you served in the Crusades. I’m not entirely clear on how that last one ties in with epidemiology.

Statistically speaking, the theory behind epidemiology as preventative medicine is also sound, but there are too many problems for it to be practical. Studies are all too often plagued by selection bias (people concerned about their health are more likely to participate, or more likely to engage in more ‘healthy’ behaviors, causing a bias) or subjectivity (rate your pain on a scale of one to five) or other biases in response (people claiming to exercise daily, except for the three days a week they missed, or to eat healthily, because it’s the more acceptable thing to do).[2][3]

It’s important to keep two distinctions in mind. First, epidemiology has a few branches, including the preventative one, which covers public health – the diet and heart disease examples. There is also the analysis of an epidemic after it exists, such as the field’s first use against diseases like cholera – here the strength of a statistical rather than mechanistic approach is apparent, as it isn’t necessary to fully understand the mechanisms at work. (The downside to the statistical approach is things like witchhunts. If she floats, burn her!) Second, there is a difference between an observational study and a controlled experiment; they fill different roles in the scientific method. An observational study exists to generate hypotheses for further experiments, but provides no actual justification of them.

[0] http://www.nytimes.com/2007/09/16/magazine/16epidemiology-t.html – I was able to access this article a few days ago, but as of 26 Oct 2010 it appears to be behind a registration wall.
[1] http://www.cdc.gov/excite/classroom/outbreak/steps.htm
[2] Pierre-Simon, marquis de Laplace, wrote “A Philosophical Essay on Probability” (more of a mathematical book, if you ask me) almost two hundred years ago, but it is still a good introduction to probability in practice for anyone with a bit of a scientific background.
[3] There is also some discussion of potential error sources at http://en.wikipedia.org/wiki/Epidemiology#Validity:_precision_and_bias

Categories: Uncategorized

Technological Singularity

October 14, 2010

A fringe group, initially led by Ray Kurzweil, proposes that “The Singularity is Near”. The actual meaning of this varies depending on whom you ask, but Kurzweil’s intent seems to be this: the rate of major discoveries and advancements in technology which significantly impact our culture is increasing exponentially. To understand his predictions, we must begin with his earliest writing on the matter.

In his first book, “The Age of Intelligent Machines”, published in 1990, Kurzweil proposes mainly that computers will grow in intelligence. He specifically mentions that a computer will be able to beat a human in chess, which is now the case, and his overall meaning is that computers will become more computationally powerful than humans. This is certainly true. It was also true of the first computers. Computers were first created to perform simple computations. They use a certain instruction set, which operates on binary numbers, which exist as a series of electrical pulses. By using a certain series of instructions, we can perform some action. The simplest is addition: we start with the least significant bit, and a set of transistors adds those bits. If a carry is necessary, a certain transistor outputs true; otherwise it outputs false. Another transistor does the same based on whether the result of the addition was 1 or 0. The computer does this for each bit, stores the result in memory, and then moves on to the next instruction. Each instruction does something similar. Programmers don’t actually write these instructions by hand; since the very first computers, we have simply built languages on top of them. And as computers got faster and we wanted to do more complex things, we built more complex languages – these languages make programming easier, but run much slower. So we built faster computers, and so the cycle goes. But this isn’t intelligence, it’s just computational power. (It’s easy to miss the distinction. In the original, I said in the second sentence that computers will become more intelligent than humans, but my meaning was that they will be able to make faster computations. We can see this in computers now – they can perform mathematical operations very quickly – however computers have only memory to draw on, while a brain has experience: memory allows a computer to solve the exact problem it was programmed to solve, while experience allows us to apply that solution to similar problems. We are also able to make and test inferences, while computers are only capable of being programmed with a brute-force search.)
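For the curious, here’s a minimal sketch of the ripple-carry addition described above – not any particular processor’s circuit, just the logic of a full adder modeled with Python’s bitwise operators.

```python
# Each bit position is a full adder: XOR gives the sum bit ("was the result
# 1 or 0?"), AND/OR give the carry bit ("is a carry necessary?").
def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the top bit
    return out

# 6 (binary 110) + 3 (binary 011), written least-significant-bit first:
print(ripple_add([0, 1, 1], [1, 1, 0]))  # -> [1, 0, 0, 1], i.e. 9
```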

Computers can’t think. They can’t come up with anything on their own. Computers can only beat humans at chess because we programmed them to check every possible move, going forward a few moves, and decide which one is the least risky. They check each move by brute force and compare the outcomes using an algorithm that we taught them. That isn’t a sign of intelligence, only of computational power.
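A toy version of that kind of search, on a deliberately trivial game (this is my own sketch of minimax, not Deep Blue’s actual program): the computer ‘plays well’ purely by trying every move and scoring the outcomes with a rule we supply.

```python
# The game: players alternately take 1-3 stones; whoever takes the last
# stone wins. minimax(stones, my_turn) returns +1 if the player we're
# rooting for can force a win, -1 otherwise.
def minimax(stones, my_turn):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if my_turn else +1
    outcomes = [minimax(stones - take, not my_turn)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

def best_move(stones):
    # Brute force: try every move, keep the one with the best outcome.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, my_turn=False))

print(best_move(7))  # -> 3: leaving 4 stones is a lost position for the opponent
```

There is no insight anywhere in that code – just exhaustive enumeration plus a scoring rule we wrote, which is the whole of my point.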

The end result of his books is not so much a set of meaningful short-term predictions as the grand prediction that humans will become computers, or something like that. Artificial sentience would be a good way to describe it – computers that can actually think like humans. The hope is that computers will be able to ‘absorb’ human consciousnesses, or something like that. That’s just absurd. They think that if we have fast enough computers, we’ll achieve immortality. I think it’s reasonable that computers will eventually be able to model a human brain, but any supposition of immortality is beyond the realm of possibility. Duplicating the functionality of a brain is one thing, but copying the entire state – literally copying someone’s consciousness – would probably require violating the uncertainty principle, and might even be impossible to do in any deterministic sense due to quantum effects – and there’s still the question of what happens to the original.

Immortality is impossible. There’s a thermodynamic quantity called entropy: essentially a measure of the disorder within a system, and one that, for an isolated system, always increases. Any reaction that occurs within an isolated system, any reaction at all, will increase the entropy of that system. A process can decrease the entropy of some other, open system, but only by increasing the entropy of its surroundings by at least as much – inefficiencies in any real process, like friction and energy lost to heat, guarantee it. Since total entropy always increases, there is only one possible end state: the entropy of the universe will rise to a maximum. It’s a slow process, and it gets slower as we get closer to that point, but eventually the last star will die. There won’t be enough mass concentrated anywhere for a new star to form. The molecules will slowly break down into their constituent atoms. The atoms will decay, first just the radioactive ones, though some theories predict that even the so-called stable ones will eventually decay too, leaving the universe as diffuse electromagnetic energy. Either way: if all the universe is chemical reactions, electrical interactions, and forces, then as entropy approaches its maximum, there won’t be enough usable energy to create or maintain any life, even computer-based life. Isaac Asimov wrote a short story on this called The Last Question, which I strongly recommend.

There’s a certain leap from computation to consciousness that we don’t fully understand. Something within us allows the formulation of new ideas – the creativity and thought that defines life. It arose in our distant past; we don’t understand how. We can’t make it, not yet at least, and it won’t be with a computer. Transistors can only go so far: they can do what we design, but they can’t actually become alive by any meaningful cognitive definition of the term.

The argument can be made that software, not hardware, will allow computers to simulate a brain. But software doesn’t extend hardware in this way. Any calculation that software performs is still just a series of pulses sent through a lot of transistors; the software just provides us with an interface. Whatever limitations are in the hardware, they don’t disappear with the right incantations of code.

And think of the ethics of that. Sentient computers. You’d either have to cripple them to protect the human race – and any other biological race they meet – or you’re unleashing a mechanical invasion of the universe. From fiction – take the Cylons, built to serve humanity’s every need, who turn against their masters. Or the Daleks, created by Davros to replace his ailing body, machines that evolve to start several wars. Don’t say “oh, we’ll just give them the three laws of robotics”. That’s all fine and dandy if you’re building a robot that you want to program. A computer program, like Data of Star Trek, can be given arbitrary code, like “don’t kill humans”, but you can’t program a computer simulation of a human mind any more than you can program your next-door neighbor. There is no neural code for “don’t kill humans”; that’s what makes brains different from computers. There’s no program somewhere on our Creator’s hard drive which, when compiled, gives the file human.exe – with subroutines and instructions and all. Just a blueprint for a series of neural links which learn from experience and, above all, act to preserve themselves. Because a neural net that did anything else would be rather silly.

So Kurzweil’s idea is that the time between ‘major technological advances’ is decreasing to zero, and that therefore at some point – when we predict mathematically that that time reaches zero – we’ll simply discover everything else there is to know in an instant. His logic seems flawed. When we look at an equation and see that it predicts something weird – like the mean time between discoveries going to zero – then rather than starting an ‘I want to be a robot’ cult, we go and look at the assumptions we’ve made along the way. We can first consider that there may be some sort of historical bias against older discoveries – things that the people who compiled the data didn’t think were relevant, but that were relevant when they were first discovered. There may also be a selection bias toward the technological things that affect our life today over events in our less recent past. Perhaps the data is better fit by a different model, one with a constant term in addition to a term that decreases to zero – a minimum time. And anyway, who’s to say that we’ll ever discover the secrets of the brain? We have really only probed the surface, and analyzing every single neuron with enough accuracy to model the entire brain in software could run into some uncertainty principle. And there are the questions of consciousness, perhaps better left to the philosophers.
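To illustrate the model-fitting point, here’s a sketch with made-up data (generated, by assumption, with a floor of 2.0 units – the very thing the fit is meant to probe): both a decays-to-zero curve and a decays-to-a-floor curve can be fit to the same noisy series of ‘gaps between discoveries’, and only the residuals hint at which assumption was right.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "time between major discoveries", generated with a floor of 2.0.
rng = np.random.default_rng(0)
t = np.arange(1, 30, dtype=float)
gaps = 2.0 + 10.0 * np.exp(-0.3 * t) + rng.normal(0, 0.3, t.size)

to_zero = lambda t, a, k: a * np.exp(-k * t)           # gaps -> 0: singularity
to_floor = lambda t, a, k, c: a * np.exp(-k * t) + c   # gaps -> c: no singularity

for name, f, p0 in [("decays to zero ", to_zero, (10.0, 0.3)),
                    ("decays to floor", to_floor, (10.0, 0.3, 1.0))]:
    params, _ = curve_fit(f, t, gaps, p0=p0, maxfev=10000)
    resid = gaps - f(t, *params)
    print(f"{name}: sum of squared residuals = {resid @ resid:.2f}")
```

The floor model fits better here because the data were built with a floor – but a believer fitting only the zero-bound model would never notice the assumption they had smuggled in.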

BCIs are interesting and all that, but they’re still very much experimental. My understanding of the BCI prosthetics we have now is that we plug them into existing connections from the brain to the old hand, for example, and tell the user to try to move their hand. Their brain has to learn how to interface with this new tool, rather than us interfacing with the brain. Attaching ‘memory chips’ directly to the brain is a different matter entirely. Even if we did model the behavior of existing neurons, it would still be a very invasive process because of how interconnected the brain is.

Their last idea is nanobots – essentially, small robots that run around inside our head, with sensors to record the state of the brain and report back. This is highly speculative, so all I’ll say about it is this: whatever sensors could, in a non-invasive way, detect and record all brain connections and activity from what would have to be a huge number of neurons, and still fit on a chip the size of a red blood cell… well, they sound more like Star Trek’s sensors, which can detect a single ‘life sign’ from light-years away, than anything actually practical – and perhaps our time is better spent on actual medicine.

Publication Note: This is a revised version of an essay written for HUM401. It has been slightly revised for grammar, content, and continuity, and both the original work and the revisions are mine.

Categories: Essays, School Tags: ,

Nuclear Energy

September 22, 2010

In my science writing class, we’re reading a book called “Power to Save the World”, by Gwyneth Cravens. She’ll be visiting the school to give a talk on nuclear energy next week, but until then, here are some thoughts.

Nuclear Energy. We don’t have enough of it. It’s clean, efficient, and safe energy, which is something we could really use. The objections are fears of radiation, meltdown, weaponization, and the stability of the power grid. We’ll start with the pros:

Nuclear energy is a paradigm shift. It’s a quantum leap, if I may use that despicable idiom again. Let’s talk about thermodynamics. Energy is split into several types. There’s kinetic energy, which is what moving bodies have, and potential energy, which measures how much energy is ‘stored’ – usually due to gravity. Electrical power generation relies on conversion from kinetic energy to another type of potential energy: electrical potential energy. The simplest generators convert the energy of a fast-moving river or wind into a spinning turbine, which creates electric potential energy. Then there’s internal energy, which comes in a few types. There’s thermal energy, which is the energy that turns most modern turbines: a difference in thermal energy causes a flow of mass – usually steam, sometimes water – which causes a turbine to turn. That thermal energy can be generated from another type of internal energy, chemical energy. This is the energy stored in chemical bonds – the interactions within a molecule but between atoms. Combustion of coal releases chemical energy.

The last type of internal energy is nuclear energy. This is the energy bound up inside individual atoms. We initially thought that atoms were the smallest level there was – the Greek word atomos means indivisible. We now know that we can ‘change’ atoms with nuclear reactions, just like we change molecules with chemical reactions. And while we can’t create energy, there are some reactions that release it. Nuclear reactions are centered around iron – specifically iron-56. We can ‘fuse’ two atoms together, and as long as the result is lighter than iron-56, we gain energy in the process. In the same way, we can ‘fission’ an atom into two, and as long as the atom we split is heavier than iron-56, we gain energy in the process. Modern reactors are based on fission, since it’s easier to control at reasonable conditions. Stars use fusion, which is only useful at extreme temperatures and pressures. Because the energy from fission comes from the bonds within a single nucleus, rather than the chemical bonds between atoms, we gain much more energy for an equivalent ‘number’ of reactions, or an equivalent mass.
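Some back-of-the-envelope arithmetic on that last claim, using typical textbook figures (roughly 200 MeV per U-235 fission and ~30 MJ/kg for good coal – my numbers, not Cravens’s):

```python
# Typical textbook figures, not exact plant numbers.
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

energy_per_fission = 200e6 * EV_TO_J          # ~200 MeV per U-235 fission
atoms_per_kg = AVOGADRO * 1000 / 235.0        # U-235 nuclei in one kilogram
fission_j_per_kg = energy_per_fission * atoms_per_kg

coal_j_per_kg = 30e6                          # ~30 MJ/kg for good coal

print(f"U-235 fission: {fission_j_per_kg:.1e} J/kg")
print(f"coal:          {coal_j_per_kg:.1e} J/kg")
print(f"ratio:         ~{fission_j_per_kg / coal_j_per_kg:,.0f} to one")
```

That works out to roughly 8×10¹³ joules per kilogram of U-235 – millions of times the energy density of coal, kilogram for kilogram.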

Sometimes – usually, in fact – the reaction doesn’t quite work out evenly. There’s either one electron too many, or too many protons and neutrons, or just an excess of energy. These are the forms of radiation that scare people so much. A nuclear reactor does create them, but modern nuclear technology encases the reactor in a containment shield, which prevents the release of significant radiation. Chernobyl didn’t have this, which is why there were any deaths at all from that incident – but Chernobyl is a special case; we’ll talk about that in a bit. The truth is, we’re exposed to radiation every day – from the Earth, from the cosmos, from medical devices, from cigarette smoke, and even from conventional power plants. The difference with nuclear energy is that we can and do prevent that radiation from leaking out. We initially didn’t understand radiation; we now understand how it works, and have realized that the radiation that escapes a nuclear reactor is in fact a far lower dose than the average dose from the environment alone – lower, even, than the dose from your average combustion plant. This kind of low-dose radiation hasn’t even been demonstrated to cause any harm at all – that is the subject of controversy right now. The linear no-threshold camp says that a low dose of radiation is just less dangerous, but still carries some risk, while the other side says that sufficiently low doses carry no risk at all.

The other fear, apart from radiation, is weaponization. People fear that any worker at a nuclear plant could easily grab components for a dirty bomb, or that the huge cooling towers would become the next target for a 9/11-style attack. However, the material inside a nuclear reactor, while dangerous, is not refined enough for a nuclear bomb. For that you need a different kind of equipment – equipment that enriches uranium rather than harnessing the energy of its decay. Dirty bombs don’t need highly enriched uranium, but a dirty bomb is also not nearly as deadly as people seem to think: if the response were handled properly and radiation medication got to those who needed it, the radiation would have limited effects outside a certain radius. The dose you would receive decreases quickly with distance, and as long as you’re not right at the point of the explosion, moving indoors will limit your dose significantly, and having enough radiation meds, food, and water to last a few days brings your exposure down to a minimum.

So, Chernobyl. The Chernobyl Nuclear Power Plant was the worst-managed nuclear power plant that will ever exist, and the mistakes made there will never be repeated. The ‘containment vessel’ that is typically made of reinforced concrete or steel was practically worthless; an incompetent plant manager, who had already nearly destroyed another reactor, was put in charge; a previous failure had been covered up and mostly ignored; and the final failure was caused by an unauthorized, ill-conceived, and improperly executed experiment. And despite such a confluence of idiocy and raw power, there were only 56 direct deaths, and the total number of deaths caused by the event was far lower than the two hundred thousand initially predicted.

Modern reactors have safeguards in place to actually shut down the reactor in a state of emergency. The system will ‘scram’ – immediately insert all of the control rods – if a runaway reaction is detected. At Chernobyl, by contrast, they intentionally pulled the control rods out in a foolish attempt to restore power after their failed experiment. With the safeguards in place, you couldn’t cause a meltdown in a modern reactor even if you were trying.

Let’s talk about efficiency and waste. Nuclear energy, if used to provide everything an average human consumes in a lifetime, would create about a pound of mildly radioactive solid waste – far less than the waste coal and natural gas plants pump into the atmosphere. The radioactivity is rather weak: the elements still present at that point have long half-lives, meaning they release radiation for a long time, but do so very slowly. We can contain these materials in storage practically indefinitely – we bury them underground, and they slowly die off and become inert. A reactor also requires much less fuel by mass than a natural gas or coal plant, meaning fewer tanker trucks and fuel pumps need to be run.

The generation process itself releases only steam, after it has cooled the reactor and driven the turbine, and perhaps some warm, purified water into a nearby stream, depending on how the cooling system works for that type of plant – there are a few kinds, which use slightly different methods to turn the turbine. The actual products of the reaction, rather than being exhausted, are contained, and in certain types of reactors they can be refined and reused to minimize waste and improve efficiency.

France’s primary method of power generation is nuclear. When I asked the only Frenchman I know about that, my thermodynamics professor, Dr. Gallois, I expected his usual nationalist pride, but that isn’t what I got. I was surprised by his anti-nuclear reaction – although I have learned that many professors at this university are skeptical towards some of the more common scientific claims.

There is one problem with nuclear reactors: they don’t like to turn off. A natural gas plant will gladly cold-start every morning and shut down every evening, but a nuclear reactor has a somewhat slower process – a cold start can take a full day or more. While a nuclear reactor can ‘tune’ its energy output to a certain degree, it can’t provide the instant power we need to maintain our grid on timescales of less than a day. Nuclear is useful for what’s called ‘base load’: reactors can run all the time, reduce power at night, and run at full power during the day, except for occasional maintenance periods, while the extra demand during the day is handled by conventional plants – at least until we have a better way of storing energy at night and using it during the day.

Categories: Essays, School

Last flight of OV-103

September 22, 2010

The space shuttle Discovery, OV-103, has made its last scheduled trip to the launch pad down in Florida. The mission is STS-133, delivering a logistics module named Leonardo and other spare parts; launch is targeted for the first of November. After this mission, only one more is scheduled: STS-134, aboard Endeavour. Atlantis has already been retired. Discovery will remain on standby until Endeavour’s landing, in case an emergency launch is needed or an additional mission is approved.

I might rant about the space program sometime soon, but in the meantime, watch this series of youtube videos. In my opinion, it describes our best shot at Mars – and the best part is that the technology exists, it’s safe, simple, and elegant, and we could be there within a decade.

Categories: Uncategorized

“Galileo Was Wrong: The Church Was Right”

September 13, 2010

Right on the heels of global warming, one of the most debated issues of our day, we have a special kind of crazy. I was under the impression that the Flat Earth Society was now mostly defunct, but apparently geocentrism is the crazy belief of the week. In much the same vein as the ‘journals’ run by creationists, Dr. Robert Sungenis has organized a conference and written a book, both entitled “Galileo Was Wrong: The Church Was Right”.

The U.S. Department of Education indirectly specifies which schools are allowed to give out degrees: it specifies which accreditation groups are considered ‘nationally recognized’. For reasons absolutely beyond my understanding, five of those national accreditors exist only to accredit the schools that hand degrees to these sorts of crazies, who then go on to make these sorts of claims under the guise of a degree. “Faith-based” organizations should not be in the business of accrediting anyone.

But that’s not the worst of it. There are actual people with actual degrees from actual universities, who work in actual fields like the pure sciences rather than earning a degree in some shared mass delusion, who wrote favorable reviews of that book. I just don’t understand how you can call yourself a Ph.D. if you can’t grasp the most basic science.

Crazy people should be in asylums, not pulpits.

Categories: Essays

Global Warming

September 13, 2010

Global warming is defined as the increase in the average temperature of Earth’s near-surface air and oceans and its projected continuation[0]. Causes include greenhouse gases, aerosols, and soot trapping heat, preventing it from leaving our atmosphere. There may also be short-term random or cyclic causes, such as the solar cycle and the typical variations in weather from year to year.

To begin, let’s define a few terms. Weather refers to the instantaneous state of our atmosphere, while climate refers to the averages – temperature, rainfall, cloud cover, ocean levels, and so on – over a period of time. Effects of climate change range from the obvious changes in temperature to the melting of polar ice, rising sea levels, increased precipitation, changes in agriculture, effects on the chemistry of our oceans, and species extinction. Several organizations have been formed to analyse and report on climate change, including the National Climatic Data Center and the Intergovernmental Panel on Climate Change, and existing organizations, including the National Oceanic and Atmospheric Administration and the National Aeronautics and Space Administration, have extended their roles to work on it. One such report is the Bulletin of the American Meteorological Society’s annual State of the Climate report[1], which documents the weather and climate of the past year. I’ll be using this and other reports from these organizations to support the existence of climate change.



This image[1 p.25], my first piece of evidence, is a plot of average temperature. The x-axis is time – by year on the left, averaged by decade on the right. The y-axis is latitude, and the color is the deviation from the average temperature for that latitude. The earliest indications of interest are around 1940: on the right-hand side, the deeper blues at the middle latitudes disappear around that time, indicating that the 1940s were the first decade since 1850 in which temperatures never averaged more than a quarter of a degree below normal. In the 1940s, the far northern latitudes also show some warming relative to the previous decades. The more interesting period begins in the 1980s, when temperatures across the globe begin to exceed the 150-year average, and that extends through the present. The data are taken from the UK Met Office Hadley Centre’s HadCRUT3 dataset, which contains temperature deviations from the average, from 1850 through the present, in 5-degree-by-5-degree grid boxes across the entire land and marine surface of the Earth. The indication is clear: beginning around 1940, the average temperature of the earth has been increasing. The overall increase, globally, is about one degree Celsius.

My next graphic[1 p.27] is actually a series of eleven markers thought to correlate strongly with climate change. Surface air temperatures, over land and over sea, and sea surface temperatures are expected to increase, for obvious reasons. The sea level is also projected to increase due to melting of glacial ice, as well as due to the increased specific volume (decreased density) of water as its temperature increases. For all four of these indicators, a clear increasing trend is present for all of the datasets. Multiple datasets are displayed, using a variety of different methods of gathering data, in order to show the difference between experimental error and actual temperature variation.

The snow cover is also plotted for the northern hemisphere, although this is not my favorite graph – the decreasing trend is recent, and the average snow cover has too much variance year-to-year to show an actual trend at this point.

On the right are graphs extending back only to 1940 at the latest. These include the tropospheric temperature – the troposphere is the lowest part of our atmosphere and the part where most of our weather occurs – which is of course projected to increase with time. The stratospheric temperature reflects the layer above the troposphere, and it is actually projected to decrease, due to the effects of greenhouse gases: to put it simply, greenhouse gases keep the heat in the lower part of the atmosphere, at the surface, leaving the upper atmosphere cool. This would be a useful result if we all lived on Star Wars‘ Cloud City, but we are stuck in the realm of the first few graphs, which sadly go up.

The next indicator is ocean heat content. This was not a metric I was familiar with until today, but my thermodynamics background tells me it is a measure of the total thermal energy of the oceans down to 700 meters. I don’t know why they didn’t use average temperature; it would give the same result. This is, however, different from the sea-surface temperature: it essentially averages the temperature over the first few hundred meters rather than only looking at the surface, so you would expect this value to be less volatile than the surface temperature – to show less variation season-to-season and year-to-year.

The specific humidity isn’t the percent humidity you see on weather broadcasts, that’s a value adjusted for temperature. This value isn’t dependent on temperature, it’s just the amount of water in the atmosphere in grams of water per kilogram air. This value is expected to increase with global warming.

The arctic sea-ice graph shows the amount of arctic ice by surface area. It is taken at the same time every year, and is a function of the intensity of the summer, as well as how much ice was able to regenerate the previous winter. In a stable system, the amount would remain more or less constant year-to-year. The decrease indicates that our climate has not stabilized – that is, due to changes in average temperature, the ‘correct’ value of this graph – the value for which the system is ‘stable’ – has changed. The glacier mass balance is much the same, only it shows the amount of glacial ice by mass, rather than surface area.

A note on ice: Robin Bell, of Columbia University, calls the ice caps Earth’s “air conditioner”[4]. When the climate works, the ice caps cool the Earth in their respective summers – by melting. Melting ice takes a large amount of energy, far more than warming the meltwater by a few degrees does, so we are very reliant on there being ice to melt, to take up the excess energy from the sun. Then in the winter, the ice caps refreeze. But when the temperatures increase even a small amount, you get a problem: the ice caps melt more than they refreeze. The usual variations in temperature from year to year smooth out easily, but when the average temperature increases for a few years in a row, the planet just can’t maintain the ice caps.

Eleven graphs right there, from a total of 55 data sets. One of those markers is too recent to show a trend, the other ten markers correspond with the trend you would expect for global warming.

In recent months there has been some controversy[8] about hacked emails between climate scientists. Deniers say that these emails indicate that climate scientists are faking data related to tree rings. In reality, the emails referred to a correction being applied to ring data from certain northern trees. Tree-ring data for the last fifty years shows an anomalous decline in temperature – inconsistent with temperature data from other sources – so we must assume there is an additional variable. We have reason to believe that variable is air pollution, but regardless, we know that tree-ring data from after 1960 is unreliable, and we use other data to amend it. The writer of the email called this a trick, which deniers assumed (and continue to assume, despite correction) meant a malicious trick. In fact, it was a ‘trick of the trade’, so to speak, and nothing malicious.

I am now going to quote a number of sources that deny global warming. I quote so many to demonstrate that I am not creating a straw man argument – that is, making my opponents appear to have no valid argument by making up or cherry-picking flawed arguments and debunking them. By seeking the best arguments, based on reason and evidence, from a variety of sources, I hope to avoid doing that. This is made difficult by the trouble of actually finding scientific arguments against global warming, but I shall try my best.

So let’s ask South Dakota’s Legislature what they think[2].

WHEREAS, the earth has been cooling for the last eight years despite small increases in anthropogenic carbon dioxide; and

Not true. The first image above shows that the last decade was the warmest on record (see the right-hand graph, rightmost column…look at all those orange squares…) and that the Earth has been warming overall. While the average temperature over the last eight years has been decreasing, this is within the realm of random variation. Why choose eight years? Because that’s the longest time period over which the data fit their hypothesis: if they used ten years, or twenty, the temperature would appear to be increasing. This illustrates a common tactic in the deniers’ bag of tricks – picking the data that support their argument and ignoring the rest without reason. Climate scientists ignore tree-ring data from the past 50 years because it’s demonstrably inaccurate. Deniers ignore temperature data more than eight years old because it doesn’t support their hypothesis.
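Here’s a toy demonstration of the window-picking trick, using synthetic data with a known warming trend of +0.015 degrees per year (my own numbers, not any real temperature record): short windows are dominated by noise, so their fitted slope swings wildly and can even go negative, while longer windows converge on the true trend.

```python
import numpy as np

# Synthetic temperature record with a KNOWN trend of +0.015 deg/yr plus noise.
rng = np.random.default_rng(7)
years = np.arange(1960, 2011).astype(float)
temps = 0.015 * (years - 1960) + rng.normal(0, 0.12, years.size)

def recent_slope(n):
    """Least-squares slope (deg/yr) over the last n years only."""
    return np.polyfit(years[-n:], temps[-n:], 1)[0]

# Short windows are noise-dominated - the fitted slope can even flip sign -
# while long windows converge on the true +0.015 deg/yr.
for n in (8, 10, 20, 50):
    print(f"trend over last {n:2d} years: {recent_slope(n):+.4f} deg/yr")
```

If you get to choose the window after looking at the data, you can nearly always find one that tells the story you want.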

WHEREAS, there is no evidence of atmospheric warming in the troposphere where the majority of warming would be taking place; and

Also not true. See the second image above, top right graph. Before 1980, no model showed a reading above 0; after 1995, no model showed a reading below 0; and there is a clear upward trend at least since the late 1970s.

WHEREAS, historical climatological data shows without question the earth has gone through trends where the climate was much warmer than in our present age…

It’s also gone through trends where the climate was much cooler than in our present age, such as ice ages. Those weren’t too fun either. We’ve also been hit by asteroids large enough to wipe out most life on Earth. Doesn’t make it safe. We regularly experience thunderstorms, but we shouldn’t stand out in a field during one. What happened in the past or what happens regularly is not necessarily safe, and since we now see that climate change is happening much more quickly, and can associate it with human causes, we should be concerned.

WHEREAS, the polar ice cap is subject to shifting warm water currents and the break-up of ice by high wind events. Many oceanographers believe this to be the major cause of melting polar ice, not atmospheric warming

If this were true, it wouldn’t be a recent trend. They propose that polar ice melting is due to shifting currents and wind – but both of these have existed for a very long time. Either some recent change has caused an increase in shifting currents and wind (which is not what they propose), or some other recent mechanism is causing the melting of polar ice. Say, increased water temperatures.

WHEREAS, carbon dioxide is not a pollutant but rather a highly beneficial ingredient for all plant life on earth. Many scientists refer to carbon dioxide as “the gas of life”; and

Carbon dioxide is a greenhouse gas. So is water vapor. So is ozone. Ozone is essential to protecting us from the sun’s harmful UV radiation – preserving the ozone layer is the reason for limiting CFCs, which damage it. Water is certainly essential to life, and its vapor is a greenhouse gas. Things can have both positive and negative effects; a chemical compound isn’t bound by some sort of loyalty to do only things useful to us, or only things harmful.

By the way, carbon dioxide is interesting. On Mars, there’s a lot of carbon dioxide frozen in the rocks. One of the theories for terraforming Mars involves kickstarting the system with a few carbon-releasing plants or factories, creating an artificial greenhouse effect, which would then begin to melt the frozen carbon dioxide – a cycle that keeps feeding back until there’s no more carbon dioxide to melt. This isn’t relevant to Earth – there’s no significant store of frozen carbon dioxide here – just an interesting fact. The other bonus to this plan is that once Mars is nice and warm from the artificial greenhouse effect, we drop a few plants on it and begin creating an oxygen atmosphere. The estimates are that it would take a century or so to warm the planet, and then millennia to create the atmosphere – but technology is always improving.

That’s not all it does. Carbon dioxide, CO2, reacts with water, H2O, to form carbonic acid, H2CO3. The decreasing pH of seawater – its increasing acidity – is due to increased concentrations of carbon dioxide in the atmosphere; it’s an equilibrium thing. For a long time, the amount of carbonic acid in the oceans was constant because the concentration of carbon dioxide in the air was constant, but once you push the concentration of carbon dioxide up, the reaction produces more carbonic acid to stay in equilibrium. Based on studies of fossils in the sea floor, the concentration of carbonic acid is at a 65-million-year maximum, according to Ken Caldeira of the Carnegie Institution for Science’s Department of Global Ecology[4]. The last time we saw this much carbonic acid in our water was around the time of the Cretaceous-Paleogene extinction event, one of the greatest mass extinctions in Earth’s history, caused by a massive asteroid impact. Scientists say the impact released compounds into the air and oceans which caused an increase in carbonic acid, which, along with decreased sunlight due to dust and sulfuric acid aerosols and likely acid rain, caused the extinction of most land life and much marine life at the time.
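For the chemistry-inclined, here’s the equilibrium arithmetic in miniature (my own sketch with textbook constants, for pure water rather than buffered seawater – the real ocean’s shift is smaller, but the direction is the same):

```python
import math

# Textbook constants (25 C); pure water, NOT buffered seawater.
K_HENRY = 0.034   # mol/(L*atm): Henry's law constant for CO2 in water
KA1 = 4.45e-7     # first dissociation constant of dissolved CO2/H2CO3

def ph_in_equilibrium(co2_ppm):
    p_co2 = co2_ppm * 1e-6              # partial pressure in atm
    dissolved = K_HENRY * p_co2         # Henry's law gives [CO2(aq)]
    h_plus = math.sqrt(KA1 * dissolved) # weak-acid approximation for [H+]
    return -math.log10(h_plus)

for ppm in (280, 390):                  # pre-industrial vs. roughly 2010
    print(f"{ppm} ppm CO2 -> pH {ph_in_equilibrium(ppm):.2f}")
```

More CO2 in the air means more dissolved CO2, more carbonic acid, and a lower pH – the familiar ‘rainwater is naturally pH ~5.6’ result falls straight out of the same arithmetic.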

WHEREAS, more than 31,000 American scientists collectively signed a petition to President Obama stating: “There is no convincing scientific evidence that human release of carbon dioxide, or methane, or other greenhouse gasses is causing or will, in the foreseeable future, cause catastrophic heating of the earth’s atmosphere and disruption of the earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide will produce many beneficial effects on the natural plant and animal environments of the earth”:

Right. The petition. That petition was such a joke I don’t even know where to start[3]. Some people ‘signed’ it because they supported, as all scientists do, continuing to look for evidence of alternative viewpoints, and the petition was misrepresented to them when they signed it. Some of the signers worked in fields that have nothing to do with climatology; the largest subset of these are people with a B.S. or equivalent degree. And some of the ‘scientists’ who signed it were really meteorologists. That’s right: weathermen. They’re using weathermen and biologists and chemists to try to argue with climatologists.

I shall now present an amusing fact:

(2)    That there are a variety of climatological, meteorological, astrological, thermological, cosmological, and ecological dynamics that can effect world weather phenomena and that the significance and interrelativity of these factors is largely speculative; and

If you’re really sharp, you’ll have caught the word ‘astrological’ in there. Hint: astrology and astronomy aren’t the same thing.

If you're even sharper, you'll have caught 'cosmological' – yeah, the nature and evolution of the entire universe over all time somehow explains a trend that began in the past century and is confined to this one planet.

Also, I've never heard of thermology before, and neither has my spellcheck. Thermodynamical, yes, but thermological? And note 'effect' where they meant 'affect'.

I try to present the alternative arguments. I really do. The elected officials of an entire state, and their respective advisers, wrote this 'argument' – there aren't many higher bodies I can turn to. But they're just making this too damn easy.

No, really. I’ve looked as hard as I can. The front page articles on globalwarminghysteria.com[5] are:

  • A ‘prayer’ to carbon
  • An article bashing scientists for admitting a little bit of uncertainty (it's not enough to say that the earth is warming, or that bad things will happen – they want us to tell them exactly when the mass extinction will happen…)
  • Whining that the media is ignoring them (they also complain that the Department of Energy and the Environmental Protection Agency shouldn't exist, and then go on to make the classic argument that cold winters disprove climate change – but weather is not climate. Climate is weather, averaged and analyzed over the entire earth over some period of time, and it is a far better indicator of trends than the weather in one place in one year. These trends have been apparent for over fifty years; see the sketch after this list.)
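
To make the weather-versus-climate point concrete, here's a toy sketch in Python – entirely made-up numbers, purely to illustrate how averaging pulls a trend out of noisy weather:

    import random

    random.seed(42)
    # Pretend climate: a +0.02 C/year warming trend buried under
    # +/- 0.5 C of year-to-year weather noise (all numbers invented).
    temps = {y: 0.02 * (y - 1960) + random.uniform(-0.5, 0.5)
             for y in range(1960, 2010)}

    # Any single year can easily be colder than the one before it...
    print(f"1999 minus 1998: {temps[1999] - temps[1998]:+.2f} C")
    # ...but decade averages expose the underlying trend.
    for start in range(1960, 2010, 10):
        decade = [temps[y] for y in range(start, start + 10)]
        print(f"{start}s average anomaly: {sum(decade) / len(decade):+.2f} C")

One cold winter tells you about weather; fifty years of averages tell you about climate.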

In fact, the most promising Google result was one titled “the best argument against global warming” by Dr. Peter Gleick[6] – but no luck. That article did tell me why my Google search for “climate change AND evidence” wasn’t turning up any bits of anti-climate change evidence – as the writer puts it, “Oh, right. There isn’t one.”

A blog called Pajamasmedia[7] – truly a professional, scientific source – argues that glacial melting isn't evidence of global warming. Sorry, that's wrong. Melting ice takes energy – glaciers are a sort of buffer that soaks up thermal energy, and melting ice takes far more energy than heating the equivalent amount of water by a few degrees. This is why your iced coffee stays nice and cold for a good long time while the ice is solid, but warms up quickly once it's all melted. It's also why we care so much about glacial melting: if the glaciers weren't there, the average temperature increase would probably be much more dramatic. Otherwise the article was pretty much right, but I had to point out that melting ice does in fact take energy.
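
To put rough numbers on that buffer – standard physical constants, my own back-of-the-envelope arithmetic:

    # Why melting ice is such a large energy buffer.
    LATENT_HEAT_FUSION = 334.0   # J to melt 1 g of ice at 0 C
    SPECIFIC_HEAT_WATER = 4.18   # J to warm 1 g of liquid water by 1 C

    equivalent_warming = LATENT_HEAT_FUSION / SPECIFIC_HEAT_WATER
    print(f"Melting 1 g of ice takes as much energy as heating 1 g of "
          f"water by {equivalent_warming:.0f} C")   # about 80 C

In other words, every gram of ice a glacier loses has absorbed as much heat as it would take to warm that gram of meltwater by about eighty degrees.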

Another list of the arguments against global warming[9] includes:

It is argued that global warming is a minor issue because of which major issues like HIV/AIDS, Nuclear proliferation and poverty are not devoted their deserved time and resources.

This is silly. We can focus our considerable scientific power on more than one crisis at once.

Vested interest of scientists. It is argued that scientists exaggerate the effects of global warming because they receive funds from environmental companies.

People really like this argument. See, this is why scientists have to publish in peer-reviewed journals and submit all their evidence to the review of any interested parties. If global warming really were a conspiracy, there would be far more fame and fortune in exposing it and outing every scientist as a tool of environmentalists. It hasn't been done because there's no evidence of it – deniers can only spread rumors rather than provide facts.

Unreliability of computer climate models. It is argued that these models are not able to predict tomorrow’s weather. So how can they predict long-term climate change?

Climate models are not the only line of evidence, nor even the primary one. We see a clear past trend of warming that does not depend on climate models at all. Of all the graphs I've shown above, not one is a computer-based future climate model.

There are other factors involved in global warming. It is argued that human activities are not the only cause of global warming.

Not the only cause, but certainly a cause. CO2 is rising, and it is our fault. Regardless of the cause, it’s our problem, so this argument is meaningless.

Newspapers sensationalize global warming in order to sell. It is argued that newspapers distort the picture of global warming when actually that is not the case.

So don’t read newspapers. Read journals and reports and evidence. That’s what this is about, anyway, the evidence, not what some writer at some paper thinks.

Scientists have made wrong predictions before. It is argued that science and scientists are not always right. Perhaps they have made an error in their calculations or drawn incorrect conclusions on available evidence.

“Scientists have made wrong predictions before” is not a premise that leads to the conclusion “all predictions made by scientists are wrong”. Even if there were a chance that the scientists are wrong, the significance of the evidence and the severity of the predictions must be taken seriously. Besides, this is why the peer-review process works: every scientific work on climate change is published with full methodologies, and all data is freely available. It is highly improbable, though not impossible, that the conclusion is entirely false, given that no actual evidentiary counterargument has been produced.

The science of global warming is not proved. It is argued that we don’t have long term historical records of weather.

Our first-hand records are long-term enough to show not only increasing temperatures, but also the beginning of that trend: we can see that temperatures were fairly steady, and then began to increase. Data further back is available from tree rings, ice cores, and other highly reliable and self-consistent methods.

Water vapor plays a major part in global warming. It is argued that man made emissions like carbon dioxide has only minor effects.

Water vapor is indeed the primary greenhouse gas in our atmosphere, but carbon dioxide is second, ranked by total effect with concentration taken into account, and even a small percentage increase in carbon dioxide can produce a detectable increase in average temperatures. A scientific study (which I am unable to acquire and review firsthand) states that carbon dioxide has between 9% and 24% of the total impact on the greenhouse effect. That's not a 15-point margin of error: the high number assumes no overlap with the absorption of other gases, the low number assumes complete overlap, and the actual figure is somewhere in between.

The global warming is a natural phenomenon. Man has no role to play in it. Only our environment is responsible.

Proof? Or just plugging your ears and closing your eyes?

The temperature increase is very small especially when it is spread over a century.

But it's very large when compared to historical data, where the rate of change is less than a hundredth of the rate we see today – and if the increase continues, we'll see even larger increases in the future.

The earth was warmer before. That did not have harmful consequences on humans.

It certainly did. The effects of global temperature change are significant, and here we are pushing it much faster than it would ever go on its own. When the temperature changes slowly, ecosystems and living species have time to adapt to the change. When it changes in only a few generations, the system can't stabilize, and life can't adapt quickly enough. We are already seeing the effects.

The increase in temperature will help plants grow in currently cold and uninhabitable areas.

A small bonus, compared to a very large price.

The increase in the level of carbon dioxide will stimulate plant growth.

Again true. It will also stimulate the acidification of the oceans and the greenhouse effect.

Steps to limit global warming will decrease economic growth and hurt the poor.

That's hardly sufficient reason. Ignoring the issue will cause a far sharper economic decline, due mostly to a major extinction.

People in fossil fuel industries will lose their jobs.

A small price to pay for our planet. Why do people insist on using their personal preferences to try to counter science? The universe doesn't care what you believe.[10]

Climate change has been more rapid in the past.

Proof? This is perhaps the strongest argument they have – short of actually proving our evidence wrong – but only if they can demonstrate that this time is no different from the past, and that is not something they can do.

Rise in carbon dioxide levels has always come after a temperature change and not before.

Then that's a sign that this time is different. Carbon dioxide can certainly cause a temperature increase – we know that for sure – and a temperature increase can also cause carbon dioxide to increase, as the Mars analogy showed before. The feedback would be less extreme here, since Earth has no great stores of frozen carbon dioxide to melt; it works mostly through warming oceans, which hold less dissolved carbon dioxide. This may be the first time that the temperature increase is caused by the greenhouse gases, rather than vice versa – which makes it all the more urgent.

The upsurge in solar activity in the sun has caused global warming.

Proof? We can very easily measure the exact amount of energy we absorb from the sun, and there’s nothing there that explains the warming trend.

Here, I have some proof[11, p. 17].

This graph estimates the increase or decrease in energy absorbed by the earth from each source. Greenhouse gases and ozone are obvious. Surface albedo refers to the percentage of incoming light that is reflected back, here changed by land use or by black carbon covering up reflective snow – black things absorb light and heat up, white things reflect it. The solar irradiance bar refers to the increase or decrease in energy from the sun due to effects of the sun itself, such as sunspots. As the chart shows, even at the low end of their error bars, the anthropogenic causes exceed the natural solar differences.
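
For scale, here's that comparison in Python, using the headline values as I read them off the chart – I'm quoting approximately, so check [11] for the authoritative numbers:

    # Radiative forcing since 1750, in W/m^2: (best estimate, low, high).
    # Values approximate, as read from the IPCC AR4 figure cited above.
    forcings = {
        "net anthropogenic": (1.6, 0.6, 2.4),
        "solar irradiance": (0.12, 0.06, 0.30),
    }

    low_anthro = forcings["net anthropogenic"][1]   # least favorable human estimate
    high_solar = forcings["solar irradiance"][2]    # most generous solar estimate
    print(f"anthropogenic forcing, low end: {low_anthro} W/m^2")
    print(f"solar forcing, high end:        {high_solar} W/m^2")
    # Even the lowest anthropogenic estimate is double the highest solar one.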

One last thing before I sign off. No individual weather event has any single cause. Whether you point at global warming, natural weather patterns, or solar flares, any single effect powerful enough to create a storm on its own would be more powerful than anything we're predicting. Global warming is adjusting the climate, not the weather, and one cold winter isn't evidence against it – it's evidence that weather fluctuates.

References:

[0] Taken verbatim from http://en.wikipedia.org/w/index.php?title=Global_warming&oldid=383630557

[1] The 2009 report is the version cited, page numbers are from the PDF version which can be downloaded at http://www.ncdc.noaa.gov/bams-state-of-the-climate/

[2] South Dakota State Legislature 2010 House Concurrent Resolution 1009, accessed from http://legis.state.sd.us/sessions/2010/Bill.aspx?File=HCR1009P.htm

[3] Unfortunately, the best source I can find on this topic at the moment is a YouTube video, accessed at http://www.youtube.com/watch?v=Py2XVILHUjQ. The video begins to get into the topic in question around 4:50, but I would recommend that you review the entire video.

[4] http://discovermagazine.com/2009/jun/30-state-of-the-climate-and-science/?searchterm=climate%20change%20evidence

[5] http://www.globalwarminghysteria.com/

[6] http://www.sfgate.com/cgi-bin/blogs/gleick/detail?entry_id=58962

[7] http://pajamasmedia.com/blog/what-is-%E2%80%94-and-what-isnt-%E2%80%94-evidence-of-global-warming/

[8] http://blogs.telegraph.co.uk/news/jamesdelingpole/100017393/climategate-the-final-nail-in-the-coffin-of-anthropogenic-global-warming/

[9] http://www.buzzle.com/articles/arguments-against-global-warming.html

[10] I was trying to figure out where I remembered this quote from – I tried a Google search, the Wikiquotes for Carl Sagan, Albert Einstein, and a host of others – until I realized it was actually from the webcomic xkcd. I strongly recommend you have a look at http://xkcd.com/154/.

[11] http://www.ipcc.ch/publications_and_data/publications_ipcc_fourth_assessment_report_synthesis_report.htm page numbers from “full report” PDF
