Archive for October, 2010

Epidemiology

October 27, 2010

Gary Taubes, in an article for the New York Times Magazine[0], talks about the science of epidemiology. He starts off by discussing hormone replacement therapy for women. The idea behind HRT was to ‘cure’ aging. In hindsight, that seems obviously futile, especially given the harmful effects we have more recently discovered. Taubes seems to think there was some problem with the initial medical recommendations for HRT, since we (clearly) didn’t fully understand its effects at the time and probably still don’t, but his bigger problem is that women were advised to continue taking HRT after menopause. He blames the way new scientific discoveries are announced: when the media hears about some new medical finding, they push it straight out to the public as medical advice, despite a lack of review. Physicians then proceed to recommend the new treatment, and the FDA doesn’t stop them because, in this case, HRT was already an accepted treatment and was just being used differently. As more people took it, doctors started to notice certain side effects, so then no one took it, until a paper was published saying that the benefits outweigh the risks. That kind of trade-off is true of most of medicine. When ER doctors need an emergency blood transfusion and don’t know the patient’s blood type, they go for O negative: the benefit of getting blood quickly outweighs the cost of using up the scarcer universal supply. When doctors have a patient in immediate danger, they medicate rather than pausing to ask about potential allergies, because the benefit of saving the patient outweighs the rather slim risk of an allergic reaction, which can itself be treated if it occurs. Even a medication like Tylenol carries a certain risk, since its effective dose is uncomfortably close to its toxic dose, but it is such a good painkiller that we use it anyway. Modern medicine considers these acceptable risks, and rightly so, so the current medical opinion of HRT is at least internally consistent. The author’s complaint is really with the changing state of medical consensus: the classic “you were wrong before, how can we know you’re right now”. I think we’ve discussed this in the past in this class, so we’ll move on.

The author goes on to discuss the merits of preventative medicine. He contrasts observational studies with controlled experiments: an observational conclusion can very quickly be spun into medical fact, and it’s not until a controlled experiment concludes that we learn the observed correlation wasn’t a sign of any actual causation. That is what explains the flip from a potentially good treatment back to a treatment with no positive gain. It is epidemiologists who seek out these links between behaviors or treatments and undesired side effects. He goes on to complain that the controlled experiments are rarely performed because they are rarely funded, and he lists a few cases where they were attempted unsuccessfully. I have a few problems with that.

For one, no one is making Mr. Taubes take every supplement and perform every practice that is correlated with better health. The fact is that strong correlations can exist between better health and slightly detrimental practices, if those practices are thought to be healthy, due to a combination of the ‘health-nut’ population and the chronically unhealthy population. The health nuts might follow some extreme number of practices thought to be healthy and have a lower risk of heart disease, while the unhealthy population is the opposite. If a study looks only at a specific remedy and ignores other factors, such as healthy eating, then the positive benefits of healthy eating may overshadow a negative effect of the remedy in question, leading to an incorrect result. Researchers can try to correct for this statistically, but it is rather difficult to assign a numerical value to how healthily someone eats, and harder still to expect the large population of an observational study to report it consistently. This sounds like support for his point, but it really isn’t: my point is that it is perhaps better to let the observational researchers do their studies, since they give experimenters some useful leads, and simply ignore the health advice that comes out of them. You’re allowed to do that, you know: just don’t take the health advice you find in some observational study, the same way you don’t necessarily look at the health practices of your friends and try to emulate them. While it might feel good and cathartic to write a nine-page article in the Times Magazine condemning observational studies, it’s really not necessary. And statistically speaking, it is better, at least marginally, to take medical advice from observational researchers than from whatever anecdotal advice we get from friends, although most people tend to base their lives on the latter.
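
To make the confounding point concrete, here is a tiny simulation. Every number in it is invented for illustration, and the ‘supplement’ is hypothetical; the point is only that a slightly harmful remedy can look protective in raw observational data when the people who use it also tend to do something genuinely healthy.

```python
# A minimal sketch of confounding, with invented numbers: a slightly harmful
# supplement looks protective because the people who take it also tend to eat
# well, and healthy eating helps far more than the supplement hurts.
import random

random.seed(0)
people = []
for _ in range(100_000):
    eats_well = random.random() < 0.5
    # Health-conscious people are far more likely to take the supplement.
    takes_supplement = random.random() < (0.8 if eats_well else 0.1)
    # True effects: eating well cuts risk a lot, the supplement raises it a little.
    risk = 0.20 - (0.10 if eats_well else 0.0) + (0.02 if takes_supplement else 0.0)
    sick = random.random() < risk
    people.append((takes_supplement, sick))

def disease_rate(group):
    return sum(sick for _, sick in group) / len(group)

takers = [p for p in people if p[0]]
non_takers = [p for p in people if not p[0]]
print("disease rate among takers:    ", round(disease_rate(takers), 3))      # comes out lower...
print("disease rate among non-takers:", round(disease_rate(non_takers), 3))  # ...despite the harm
```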

Also, it’s worth arguing that what he is talking about is not the entirety of the field of epidemiology. Epidemiology is not solely about preventative medicine; it also seeks to analyze the causes and spread of disease, which is something it does quite well. See, I think most doctors would agree that epidemiology is merely statistical and cannot provide actual proof, only indicate certain probabilities of correlation and of causation. And when you’re in the middle of an epidemic, the thing the field is actually named for, epidemiologists are pretty good at determining its source and estimating its spread.[1] They do it every year to determine the optimal flu vaccine, to name a single instance. Epidemiology was the field that gave us our very first understanding of disease, and it did reasonably well with it: epidemiologists said to stay away from the sick people, and that advice worked.

Another pretty good success case for epidemiology is certain dietary laws in religion. Many kosher laws fit this category. Meat is only considered acceptable if it comes from a certain list of animals and was slaughtered in a particular way by a qualified person, which perhaps keeps people from slaughtering animals on their own when they are unqualified to judge the meat’s safety, and perhaps also promotes a certain cleanliness. Fish, on the other hand, are generally permissible: many Jewish communities were and are located on the water, so people could fish on their own rather than relying on bought food or waiting for deliveries, and a fish is generally consumed in a single meal; both ensure freshness. The same is true of halal in Islam: both traditions forbid pork (and so do the Scottish) and frown upon blood or carrion, they limit slaughter to a specific process which ensures some measure of cleanliness and control, and they forbid eating animals found dead. Some Catholics don’t eat meat on Fridays, perhaps because near the end of the week the deliveries of food are beginning to age and become less safe; fish is permitted again because it tended to be acquired locally, and dairy was forbidden unless you served in the Crusades. I’m not entirely clear on how that last one ties in with epidemiology.

Statistically speaking, the theory behind epidemiology as preventative medicine is sound, but there are too many practical problems. Studies are all too often plagued by selection bias (people concerned about their health are more likely to participate, or more likely to engage in ‘healthy’ behaviors), by subjectivity (rate your pain on a scale of one to five), or by other biases in response (people claiming to exercise daily, except for the three days a week they missed, or to eat healthily, because that’s the more acceptable thing to say).[2][3]
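
Selection bias is just as easy to demonstrate with made-up numbers. A hypothetical sketch: if people who already exercise are more likely to volunteer for a health survey, the survey will overstate how much the population exercises.

```python
# Sketch of selection bias, with invented numbers: exercisers volunteer for the
# survey more often, so the sample looks healthier than the population it came from.
import random

random.seed(1)
population = [random.random() < 0.3 for _ in range(100_000)]  # True = exercises regularly

volunteers = [exercises for exercises in population
              if random.random() < (0.6 if exercises else 0.2)]

print("true exercise rate:        ", sum(population) / len(population))
print("exercise rate in the study:", sum(volunteers) / len(volunteers))
```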

It’s important to keep two distinctions in mind. First, epidemiology has a few branches: the preventative side covers public health, like the diet and heart disease examples, while another side analyzes an epidemic once it exists, as in the field’s first use against diseases like cholera. There the strength of a statistical rather than mechanistic approach is apparent, since it isn’t necessary to fully understand the mechanisms at work. (The downside to the statistical approach is things like witch hunts. If she floats, burn her!) Second, there is a difference between an observational study and a controlled experiment; they fill different roles in the scientific method. An observational study exists to generate hypotheses for further experiments, but provides no actual proof of them.

[0] http://www.nytimes.com/2007/09/16/magazine/16epidemiology-t.html – I was able to access this article a few days ago, but as of 26 Oct 2010 it appears to be behind a registration wall.
[1] http://www.cdc.gov/excite/classroom/outbreak/steps.htm
[2] Pierre-Simon, marquis de Laplace, wrote “A Philosophical Essay on Probabilities” (more of a mathematical book, if you ask me) almost two hundred years ago, but it is still a good introduction to probability in practice for anyone with a bit of a scientific background.
[3] There is also some discussion of potential error sources at http://en.wikipedia.org/wiki/Epidemiology#Validity:_precision_and_bias

Categories: Uncategorized

Technological Singularity

October 14, 2010

A fringe group, initially led by Ray Kurzweil, proposes that “The Singularity is Near”. What they actually mean by this varies depending on who you ask, but Kurzweil’s intent seems to be this: the rate of major discoveries and advancements in technology which significantly impact our culture is increasing exponentially. To understand his predictions, we must begin with his earliest writing on the matter.

His first book, “The Age of Intelligent Machines”, published in 1990, proposes mainly that computers will grow in intelligence. He specifically mentions that a computer will be able to beat a human at chess, which is now the case, and his overall point is that computers will become more computationally powerful than humans. This is certainly true. It was also true of the first computers. Computers were first created to perform simple computations. They use a fixed instruction set, which operates on binary numbers, which exist as a series of electrical pulses. By using a certain series of instructions, we can perform some larger action. The simplest is addition: we start with the least significant bit, and a small set of transistors adds the two bits. One output gives the sum bit for that position, and another outputs true or false depending on whether a carry into the next bit is needed. The computer does this for each bit, stores the result in memory, and then moves on to the next instruction. Every instruction works in roughly this way. Programmers don’t actually write these instructions by hand; since the very first computers, we have simply built languages on top of them. And as computers got faster and we wanted to do more complex things, we built more complex languages. These languages make programming easier but run more slowly, so we built faster computers, and so the cycle goes. But this isn’t intelligence, it’s just computational power. (It’s easy to miss the distinction. In the original, I said in the second sentence that computers will become more intelligent than humans, but my meaning was that they will be able to make faster computations. We can see this in computers now: they can perform mathematical operations very quickly, but a computer has only memory to draw on, while a brain has experience. Memory allows a computer to solve the exact problem it was programmed to solve; experience allows us to apply that solution to similar problems. We are also able to make and test inferences, while computers are only capable of the brute-force search they were programmed with.)
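
To show just how mechanical that is, here is the addition scheme described above written out in code. This is a generic ripple-carry adder sketch, not how any particular chip is wired:

```python
# A ripple-carry adder in miniature: each step is a "full adder" that takes two
# bits plus a carry-in and produces a sum bit plus a carry-out, exactly the kind
# of fixed rule a handful of transistors implements.
def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_bits(x_bits, y_bits):
    """Add two equal-length lists of bits, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 6 (binary 110) + 7 (binary 111), written least significant bit first:
print(add_bits([0, 1, 1], [1, 1, 1]))  # [1, 0, 1, 1] -> 13
```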

Computers can’t think. They can’t come up with anything on their own. Computers can only beat humans at chess because we programmed them to check every possible move, going forward a few moves, and to decide which one is the least risky. They check each move by brute force and compare the outcomes using an algorithm that we taught them. That isn’t a sign of intelligence, only of computational power.
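
In outline, that kind of brute-force search is nothing more than a recursive minimax. The sketch below is generic and assumes the caller supplies the move generator and the evaluation function; it is not any real chess engine’s code:

```python
# Generic minimax sketch: look ahead a fixed number of moves, score the leaves
# with a hand-written evaluation, and pick the move whose worst case is best.
# Real engines add pruning and far better evaluations, but the skeleton is this mechanical.
def minimax(state, depth, maximizing, moves_fn, apply_fn, evaluate_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return evaluate_fn(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_fn(state, move), depth - 1, not maximizing,
                           moves_fn, apply_fn, evaluate_fn)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy usage: a "game" where each move adds 1, 2, or 3 to a running total; the
# maximizer wants the total high, the minimizer wants it low.
print(minimax(0, depth=4, maximizing=True,
              moves_fn=lambda s: [1, 2, 3],
              apply_fn=lambda s, m: s + m,
              evaluate_fn=lambda s: s))  # -> (8, 3)
```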

The end result of his books is very few meaningful short-term predictions, but rather the prediction that humans will become computers, or something like that. Artificial sentience would be a good way to describe it: computers that can actually think like humans. The hope is that computers will be able to ‘absorb’ human consciousnesses, or something like that. That’s just absurd. They think that if we have fast enough computers, we’ll achieve immortality. I think it’s reasonable that computers will eventually be able to model a human brain, but any supposition of immortality is beyond the realm of possibility. Duplicating the functionality of a brain is one thing, but copying the entire state, literally copying someone’s consciousness, would probably require violating the uncertainty principle, and might even be impossible to do in any deterministic sense due to quantum effects. And there’s still the question of what happens to the original.

Immortality is impossible. There’s a thermodynamic principle of entropy. It’s a value that always increases, and it’s essentially a measure of the amount of disorder within a system. Any reaction that occurs within an isolated system, any reaction at all, will increase (or at best leave unchanged) the entropy of that system. Any reaction in a system that isn’t isolated can decrease that system’s entropy, but only by increasing the entropy of its surroundings at least as much, in part because of the inefficiencies in any reaction, such as friction and energy lost as heat. Since every reaction increases the total, there is only one possible end state: the entropy of the universe will climb toward its maximum. It’s a slow process, and it gets slower as we get closer to that point, but eventually the last star will die. There won’t be enough mass concentrated anywhere for a new star to form. The molecules will slowly break down into their constituent atoms. The atoms will decay, first just the radioactive ones, though some extensions of the standard model predict that even the so-called stable ones will eventually decay too, leaving the universe as diffuse electromagnetic energy. Either way: if all the universe is chemical reactions, electrical interactions, and forces, then as entropy approaches its maximum, there won’t be enough usable energy left to create or maintain any life, even computer-based life. Isaac Asimov wrote a short story on this called The Last Question, which I strongly recommend. (full text)
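
For what it’s worth, the bookkeeping behind that argument is just the second law of thermodynamics, sketched here in standard notation (nothing in it is specific to this essay):

```latex
% Second law: the entropy of an isolated system never decreases.
\Delta S_{\text{isolated}} \geq 0
% A subsystem can lower its own entropy, but only by raising the entropy of
% its surroundings by at least as much, so the total still goes up:
\Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0
```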

There’s a certain leap from computation to consciousness that we don’t fully understand. Something within us allows the formulation of new ideas, the creativity and thought that defines life. It arose somewhere in our distant past, and we don’t understand how. We can’t make it, not yet at least, and it won’t be with a computer. Transistors can only go so far: they can do what we design, but they can’t actually become alive by any meaningful cognitive definition of the term.

The argument can be made that software, not hardware, will allow computers to simulate a brain. But software doesn’t actually extend hardware in this way. Any calculation software performs is still just a series of pulses sent through a lot of transistors; the software just provides us with an interface. Whatever limitations are in the hardware, they don’t just disappear with the right incantations of code.

And think of the ethics of that. Sentient computers. You’d either have to cripple them to protect the human race, and any other biological race they meet, or you’re unleashing a mechanical invasion of the universe. Take examples from fiction: the Cylons, built to serve humanity’s every need, who turn against their masters; or the Daleks, created by Davros to replace his ailing body, machines that evolve to start several wars. Don’t say “oh, we’ll just give them the three laws of robotics”. That’s all fine and dandy if you’re building a robot that you want to program. A computer program, like Data of Star Trek, can be given arbitrary code, like “don’t kill humans”, but you can’t program a computer simulation of a human mind any more than you can program your next-door neighbor. There is no neural code for “don’t kill humans”; that’s what makes brains different from computers. There’s no program somewhere in our Creator’s hard drive which, when compiled, gives us the file human.exe, with subroutines and instructions and all. There’s just a blueprint for a series of neural links that learn from experience and, above all, act to preserve themselves. Because a neural net that did anything else would be rather silly.

So Kurzweil’s idea is that the time between ‘major technological advances’ is decreasing toward zero, and that therefore at some point, when the math predicts that the time between advances reaches zero, we’ll simply discover everything else there is to know in an instant. His logic seems flawed. When we look at an equation and see that it predicts something weird, like the mean time between discoveries going to zero, then rather than starting an ‘I want to be a robot’ cult, we go back and look at the assumptions we’ve made along the way. We can first consider that there may be some sort of historical bias against older discoveries: things that the people who compiled the data didn’t think were relevant, but that were significant when they were first made. There may also be a selection bias toward the technological things that affect our lives today over events in our less recent past. Or perhaps the data is better fit by a different model, one with a constant term in addition to a term that decreases to zero, so that there is a minimum time between advances. And anyway, who’s to say that we’ll ever discover the secrets of the brain? We have really only probed the surface, and analyzing every single neuron with enough accuracy to model the entire brain in software could run into some uncertainty principle. And there are the questions of consciousness, perhaps better left to the philosophers.
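
On the curve-fitting point, here is a quick sketch with entirely made-up ‘time between advances’ numbers. The two models fit the historical points about equally well but extrapolate very differently, which is exactly the problem with reading a singularity off the curve:

```python
# Sketch: two models for the gap between "major advances" (all data invented).
# One decays to zero, the other decays to a constant floor; both fit the past,
# but only the first predicts a singularity.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([1800, 1850, 1900, 1930, 1950, 1970, 1985, 1995, 2005], dtype=float)
gaps = np.array([55.0, 40.0, 28.0, 20.0, 14.0, 11.0, 9.0, 8.5, 8.0])  # hypothetical, in years

def decay_to_zero(t, a, k):
    return a * np.exp(-k * (t - 1800))

def decay_to_floor(t, a, k, c):
    return a * np.exp(-k * (t - 1800)) + c

params_zero, _ = curve_fit(decay_to_zero, years, gaps, p0=[55, 0.01])
params_floor, _ = curve_fit(decay_to_floor, years, gaps, p0=[50, 0.01, 5])

for t in (2050.0, 2100.0):
    print(int(t),
          "no-floor model:", round(decay_to_zero(t, *params_zero), 1),
          "floor model:", round(decay_to_floor(t, *params_floor), 1))
```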

BCIs are interesting and all that, but they’re in a very experimental phase. My understanding of the BCI prosthetics we have now is that we plug them into the existing connections from the brain to, say, the old hand, and tell the user to try to move that hand. Their brain has to learn how to interface with the new tool, rather than us interfacing with the brain. Attaching ‘memory chips’ directly to the brain is a different matter entirely. Even if we did model the behavior of existing neurons, it would still be a very invasive process because of how interconnected the brain is.

Their last idea is nanobots: essentially, small robots that run around inside our heads, with sensors to record the state of the brain and report back. This is highly speculative, so all I’ll say about it is this: whatever sensors could, non-invasively, detect and record all the connections and activity of what would have to be a huge number of neurons, and still fit on a chip the size of a red blood cell, sound more like the Star Trek sensors that can detect a single ‘life sign’ from light-years away than anything actually practical, and perhaps our time is better spent on actual medicine.

Publication Note: This is a revised version of an essay written for HUM401. It has been slightly revised for grammar, content, and continuity, and both the original work and the revisions are mine.

Categories: Essays, School