After that last post, some of you may be thinking "But, Meredith, I don't think I've ever heard you talk about transhumanism before outside of the context of Vernor Vinge's novels. Are you an extropian, too? What's up with this futurism stuff?"
I used to have a lot of extropians in my social circle, and through conversations with them I arrived at a position that I jokingly referred to as "weak anthropic extropianism". Simply put, it is my opinion that while the chances of my surviving to an advanced age are quite high, and our understanding of biology, genetics and aging is advancing so fast that younger generations -- possibly even people who are alive now -- may in fact be able to extend their lives indefinitely, the question of whether I as an almost-34-year-old Western woman will have that option is already settled. Either I was born late enough to take advantage of the fruits of research into Not Dying, or I will die before this option becomes available to me. Realistically, it's probably the latter, and so I live every day with a subtle awareness of my own mortality. It's an interesting motivator: some day, you will die, and there's a lot you want to accomplish before that happens, so hop to it! Thinking about time and effort as finite resources also helps me to recognise sunk-cost fallacies, which is a nice cognitive benefit. All in all, it may not be optimistic, but at least it's adaptive.
Support for the weak anthropic extropian position comes from a variety of directions. These graphs of how cancer survival rates have improved over a period of 43 years show how dramatically and consistently the evidence-based approach with which we investigate disease has produced results in the form of living, cancer-free human beings. "Personalised medicine" is starting to become a reality; already, millions of patients are treated for rheumatoid arthritis or Crohn's disease with monoclonal antibodies, designed to inhibit an immune-system protein which their bodies have problems with. Monoclonal vaccines and other anti-infectives -- proteins tailored to knock out specific viruses -- are on the market already. Customised therapies are a pretty short bridge from there. It'll start with some weird fucking disease, probably an autoimmune disorder because that's where the Westerners are when you draw your map by morbidities, though I wouldn't be at all surprised if AIDS research heads in the direction of tailoring different therapies to different strains of the virus -- and then, armed with the discoveries in genetics (and more importantly, genetic engineering) from that endeavour, tailoring them to patients as well.
Put another way, I think we're going to find out whether it's possible for there to be a non-physically-aging Homo sapiens pretty soon now, whatever the answer actually ends up being.
But, you know, an awful lot of alchemists probably said "I think we're going to find out whether it's possible to transmute lead into gold pretty soon now, whatever the answer actually ends up being" not too long before the discipline of alchemy underwent a phase transition into the discipline of chemistry (with significant help, I might add, from the disciplines of brewing and, I shit you not, accounting. The taxation of alcohol plays an enormous role in the history of chemical engineering.) The good parts of alchemy (a lot of the equipment that was developed, and a lot of practical knowledge about things like melting and boiling points, as well as ways of keeping records of experimentation) provided a springboard for the development of a far more accurate and predictive model of the properties of natural (and, later, unnatural) substances. So even if the answer really is "no, aging is inescapable," we're going to learn -- have already learned -- a hell of a lot about how the human body works, and how it can break, and how to fix it when it breaks, and that's the kind of knowledge that can be employed to help make sure that billions of mortal human beings experience a healthier and more enjoyable ephemeral, all-too-short existence.
This position is inherently futurist, though not as dramatically so as strains of futurism that focus on climactic achievements of great technological depth. An imagined future where some people never age is the stuff of science fiction; same for one where brain-computer interfaces exist. The goals that I as a weak anthropic futurist am interested in accomplishing are ones of great scope. How can we wipe out nutrient deficiency diseases in the developing world (and among the poor in the developed world)? How can we dramatically reduce the attack surfaces of network protocols? These are things that affect people right now, and they're problems I happen to be interested in solving and think I stand a reasonably good chance at being able to affect in a positive way. So that's why I don't make an effort to hang out with the H+ crowd. I think they're great folks, I drink beer with them at conferences, I love what they're doing, I'm just working on different stuff. Your brain-computer interface is going to need protocol implementations that don't have parser vulnerabilities, for crying out loud, so Imma make sure you guys have the tools to do that, particularly since it helps out the Internet we already have.
Human health (and network health) are big problems. Billions of people, billions of computers. And one thing we've learned in the last several decades is that large-scale systems are a field of study in themselves. Sometimes people try to build predictive models of large-scale systems, and usually, when they fuck it up amazingly, everyone knows that something got amazingly fucked up (cf. the failure of predictive models that led directly to the credit crisis) even if they don't know which model failed. I don't think we really notice when predictive models -- like the one at the power plant that feeds your city, which predicts how much demand to expect and takes generating capacity on- and off-line to avoid waste or damage -- succeed. Large-scale engineering works; the very existence of the Internet provides all kinds of fascinating case studies. I think we will derive tremendous increases in human health and happiness from a better understanding of how these sorts of systems do -- or don't -- expand or contract, thrive or perish, succeed or fail as platforms for whatever we try to launch from them. Evgeny Morozov, for instance, levels a lot of criticism at "techno-utopians" who advocate spreading freedom through spreading free speech online, and Malcolm Gladwell derides "Facebook activists"; both are often perceived as gadflies, but they provide a vital service, looking for feedback that shows whether a certain large-scale behaviour produces a desired outcome or not. (Gladwell less so than Morozov, as Gladwell's target is really a strawman.) So I suppose you could say that if I'm a futurist, then my futurism is directed toward problems of scale with regard to health and information processing.
Or I could just say "yeah, I think it'd be really awesome if everyone never got sick and always had safe, reliable ways to communicate and access information from anywhere in the world, and I think we can achieve these things, so let's hurry up and make it happen already!" That probably sounds more mainstream-futurist.
Something that's been annoying me on Twitter lately is the cries of horror and outrage that 200,000 people have been evacuated from the area around the Fukushima Daiichi nuclear reactors and that iodine tablets have been distributed. This isn't something I can address effectively in 140-character splurts, so I'll talk about it here.
These complaints seem to assume that if people are being evacuated and given medical supplies, then something awful must have already happened and that the measures being taken are remediative. I have to conclude that none of these people have ever lived in an area prone to extreme geological/meteorological events. I've lived in several, which means that I have been through tornado warnings, flood warnings, hurricane warnings, and snow warnings. One common theme of all these events is that they all feature preventative measures: when it looks like there might be a flood, you start sandbagging the riverbank, when it looks like there might be a tornado you go down to the basement, when it looks like there might be heavy snow affecting travel safety you close schools. You take these measures so that if something bad does happen, you will be in the safest place possible.
In the case of a nuclear accident, preventative measures run many layers deep. Measures such as evacuation and distribution of iodine take place very early in that chain: once the chance of a containment breach N hours in the future is X%, you evacuate the surrounding area and hand out medical supplies so that anyone who needs them already has them on hand. The last fucking thing you want to be doing during a containment breach is worrying about people getting exposed, or having to put more people into a hot area in order to evacuate others, so you get everyone in a wide radius around the reactor out of harm's way very early on. This is particularly important in the case of a massive geological disaster: you have to factor in travel delays due to damage to the transportation infrastructure, so you have to tell people to get the fuck out even earlier in order to give them plenty of time to get to safety.
Happily, it looks like all the reactor cores have been successfully cooled down. Some of the Fukushima Daiichi reactors had to be cooled with seawater due to boil-off; they had to vent some steam to reduce the pressure in the reactor, which lowered the coolant level, and they replaced it with seawater with lots of boron dissolved in it. (Boron acts like flypaper for neutrons, capturing them and preventing further fission reactions.) If they had to flood the pressure vessel, then the reactor is probably a write-off. This is why you've heard people talking about boron/seawater being the "last resort" option; it is guaranteed to kill off any ongoing primary or intermediate fission reactions, but it also ruins the reactor for future use. Think of it like spraying a kitchen fire with a fire extinguisher: you've put the fire out, but you'll have to throw out the food you had to hose down.
The Fukushima Daiichi reactors are all roughly 40 years old. In the decades since they were built, boiling-water reactor designs have improved substantially. GE Hitachi's Advanced Boiling Water Reactors (ABWRs) and Economic Simplified Boiling Water Reactors (ESBWRs) don't suffer from the need for active pumping that caused much of the trouble with Daiichi 1 and 3 (which are also GE Hitachi designs, but from an earlier generation). Two ABWRs are slated to go online at Fukushima Daiichi in 2016; it would be great if the damaged reactors could be replaced with ABWRs or even ESBWRs.
Finally, I just have to give a shout-out to the engineers who built this plant in the first place. For the most part, the plant successfully survived an earthquake seven times stronger than the one it was designed for. That's some good construction work. It looks like many of the problems that arose after the coolant system failure, such as trouble connecting the second set of backup batteries and generators that were trucked in, could have been prevented by standardization that has only developed over the last 40 years; when your plants are standardized, it's much easier to make sure you're getting the right parts to the right place. I hope that Japan takes advantage of the improvements in design and advances in engineering standards that have taken place since Fukushima Daiichi went online, and gets the site back up and running better than it was before.
As most of you have probably picked up on, I'm among that minority of computer scientists who actually writes code, and often prefers it to writing papers (much to the chagrin of my advisors and colleagues). I enjoy my theoretical work, but if I spend too much time on theory alone, the joy turns hollow; I want to build things that people can use. Things that are better than what we have now. Things that are founded on sound principles and elegant proofs, that run fast, scale efficiently, are easy to use and extend, work well, and fail gracefully.
I'm often torn, rather badly, between two motivations, which I'll call the Mathematician and the Engineer. The Mathematician loves elegance and correctness, and is willing to spend exponentially increasing amounts of time to get them. The Engineer respects elegance and correctness, but is far more aware than the Mathematician of the fact that there's a lot of work that needs to get done yesterday, and just wants to get the fucking code out the door. With unit tests and regression tests and a decent build system, sure -- the Mathematician likes those too, they're a good way to demonstrate correctness -- but the Engineer recognises that nobody can use the code we don't ship, and sometimes there comes a point when you just have to take a deep breath, commit the changes and call it a release.
I submit that political decisionmaking is subject to its own Mathematician/Engineer conflicts. And it's often hard, damn hard, to switch between one perspective and the other or to find a point of harmony between them, especially when the brokenness of a system is obvious to both Mathematicians and Engineers. The argument isn't so much over whether something needs to change as over what needs to change, how much work under the hood it will take to ship something that is at least less broken and scales better, and where the balance lies between getting it more correct and getting something out the door that will fix at least some of the short-term problems.
(This post brought to you by the fact that the Haskell parser generator doesn't have precedence settings for attribute rules, the realisation that my life would be so much easier if it did, the further realisation that if I want it, I'm probably going to have to put it there myself, and the nagging knowledge that there's a less elegant way to do it.)
It's been a busy week for comments over here at Radio Free Meredith, and there have been some exciting discussions going on. I'd like to break yet another comment thread out into a post of its own: heron61 and I got to talking about some freedom-of-speech stuff, which you can go read if you want to, and I'm going to continue that discussion here.
Why? Well, it's been two weeks since we started our discussion of Claude Shannon's "A Mathematical Theory of Communication". However, instead of moving on to Part 2 of that paper, I'm going to talk about the history of telecommunications, from both a technical and an economic standpoint. I'll explain some fundamentals -- many of which were driving forces behind Shannon's research -- and we'll explore the problems of bandwidth scarcity, how they got started, how information theory has helped to address them, and why they're still relevant today. I also have a modest proposal, but that will be a separate post.
I'm going to offer a counter-proposal to heron61's proposal of reintroducing the Fairness Doctrine, but to do so I'm going to need to step back in time and give a sort of technological history of broadcast media and why it works the way it does today. Hopefully enochsmiles will also jump in and put in his $.028 (he gets paid in euros) about the telecom side of things -- he was right there in the thick of it for a lot of what was going on between the big telecom providers in the late '90s and he's got a lot of good domain knowledge.
So. In the beginning, there was radio. (Actually, the telegraph came before radio, as did the telephone, and those will be important in the big picture, but we're talking about how mass media came to be, so we're going to start with radio.) At first, radio was just wireless telegraphy using Morse code, which is still well-loved by hams like me. Every radio signal that conveys something other than a continuous tone has a bandwidth, which is literally how wide a piece of spectrum the signal needs in order to be transmitted and received effectively. Bandwidth is measured in hertz, abbreviated Hz. One Hz is one cycle per second: the wave starts at zero, rises to its peak, falls back down past zero to its trough, then rises back up to zero.
For wireless telegraphy, also known as CW, the bandwidth can be as little as 20 Hz, which is a really narrow slice -- if I'm transmitting that signal using a 28.000000 MHz carrier wave or "transmitting on 28 MHz", with a 20 Hz modulation frequency, if someone else in range is simultaneously transmitting at, say, 28.000010 MHz, also with a 20 Hz modulation frequency, our signals will interfere with each other, but if the other guy moves up to a carrier wave at 28.000040 MHz, we're fine. (Modulating one frequency with another frequency gives you what's called "sidebands" which are the sum and difference of the two signals, so the other guy has to move all the way up to 28.000040 to keep his lower sideband from overlapping with your upper sideband.) With the amount of bandwidth allocated for 10-meter (that is, signals with a wavelength of around 10 meters) narrow CW on the current ITU region 1 amateur bandplan, there's room for 1750 simultaneous 20 Hz signals: that band goes from 28.000 MHz to 28.070 MHz and each transmission would use a carrier wave that is separated from its neighbours by 40 Hz to either side. (I'm oversimplifying this a lot, because there are a bunch of things that come into play when figuring out how wide a CW signal is, but they're not hugely relevant here.)
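That channel count is easy to sanity-check. Here's a quick back-of-the-envelope sketch in Python, using the band edges and the 40 Hz carrier spacing from above (and, like the paragraph above, ignoring the messier real-world factors in CW signal width):

```python
# 10-meter narrow CW segment (current ITU region 1 bandplan), in Hz.
band_start = 28_000_000
band_end = 28_070_000

# A 20 Hz modulation frequency puts sidebands 20 Hz above and below the
# carrier, so adjacent carriers need 40 Hz of separation to keep their
# sidebands from overlapping.
channel_spacing = 40

num_channels = (band_end - band_start) // channel_spacing
print(num_channels)  # 1750
```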
I didn't explain what CW stands for yet because now I'm going to explain AM radio: the two are interlinked. CW stands for Continuous Wave. It's a carrier signal of constant amplitude and constant frequency, and to use it to communicate, you turn it on and off. Shannon talked about this a little bit in Part 1, Section 1 when he described the signals used in telegraphy: a dot is ON for one unit of time and OFF for one unit of time, a dash is ON for three units of time and OFF for one unit of time, a letter space is OFF for three units of time, and a word space is OFF for six units of time. (Clever readers may be thinking, "Hmm, is this units-of-time stuff important?" Yes, it is. We'll get to that, though it'll be a bit.) Since the signal is of constant frequency and constant amplitude, each dot or dash sounds the same when represented as a human-audible signal, i.e., a sound wave; they're just longer or shorter in duration. (Humans can't hear radio-frequency waves, but we can mathematically -- and electrically! -- map those waves down to a range of frequencies that people can hear.) But a CW signal is really just a special case of an AM, or Amplitude Modulated, signal.
Amplitude modulation just means changing, or modulating, the amplitude (informally, the height of the wave) of that carrier signal in order to produce variations in sound. This was originally invented for the telephone. When you speak into an analog telephone, the sound waves of your voice create pressure on a membrane in the mouthpiece, causing the membrane to vibrate. Those vibrations are mapped to the DC voltage (voltage, as a measurement, is just the amplitude of an electrical current) on the phone line -- the voltage rises and falls in sync with the vibration of the membrane caused by the pressure of the sound waves of your voice. The varying amplitude of your voice modulates the voltage (amplitude!) of the current on the telephone line, and that current travels over wires between you and whoever you're talking to. On the other end, the receiver translates that varying voltage into vibrations of a membrane in the earpiece, and the sound waves from that little buzzing membrane travel down the other person's ear canal to the person's eardrum, where they make the eardrum vibrate, the nervous system translates that vibration into nerve signals which the brain can interpret, and the other person hears what you're saying. Phew!
AM radio works in a very similar fashion, but instead of modulating DC amplitude (voltage!) going over a wire, we modulate the amplitude of a radio signal. The thing is, in order to transmit a voice signal, you need much more bandwidth than you do for CW. Here's why. See, the frequency of the modulation signal determines just how quickly you can raise or lower the amplitude of the carrier signal. A guy named Harry Nyquist proved back in 1928 that a wave of B cycles per second can be used to transmit 2B code elements per second (if anyone's interested, we could read that paper sometime -- for now, just remember you have two sidebands to work with), so with a 20 Hz modulation frequency we actually have 40 code elements per second or 2400 code elements per minute. For somewhat obscure reasons, the word PARIS is used as a baseline for establishing transmission speed. (Like in typing, a "word" is really "five characters".) PARIS in Morse code is [.--. .- .-. .. ...] -- so let's look at how many times we could transmit PARIS in a minute.
By Shannon's reckoning (which is a little different from how hams do it, but let's go with Shannon), a dot takes up 2 time units, a dash takes up 4 time units, the space between two letters takes up 3 time units, and the space between two words takes up 6 time units. So we've got (2+4+4+2+3+2+4+3+2+4+2+3+2+2+3+2+2+2+6) = 54. 2400 code elements per minute divided by 54 code elements per word gives us roughly 44 words per minute. That's the absolute maximum words per minute we can possibly transmit using a 20 Hz modulation frequency -- the maximum capacity of the channel. If we could key Morse faster than that -- like Ted McElroy, who could do over 70 words per minute -- we'd need a higher modulation frequency, which would eat up more bandwidth because the sidebands to either side of the carrier would have to be larger.
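Those timing rules translate directly into code. A minimal Python sketch, using Shannon's unit costs as given above (dot = 2, dash = 4, letter space = 3, word space = 6):

```python
DOT, DASH, LETTER_SPACE, WORD_SPACE = 2, 4, 3, 6

MORSE = {'P': '.--.', 'A': '.-', 'R': '.-.', 'I': '..', 'S': '...'}

def word_cost(word):
    """Time units to send `word` in Morse, including the trailing word space."""
    letter_costs = [sum(DOT if sym == '.' else DASH for sym in MORSE[ch])
                    for ch in word]
    return sum(letter_costs) + LETTER_SPACE * (len(word) - 1) + WORD_SPACE

units_per_word = word_cost('PARIS')   # 54 time units, as computed above
elements_per_minute = 2 * 20 * 60     # Nyquist: 2B code elements/s at B = 20 Hz
print(units_per_word, elements_per_minute / units_per_word)  # 54, ~44.4 wpm
```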
But this is just dots and dashes. To reproduce audio, you have to make the amplitude fluctuate much more rapidly, and the rate at which you capture those fluctuations is called the sampling rate (a term I'll use from here on out). CD-quality audio uses a sampling rate of 44.1 kHz. In AM bandwidth terms, 44.1 kHz is a huge modulation frequency. Today's FCC regulations limit the AM modulation frequency to 10.2 kHz (before 1989 it was 15 kHz), which is why AM radio doesn't sound anywhere near as good as a CD. And the FCC really hasn't allocated very much of the broadcast spectrum for commercial broadcasting; it never has.
Through all this time, the federal government has not changed the spectrum allocation for AM radio. But what about FM radio? What about television? We'll look at those as well -- but first, let's look at how FM works.
If amplitude modulation means raising or lowering the amplitude of a carrier wave to produce changes in sound, then frequency modulation is raising or lowering the frequency of a carrier wave to produce changes in sound. If you look at the waveform of an audio signal in the time domain (using, say, a program like Audacity), you'll see a sinusoidal wave of varying frequency and varying amplitude. The job of a frequency modulator is to combine this waveform with a sinusoidal carrier wave of fixed frequency and amplitude, to be sent out by a transmitter, and the job of an FM receiver is to tune in the modulated signal (by locking onto the carrier wave), strip out the carrier, and convert the modulating signal back into audio in much the same way that an AM receiver does, i.e., by turning it into a fluctuating voltage (amplitude!).
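For concreteness, here's a toy Python sketch of both modulation schemes, using a single sine tone as the message. (A real modulator handles an arbitrary message signal; the function names and parameter values here are purely illustrative.)

```python
import math

def am_sample(t, carrier_hz, message_hz, depth=0.5):
    """AM: the message scales the carrier's amplitude up and down."""
    message = math.sin(2 * math.pi * message_hz * t)
    return (1 + depth * message) * math.sin(2 * math.pi * carrier_hz * t)

def fm_sample(t, carrier_hz, message_hz, deviation_hz=75_000):
    """FM: the message shifts the carrier's instantaneous frequency;
    the amplitude never changes, which is what makes the capture effect work."""
    # Phase is the integral of instantaneous frequency; for a sine-tone
    # message that integral has a closed form.
    phase = (2 * math.pi * carrier_hz * t
             - (deviation_hz / message_hz) * math.cos(2 * math.pi * message_hz * t))
    return math.sin(phase)
```

Note that every FM sample stays within a fixed amplitude envelope no matter what the message does, while the AM samples swing with the message -- which is exactly why interference shows up so differently on the two bands.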
FM turns out to be much more robust against interference than AM, as you've no doubt noticed if you've ever listened to either of them while driving. If an AM receiver picks up two signals at or near the same carrier frequency, it can't determine which shifts in amplitude correspond to which signals, so it just demodulates everything it picks up at the tuned-in frequency and you end up hearing talk radio and the baseball game garbled together. Since the amplitude of an FM signal is constant, the signal strength is constant as long as you and the transmitter stay in the same place. This makes it easy for an FM receiver to pay attention to only the stronger of two carriers at or near the same frequency (known as the capture effect), so the weaker signal is attenuated (diminished) at the receiver, and only the stronger signal gets demodulated. This is a really nice property to have in a radio -- remember the problems back in the '20s with stations colliding on the air -- but it comes at a cost: FM requires more bandwidth than AM.
Rather than try to shoehorn FM into the 520 kHz-1610 kHz AM band, the FCC originally decided to put it in the VHF (Very High Frequency) part of the spectrum -- originally 42-50 MHz, later 88-106 MHz, and eventually the 87.8-108 MHz that it is today. That's nearly 20 times the allocation available to AM -- and for good reason, since each FM channel is 200 kHz wide, as opposed to the mere 10.2 kHz bandwidth per channel of AM. But that's only 101 channels. It was a lot back in the early days, but the spectrum filled up quickly, and broadcasters rapidly figured out that spectrum real estate was an incredibly valuable resource. So did the FCC. Licenses to operate a radio station are sold at auction, and the process is expensive and complicated. (As a concrete example, 122 licenses across the country are going up for sale this September 9th. The lowest opening bids are $1500 apiece, for stations in Peach Springs, AZ [pop. 600], Oak Grove, LA [pop. 2174], Rocksprings, TX [pop. 1285], San Isidro, TX [pop. 270], and Spur, TX [pop. 1088]. At the high end, $200,000 apiece, we've got stations in Lamont, CA [pop. 13,296] and Murrieta, CA [pop. 44,282]. So this should give you some idea of just how much Clear Channel has had to shell out for its 900-some stations in markets nationwide, both large and small.)
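Again, the channel arithmetic is quick to verify with the figures above (the 87.8-108 MHz FM band with 200 kHz channels, and the 520-1610 kHz AM band):

```python
fm_band_hz = (108.0 - 87.8) * 1e6   # FM broadcast allocation
am_band_hz = (1610 - 520) * 1e3     # AM broadcast band

fm_channels = round(fm_band_hz / 200e3)    # 200 kHz per FM channel
print(fm_channels)                         # 101
print(fm_band_hz / am_band_hz)             # ~18.5: "nearly 20 times" AM's allocation
```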
If you've read this far, you may be wondering what happened to all the information theory -- we started out so well, with Shannon and Nyquist and everything! Well, this is what Shannon had to work with back in 1948: analog transmission media over channels that could easily be polluted with crosstalk and environmental interference (i.e., weather). Building on Nyquist's work, he wanted to formally represent the notion of how much information could reliably be transmitted over a channel, with or without noise -- and in order to do that, he first had to characterise what information is.
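The measure Shannon arrived at is what we now call entropy: the average number of bits per symbol, weighted by how likely each symbol is. A minimal sketch of the formula:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 -- a fair coin carries one full bit per flip
print(entropy([0.9, 0.1]))  # ~0.47 -- a heavily biased coin carries much less
```

The more predictable a source is, the less information each symbol carries -- and the less channel capacity you actually need to transmit it.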
After Shannon's groundbreaking work, engineers suddenly had the tools to figure out ways to represent information so that it could be transmitted more reliably, e.g., error-correcting codes. Also -- and this is the important part with respect to the FCC -- they had the tools to figure out how to interleave channels over the same carrier, thereby exploiting a single carrier frequency to transport multiple independently tunable channels.
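To give the flavour of the error-correcting-codes idea, here's the crudest possible example: a threefold repetition code. It's nowhere near Shannon-optimal, but it shows the basic trade, spending extra bandwidth to buy reliability:

```python
def encode(bits):
    """Send each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority-vote each group of three; tolerates one flipped bit per group."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # noise on the channel flips one bit
print(decode(sent) == message)    # True: the error was corrected
```

Real codes (Hamming, Reed-Solomon, and their descendants) get far more protection out of far less redundancy, but the principle is the same.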
Tune in next post, where we'll talk about the history and future of television, multiplexing, multiple-access protocols, software-defined radio, and some possible futures for the broadcast spectrum -- and the role information theory plays in all of them.
My oregano has been growing like a mad thing the last few months, so I decided it was time to harvest some and make oregano oil. I don't have a stillhead or a double-ended flask, both of which are necessary for steam distillation of solid matter. However, I had ordered a 100mL separating funnel from American Science and Surplus a while back, but the stem broke off about an inch from the stopcock, so I decided that would do. So I filled the sep funnel with oregano leaves, connected the stem to a stoppered one-arm Erlenmeyer (with thermometer), tooled a cork down to where it would fit in the mouth of the sep funnel, drilled it out, bent some glass tubing, inserted it into the cork, and set up the whole apparatus with the sep funnel on its side so that the bent tubing ran ever so slightly upward, then down into a beaker to receive the distillate. Presto: one ghetto still.
About two hours of gentle boiling later, I am now the proud owner of just over a dram of oregano essential oil. I'm really pleased at how well this worked; I was afraid that the steam would all condense in the sep funnel and end up splurting a bunch of oily water into the beaker, but from what I can tell, the yield was nearly all oil. There may be some oily water left in the funnel, but I'll find out later when I drain it. (The glassware is all cooling on the stove right now. Remember, kids: hot glass looks just like cold glass!) Also, the apartment reeks of oregano, but I suppose there are much worse smells of which it could reek.
Useful side note for the kitchen chemist: if you don't happen to have any Vaseline on hand, you can lube up a one-hole stopper with Astroglide.
I can't wait for the lemon balm to grow enough for me to harvest -- then I can make citronella. And the catmint. If regular catnip is kitty pot, catmint essential oil must be kitty crack.