
Entries by tag: ethics

Lately I have been much enjoying the blog Slate Star Codex, which treads in sparkling prose much of the same rationality, ethics, cognitive science, &c ground that Less Wrong has gotten bad about stomping into dittoheady mud. By which I mean it's actually good and stuff. One recent post sparked off some recollections from, of all things, phonology.

"But, Meredith," I hear you say, "what could the study of how sounds are composed into syllables in different languages have to do with whether people are inherently pretty decent or inherently pretty awful and just want to be seen as nice?" Well, I last cracked a phonology text lo these ten and a half years ago -- you will find posts about it on this very blog if you look back that far -- so I may be off about some of the details, and the field has doubtless moved on despite my inattention. (I welcome correction from practicing linguists [q_pheevr? kirinqueen?], more attentive students, &c.) But here goes.

One of the common underpinnings of the various phonological theories that I studied in undergrad and grad school[1] is the notion that every syllable, word, &c that is spoken has an underlying representation[2] -- i.e., a mental representation of a sequence of sounds to be produced, some abstractable piece of input for one of the state machines on the composed chain that leads from brain to vocal tract. The output of this state machine is (presumably) the sequence(s) of nerve impulses that make your vocal tract do the necessary to make the sounds you wanted to say -- but the sounds articulated (the surface representation) will vary predictably from the underlying representation. The job of a phonologist is to characterise languages in terms of these transformations, ideally in the most compact (or, as both linguists and computer scientists prefer to say, elegant) way possible.

Here's a concrete (and classic) example: English pluralization. The regular plural affix in English is -s, and in cases such as cat → cat-s or top → top-s, indeed the phoneme produced in the surface representation is /s/. But what about dog → dog-s, pronounced /dɔgz/? Or toy → toy-s, pronounced /tɔɪz/? You get the picture. So this sort of thing got formalised in the 4th century BC for Sanskrit, but the West only got round to working it out starting in the mid-20th century after lots and lots of descriptive work from people like the Grimm brothers (yes, really). The theoretical frameworks of the '60s and '70s (of which the several I learned about, and have mostly forgotten, grew out of the work of Noam Chomsky and Morris Halle) were all fairly rule-oriented in the way that writing software is rule-oriented, and they all aimed to give linguists the ability to produce a complete description of the rules necessary to produce all underlying → surface transformations for whatever language they happened to be studying.
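Since I said these frameworks are rule-oriented in the way software is rule-oriented, here's the plural example as toy code -- a drastically simplified sketch, with made-up symbol sets standing in for real natural classes, not an actual phonological analysis:

```python
# Toy rule-based phonology: derive the surface form of an English plural
# from the underlying form stem + /z/. The symbol sets below are
# simplified stand-ins for real natural classes.

SIBILANTS = set("szʃʒ")    # sibilants trigger vowel epenthesis
VOICELESS = set("ptkfθ")   # voiceless non-sibilants trigger devoicing

def surface_plural(stem: str) -> str:
    last = stem[-1]
    if last in SIBILANTS:
        return stem + "ɪz"  # bus -> /bʌsɪz/: a vowel is inserted
    if last in VOICELESS:
        return stem + "s"   # cat -> /kæts/: /z/ devoices after voiceless
    return stem + "z"       # dog -> /dɔgz/, toy -> /tɔɪz/: /z/ surfaces

print(surface_plural("kæt"))  # kæts
print(surface_plural("dɔg"))  # dɔgz
print(surface_plural("bʌs"))  # bʌsɪz
```

Two little rules cover the regular cases; the real frameworks spent their effort on how hundreds of such rules interact and in what order they apply.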

By now you may very well be saying, "Where do these underlying representations come from, anyway?" I know I am; it's kind of amazing how much clarity one loses about a field of study when one hasn't touched it in a decade. That said, the Chomskyan family of theories has often been criticised for coming up with "just-so stories" about what goes on between brain and vocal tract (the work of Steven Pinker notwithstanding; let's just say there is a lot of ground still to cover), so it's a good thing we're segueing to optimality theory now[3]. Optimality theory, which came on the scene in 1991, still relies on this notion of underlying representation, but it posits that instead of a however-intricate-it-needs-to-be spiderweb of rules to describe every little edge case of how pronunciation rules interact together for a given language to map a single input to a single output, there's a ranked set of constraints which, applied against the set of all possible candidate surface representations available at the time (which could, in principle, be any old bullshit your brain decides to come up with -- we are talking about a massively parallel computer here), selects a "least-bad" candidate which is then vocalized. The set is the same for all languages, but the ranking differs from language to language. So now language acquisition (you learn the constraint ranking for the language you're learning) and linguistic typology (linear edit distance between constraint rankings), oh and also phonology, fall out of one theory, albeit one that still needs some empirical validation.
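The "ranked constraints pick the least-bad candidate" machinery is compact enough to sketch in code. The constraints, candidates, and violation counts here are toy examples I invented for illustration, not anything from the actual OT literature:

```python
# Minimal Optimality Theory evaluator: candidates compete against a
# ranked list of constraints, and the winner has the least-bad
# violation profile, compared from the highest-ranked constraint down.

def devoicing(underlying, cand):
    # Markedness (looks only at the candidate): penalize a voiced /z/
    # immediately after a voiceless stop.
    return sum(1 for a, b in zip(cand, cand[1:]) if a in "ptk" and b == "z")

def faithfulness(underlying, cand):
    # IDENT-IO-ish: penalize every deviation from the underlying form.
    return (sum(a != b for a, b in zip(cand, underlying))
            + abs(len(cand) - len(underlying)))

def eval_ot(underlying, candidates, ranking):
    # Lexicographic tuple comparison: any violation of a higher-ranked
    # constraint outweighs all violations further down the ranking.
    return min(candidates, key=lambda c: tuple(con(underlying, c)
                                               for con in ranking))

# English-like ranking: devoicing outranks faithfulness, so underlying
# /kætz/ surfaces as "kæts" rather than staying faithful.
print(eval_ot("kætz", ["kætz", "kæts", "kætɪz"],
              [devoicing, faithfulness]))  # kæts

# Rerank the same universal constraints and you model a different
# language, which is the typology point: here faithfulness wins out.
print(eval_ot("kætz", ["kætz", "kæts", "kætɪz"],
              [faithfulness, devoicing]))  # kætz
```

The two calls differ only in the ranking, which is exactly the claim: one universal constraint set, per-language rankings, and reranking distance as a measure of typological distance.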

So now let's talk about ethics.

Part of doing computer security is being able to think like the bad guy. It is a useful thing, when operating as a defender[4], to be able to think like an adversary, to conceive of attacks you would never yourself perform, while coming up with your defense strategies. Put another way, out of all the possible constraints on things it is possible to do with a Turing machine, developers tend to have one typology of rankings ("who would ever ask our database for anything other than what our application asks for?") and attackers a very different one. But a defender who can't adopt the attacker mindset for the purposes of risk assessment cannot be an effective defender, even if the options the defender considers during risk assessments are actions they would never take on their own. Furthermore, if the defender's model of "the attacker mindset" (in the analogy we're constructing here, the "attacker constraint ranking" that the defender uses as a temporary replacement for their own constraint ranking) doesn't comport with what attackers do, the defender won't be very effective either. So not only do you have to be able to think like a bad guy, you have to be able to do it well. A lot of people have cognitive dissonance over this (e.g., as Rob Graham points out, a particular prosecutor, judge and jury in New Jersey). I don't, but then, that's why I'm a computer security researcher.

But it goes beyond that. As you might expect of someone who thrashed Rob Graham so hard in a Cards Against Humanity game that he wrote a blog post around it, I possess a deep capacity for being a terrible person and I am totally okay with this. There have been times in the past when I have done terrible things that have hurt people. Hell, there are times today when I do terrible things that hurt people, like buy goods made in shitty working conditions, although really I have been doing my best to minimize the amount of direct personal harm I inflict on people. What I've noticed, through introspection and discussion and so on, is that by and large, the harm I bring about is through ignorance or inattention rather than intent; having realised this, my gut response has been to try to be more mindful and less of a fuckup, thereby decreasing the extent to which my fuckups inconvenience anyone else. So one could say, as one example, that I raised the rank of the constraint "be conscientious with other people's things", and that while my brain might produce the idea "juggle your housemate's coffee cups," it would fall hors de combat early on in processing due to its violation of this highly ranked constraint. However, nothing prevents me from altering my constraint ordering in a different situation where it's appropriate to do so -- like changing politeness registers for whatever culture I'm in, or taking all the filters off and optimizing for balls-out hilarious evil in a Cards Against Humanity game. When it is contextually appropriate and safe to be horrible, I can be a son of a bitch with the best of them, which is fun because being good at things is usually fun. (And by "safe" I mean "nobody gets hurt", which is usually the case in a Cards Against Humanity game apart from your illusions about how your friends spend their spare time being shattered.)

Really I guess this isn't so different from Kant's notion of the categorical imperative, but with lots of them and a ranked-choice ordering. And I also see, off in the distance, something that might be a parallel with Jonathan Haidt's moral foundations research, or which might just be an oncoming train. But it could be interesting to, say, design scenarios that require people to make moral-dilemma decisions quickly and look at the correlations between their choices and their scores along those axes.

Anyway, I'm not sure this gets us fundamentally any closer to answering whether humans are inherently good-seeking or good-appearance-seeking, because obviously there's no objective way to evaluate what constraint ranking a person is using, or whether a person is telling you the truth about their self-reporting of the constraint ranking they're using, or even whether they're right about their self-reporting. But it has been of practical use to me, in the sense that I don't feel any particular cognitive dissonance (e.g., revulsion) when my brain suggests particularly horrific or vile responses to stimuli; when I have the time to think about it, at least, if these things register at all they register as "considered and rejected," as a neat little monadic package. I suspect that it's also an instance of the "I made it but that doesn't mean it's part of me" distinction that I have also found of considerable utility in the last year and change, but that is another topic for another time.

[1] The University of Houston and the University of Iowa were both Chomskyan programs when I went through them; I learned a little about head-driven phrase structure grammar but that was about it as far as exposure to other theoretical frameworks. I know all the cool kids do statistical everything these days; I work for the world leader in the field, turning research code into production code, so I don't actually get all that much theory these days, and also I work in natural language understanding rather than speech recognition anyway. But these are details.

[2] I always got the feeling the whole underlying-representation thing had to do with historical similarities, especially since when you study a whole bunch of languages all in one family (which I got to do, for a lot of different families), it quickly becomes clear that a lot of the phonological parallels in languages like, say, Dutch and English are predictable because it's the same word, just said differently. But I don't remember any of my profs or any of the books or papers I read explicitly coming out and saying that. Maybe it's obvious? I don't know. It seems kind of simplistic now that I lay it out like that. My memory is kind of shit sometimes.

[3] Is that your lampshade?

[4] My research actually operates a level up from this, focusing on hardening software in rigorous ways, because I don't like having to do the same thing over and over again.
Please, for your own sake as well as the rest of ours, shut up with the damn tone argument already.

Yes, yes, Dawkins and Hitchens and Myers (oh my!) are caustic. Allow me to point out something that you seem to have failed to notice: They know that.

I point this out because, as a frequent Pharyngula reader, I enjoy reading PZ Myers' discussions of ethics from an atheist perspective. More generally, I enjoy reading discussions of ethics from a wide variety of philosophical and religious perspectives. Every metaphysical system has its own set of axioms, and it is interesting to see how different approaches to epistemology, ethics, &c. can be derived from these axioms. Myers, for example, has a debate going on with Sam Harris about whether there can be a scientific foundation for morality. (Harris says yes; Myers disagrees, and the reasons why are interesting, so go read it yourself.)

"What principles can we validly derive from the following assumptions?" is a rewarding avenue of inquiry. It is a rewarding avenue of inquiry even if one does not agree with the assumptions in question. Or the derived principles, for that matter. As just one example, Fred Clark's grueling exegesis of the Left Behind novels is fascinating stuff because it delves relentlessly into the moral fabric that underpins the books' (and authors') worldview, and provides real perspective into the Avengelical (h/t Making Light) mindset. This turns out to be an approach that has far more staying power (coming up seven years now) and promotes far more cogent discussion than mere fisking or snarking would. Picking apart how premillennial dispensationalists' principles lead them to their conclusions actually serves to make both the principles and the conclusions even more horrifying, but I hold that this is a good thing; getting a person to examine his principles -- which is probably the most effective way to get him to examine his conclusions -- generally requires, as Atticus Finch had it, standing in his shoes and walking around in them for a while. Even if you need to bleach your feet afterward.

But concern-trolling over the tone argument is boring. It's boring in discussions of racism, it's boring in discussions of feminism, and it's boring in discussions of religion or the lack thereof. Sure, it'll draw hits to your blog, but it's the search-engine optimization of debate: you might pick up some more AdSense dollars (trolling, concern- or otherwise, is certainly an effective tactic for this), but you're not adding anything new to the discussion. You could reply to the substance of your opponents' conclusions -- bonus points for tracing the logic back to an actually held position and not a strawman -- but, no, those pearls won't clutch themselves.

So, ask yourselves: do you want the hits, or do you want to add something to the substance of the ethics debate? Because you can do both at the same time, but it requires real research and real effort. Taking the time to understand someone's underlying axioms is serious work, but it is also the only way to address the substance of any philosophical perspective. Otherwise, you're just tilting at strawmen, and it's getting dull.

tl;dr: silly internets, put more work into entertaining me, kthxbai.

Your friend,
Meredith

"Lost in the Meritocracy" by Walter Kirn

In which postmodernism and the decay of the modern university from the inside out drive a man to the brink of madness and ruin, and the one thing he has evaded for the whole of his academic career is the only thing that can bring him back.

Extended discussion later, maybe. I am tempted to start dissecting this beast right now, if only for the fact that if I go to sleep now I will likely have nightmares about it. This is horror of Lovecraftian magnitude, though it more properly follows in the footsteps of Poe. The young man from Providence wrote terror stories, in which the Unspeakable Elder Things are outside not only the ken of man but also of what man can know. In Kirn's tale, as in Poe, corruption and evil emerge from within -- they are born of man, they take root in the narrator, and they suffuse and pervert one of the greatest institutions of mankind. Yet these monstrosities are classic Yog-Sothothery, for they are demons of unreason, blind gibbering egregores that wreak havoc on the narrator's very grasp of sanity. Nyarlathotep walks the halls and eating-clubs of Princeton.

Oh, and by the way, it's nonfiction.

Pleasant dreams.
The following is a discussion that bunnykitteh and I got into on an old comment thread. It's gotten long and thought-provoking, so with his permission, I'm pulling it into a new post of its own to invite discussion.



bunnykitteh:

I don't so much consider hybridization as a "GMO" issue, especially if it's the kind of thing that technically could happen in nature with no outside influence simply by two different plants growing close enough together, etc.



maradydd:

Why?

Do you believe that there's a difference between, say, corn produced by pollinating one strain of corn with another vs. corn produced by taking strain A and manipulating it to replace some of its genes with genes from strain B, such that the result is identical to the corn produced in the first example?



bunnykitteh:

Why is this very low curb something that GMO apologists fall all over themselves to trip over?

Old school hybridization is CLEARLY different in important and fundamental ways from the kind of GMO manipulations that are done today.

What you seem to be asking is: are the important and fundamental differences a result of the techniques themselves?

I don't know, and that's irrelevant. There are important and fundamental differences that are a result of what's done, not how it's done. Genetic modification is used to do things that CAN'T be done in nature... in fact, that seems to be rather the point of it all :-)

The concern that seems to go ignored and unaddressed is the fact that once these changes are released into the wild, there is no containment and no "undo". This affects me and everyone else on the planet in ways that, frankly, I don't consent to and you have NO right to inflict on me.



maradydd:

Sadly, it's a question that I have to ask. Believe it or not, there are quite a lot of people who believe that of the two scenarios I depicted (and note that here I mean exactly these scenarios), the first is safe but the second is not -- despite the fact that the outcomes are identical. Simply put, some people are irrationally terrified of any genetic modification that doesn't happen in a field, and it's simply impossible for me to have a productive discussion with someone who's hampered by that kind of fear.

It appears that you're not, though, so we can have a productive discussion. :) Really, I'm sorry that I even had to ask; I think, though, that it's better to waste time with one question up-front rather than getting into an emotionally charged debate that would take up time and would ultimately be doomed to failure. I'm glad that's not the case here.

(I am still somewhat curious as to whether you would eat produce created through the second scenario I posited, but if you think it's an irrelevant question, then let's just move on, k?)

To briefly answer your implicit question: I'm actually very concerned about the "no containment / no undo" problem, and one of my biggest concerns, particularly with respect to the DIY movement, is that experiments must not be released into the wild without rigorous testing.

We've already seen, in the US, India, and elsewhere, that GMOs can and do have unanticipated effects on existing organisms. I'm furious, for instance, at Monsanto's attempts to sue US and Canadian farmers whose crops were pollinated by windborne pollen from GMO produce growing elsewhere. This is a sociopolitical chilling effect which must be crushed -- Monsanto has no right to accuse farmers of "gene theft" when those farmers had no intention of incorporating Monsanto's sequences into their crops. Now, that's a political issue, but it has bearing on your concern as well: I believe that farmers have every right to grow the crops which they want to grow, and if a farmer wants to grow crops which don't incorporate modified sequences, he should have that right. Here's a hypothetical for you: suppose that an organic farmer discovers that his corn has been pollinated with pollen from GMO crops, and as a result, his crops can no longer be certified organic. Should the farmer be able to sue Monsanto for lost revenue? I'm inclined to say yes, although in practice that would likely be a difficult case to win. The end result is basically the same as if Monsanto burned the farmer's fields, since the farmer's crop is no longer fit for sale, but I suspect a court case would hinge on whether the farmer could prove malicious intent or, more likely, negligence.

In fact, that's probably the best way of phrasing my outlook on the subject: I think it's negligent for bioengineers and biohackers to create synthetic organisms which have the potential to affect/contaminate (e.g., via hybridization/sexual reproduction, though certainly in other ways as well) parts of the biosphere for which they were not originally intended. This is a difficult problem to solve, but the onus is absolutely on us, the engineers, to figure out how to do that. You have the right to eat only what you want to eat, and to know what you're eating. You have the right to know what's in your environment, and to avoid organisms that you want to avoid.

(As a side note, the DIY movement is certainly not focused only on synthetic biology -- that's just what's grabbing headlines. One project that you might appreciate is Jason Morrison's BioWeatherMap, an open-source effort to catalogue "local microbiospheres" -- in other words, what microorganisms are present in different areas -- and track the movement of different strains of bacteria, fungi, &c throughout the world. One of my hopes for the future is that projects like this will make it easier for us to be aware of the invisible aspects of our environment. Imagine a world where you could view not only the weather forecast, smog report, and pollen report for San Francisco, but also a bacteria and virus report! Now tie it in to GPS and add a sampling system to, say, your cellphone. This presupposes some pretty major advances in miniaturization, sampling and sequencing, &c, but I think the results would be really awesome.)

Anyway: My own work is certainly affected by the principles I outlined above, most obviously scurvy-gurt. (Let's first stipulate that scurvy-gurt will work at all. I don't know if it will.) If someone doesn't want to have scurvy-gurt in their system, preferring instead to get their vitamin C from citrus fruit and whatnot, I should respect that. There are a couple of ways I can do that. The simplest is to make the enzyme-producing bacterium dependent on some particular nutrient not normally found in the human body (but safe for humans to eat) in order to survive. There's already a dentist in Florida who's developed a synthetic-bacteria treatment for tooth decay which uses this principle: he's modified the mouth bacteria which produce enamel-damaging acids so that they no longer produce those acids, then tweaked them further so that they outcompete their acid-producing cousins. However, he's also made them dependent on an additional nutrient, which he puts in a mouthwash which patients who use this technique must then use in order to keep their new bacteria alive. If the patients don't use the mouthwash, the no-acid bacteria die, and their mouths will eventually be colonized by decay-generating bacteria again. It's actually a cute money-making technique for him, in the spirit of "give away the razors but sell the blades" -- he could give away the bacterial treatment for free, then sell the mouthwash in order to make a buck. And, in fact, that's probably what's motivated his decision. :P OTOH, it has the additional side effect of doing exactly what you want -- making sure that the bacteria don't escape the habitat they're placed in.

We could do something similar with scurvy-gurt, though that presents an ethical dilemma for me. I think it would be nothing short of reprehensible to offer a cure to a crippling and often fatal disease but effectively force people to buy a supplement for the rest of their lives. Really, that's back to square one, since in order to distribute this supplement, we'd need the same kind of supply chain we already don't have for distributing vitamin C tablets.

Well. I say that, though the real-world situation is slightly more complicated than I've just depicted it. The WHO report on scurvy that I read (which I can link for you if you want to read it) points out that scurvy is a major problem in refugee camps, despite the fact that aid packages include cereals supplemented with vitamin C. Why? In a word, culture. In the parts of the world that are having problems with scurvy, it's common to boil grains for much of the day -- and vitamin C breaks down after about half an hour of boiling. I'd really like to be able to develop a "fire-and-forget" solution -- and I won't lie, there's a part of me that thinks it's terribly racist for a person to say "I never want any GMOs to come anywhere near me, ever," when a synthetic-biology solution to a brutal, fatal disease could be saving the lives of brown people in remote countries, with the consequence that one day everyone in the world would have this synthetic organism living in their intestines cranking out an extra enzyme.

As a First World analogy, suppose someone were to develop a virophage (virus which attacks other viruses) which selectively attacked the AIDS virus, destroying it throughout the body of anyone infected with HIV. Since we're imagining, let's also make it immutable. (Impossible in practice, but since we're discussing a particular ethical question, let's just stipulate this to start.) Suppose further that this virophage also remained dormant in the host's system, ready to attack any new HIV which entered the system. This would imply that the virophage could also be transferred to other people (likely via fluid contact). Would it be immoral to create this virophage? To use it as an HIV treatment? If you had HIV, would you use it? If your partner had HIV and decided to use it, what would you do?

(FWIW: in practice, I think it might be possible to develop an HIV-destroying virophage. However, I think it would also be terribly hard to get it to remain in the body after the HIV infection was eradicated. So actually, HIV strikes me as a less dilemma-fraught example, because I don't see any practical way to make a spreadable virophage.)

Anyway, these are the kinds of ethical dilemmas I struggle with every single day: where is the balance between respecting people's freedom of choice and, simply put, stamping out pain and suffering in the world? And, thinking outside the box, is that a choice we must necessarily make? Is there a way to achieve both goals? I'd like for one to be found. I don't care whether I find it or whether someone else finds it, I just want an answer. So I hope that by having discussions like these, we can delve more deeply into the thorny social problems that synthetic biology presents than the discourse typically does, and in so doing, inspire someone to find those out-of-the-box solutions.

Oct. 14th, 2006

Advice to people at concerts who think it's a good idea to chatter away over the music: when the people behind you counsel you to kindly shut up or continue your conversation elsewhere, consider the fact that since they are behind you, they have a perfect shot at the tendon in the back of your knee, and if you have any interest in walking back to your car after the show, you should probably follow their suggestion and shut your fucking cakehole.

I'm just sayin', is all.

For the curious: no, I didn't actually cripple anyone tonight, much as I wanted to. I did, however, wait for the song to end, then lean over the jackass's shoulder and cheer at the top of my lungs directly into his ear. That got the point across.

Public service announcement

Dear pretty much everyone,

If you try to contact me on (insert IM service of your choice here) and do not receive an immediate response, DO NOT FUCKING KEEP PINGING ME.

My status indicator does not read my mind and I typically do not bother to set it. The fact that I am set "Available" should therefore not be taken as gospel truth, either of my presence or of my availability to talk. I might have left the house. I might be in the shower. I might be fixing a meal, or eating one. I might be having hot monkey sex on the bathroom counter. I might have fallen asleep and left the laptop on.

I might be in the middle of a code binge, and if so, that has doubly unpleasant meaning for you: if I take the time to reply, it knocks me out of my groove, and if you keep pestering me, it knocks me out of my groove, so either way I want nothing more than to blow your head off with my 12-gauge. Your petulant whines of "but you weren't saying anything!" don't matter. If I do not answer after one contact attempt, assume I am not there. Maybe I'm simply not there. Maybe I'm there, but I just don't feel like talking. Or, horror of horrors, maybe I'm there but I just don't feel like talking to you. Moods change; I am generally pretty antisocial, but as long as you don't continually give me reason to be antisocial in your direction, I'll feel chatty at some point.

Again: if I don't answer after one try, assume I'm not there. For all intents and purposes, it will be true.

A proto-FAQ, for questions I expect may come up:

But what if there's something I really, really need to tell you ASAP?
Email me. The address connected to this livejournal is checked frequently -- more often than I check my cellphone's voicemail -- and you can flag it "return receipt requested" so that my ISP will let you know I've read it. If "tell you" is "get your opinion on" or "have you solve for me", well, that might take longer.

I am also not particularly bothered by one-line pieces of information (useful information -- articles you think I'd find interesting fall into this category) which do not demand an immediate reply; see But, Meredith, why do you stay connected while you're working if you hate being bothered so much? for more.

But I'm lonely!
Then go out and do something with your friends. Find a new hobby, or new people who enjoy a hobby you already have. Paint your walls. Paint somebody else's walls. Volunteer for a homeless shelter, a nursing home, a hospital, or your local parks department. If all else fails, go to the animal shelter and adopt a dog.

But Meredith, why do you stay connected while you're working if you hate being bothered so much?
Because I am a deeply antisocial person with a family and a social life. IM is one of the fastest ways for my sister to get in touch with me if one of my parents is in the hospital, or for a friend to say "hey, a bunch of us are going to the movies later, give me a call before 7 if you want to meet up".

IM is also a great source of work assistance for me. I'm deeply grateful that I can reach out to allonymist, yoctohedron, cipherpunk, ti94 et al when I need someone else's perspective on a code problem, and I happily return the favour when it's within my means to do so -- and by the same token, I leave them the hell alone when they say "can't think about that right now, too busy".

That, and sometimes I just want the person-who-works-at-home version of a water-cooler break. But that happens on my schedule, not yours.

How long should I wait to ping you again, if I've tried once and you haven't answered?
Wait about an hour and try again. If I don't answer, all the above continue to apply. Note that passive-aggressive remarks, of the "why arent u there?" or "*sigh*" variety, will decrease my likelihood of responding.

But you never want to talk to me.
Then you've probably annoyed me with your constant pestering and should leave me alone for at least a couple of days. Note that if you can see me online at all, it means I haven't blocked you and am still up for talking with you at some point, but on my schedule, not yours. See But I'm lonely!, and get a dog.

Face-punching for free

leahbobet has a call to arms regarding the incident at this year's Hugos wherein Harlan Ellison groped Connie Willis's breast onstage.

I'm not especially involved in the SF community anymore, mainly due to lack of time and effort on my part, but I've also heard reports from thewronghands and other tech women about inappropriate groping/wrist-grabbing/whatever from people in the tech field. Now, I have a standard response to things like this: if someone gropes me, I punch the offending party in the face. I don't care if it's a friend, an acquaintance, or a total stranger. A friend might get the courtesy of me saying "Hey, cut that shit out," but if they keep it up, they're going to get punched. Someone at #s decided it would be funny to tickle me, and ended up getting clocked in the throat. Someone at elegantelbow's New Year's Eve party decided it was okay to fondle the scarification piece on my right shoulder, and got an elbow in the ribs for his trouble. This kind of thing doesn't happen to me often -- perhaps I'm just not the most gropeable person in the world -- but the response by now is just instinct.

However, I know there are a lot of people out there who, for some reason or another, don't feel comfortable punching an assaulter in the face. Maybe you've never hit anybody before and think it wouldn't work. Maybe you're worried about getting hit back. Maybe you were brought up to believe that Nice Girls don't do that sort of thing and haven't trained yourself out of it yet.

Well, I am not a Nice Girl, so here is my offer: if I am in your immediate vicinity and someone gropes you, I will punch them in the face for you. All you have to do is alert me -- quickly -- to the problem and the offending party. A nice loud "$name, get your hands off my $bodypart!" should suffice. This serves two purposes: one, it lets me know who to let have it, and two, it draws attention to the asshat in question and directs the condemnation of the rest of the room straight to the offending party. Shame -- particularly shame in the heat of the moment -- is a powerful disincentive toward sexually offensive behaviour. People grope other people because they think they can get away with it. Okay, yes, Harlan has drawn the ire of a large part of the SF community after the fact, but unless some form of lasting censure arises from this groundswell, he has gotten away with it. Had someone raised a ruckus at the time, he would have had to deal with a rather more acute form of embarrassment than what he's being subjected to now; the condemnation of peers who are in front of you is a lot more cutting than the condemnation of peers who are far away.

So, SF folks, do what you can after the fact; I applaud that.

But next time, someone punch the asshat in the face, okay?
