Creative and open scheduling plans can only do so much to alleviate the problem. There are too many uncertainties in the practice of medicine to deliver regular service on the order of a McDonald's.
Actually, I think the parallels are closer than Rangel thinks. If you've ever been to a McDonald's, you'll notice that there are times when the line is out the door and times when the staff are standing around waiting. This is happening for the same reason that patients wait a long time for doctors: there's a lot of variance in the offered load, so it's hard to get just enough staffing to serve it. McDonald's could certainly have enough staff to make sure that no one ever waited, but if they did, those people would mostly be idle and it would cost a lot. Instead, they compromise and have busy times and idle times.
I don't actually think that the load variance is that different between doctors and McDonald's. The transaction time at McDonald's is probably more constant, but the number of customers varies wildly (although probably rather more periodically than with patients). However, I suspect the real difference is that McDonald's has decided to set their staffing level comparatively higher and accept more idleness in return for faster customer service. This probably has something to do with the fact that if you get tired of waiting at McDonald's, you can always just hop over to In-N-Out. Changing doctors isn't quite so easy.
The problem is that there is a disconnect: the public at large and many patients assume that the business of health care should be like any other business with regard to exact scheduling and service. But with so many demands on the physician's time and the uncertainties inherent in health care, this simply cannot be like a trip to McDonald's. Add this to the fact that many physicians are under pressure to schedule as many patients as possible to make up for falling insurance reimbursement rates and you have the potential for some significant delays.
Every patient seen by a physician has the potential to become much more than a routine 15-minute office visit. Let's say that a patient is scheduled to see his physician for a short and routine follow-up visit. The patient tells the physician that he had some severe chest pain the morning of the visit. The physician orders an EKG in the office and it turns out that the patient is in the middle of having an acute myocardial infarction. Shortly after the EKG the patient suffers more severe chest pain and his blood pressure falls dangerously low. 911 is called and the physician and office staff are engaged in starting IVs, administering medication and trying to stabilize the patient prior to being transported to the ER. A "routine" visit turns into an hour-long emergency and the entire schedule must be pushed back.
There's an important point here. The more uncertainty there is about how long each visit will take, the harder it is to schedule efficiently. If the variance is very wide then someone's time will get wasted. However, that time doesn't have to be the patient's time. For instance, if 99% of visits take <60 min, then scheduling on 60 minute intervals will minimize patient waiting time (while maximizing doctor waiting time).
Let's take Rangel's example. Assume (as a simplification) that all visits are 15 minutes, except for 1 in 25, which is an emergency and requires an hour. This gives us a mean treatment time of 16.8 minutes. If the doctor schedules appointments on 20 minute intervals, then he'll have 24 appointments a day. On average, he'll have 1 emergency a day. Just for convenience, assume that his first patient, at 8:00, is an emergency. This gives us the following timeline:
| Patient # | Scheduled Time | Seen Time |
|-----------|----------------|-----------|
| 1         | 8:00           | 8:00      |
| 2         | 8:20           | 9:00      |
| 3         | 8:40           | 9:15      |
| 4         | 9:00           | 9:30      |
| 5         | 9:20           | 9:45      |
| 6         | 9:40           | 10:00     |
| 7         | 10:00          | 10:15     |
| 8         | 10:20          | 10:30     |
| 9         | 10:40          | 10:45     |
| 10        | 11:00          | 11:00     |
So, the doctor's schedule is back on track by 11:00 and there's no time when it was more than 40 minutes out of whack. Now, it's of course possible that there will be another emergency the same day, but the chance of any given pair of appointments both being emergencies is only 1/625, and even then the maximum patient waiting time will only be 80 minutes (if the two emergencies are consecutive).
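The timeline arithmetic is easy to check with a few lines of simulation. This is just a sketch of the example above (20-minute slots, 15-minute visits, a single 60-minute emergency in the first slot; all the numbers come from the text):

```python
# Simulate the example schedule: appointments every 20 minutes starting
# at 8:00, each visit taking 15 minutes, except the first patient, who
# is a 60-minute emergency.
def simulate(slot=20, visit=15, emergency=60, n_patients=10):
    delays = []
    free_at = 0  # minutes after 8:00 when the doctor is next free
    for i in range(n_patients):
        scheduled = i * slot
        start = max(scheduled, free_at)   # see patient when both are ready
        delays.append(start - scheduled)  # how long this patient waited
        free_at = start + (emergency if i == 0 else visit)
    return delays

delays = simulate()
print(delays)       # [0, 40, 35, 30, 25, 20, 15, 10, 5, 0]
print(max(delays))  # 40
```

Running it reproduces the timeline: the second patient waits the longest (40 minutes) and the schedule has fully recovered by the tenth slot, i.e. 11:00.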
The point here is that while high variance in treatment time means that there will be a lot of waiting, there's no requirement that that waiting be imposed on the patient. The reason that patients wait a lot is that doctors have decided that they'd rather have patients wait than be idle. (In the schedule above, the doctor would have been idle about 120 minutes on any day with no emergencies.) That's a perfectly rational response to incentives, but it's not inevitable that things be that way. If doctors were heavily incentivized not to make patients wait, this would of course reduce waiting times (while probably driving up medical fees).
Actually the physician's "excuse" that he was very busy at four other offices that day is not a "cheap" excuse. It's a problem that everyone in the medical profession faces. I don't know the specifics, but maybe this physician had other procedures that ended up taking longer than expected, further pushing back his schedule. Maybe there was an emergency or two that the physician had to take care of, and patients with chronic pain get lower priority in the schedule. Maybe the physician was covering for partners or other physicians who were on vacation and so his case load was doubled. Then again, maybe this physician is just plain greedy and schedules more patients per day than he should.
This is actually a generic problem in customer service situations. There is always some unpredictability in how long things are going to take, and that leads to conflicts over whose time is more important. Knowing how many patients a doctor "should" schedule is quite tricky.
This is probably easiest to understand by looking at a simplified model. Let's say that there are two kinds of patient visits: Long and Short. Long visits take 40 minutes. Short visits take 20 minutes. On average, 80% of visits are Short and 20% are Long. So, the average treatment time is 24 minutes.
How should the doctor schedule the appointments? The obvious answer is to schedule them every 24 minutes. But consider that if the first two patients are Long (probability = .04), then the third patient will be waiting for 32 minutes, which will no doubt make him grumpy. On the other hand, if the first two patients are Short (probability = .64), then the doctor is ready 8 minutes before the third patient's appointment and has to wait for him.
Thus, there's a tradeoff between doctor and patient waiting time. There's not really any right "should" value. The doctor can minimize patient waiting time, but only by increasing his own. His incentives are to do the opposite: make patients wait. It's true that that's "greedy", but that's the kind of greedy behavior I expect pretty much everyone to engage in. On the other hand, as a patient I want to incentivize my doctor to value my time. I'm not sure that lawsuits are the best way to do it, though. That likely creates too strong an incentive to underbook. Maybe doctors could offer "service in an hour or your money back" guarantees like tire installers do...
 We're assuming that the patients don't arrive early. If the patients arrive early then you can just think of this as their appointments being earlier, so it doesn't really affect waiting time.
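The tradeoff in the Long/Short model is easy to see in a quick Monte Carlo sketch. The visit lengths and probabilities are the ones stated above; the number of patients per day, the trial count, and the alternative slot lengths are made up for illustration:

```python
import random

# Monte Carlo sketch of the Long/Short model: 20-minute Short visits
# (probability 0.8) and 40-minute Long visits (probability 0.2), with
# patients scheduled every `slot` minutes.
def average_waits(slot=24, n_patients=20, trials=10000, seed=0):
    rng = random.Random(seed)
    patient_wait = doctor_idle = 0.0
    for _ in range(trials):
        free_at = 0.0
        for i in range(n_patients):
            scheduled = i * slot
            if scheduled > free_at:
                doctor_idle += scheduled - free_at  # doctor waits for patient
            start = max(scheduled, free_at)
            patient_wait += start - scheduled       # patient waits for doctor
            free_at = start + (40 if rng.random() < 0.2 else 20)
    n = trials * n_patients
    return patient_wait / n, doctor_idle / n  # per-patient averages, minutes

for slot in (20, 24, 30):
    pw, di = average_waits(slot=slot)
    print(f"slot={slot}: patient waits {pw:.1f} min, doctor idles {di:.1f} min")
```

Shorter slots shift the waiting onto the patients; longer slots shift it onto the doctor, which is exactly the tradeoff in question.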
So, I'm in the market for something new. When I bought my Vaio back in 1998 it was pretty much uniquely better than any other laptop on the market: small, light, relatively fast. Laptops had already been on a downward weight trend but Sony's big idea was to strip pretty much everything out of the laptop and put it in external devices, thus reducing weight down to 3 lbs.
You'd think that in 5 years things would have gotten better. As you'd expect from Moore's law, laptops have gotten faster and a bit lighter, but basically it's a wilderness of Vaio clones out there. In fact, when I first started shopping the available machines were so unattractive that I got a lot more interested in saving my Vaio--the leading option being considered is to cannibalize another Vaio's power connector.
Three companies are showing some creativity:
The Sharp is clever and the Sony is cute, but for my money at least the dominant option is pretty clearly the Panasonic. It's lighter than the Sony and I like the form factor better. Moreover, it's engineered to be especially durable--including a shock mounted hard drive. The remaining problem is to get my hands on one so that I can test FreeBSD before forking over my $2k. (I already discovered that FreeBSD doesn't boot on the TR1A, so I don't want to take any chances).
One thing I do wonder: why aren't Panasonic's machines being marketed better? The W2 really is vastly better than all the commodity laptops out there. So why haven't I ever heard of it? And for that matter, why is it called "Toughbook W2" and not "Best laptop ever"?
The answer has little to do with this specific case and everything to do with our national hysteria over rape law--a hysteria that rape accusations are now easier than ever to make and easier than ever to prove, that rape convictions can now be based on the barest assertions, that punishment for rape is harsher than for anything save murder.
But towards the end:
Ironically, empirical evidence shows that all these reforms have not significantly increased the incidences of reporting, prosecution, or conviction for rape.
Have I missed something here? If it's easier than ever to make a rape accusation and get a conviction, then why aren't rates of accusations and convictions going up? I suppose it's possible that overall rates of rape are going down in parallel, so while it's getting easier to prosecute the total number is going down, keeping the reporting and conviction rate constant, but that should be easy to disentangle with enough measurements.
A little research confuses me even further. The rape victimization rate steadily declined from 1972 to 2000, much faster than, say, assault or murder. By contrast, the number of reported rapes is way up from 1973 (though roughly constant between 1980 and 2001). Since the victimization rate is declining and the number of reports is going up, it certainly appears that the reporting rate is going up as well. (There's some inconclusive evidence of that here.) Similarly, I'm not sure why Lithwick claims that conviction rates haven't gone up. In the US, at least, the rate of convictions went from about 100/1000 to 180/1000 between 1981 and 1995.
I'd be interested in the data that Lithwick is using to say that there's no significant change. If anyone is familiar with the literature on this topic, I'd love to hear about it.
"Our biggest concern is what appears to be a resurgent epidemic in gay men," said Harold Jaffe, director of the CDC's National Center for HIV, STD and TB Prevention.
In fact, data from 25 states show the number of new HIV diagnoses among gay and bisexual men increased 7.1 percent from 2001 to 2002, marking the third consecutive year that infections have risen in that high-risk group. HIV diagnoses among gay and bisexual men have increased by 17.7 percent since they hit an all-time low in 1999.
"I don't think there is any one explanation," Jaffe said in a telephone interview. "Some of it may be related to treatment optimism: 'So what if you get infected? You can get treated.' Some of it may be related to the belief that if you are in treatment you may not transmit the virus. Some may be epidemic fatigue -- being tired of hearing about it."
"I think the most compelling reason is that people aren't scared any more. If you were a gay man in the 1980s you were scared. You had a lot of friends who were sick and dying. If you are a gay man today you don't have a lot of sick peers," Jaffe said.
This, of course, is exactly what you would expect. When AIDS was basically a death sentence, people were naturally relatively careful not to get it. Now that you have a reasonable chance of surviving--albeit with a really unpleasant treatment regimen--people are being less careful and the case rate is going up. I'm not sure that we should find this disturbing or of concern. If AIDS is less bad, it's perfectly rational for people to want to take more risk. Remember, the major objective is to stop people from suffering and dying from HIV, not to bring the case rate down to zero. Of course, it would be nice to have the AIDS rate be zero, just as it would be nice to have the flu rate be zero, but that's not the first priority.
Of course, if it turns out that people are misestimating the risk and AIDS isn't actually manageable, then we would want to educate them about the risk. However, I don't know of any evidence that that's the case, at least in this country.
Ideally you want to run backups every day. Of course, this doesn't guarantee that you'll never lose data, but it keeps the scope of the loss under control, since if all goes well you won't lose more than a day's worth of work. The problem, of course, is that you have to remember. Most people aren't very good at remembering.
The fix, of course, is to use an automated backup system. I use one called Amanda, which backs up to magnetic tape (I use 8mm Exabyte tape). The problem is that you need to change the tape every day. I'm not very good about that. If you don't change the tape, Amanda will use a "holding disk" to store the backups. This protects you well from mistakes but not so well from crashes. And of course the disk eventually fills up, so eventually you want to flush it to tape.
Last night I noticed that I hadn't flushed the holding disk in a long long time. We're talking 5 months here. The disk had long since filled up and so no backups were being done. So, today is being spent flushing the disk to tape. After that, it will take a couple of days for Amanda to get all my disks copied onto tape and we'll be back in business.
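The failure mode here is silent, which is what makes it dangerous. A cron-able watchdog along these lines would have complained months ago. This is just a sketch, and the /amanda/holding path is hypothetical (point it at your actual holding disk):

```python
import os
import sys
import time

# Report files in the holding directory older than `max_days`: a sign
# that backups haven't been flushed to tape in a while.
def stale_files(holding_dir, max_days=7):
    cutoff = time.time() - max_days * 86400
    stale = []
    for root, _dirs, names in os.walk(holding_dir):
        for name in names:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale

if __name__ == "__main__":
    # Hypothetical holding-disk location; substitute your own.
    stale = stale_files("/amanda/holding")
    if stale:
        print(f"{len(stale)} files waiting more than a week; flush to tape!")
        sys.exit(1)
```

Run it daily from cron and have the non-zero exit (or the printed complaint) land in your mailbox, and the five-months-of-silence scenario becomes a one-week-of-silence scenario.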
Of course, this whole thing is just tempting fate. Murphy's Law tells us that now would be a perfect time for me to make some catastrophic mistake that would destroy all my data, so I have to be ultra-careful over the next few days until things settle down. If I escape this little incident without any damage I'll consider myself lucky.
 You might wonder why you need a new tape every day. The answer is that the Amanda people consider it good backup hygiene. Tapes fail too and this limits your damage. So, Amanda basically only works that way. You can cheat a little bit by intentionally spooling to backup disk and then dumping to tape, but then you get out of the tape changing habit and oops...
Old backups now dumped to tape. Running new backups. If things are going to fail, now would be the time prescribed by Murphy's Law.
They all seem to be missing a key element of the report, though. In fact, the colonel in command justified the action as "an intelligence operation with detainees," and explained that the fugitive's family "would have been released in due course," regardless. In other words, he asserts that the threatening note was nothing but a bluff, to get the Iraqi general's imagination working overtime.
It's possible, of course, that the colonel was just covering his posterior, and really had, in effect, taken the Iraqi family hostage--or has at least ventured out on a slippery slope that will inevitably end in his (or another commander's) doing so. (After all, such bluffs are only effective until the first time one of them is called, and there will be an inevitable temptation at that point to "up the ante".) But while I understand the concerns of Kleiman et al., I'd personally be much more careful about jumping to conclusions before blithely asserting that a war crime had just taken place.
I don't find this analysis that convincing. It seems to me that there are two major claims being made:
For the moment, let's stipulate point (1) and assume that the colonel was in fact bluffing. Does that make this acceptable? Suppose that I hijack a plane waving a pistol which I happen to know that I won't actually fire at anyone. It seems to me that most people would call this terrorism anyway. Similarly, if we threatened the Iraqi general's family, whether or not we intended to carry it out, that seems to me to be reprehensible. 
It's also worth considering the question of whether we were bluffing. I suppose that depends on what you think the threat was. If it was torture or murder, I'm willing to stipulate that we wouldn't have done that. On the other hand, I'm quite willing to believe that we would have detained his family more or less indefinitely. The colonel Dan cites says "in due course," but that could be anything. Certainly, we held material witnesses inside the US for months at a time, so I would think that due course could easily extend that long. That seems like a pretty serious kind of threat in and of itself.
 One could argue, of course, that we weren't actually threatening his family, since we didn't actually say we would harm them or hold them indefinitely, but that strikes me as a pretty disingenuous argument and Dan doesn't make it.
We believe in a future where wealthy countries no longer profit from the suffering of others through an obscene arms trade. We believe in a future free from the constant threat of nuclear annihilation. We believe in a future where we no longer squander billions of dollars every year on unnecessary and menacing weapons. Our vision is not built on wishful thinking.
Sure sounds like wishful thinking to me. As a statement of a sophisticated view of the world this is about one step up from "war is not healthy for children and other living things". I would think that if the past 20 years had proven anything about this topic it was that the strategic logic of nuclear weapons pretty inevitably leads to more rather than less proliferation. Maybe I'm just not a creative thinker, but I can't see any even vaguely realistic scenario in which there aren't nuclear weapons. The whole point of nukes is that they give you a really dominant position over your non-nuclear adversaries. Thus, even if one somehow had a disarmament agreement the temptation to defect and hold out a few weapons and some plutonium is enormous.
I'd be a lot more well-disposed towards these guys if their position didn't seem so naive.
Col. David Hogg, commander of the 2nd Brigade of the 4th Infantry Division, said tougher methods are being used to gather the intelligence. On Wednesday night, he said, his troops picked up the wife and daughter of an Iraqi lieutenant general. They left a note: "If you want your family released, turn yourself in." Such tactics are justified, he said, because, "It's an intelligence operation with detainees, and these people have info." They would have been released in due course, he added later.
The tactic worked. On Friday, Hogg said, the lieutenant general appeared at the front gate of the U.S. base and surrendered.
Isn't making war on women and children pretty much the definition of terrorism? I guess you could argue that we're not actually making war on them. We're just, detaining them, you know, for their protection. Until their families do what we want. Unspeakable.
Leaving aside the health and environmental costs, the economic cost of all this control is daunting. A potato farmer in Idaho spends roughly $1,950 an acre (mainly on chemicals, electricity, and water) to grow a crop that in a good year will earn him maybe $2,000. That's how much a french fry processor will pay for the twenty tons of potatoes a single Idaho acre can yield.
Huh? First, why should it be surprising that profit margins are thin? Potatoes are a commodity and one of the first lessons of Microeconomics is that the price of commodities falls until it's at the marginal cost of production. If it only cost $950 an acre to grow potatoes, you can bet that the price would drop to around $1000 (modulo farm subsidies)--and consumers would be better off for it.
Moreover, is the price really "daunting"? Let's do the math. $2,000/20 tons is $100/ton or $.05/lb. A large potato weighs a little less than a pound and has approximately 250 calories. Thus, 8 large potatoes (wholesale cost $.40) can provide your entire caloric intake for a day. I wouldn't call that price "daunting". In fact, I'd call it "insanely cheap".
Just to put these numbers in perspective, realize that when you buy your potatoes at the supermarket, you pay something like $.19/lb. In other words, the vast majority of the cost of the potato to a consumer is markup after production, not the cost of production itself. For reference, the federal minimum wage is $5.15/hr, so you could pay for your entire daily caloric intake with potatoes in 18 minutes of work.
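Spelling the arithmetic out (all the prices and the 250-calorie figure are from the text; the 2,000-calorie daily intake is the usual round number):

```python
# Check the potato arithmetic from the text.
revenue_per_acre = 2000.0   # dollars paid by the french fry processor
tons_per_acre = 20
lbs_per_ton = 2000

wholesale_per_lb = revenue_per_acre / (tons_per_acre * lbs_per_ton)
print(wholesale_per_lb)     # 0.05 dollars/lb

calories_per_potato = 250   # one large potato, a little under a pound
daily_calories = 2000
potatoes_per_day = daily_calories / calories_per_potato  # 8 potatoes
print(potatoes_per_day * wholesale_per_lb)  # ~0.40 dollars/day wholesale

retail_per_lb = 0.19
minimum_wage = 5.15         # dollars/hour, federal minimum in 2003
daily_cost_retail = potatoes_per_day * retail_per_lb
print(60 * daily_cost_retail / minimum_wage)  # ~18 minutes of work/day
```

Treating each large potato as roughly a pound keeps the numbers simple; the conclusion (pennies wholesale, minutes of minimum-wage work retail) doesn't depend on the rounding.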
Looked at from this angle, planting seeds instead of clones was an extraordinary act of faith in the American land, a vote in favor of the new and unpredictable as against the familiar and European. In this Chapman was making the pioneers' classic wager, betting on the fresh possibilities that might grow from seeds planted in the redemptive American ground.

Passages like this really drive me up the wall.
Pollan seems to be unable to admit that evolution just is. He keeps wanting to anthropomorphize it:
Yet for reasons we don't completely understand, distinct species do exist in nature, and they exhibit a certain genetic integrity--sex between them, when it does occur, doesn't produce fertile offspring. Nature presumably has some reason for erecting these walls, even if they are permeable on occasion. Perhaps, as some biologists believe, the purpose of keeping species separate is to put barriers in the path of pathogens, to contain their damage so that a single germ can't wipe out life on Earth at a stroke.
What's really annoying about this kind of writing is that it represents sloppy thinking. There's a way to express the idea that Pollan is going after here in a rational way without talking about some nonexistent "Nature's plan" but Pollan would apparently rather wax rhapsodic than actually do some intellectual work or make his readers think. Contrast this to someone like Dawkins, who'd rather you understand, even if that means you have to think a bit. Whenever I write about scientific or technical topics, I try to be more like Dawkins and less like Pollan.
Delivery-Date: Sun Jul 27 21:43:03 2003
Delivered-To: email@example.com
From: Annnas@yahoo.com
Subject: hello...
Content-Transfer-Encoding: text/plain
Date: Mon, 28 Jul 2003 00:36:20 -0700
X-Priority: 3
X-Library: Indy 10.00.14-B
X-Mailer: eGroups Message Poster

Hello, I'm 22 years old female and my name is Anna. I saw your profile on the net and found to be ^^^^ interesting.. email me back at Sharon_373_Shoppers@hotmail.com if you want to exchange pictures or whatever.. Hugs, later...
Now, my question is: what's the objective of this spam? I understand the ones advertising pornographic web sites. They want me to pay to check out their porn. But what's going to happen if I respond to this e-mail? "She" is going to ask for my bank account number? Arrange to meet me and then mug me? Any EG readers have any clues?
"Nice computer you've got there... shame if anything was to happen to it..."
The way I see it there are only three ways to deal with such a situation:
All of these approaches have problems. (1) allows easy invalidation of all the votes in a given area. That's no good, since it could be used by a member of party A to invalidate all the votes in a party-B-heavy area. (2) is a problem since it allows multiple voting. (3) is a problem since it requires that your votes not really be secret.
I'd be interested in knowing what real-world voting systems do. I would have thought that your ID would be checked, but as I say that doesn't seem to be common practice, at least in some regions.
Our analysis shows that this voting system is far below even the most minimal security standards applicable in other contexts. We highlight several issues including unauthorized privilege escalation, incorrect use of cryptography, vulnerabilities to network threats, and poor software development processes.
Diebold has posted a "Technical Response" to the study. After reading both the paper and the response, I consider the response relatively lame. This paragraph is fairly representative:
A prior version of Diebold's touch screen software was analyzed while it was running on a device on which it was never intended to run, on an operating system for which it was not designed, and with minimal knowledge of the overall structures and processes in which the terminal software is embedded. In addition, many of the weaknesses attributed to the operating system on which the software was tested are inapplicable to the embedded operating system actually used by Diebold. As a result, many of the conclusions drawn by the researchers are inaccurate or incomplete with respect to the security of this particular element of Diebold's voting system.
In other words, "our stuff is in hardware and so it's secure". This is always a dangerous position to take. It's very hard to compensate for bad systems design with physical security. Sometimes it's necessary, but it's never desirable. However, as far as I can make out, in this instance the physical protections do not afford adequate security.
The JHU researchers found a large number of vulnerabilities, but I'd like to focus on what I think is one of the most serious ones: multiple voting. According to the article, the system uses smartcards to identify voters to the voting machines. However, multiple voting is prevented by having the machine tell the card to set an "I've already voted" bit. Accordingly, if you were able to make multiple copies of a smartcard, or a smartcard that ignored that command, you could vote as many times as you wanted.
Diebold's argument is essentially that the physical security measures would make it hard to make your own cards:
Similarly, unlike the personal computer on which the analysis was performed, the card reader is an integrated portion of the terminal. This prevents the signal monitoring which, it was suggested, could easily be used to capture the data needed to create a "homebrew" voting card. Further, because the actual voting booths are not the enclosed structures the researchers may be used to, it was inaccurately suggested that it would be easy to use a readily available device to capture the data without detection. The data which would be needed to create voting cards varies from election to election, so creating voting cards would be difficult without access to such captured data.
I don't find this very convincing. Basically, all that stops you from making your own cards is not knowing the machine-to-card protocol. The JHU paper suggests a number of ways to capture the machine-to-card communication, which would let you reverse engineer it. Moreover, techniques for analyzing smart cards are quite advanced. With a valid card in hand, it should be possible to make new cards. Since almost all of the security of the system depends on not being able to duplicate cards, this seems like a rather weak guarantee. What's particularly disturbing is that the system didn't have to be designed this way. Double-voting could and should be prevented by the terminal, not the smart cards.
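To make the design point concrete, here's a toy sketch of the difference between card-side and terminal-side enforcement. The classes and interfaces here are entirely hypothetical (the real protocol isn't public); the point is only where the record of "already voted" lives:

```python
# Sketch: card-side vs. terminal-side double-vote prevention.
# With card-side enforcement, a cloned or hostile card that ignores the
# "mark voted" command can vote repeatedly. A terminal that keeps its
# own record of used card IDs rejects the duplicate regardless of what
# the card claims.

class HostileCard:
    """A forged card that ignores the terminal's 'mark voted' command."""
    def __init__(self, card_id):
        self.card_id = card_id

    def mark_voted(self):
        pass  # ignore the command -- the card never records that it voted

class Terminal:
    def __init__(self):
        self.used_ids = set()  # terminal-side record of cards already used

    def cast_vote(self, card):
        if card.card_id in self.used_ids:
            return False       # rejected: this card already voted here
        self.used_ids.add(card.card_id)
        card.mark_voted()      # still tell the card, but don't rely on it
        return True

terminal = Terminal()
card = HostileCard(card_id="1234")
print(terminal.cast_vote(card))  # True  -- first vote accepted
print(terminal.cast_vote(card))  # False -- terminal catches the duplicate
```

Of course a forged card could also just present a fresh ID each time, so a real terminal would additionally need to check IDs against the roll of cards actually issued; but at least the trust no longer lives on hardware the voter carries around.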
The rest of the JHU criticisms are also pretty damning. It's true that some of the attacks require kinds of access that are hard to obtain, but a number of them sound quite practical. Certainly, based on this article, if I were asked to review a system like this for a commercial customer I would recommend against its use. Elections should be held to a higher standard, not a lower one.
It's understandable that Diebold would want to put up a smokescreen, but it's depressing that election officials don't seem to care:
In response to the Hopkins report, Linda H. Lamone, the state election administrator, said yesterday that Maryland's experience in the 2002 election gave her "absolute confidence" in the Diebold touch-screen system, already deployed in several counties.
She said the machines not only met state and federal standards but "passed the one certification process that matters most - an election."
This is the wrong way to look at things. One of the main problems with designing security systems is that they can work fine under normal use but fail catastrophically against an adversary. Now that it's known how to compromise these systems, they're no longer safe, no matter how well they performed before this knowledge became available.
 Though understandable, I guess, since they were probably the ones who approved the systems in the first place.
First, let's get the facts out of the way:
Frequent EG readers will recognize this as the makings of a classic Free Rider situation. While it's in my interest to have credit card fraud rates be low--and therefore to have merchants generally act to reduce fraud rates, even if that inconveniences customers--it's also in my interest not to be inconvenienced by merchants trying to prevent my card from being fraudulently used. Of course, the same logic applies to everyone else as well.
Now, I might prefer overall that merchants check everybody rather than check nobody. However, it's not clear to me that writing "ASK FOR ID" on the back of your card, as many people do, has much effect on the merchant's general behavior, as opposed to just their treatment of you. If that's true, then it's probably not in your best interest to do so.
One only has to read Cookwise to get an appreciation of just how much chemistry is involved in making food do what we want. Of course, the original chemistry was discovered by trial and error, but that doesn't make it any less artificial. The only thing that's different now is that we've discovered how to control things more directly. Indeed, a simple glance through Cookwise reveals an enormous number of ingredients which are subject to major and deliberate chemical processing, including bleached flour, corn starch, and chocolate.
Indeed, it's not even really right to call raw food natural, since nearly all the plants that we currently eat are heavily selected and massively different from their "natural" counterparts. Probably the most striking example is maize. The archaeological maize we find in Tehuacan ca. 5000 BC had cob sizes of about 2 cm. By comparison, cob sizes now are about 20 cm. All of that difference is due to human selective pressure. For a visual representation, see the following picture by John Doebley, which shows teosinte, maize, and their first-generation hybrid (in the middle). The ancestor of maize is probably teosinte or something like it. The middle cob resembles the archaeological specimens. So, what's natural about maize?
This post originally said that Dan Simon was arguing for a preference for natural food. In the comments section, Dan Simon says otherwise. I didn't get that from his post, but obviously he's the expert on what he meant. I've modified the text accordingly.
Assuming nothing goes wrong tomorrow, Lance will be only the second man in history to win the tour 5 times in a row. (The first was Indurain).
I'm also incredibly impressed by Tyler Hamilton. He's got a broken collarbone. That's supposedly incredibly painful, and yet he's riding through it. Unbelievable.
So, one of three things is happening:
My plan at this point is to write to TSA. What I'm hoping to get is a letter explaining their policy which I can show to airport checkpoint workers. As you can imagine, I'd find this quite satisfying. I've also got the name of the TSA supervisor I dealt with, and if he happens to get in trouble (which I think is pretty unlikely) I'm not going to cry either.
Unlike most fields, prestige publishing in infosec happens almost entirely at conferences. In security, the prestige venues are:
People do publish in journals, of course, but those publications are often expansions of conference papers, or papers which didn't get accepted at any of the prestige conferences. This dependence on venues where people have to present in person seems extremely strange in a field which is fundamentally dependent on networking and where essentially all papers are published electronically. Moreover, papers are very often pre-published on the Internet months before the conference at which they "appear".
So, what's going on here? One possibility is that the information being presented can only be conveyed in person. I don't think this is correct. In my experience, this sort of material isn't very hard to understand from the papers and the talks at the conferences aren't really that much more informative than the paper.
I favor a different theory: it's precisely because Internet publication was so easy for CS types that conferences are more important than journals. Publication serves two purposes: disseminating ideas and signalling that the work is important. However, if publication on the network is easy, then there is no need to publish in order to disseminate your work. Instead, publication serves purely as a signal that your work is worthwhile. Thus, the more selective the venue, the more useful acceptance is as a signal.
Journals generally publish once a month or at least once a quarter, whereas conferences meet once a year. Because there are fewer conference slots, they are inherently more selective and thus more attractive--which of course gets them even more submissions, making them even more selective and attractive.
Some questions to test this theory:
Thanks to Paul Syverson who catalyzed this line of thinking by pointing out that people specifically submitted to busy conferences because they were more selective.
Pretty much the first thing that the US does when fighting a war is to destroy the enemy's electronic infrastructure: phone, Internet, banking, etc. This involves a lot of intensive bombing, anti-radiation missiles, commando raids, etc. However, the demo effect provides us with a way to disrupt such infrastructure in a far easier way. In the simplest scenario, we'd air drop a Microsoft Vice President on any key site. However, with a little research, we may be able to figure out how to harness the suppressive field and project it even without the use of Vice Presidents.
Clearly, this is a worthy research project.
Every programmer knows about the Demo Effect. It's something we generally don't talk about to non-programmers because they think you're weird, but it happens just the same. Your program is running just fine in the lab. And then you go to show it to someone else and voila, it crashes. Not always, of course, but very frequently.
There are of course rational-seeming explanations: you do things differently when you're demoing, or the person you're showing it to says "why don't you click this button", and that's a code path you haven't tested. And of course code that hasn't been tested doesn't work. That's what we tell ourselves. But deep down we know that those are rationalizations. The truth is that demos are bad mojo. 
And if demos fail more often than not under good conditions, imagine the situation when you're not that prepared, you haven't practiced your demo, and your code is just a prototype. Imagine and be afraid.
 Terence informs me of the related "VP Effect". No matter how much you test your code, the first time you give it to a Microsoft VP, it will crash.
Nobel prizewinner Kenneth Arrow showed this in his "Impossibility Theorem," perhaps the single most important result in social choice and public choice theory. The theorem shows that no means of making social choices--democracy, market, or any reasonable alternative to either--can be perfect--they all necessarily involve important tradeoffs.
I don't believe that's technically correct. Arrow's theorem assumes that all you have available is preference rankings. If you can express the value of each alternative in absolute units (dollars or utils) instead of as rankings, you can obtain a consistent and unique ordering, basically by summing up the utilities.
Now, it's true that none of the aggregation techniques is perfect. Averaging and summing both produce a bunch of unpleasant outcomes in some pathological conditions. The particular problem from an Arrow's theorem perspective is that, theoretically, someone who is sufficiently rich can decide all questions. You can remove this problem by normalizing everyone's preferences, but this allows strategic voting. However, that isn't strictly relevant, since Arrow's theorem assumes that you know people's true preferences anyway. Unless I've missed something, summation with utility normalization meets all of the requirements of Arrow's Theorem.
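To make the summation idea concrete, here's a toy sketch (mine, not from any standard reference): each voter's utilities are rescaled to [0, 1] before summing, which is exactly the normalization step that blocks the "sufficiently rich voter" problem. Voter names and numbers are invented for illustration.

```python
# Toy illustration of aggregating cardinal utilities by normalizing
# and summing. With rankings alone, Arrow's theorem applies; with
# cardinal utilities, summation always yields a complete, transitive
# social ordering (no Condorcet-style cycles are possible).

voters = {
    "alice": {"A": 10.0, "B": 4.0, "C": 0.0},
    "bob":   {"A": 1.0,  "B": 5.0, "C": 2.0},
    "carol": {"A": 3.0,  "B": 3.0, "C": 9.0},
}

def normalize(utils):
    """Rescale one voter's utilities to [0, 1] so no voter can
    dominate just by reporting bigger numbers."""
    lo, hi = min(utils.values()), max(utils.values())
    if hi == lo:  # a fully indifferent voter contributes nothing
        return {alt: 0.0 for alt in utils}
    return {alt: (u - lo) / (hi - lo) for alt, u in utils.items()}

def social_ordering(voters):
    totals = {}
    for utils in voters.values():
        for alt, u in normalize(utils).items():
            totals[alt] = totals.get(alt, 0.0) + u
    # Sorting real-valued totals always gives a transitive ordering.
    return sorted(totals, key=totals.get, reverse=True)

print(social_ordering(voters))  # → ['B', 'C', 'A']
```

Note that the normalization is also where strategic voting sneaks in: a voter can exaggerate the gap between their top and bottom choices, which is why the scheme only satisfies Arrow's conditions under the theorem's assumption of truthfully known preferences.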
That's not to say that utility maximization is perfect. Straight utilitarianism can lead to some pretty weird conclusions, however, I don't think that Arrow's theorem is the issue here.
 This is analogous to Nozick's Utility Monster and in practice about as plausible.
I arrived at SFO at 7:30 AM for my 9:00 flight. The self checkin was only open for people without checked baggage--which I was not one of. Instead, I ended up standing in this ridiculously long line. After a few minutes, it became pretty clear that I wasn't going to make the 45 minute pre-flight baggage cutoff, but I figured United would have to waive it, since there were lots of other people on the same flight in my line.
When I was about 60% of the way through the line, one of the United reps came by and suggested that we might go to the curbside check. I looked outside and there seemed to be a substantial line there as well. Based on the principle of line equalization, I decided to decline this offer. Instead, I asked the rep why they didn't open the self-checkin, leading to the following conversation:
Me: Why isn't the self-check open?
Rep: We're understaffed.
Me: But it's faster, so why don't you open it?
Rep: What, for you?
Me: No, for everyone. It can handle more people.
Rep: We're doing the best we can.
Me: No, you're not.
The principle here--which I was apparently unable to get across to this guy--is that the self-check is a labor multiplier for gate agents. Thus, if you're understaffed, closing the self-checks just makes the problem worse.
To make a long story somewhat shorter, I got up to the counter at about 8:45 and the gate agent told me that I couldn't check my bag since I'd missed the cutoff. I offered to have my bag go on some other flight, but apparently the new policy is that now your bags have to fate-share with you. I was just outside the carryon limit so I figured I'd run to my flight and gate check my bag. Unfortunately, by the time I got to the gate they'd closed the plane and wouldn't let me on.
And that was just the start of the fun...
The gate agent told me and the other people who had missed the flight that we had to wait for the Service Director who would try to help us. The SD showed up and then proceeded to ignore us for about 5 minutes. Finally, he offered to book me on standby on the noon flight. I asked if there was some flight that I could get a real ticket on, as opposed to standby, but apparently not until the next morning.
Instead, I went to customer service and tried again. This time I was informed that I was at the highest standby priority for the noon flight and so I was likely to get on. Likely, apparently, but not certain, since I didn't get on that flight either, though a few people did. Another trip to customer service and another explanation of the situation ensued. This time they managed to book me an actual ticket on the 4:30 flight. Thus, I got to Hawaii about 7.5 hrs after I was originally supposed to arrive, having sat in the airport for most of my day.
As you can imagine, I'm pretty annoyed. I can sort of understand that United is in a bind--they were oversubscribed and couldn't really help me (although, as I said before, they could have improved matters substantially by being smarter about check-in). However, what really annoys me is that most of the people I dealt with seemed completely uninterested in actually helping me--or even pretending to. The Service Director I dealt with even told me that I should have gotten to the airport earlier because I had to be "prepared for any eventuality." A great customer service strategy if I ever saw one.
In typical whiny consumer fashion, I plan to write United a nasty letter. I'd boycott them entirely but if I refused to fly with every airline who'd ever screwed me over, I'd pretty much only be able to fly Southwest.
 If some line is obviously shorter, then people will move to that line. Thus, you'd expect all the lines to be about the same length (in time). Since I was about 60% of the way through my line, it seemed likely that my position was better than in the other line. However, in retrospect this may have been a mistake, since I did miss my plane.
Of course, I hate to fly. No travel at all would be just fine with me. It's just that a certain amount of travel is inevitable in my business and I want to be more comfortable when I do have to fly. Of course, being Premier still makes you a bit of a peon, but does get you slightly better treatment and lets you skirt checkin lines, which I hate. I understand it also increases your odds of getting exit row, which is pretty nice if you're tall and leggy, which I am.
Of course, all of this depends on actually getting credit for the miles you fly. I'm notoriously bad on this end, always forgetting my number and then failing to call in to get credit. However, in this case, even though I didn't register with my United Mileage Plus # when I bought my tickets to Vienna, I've amazingly got all the ticket stubs in one place and I'm feeling motivated. With any luck, I can even stop by customer service and get it all done this morning on my way to Hawaii.
I've got to tell you, though: the incentives really work. These two flights don't quite put me over the top. I'm already thinking about how I can schedule all future travel this year on United.
None of these is really ideal for me, since what I want is unlimited service for 3-4 days. I suppose I'll probably go minute-by-minute, since I can't stomach paying $65 and my usage patterns tend to be more in the hour range than the minute range. Still, at those rates I doubt I'll use more than an hour a day ($6/day). By contrast, I suspect I would have paid about $10/day for unlimited service, and most likely only used an hour or so on average. I wonder how many other people are like me.
I didn't much enjoy IETF in Vienna, but I will say that the network coverage was superb, with wireless available everywhere. There's word that the meeting I'm attending will have some kind of Internet connectivity, but maybe only on Wednesday.
After a few doppelbocks the other night, I came up with an elegant solution to this thorny problem: drunk driving lanes! After midnight, we allow drunks to drive, but only in the carpool lane, which we'd wall off to reduce the risk to non-drunk drivers. Drunk driving lanes will be one of my first initiatives when I am elected governor.
Black Ice Gatorade is, amazingly enough, this jet black liquid. I was expecting licorice flavor, but in reality the taste is...indescribable. Where previous Gatorade flavors at least had some vague resemblance to some actual fruit, Black Ice is fruity but non-specific. Tasters described it as:
Our current theory is that this drink was constructed by breaking down the flavorings of other fruits into their chemical constituents and then randomly picking some subset of the flavor components. I have no idea why it is colored black.
This product does not appear to be marketed in the US. After tasting it, that's not surprising. What I can't quite figure out is why it's marketed in Austria. For some reason, the Germans and Austrians seem to have a fondness for sports drinks flavored like cough syrup. Still, it's vaguely understandable why someone would drink Red Bull--it only comes in one flavor, so if you want to experience the unique Red Bull stimulant buzz you need to suffer through the taste. But Gatorade comes in lots of flavors, all nutritionally equal. Bafflingly, there must actually be market demand for this stuff.
Note: The title comes from the fact that the label reads:
Geschmack * Gout * Smaak
Which mean "taste" in German, French, and, I believe, Dutch. "Geschmack" actually pretty well captures the sensation delivered by Black Ice.
I'll say one thing for Austria: when I asked the hotel receptionist whether I could swim in the river, the answer was "But of course." In the States it would probably have been more like "We can't tell you and anyway you'd have to sign the following waiver." Kind of refreshing really. On the other hand, if I come down with some sort of massive case of e. coli tomorrow I may feel differently.
Fixed the substitution of "hotel" for "river" in the first paragraph. Thanks to Bill Fenner for pointing this out.
I assume the story with the ash tray is that Austrians smoke all the time, even on the bowl. They certainly do seem to smoke incessantly. When running the other day I saw a girl who looked about 12 smoking. As for the toilet brush, I guess you're expected to clean off the inspection shelf.
I'm sure I'm going to come off as a bad sport here, but after having been to Munich in 1997 and Vienna now, I'd be perfectly happy never to do a conference in a German-speaking country again. No one's tried to kill me or anything, but the minor inconveniences add up:
More complaining as other things happen that annoy me.
I walk through the detector, which doesn't go off, and then I step to the secondary search point and get the usual wand-down and they make me take off my shoes and run them through the X-ray machine. After about 5 minutes of this crap, they let me go. When I challenged the TSA flaks, they were pretty mealy-mouthed, saying that "no, you don't have to take off your shoes", but that you may be subject to secondary screening.
When I got to Vienna I checked the TSA's web site. Here is their policy.
- TSA does NOT require that passengers remove their shoes prior to proceeding through the security checkpoint.
- However, any person that alerts while proceeding through the checkpoint will be subject to a secondary screening to determine the source of the alarm.
- TSA screeners have also been trained to look for suspicious footwear that may require secondary screening regardless of whether the metal detectors alarm.
However, neither I, nor the guy behind me in secondary screening had set off the detector, and the woman running the belt made pretty clear that just wearing shoes was enough to target you for secondary screening. Apparently, "suspicious footwear" means "still on your feet".
I don't know if this is the TSA's actual policy, or just the particular people I ran into at SFO. However, if they're going to screen anyone who doesn't take off their shoes as a matter of secret policy they should just be open about it rather than pretending that it's somehow discretionary.
Now, if things were working properly, this might work anyway. I see the WTO as a sort of anti-suicide pact between governments. Governments are under enormous pressure from industries in their own countries to adopt trade barriers, and the public is in general far too economically ignorant to push much in the opposite direction (the fact that news organizations don't properly cover these issues doesn't help). However, if well-meaning governments got together and agreed to have free trade, then when industries come seeking protection, they can say "Sorry, we're not allowed. Our hands are tied." In some extreme cases where the political pressure got to be too much, they could pass the tariffs and then "lose" the case at the WTO and be "forced" to recall the tariffs. Unfortunately, since the WTO mainly has the ridiculously damaging enforcement mechanism I just described, the system only works if the government in question is actually in favor of free trade and is therefore playing to lose. I'm not at all sure that the Bush administration sees things that way.
Actually, two things are happening here.
A different worldview
The worldview I would like to see doctors adopt is something more akin to that of other service professionals. If you ask your car mechanic to do something he thinks is stupid, he's likely to tell you, but ultimately he's pretty likely to do as you ask unless it's clearly totally unsafe. He's restricted from doing things in the latter category because it's not possible for you to totally waive all liability. However, obviously this still leaves a lot of room for action since all sorts of bizarre and probably dangerous car modifications are available.
I don't think there's anything contradictory about exhorting doctors to adopt this worldview. Auto mechanics are free to adopt the worldview that doctors have, they just don't.
A market intervention
Suggesting that we interfere in the market is a different matter entirely, seemingly at odds with the "minimal intervention" principle I endorsed above. However, in this case I think it's justified because doctors are a government-regulated monopoly.
Well, maybe not a monopoly--a cartel. But look, it's not an open market. If I want to go into medicine, I can't just hang out a shingle; I've got to go to med school and take tests and stuff. And if I don't do that, the police come and arrest me for practicing medicine without a license. So, it's not just a monopoly but one that's endorsed by the state. Now, one very common characteristic of such monopolies is that they're required to serve all comers. If I don't want to take your security work, I just tell you "no". But if PG&E doesn't want to sell you electricity, too bad. They still have to.
Moreover, PG&E isn't allowed to ask what you're doing with the electricity. If I decide to start using the juice they're selling me to do fusion power research or electrocute rats in my basement, the City of Palo Alto might care but it's none of PG&E's business.  What I'm proposing is that a similar principle be applied to doctors. Within narrowly prescribed limits, doctors should be required to perform pretty much any procedure a patient wants. Now, I don't insist that any individual doctor do it. If the AMA wants to draw lots among qualified doctors, that's fine too. I just think that the cartel as a whole should be required to do it.
Now, I can understand that doctors wouldn't like this position, as it would cause them to do things they find distasteful. However, it's my view--though obviously not the majority view--that this obligation should come with their use of the power of the state to protect their monopoly.  If, on the other hand, they'd prefer to compete in a free market, then I'll be more than happy to defend their right to refuse service to anyone they like.
 Incidentally, this can be very liberating. For instance, it's considered very desirable for Internet companies to have "common carrier" status and therefore be able to refuse on principle any suggestion that they look at customers' traffic.
 Note that none of this requires that insurance companies pay for such procedures. However, if people are willing to pay...
Well, this represents an excellent use of sarcasm, but one cannot really classify these statements as criticism. Physicians do want to see high quality care. State medical boards do censure other physicians regularly. While we may not do a great job of self-policing, we are improving. I believe (as a physician) that we could develop a system which would protect patients.
A good point and it deserves a real response.
The problem here is that doctors have a conflict of interest. They want to see good medical care but they also want to not personally be punished. This distorts their incentives.
Imagine that you're some society designer. You want to set doctors' incentives (including malpractice damages) properly. (See Milgrom and Roberts for more on this general problem.) So, you settle on some system with a given set of incentives and punishments. This system is designed to produce some nonzero rate of errors and malpractice claims, because reducing them to zero is too expensive. Rather, you try to set the number of errors to some efficient level.
Now, imagine that you were a doctor. You have two additional incentives:
The first incentive is probably quite small, since even the current system enforces a fairly high quality of care and people seem pretty willing to use it.
The second incentive, however, is obviously quite large, as evidenced by the fact that doctors are making such a fuss over their current risk of being sued. Therefore we would expect that if you were designing such a system and you were a doctor you would set the punishment levels inefficiently low.
Now, of course, in DB's system doctors wouldn't be explicitly setting the punishment level too low, but since they would control the decision procedure, they have the opportunity to set it wherever they want. Thus, it is likely to drift towards an inefficient value.
But let's take a step back and address the more general issue. The basic form of DB's argument is that only doctors are qualified to make these decisions:
Physicians have the knowledge to review the chart, interview the patient and the physician. They will less likely succumb to legalese. They will less likely provide a "verdict" based on sympathy for the "victim". I truly believe that non-physicians would have great difficulty judging patient care decisions. It takes medical school, residency and continuing practice to understand many intricacies of patient care.
I'm not actually convinced this is true, but let's say that it is. That doesn't mean that doctors have to actually make the punishment decisions but merely that we need to have some way to access their specialized knowledge. In the comments section of my previous post, Kevin Dick described a path to a far superior solution, quoted in full here:
I think it's just a matter of setting up a system that gives them the right incentives. There are really two issues here, determining whether a doctor has made a mistake and then figuring out how much he should pay. Now all this is off the cuff and I haven't analyzed it too carefully, but here is an example of the type of system I'm talking about.
Assume that there's a council of reviewers. These doctors receive interactive simulations of an actual medical situation encountered by a doctor and the council members record what they would have done in his or her shoes. If a majority describe something that appears different from what was actually done, then the doctors with a different opinion rate whether the actual course of treatment was better or worse than their proposed course of treatment. You then tally up the opinions according to the weighting scheme outlined below to determine if a mistake was made.
Now here's the rub. Only some of the simulations are of malpractice cases. The rest are cases that were treated by one of the doctors in the council. This allows you to create a rating of doctors in the council who are "better" or "worse" according to their peers on the council. The better doctors get paid more for their services and their opinion carries more weight for real malpractice evaluations. Of course, there's a massive penalty for a doctor recommending a different course of action for a case that was actually his.
Determining damages is even easier. There are 3 or more independent evaluators or groups of evaluators. Each of them "bids" the amount of damages. The lowest bid wins but the winning evaluator or group gets a fraction around 10-30 percent of the damages. The bidding incents them to estimate low and the sharing incents them to rate high. With proper tuning, you could probably get outcomes in some "fair" range.
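The bidding half of the scheme is easy to simulate. The sketch below is my own toy model, not Kevin Dick's; the noise and bid-shading parameters are invented just to show the tension he describes: bidding low wins the case, but the winner's fee scales with the award, which pushes back up.

```python
import random

# Toy simulation of the damage-bidding scheme: each evaluator bids an
# award amount, the lowest bid wins and becomes the award, and the
# winner keeps a fraction of it. The competition incents low bids;
# the fee-sharing incents high ones.

FRACTION = 0.2  # winner's share of the award (10-30% in the proposal)

def run_case(true_damages, n_evaluators=3, seed=0):
    rng = random.Random(seed)
    bids = []
    for _ in range(n_evaluators):
        # Each evaluator forms a noisy private estimate of the damages...
        estimate = true_damages * rng.uniform(0.8, 1.2)
        # ...then shades the bid down somewhat to improve the chance of
        # winning, but not too far, since the fee is FRACTION * bid.
        shade = rng.uniform(0.85, 1.0)
        bids.append(estimate * shade)
    award = min(bids)            # lowest bid becomes the award
    winner_fee = FRACTION * award
    return award, winner_fee

award, fee = run_case(true_damages=100_000)
print(f"award={award:,.0f} fee={fee:,.0f}")
```

With proper tuning of the fraction, the equilibrium bid should sit in some "fair" band around the true damages, which is exactly the claim being made.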
The key to this scheme is that you're using the doctors' specialized knowledge as a test instrument, but you're not letting them directly decide on the amount of incentive pressure, thus avoiding the conflict of interest. I'm not sure how happy I am with the second half of his scheme (the damage determination), but I think that the key insight--separating the two issues--and the first half of the scheme are definitely on the right track.
The guidelines come into play in two ways. First, the insurance companies won't pay unless it's medically indicated. I've got no problem with this. This is a contracting issue. If patients want a lower threshold they can pay for it in high premiums. That said, if I were an insurance company I'd have some sort of graduated scale to avoid this kind of weird thresholding effect.
Second, doctors are refusing to do the surgery even if people pay for it themselves.
If a patient doesn't meet the guidelines, insurers won't pay for surgery and most doctors won't operate even if the patient offers to pay for it themselves.
The guidelines, the result of a 1991 National Institutes of Health consensus conference, are strict because the surgery isn't without risk. About 1 percent of patients will die from complications. And because the most common form of the surgery limits the body's ability to absorb food, patients can suffer malnutrition, requiring a lifetime of nutritional supplements and follow-up care. In addition, patients must adjust to never again eating more than a minuscule portion at a sitting, or they'll vomit.
So, what's the response of the doctors being interviewed? That they should lower the guidelines, of course:
"We're asking, should we lower the BMI so these people who have risk from their disease of obesity can be better served with surgery," says current ASBS President Alan C. Wittgrove, the San Diego surgeon who performed Carnie Wilson's surgery. "The problem with this group is there's really nothing available for them right now."
Does it even occur to these people that maybe the problem is that they're trying to make rules rather than letting the patients decide for themselves what they want? Apparently not.
The accused physician should have his records and other evidence judged by a panel of peer physicians (perhaps from another state to decrease conflicts of interest). That panel could best judge whether the physician made errors. They could then authorize appropriate payments. We all agree with the payment of legitimate damages. Physicians want a cap on punitive damages only.
Yeah, what a great idea. It's not like there would be any conflict of interest in having doctors judge other doctors. They wouldn't circle the wagons or anything like that. That would never happen.
Using data from the 1994 through 1998 Consumer Expenditure Surveys, we compare household spending on 16 different goods (food at home, food away from home, housing, transportation, alcohol and tobacco, interest, furniture and appliances, home maintenance, clothing, utilities, medical care, health insurance, entertainment, personal care, education, and other) for insured versus uninsured households, controlling for total expenditures and demographic characteristics. The analysis shows that the uninsured in the lowest quartile of the distribution of total expenditures spend more on housing, food at home, alcohol and tobacco, and education than do the insured. In contrast, households in the top quartile of the distribution of total expenditures spend more on transportation and furniture and appliances than do comparable insured households. These results are consistent with the idea that poor uninsured households face higher housing prices than do poor insured households. Further research is necessary to determine whether high housing prices can help explain why some households do not have insurance.
The obvious way to look at this result, as Levy and DeLeire point out, is that housing consumption is inelastic and thus the uninsured actually have less disposable income to spend on health insurance. In that case, it might be easier to just think of them as poorer and give them a simple transfer payment that they could use to buy insurance on the open market--provided, of course, that you can prevent the obvious adverse selection problems.
On a more serious note, check out the following section from the canada.com article:
Mr. Jennings said the strong cultural nationalism of his parents had prevented him from giving serious thought to dual citizenship before their deaths. His father, Charles, was a pioneering CBC broadcaster and his mother, Elizabeth, who died in 1992, was a prominent supporter of the National Ballet of Canada and the Canadian Opera Company.
"I clearly was never going to be really active about it while my mom was alive. I think she would have been surprised, to say the least. My mom was very, very, very deeply Canadian, and to some extent quite anti-American."
I must say, this is one aspect of the Canadian character that I've always found a little strange: the conflation of Canadian pride with being anti-American. Canadian national culture seems to be largely defined by being not-American. That can't be healthy.
For the past 10 years or so, most communications security systems that people have designed use Public Key Cryptography (PKC) for key management. With PKC, each party generates a "key pair", one public and one private. The public key is known to everyone (e.g. published on the Internet) and the private key is known only to the key owner. So, if you want to send a message securely to someone, you just encrypt under their public key.
Unfortunately, while this is an incredibly clever idea, it's really only a partial solution to the problem. What stops me from posting a whole bunch of keys that I claim are yours and getting your secure email? It turns out that PKC has a solution to this as well. It's possible to sign a message with your private key so that it can be verified with your public key. By verified I mean that the verifier can ensure that (1) you--or at least someone who had your private key--wrote it and (2) it hasn't been tampered with. So, instead of just posting my public key, I post a document signed by someone else whom you might trust, saying "Key X belongs to EKR". This document is generally called a certificate. The process of getting the certificate is called enrollment.
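The two operations are easy to see with "textbook RSA" on deliberately tiny numbers. This is a pedagogical sketch only: real systems use huge keys, padding, and hashing, and the parameters below are the classic toy values, not anything from a real deployment.

```python
# Toy "textbook RSA": encrypt under the recipient's public key,
# sign with the private key, verify with the public key.
# NEVER use this construction for real -- no padding, tiny modulus.

p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

public, private = (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)             # anyone can encrypt to the key owner

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)             # only the private key can decrypt

def sign(m, priv):
    d, n = priv
    return pow(m, d, n)             # only the private key can sign

def verify(m, s, pub):
    e, n = pub
    return pow(s, e, n) == m        # anyone can check the signature

m = 42
assert decrypt(encrypt(m, public), private) == m
s = sign(m, private)
assert verify(m, s, public) and not verify(m + 1, s, public)
print("ok")
```

Note the symmetry: encryption and verification use the public exponent, decryption and signing use the private one. A certificate is just such a signature computed by a CA over the statement "Key X belongs to EKR".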
But wait, how do you get that trusted person's key? Isn't that just turtles all the way down? For this to work properly, you really need at least one person or company who everyone trusts. That company's public key is compiled into everyone's software. In practice, there are a large number of these Certification Authorities (CAs) of which VeriSign is the most famous. All of this apparatus (certificates, CAs, etc.) is collectively referred to as Public Key Infrastructure (PKI).
In the PKI world, there are two ways to get someone's certificate.
So, choice (1) only works well if you're online and choice (2) doesn't work well for anything other than intra-organizational e-mail, and not very well for that either. And of course, none of it works unless the person you're trying to send to already has a certificate, which of course most people don't.
Identity Based Encryption
IBE turns PKI on its head. Instead of having certificates, everyone's public key is their identity (like their e-mail address). Instead of a CA, there's a central authority called Key Server (KS) that issues you your private key. The way that enrollment works is this: You prove to the KS that you are entitled to your identity and it gives you the private key that corresponds to that identity.
With IBE, because you can predict what someone's identity is in advance, you don't need to find their certificate in a directory. As long as you know the KS's public parameters (these would be published the same way as CA keys are now), you can encrypt a message to anyone. Note that this all works best if there's only one central KS. Otherwise life gets a little complicated, which I'll get to in a minute.
In fact, in an IBE system you can send someone an encrypted message even if they haven't enrolled yet. Then, once they've received the message they can enroll and read it. Call this process post-enrollment to contrast it to PKI, where you have to enroll first.
As an obvious corollary, the KS is able to generate any user's private key at any time. This is useful for roaming applications. With PKI, you have to somehow arrange to copy your private key from machine to machine, or use some central key service such as the one being defined by the IETF's SACRED working group--and this sort of technology isn't very well standardized. Since post-enrollment is an essential feature of IBE, when you move to a new platform or machine you just re-enroll and automatically get your key back.
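Real IBE schemes rest on fancier math (pairings), but the key-management property described here--the KS *derives* any identity's private key on demand rather than storing it--can be modeled with one stdlib call. This is only a sketch of the flow, not actual IBE; an HMAC-derived key gives you no public-key encryption, and the master secret name is made up for illustration.

```python
import hmac, hashlib

MASTER_SECRET = b"only-the-key-server-knows-this"  # hypothetical KS secret

def issue_private_key(identity: str) -> bytes:
    # The KS computes (not looks up) the key, so it can issue it at any
    # time, to any identity. This one fact is why post-enrollment and
    # roaming both work -- and why the scheme is inherently escrowed.
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

# Enrolling today and re-enrolling from a new laptop next year yield
# the same key, with no key copying or SACRED-style service needed:
k1 = issue_private_key("alice@foo.com")
k2 = issue_private_key("alice@foo.com")
assert k1 == k2
```

The flip side, as the next paragraph notes, is that whoever controls `MASTER_SECRET` can derive every user's key.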
The flip side of post-enrollment is that if someone takes control of the KS, they can read anyone's messages. In the crypto community, this property is called escrow. Whether escrow is a good thing or not depends a lot on your perspective. Individual users are often pretty concerned about some third party being able to decrypt their communications--hence the big kerfuffle about the Clipper Chip back in the 90s. By contrast, corporate users--well, the companies who buy the software, at least--are generally not that worried about their employees' privacy but are concerned about some employee quitting (or dying) and leaving behind a bunch of encrypted data that the company can't read, so for them some escrow capability is a good thing.
Escrow is pretty much inherent in IBE systems. There are some schemes being developed to remove escrow, but they generally also mean limiting post-enrollment in some way. For instance, you could implement the KS in a trusted hardware platform (a good idea in any case) and have it remember which keys it had issued and decline to reissue them. However, this would have the disadvantage that roaming access would no longer work properly.
However, even corporate users don't want some central KS as a single point of vulnerability; that requires trusting whoever runs the key server way too much. Instead, Voltage uses a hybrid system: each enterprise runs its own key server, which only issues keys for entities known to that enterprise. Thus, if I run company Foo, I would get a key server that was only used to issue keys for "foo.com". So, if you work for company Bar, you would just look up the parameters for foo.com and then you can encrypt to any employee at foo.com.
This hybrid scheme has a number of advantages over a simple centralized IBE scheme: no single party holds the keys for everyone, each enterprise controls enrollment for its own users, and a compromise of one enterprise's key server exposes only that enterprise's traffic.
The major disadvantage is that senders can no longer know a single static set of KS parameters and encrypt to everyone. Instead, they have to look up the parameters for every domain they want to encrypt to. On the other hand, once you have the foo.com parameters they are the same for every address that ends in @foo.com, so you don't have to look up, say, alice@foo.com and bob@foo.com separately. Moreover, this means that enterprises can make a global decision to install a key server without enrolling anyone. Then, when people actually get encrypted messages they can post-enroll--at which point they'll presumably have more incentive to do so, since they'll want to read the message.
If you're familiar with crypto, you're probably thinking something like "I know how to do this, and without any of that crypto rocket science stuff". The standard approach is for senders to encrypt the message with a random session key (you would want to do this anyway; see the footnote at the end) and then encrypt the random session key under the KS's public key along with an indication of who is supposed to receive the message. Then, when the receiver gets the message, they present the encrypted key to the KS along with proof that they are the correct recipient (it's a little tricky to tie the recipient's identity to the encrypted key, but it's quite possible). The KS then decrypts the session key and gives it back to the receiver.
The major problem with this approach is that the KS then has to be involved whenever someone wants to read a message. By contrast, with an IBE system, once the receiver has obtained their private key, they can cache it and read messages offline.
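The contrast can be made concrete with a toy model of the emulation. Here the KS's public-key decryption is stood in for by an HMAC-derived mask and the identity proof is elided--a real emulation would use actual public-key encryption--but the shape of the protocol is the same: every message read requires a KS round trip.

```python
import os, hmac, hashlib

KS_SECRET = b"ks-key-pair-stand-in"   # models the KS's private key

def _mask(recipient: str) -> bytes:
    return hmac.new(KS_SECRET, recipient.encode(), hashlib.sha256).digest()

def wrap(session_key: bytes, recipient: str) -> bytes:
    # Sender: "encrypt the session key under the KS public key, tied
    # to the recipient's identity" (modeled here as XOR with a mask).
    return bytes(a ^ b for a, b in zip(session_key, _mask(recipient)))

def ks_unwrap(blob: bytes, recipient: str) -> bytes:
    # KS: check the claimant's proof of identity (elided), then
    # recover the session key -- one KS round trip per message read.
    return bytes(a ^ b for a, b in zip(blob, _mask(recipient)))

sk = os.urandom(32)
assert ks_unwrap(wrap(sk, "bob@foo.com"), "bob@foo.com") == sk
# With IBE, bob would instead cache one private key and read offline.
```

Note that `ks_unwrap` must run on the KS for every message, which is exactly the operational burden the paragraph above describes.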
We can improve this emulation approach somewhat by having the KS also be a CA. When the sender wants to send a message they contact the KS/CA and see if the receiver has a certificate. If they do, the sender uses that. Otherwise, they fall back to the KS/CA public key. That reduces the burden on the receiver but increases it on the sender and KS/CA: now the KS/CA has to act as a certificate directory and the sender has to look up people's certificates. Also, this scheme has the advantage (or disadvantage, depending on your perspective) that once the user's key is issued, any data encrypted with the user-specific public key is not escrowed.
However, a hybrid KS/CA can lead to some strange behaviors. Consider the following sequence of events:
Bottom line, while it is possible to some extent to emulate IBE using conventional cryptography, if IBE-type functionality is what you want, IBE does a better job of it than the emulations do. If you're really worried about escrow you could build a hybrid IBE KS/CA system that would operate more cleanly than this emulation would.
Where is this useful?
Where I'd expect to see IBE used most is in messaging applications such as e-mail and IM. These are applications where people are talking to other people, and therefore the limitations of conventional PKI are most obvious--and, not coincidentally, where security has been slow to be deployed. For applications like Web-based e-commerce (which currently uses SSL/TLS), IBE isn't that big a win, since both peers have to be online anyway and so have an opportunity to exchange certificates. I wouldn't expect IBE to be in wide use for SSL any time soon.
It's not surprising, then, that Voltage's initial target market is secure e-mail. Their current offering includes plugins for the major mail clients to send and receive IBE-encrypted e-mail, as well as enterprise key servers that can be installed locally. As I understand it, they're also going to give away a toolkit that lets developers integrate IBE into their own applications.
 Actually, this isn't quite true. For performance reasons, one usually generates a random session key and encrypts that with the recipient's public key. The actual message is encrypted with the session key. However, this is a technical detail that doesn't change the key management issues at all.
Update: Added the point that the emulated KS/CA scheme would not have escrow once certs were issued, and the related point about hybrid IBE KS/CA schemes.
Anyway, here's the official UN list of all the countries that are better than Canada:
Update 2003/07/09 07:49:
Colby Cosh weighs in on this topic, pointing out that the index is totally bogus. Of course it is, which just makes it more delicious that the Canadians who took it seriously for so long are now stuck between admitting that they've been beaten and admitting that their previous success was false.
We need a vendor who can offer immediate supply. I'm offering $5,000 US dollars just for referring a vender which is (Actually RELIABLE in providing the below equipment) Contact details of vendor required, including name and phone #. If they turn out to be reliable in supplying the below equipment I'll immediately pay you $5,000. We prefer to work with vendor in the Boston/New York area.
1. The mind warper generation 4 Dimensional Warp Generator # 52 4350a series wrist watch with z80 or better memory adapter. If in stock the AMD Dimensional Warp Generator module containing the GRC79 induction motor, two I80200 warp stabilizers, 256GB of SRAM, and two Analog Devices isolinear modules, This unit also has a menu driven GUI accessible on the front panel XID display. All in 1 units would be great if reliable models are available
2. The special 23200 or Acme 5X24 series time transducing capacitor with built in temporal displacement. Needed with complete jumper/auxiliary system
3. A reliable crystal Ionizor with unlimited memory backup.
4. I will also pay for Schematics, layouts, and designs directly from the manufature which can be used to build this equipment from readily available parts.
If your vendor turns out to be reliable, I owe you $5,000.
Email his details to me at: firstname.lastname@example.org
Please do not reply directly back to this email as it will only be bounced back to you.
weagapnuiggwjvyetm myndd ghw
4 dimensional warp generators? I think Fry's has those over in aisle 5, next to the MRI machines.
If I had to guess, I'd say that this is some kind of virus, since I can't see why a commercial spammer would want to send this message out.
HELP!We represent 3 qualified families needing to buy homes in the neighborhoods served by the DUVENECK and ADDISON SCHOOLS. If you are interested in selling your home, please have your Realtor contact us or call us directly.
Good news for me, since my house is in the Duveneck area. Not planning to sell but nice to know we have options.
The system that evolved seemed to be the following: men would go in the women's bathroom but only if someone told them that there was an opening and that it was ok (i.e. women weren't changing or anything). Men wouldn't stand in the women's line if there was one. It was only ok to use the women's bathroom if it was actually idle. Unsurprisingly, once one enterprising soul had made the switch, the precedent was established and people were pretty willing to change--until the women's bathroom filled up and the pattern was broken.
It's hard to know what's fair in situations like this. After all, the provisioning was clearly unfair. On the other hand, women put up with that kind of unfair provisioning all the time and only change lines when the disparity is quite large. On the gripping hand, I didn't hear any women complaining, so I guess this met their sense of fairness as well.
There are two major obstacles, the current and the cold. The situation with the current is that it's going about 2 mph towards the bridge. So, if you navigate as you usually do, aiming towards where you want to go, you end up getting pulled out under the bridge--or more likely picked up by a kayak. Since the swim finish is a couple of miles West of Alcatraz, you can swim more or less due South and the current will pull you to the finish. The race directors gave us landmarks to sight on, but it was really foggy and choppy and so I couldn't sight well. Like many other people I went way off course and ended up having to swim against the current a fair bit. I heard from a fair number of people that this was one of the worst years.
Now for the cold. The water is about 59 degrees. Like all sane people, I was wearing a swimming wetsuit--maybe 10% of the people were not. A wetsuit makes the difference between being just cold and being extremely cold. Still, it's cold on the boat going out, and then when you hit the water your hands, feet, and face go instantly numb and it takes a while to get warm again. I suspect I was a bit hypothermic when I got to the finish because my run transition was terrible. What can I say about the run? The bridge was fogged in so you couldn't see much of anything. I felt pretty good running once I got warm, so that was promising. This wasn't any kind of big race for me so I didn't really have any expectations.
In the end, I think Mark was right. I'm no expert but I've done my share of open water swims and I was wearing a wetsuit. I wasn't ever worried I couldn't finish though I did worry for a few minutes that I was so off course they'd pick me up in a kayak. I wouldn't much like my chances of swimming from Alcatraz to shore at night (don't want those boats to pick you up) with no real opportunity to train and without a wetsuit.
Kyle Welch points out that prisoners also probably didn't have tide charts, which makes it pretty difficult to figure out when a good time to go is.
Now, I don't know much about molecular biology, and I don't know if the analogy is accurate, but this scenario sounds an awful lot like one I'm familiar with: computer hacking. In both cases, a system full of vulnerabilities is subject to scrutiny by thousands of imaginative (or simply persistent) attackers. But the worst computer hackers can do is destroy data, and perhaps disrupt communications and other infrastructure. Bio-hackers can potentially kill millions.
This analogy raises an interesting question: why are computer viruses so lame? Most modern viruses and worms don't really do anything to the infected computer. Even Code Red only damaged the web server it infected. It's easy to think of ways to really trash a machine you've infected: corrupt the data, erase all of it, flash the BIOS, whatever. So why doesn't commonly available malware do this? Sure, not every malware author is out to do maximum damage, but it's amazing that apparently none of them are.
Second, why isn't malware designed to spread more quietly? The current trend seems to be towards more rather than less rapid infection, to the point where the amount of infection traffic has a noticeable effect on network load. If you want to infect a lot of machines quietly, this is not how you do the job.
I've asked this question of a few malware researchers and no one seems to have a really good answer. Everyone sort of expects something really bad to come along, but is a little puzzled that it's taken so long.
I'm well aware that the U.S. is far from perfect and that we're not the only country where people are able to live as they please. Nevertheless, just being able to live mostly free from fear and tyranny is something that only a very lucky few have been able to enjoy and here and now is one of the very best places to be. Today is a good day to remember that.
The third use, commercial skipping, amounted to creating a derivative work, see WGN Continental Broadcasting Co. v. United Video, Inc., 693 F.2d 622, 625 (7th Cir. 1982); Gilliam v. American Broadcasting Cos., 528 F.2d 14, 17-19, 23 (2d Cir. 1976); cf. Ty, Inc. v. GMA Accessories, Inc., 132 F.3d 1167, 1173 (7th Cir. 1997), namely a commercial-free copy that would reduce the copyright owner's income from his original program, since "free" television programs are financed by the purchase of commercials by advertisers.
In other words, not only does Posner think that commercial skipping is illegal, he thinks it's been illegal for 20 years.
Now, I'm no lawyer, so I have no idea whether Posner is right as a matter of law. Posner's a smart guy, so I'm assuming he likely is. However, as a matter of public policy, this is insane. Pretty much anyone who records things off the air uses the fast forward button to skip commercials. Moreover, this is one of the major uses of a TiVo. Don't get me wrong, I understand that commercial skipping reduces the value the content provider can deliver to the advertiser--but so does my getting up to use the bathroom during the commercials, and there's no legal requirement that I be tied to my chair with my eyes propped open like Malcolm McDowell in A Clockwork Orange.
As Ed Felten and Larry Lessig have pointed out repeatedly, one of the major problems faced by the copyright industry is that their customers have a very different view of what's reasonable use of copyrighted material than they do. When the public is reminded of this disconnect, their reaction isn't to say "oh, let's behave ourselves" but rather to wonder whether it's time for a revolution. If the law really bans commercial skipping, that revolution may be long overdue.
It's pretty clear that we're entering a new era of widespread modification of breast size. I think it's also pretty clear that the trend is towards larger breasts, particularly in view of the fact that today's implants are still pretty clumsy instruments which don't produce breasts that are entirely natural looking or feeling. Check out any number of threads knocking breast implants on for instance Fark (not for the easily offended.) As implant technology gets better, and the surgery gets more acceptable, expect to see a lot more implants.
Clearly this can't go on forever, though. After all, breasts can't get infinitely large. Basically, we're dealing with two countervailing forces here: the competitive value of having larger-than-average breasts, and the rising physical and surgical costs as size increases.
As the average size goes up, the marginal value of having larger breasts should start to decline, and I'd expect an equilibrium to be reached. The situation is complicated enough, though, that it's certainly possible there's no stable state at all. Even if there is no equilibrium, and sizes keep shifting, I suspect that the future average will be larger than it is now.
If I get some time and feel like slacking, I may try to work up a game-theoretic model for this. The problem is superficially like sexual selection but I think actually demands a different treatment.
 A case of beer.
But there's another issue here. Since the products contain traces of THC, having eaten them would serve as a convenient excuse for testing positive for cannabis in a drug test. Apparently that's not a real issue at a technical level; you just couldn't eat enough to actually trigger a positive result. But it's still one more issue that drug testers would have to contend with; even if the excuse doesn't work, lots of people might be persuaded that it would work, and thus think they could drug-test-proof themselves by just laying in a supply of hemp energy bars.
This is an interesting argument, but I think it takes us into dangerous territory. Imagine there was some otherwise legal--and non drug-containing--food that as a side effect produced a false positive on drug tests. Would the DEA be allowed to ban it on the grounds of protecting testing? This example isn't as silly as it sounds, since drug tests often measure not the drug itself but rather metabolites (though I believe the THC test actually measures THC) and sometimes more than one chemical produces the same metabolites. If we think that the DEA shouldn't ban such foods, why should they be allowed to ban THC-containing foods which clearly have no direct drug-related purpose? (By allowed, I mean not constitutionally, since clearly Congress does this kind of thing all the time, but rather, is it part of their writ as DEA.)
In their study, Novak, along with Adelphi psychologist Dr. Janice Steil and fellow doctoral student Allison Newman, conducted anonymous online surveys with 50 young, well-educated couples engaged to be married.
The researchers had both partners in the relationship fill out the survey in a separate, confidential fashion. Participants answered standard psychological questionnaires rating perceived and expected levels of intimacy in the relationship.
The result? Across almost all categories of intimacy, "women reported their current relationships as more intimate than men," the authors report.
How, exactly, do we calibrate one person's "intimacy scale" against another's? If my intimacy level is 15 quatloos and yours is 6 frobnozzles do I think we're more or less intimate than you do? Who knows? As far as I know there's not a good way to answer this question even in principle. Certainly philosophers have struggled with the related but probably easier question of interpersonal utility comparison with relatively little success.
Or maybe it's none of these. Any EG readers ever work graveyard shift at a fast food place?
Left front door squeaking. Lubricated
Changed oil/filter. 10W30
Yet, when I go to see the doctor, they generally do whatever treatment they do and send me home. Now, surely it's more important for me to know what my doctor did and what I need to do for followup care than it is for me to know what weight of oil they used in my car. Given that this information has to go in my medical records anyway, is there some reason that I shouldn't get a copy to take home with me?
The age groupers fail to do it with style. The age groupers run pace lines, dangerously block other competitors from passing, ride on the wrong side of the road, and have their non-competing spouses clog up the run course to provide outside assistance (whether it is pacing or the passing of food). Don't get me wrong -- both age groupers and pros are guilty of breaking the rules, but it really was disheartening to listen to the age groupers whine about getting caught or that having rules sucks. Worst of all, some of the competitors argued that racing by the rules puts one at a significant disadvantage in comparison to other competitors.
Unfortunately, I think I have to agree with the people who complain that racing according to the rules puts one at a disadvantage to other competitors. But only if the competitor's goal is to be the best in the age division or win a Kona slot.
As it happens, I was just thinking about this the other day. In my view, the drafting rules are unenforceable. (Full disclosure: I've received a drafting penalty once. I claim I was passing.)
Here's the problem: the drafting rules require that you stay 7 meters back from other cyclists. That means the absolute maximum legal density of cyclists is about 228/mile. That sounds like a lot, but consider that nearly all of the decent age-group swimmers will come out of the water between 50 and 70 minutes. This means that at the start of the cycling portion of the race, riders are entering the course at something like 50 cyclists/minute. At a pace of 20 mph, that's 150 cyclists/mile. At densities like this, drafting is essentially inevitable.
To make the situation worse, USAT's drafting rules require you to fall back once passed. Consider what happens when you're passed by the head of a column of people 1/4 mile long spaced out at 7 m intervals: you now have to fall back behind all those people, which really kills your time, and no one wants to do it. Even then, the time you spend being passed looks a lot like some kind of drafting if the official isn't paying close attention.
The consensus among pretty much all the competitive triathletes I know is that the drafting rules are a joke. Reports of being busted when one wasn't drafting (at least in the athlete's opinion) are quite common. Reports of people blatantly drafting and not being caught are nearly universal.
I do like the idea of drafting rules. I don't want to be in a real bicycle race with lots of drafting and team tactics. However, I think the USAT needs to take a hard look at what's practical to enforce. Very likely this means reducing the size of the races so that the density is lower. Quite possibly it means some sort of stagger start instead of mass starts. A good first step would be to actually measure (like with transponders) the amount of drafting that goes on. Then at least we'd have enough data to work from.
After watching the race from end-to-end, I was pretty convinced that it is very possible for me to finish an Ironman in 17 hours on just about any random day of the year. While it is clearly a significant mental and physical effort for the average American to complete, if you are a triathlete and you manage your inputs and outputs then it is very possible to finish.
It sure looks a lot easier when you're watching it from a motorcycle. There's an enormous difference between being able to do a short course race and being able to do an Ironman. You can get by on short course if you never train more than 20 miles on the bike and 5 on the run. If you try to do this and then go out on the Ironman course you're likely to be just completely fried by the time you hit mile 15 of the run. You have to train distance to do distance.
I do agree, however, that anyone who is in reasonable shape can train up to the point where they could do an Ironman. Even then, though, it's not a slam dunk. I've seen fit people who just wanted to do the distance drop out because of hydration or other problems. I've also seen people bust their humps training and still only pull out 14 1/2 hrs.