
Nov. 12th, 2014

eyes black and white

In The Future, Cars Drive You!

Yesterday, I spoke at an event organized by the America's Future Foundation on the topic of self-driving cars. Here is a summary of what I said. (Disclaimer: I do work at Google, but I have never worked on self-driving cars, and do not possess any information that isn't already public.)

The most important point is that a self-driving car, as being developed right now by Google and many competitors, is not a general artificial intelligence capable of replacing a human driver in all situations, but a specialized artificial intelligence that does one limited task, and does it well. The driving robots are thus very good at things humans are bad at — they never get tired, they never fall asleep at the wheel, they never get drunk, they never get angry, they never take a wrong turn, they never assess speed or distance incorrectly, they never forget the finer points of the driving code, they never forget to refuel at the best-priced station, they always make efficient use of fuel, they always go through timely maintenance, etc. They are very bad at dealing with exceptional situations. Is this ball in the street just some irrelevant rubber ball you can drive over, or is it a bronze ball that would cause a deadly accident if you ran over it? What about those fallen branches on the road? Is this road still usable despite the flooding, landslide, etc.? Is this deer, child, etc., going to jump in front of the car? How should the car handle some temporary work on the road? How should it deal with a flock of geese on the road? Now, the hope is that even though exceptional situations may require some human to take control of the vehicle, override the itinerary, clear the road, or otherwise take action — or call for help and wait — the lives saved overall are well worth the inconvenience in the cases where the software fails.

And the lives saved are not just the accidents that won't happen. It's also all the hours of lifetime reclaimed. Someone who drives to a job an hour away and back home spends two hours every day driving. That's over 10% of his waking hours. Over forty years of work, the time reclaimed is the equivalent of four years of extra life in good health. During their commute, people can sleep, eat, drink, relax, meditate, dress, put on their makeup, read, talk, do their homework, have sex, or whatever they prefer doing. (Insert Mr Bean driving in the morning.) Disabled people will no longer be dependent upon someone else to spend time driving them around. For a self-driving car does not replace a car: it replaces a car plus a chauffeur. It is more like a taxi than a personal car, and a Zipcar-like pool of self-driving cars can be time-shared between many people, instead of each car having to be parked most of the day while its owner works, plays, shops or sleeps. If and when most cars become self-driving, the need for street parking will be much diminished, and streets will suddenly become wider, further facilitating traffic. Thus, even though a self-driving car may cost two or three times as much as a regular car, and even if such cars only cover limited areas where temporary and permanent road changes are guaranteed to be properly signaled for the sake of self-driving cars, they are still a huge economic saving, making better use of both human and material capital. As costs fall, people will be able to afford longer commutes from cheaper places, and to enjoy life without being prisoner of public-transportation schedules or of the high price of a car or taxi. Over hundreds of millions of users, tens of millions of extra productive lifetimes become available. A boon for mankind.
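As a back-of-envelope sketch of the arithmetic above (all figures are assumptions, not data from the talk): two hours a day is 12.5% of a sixteen-hour waking day, and over forty working years it adds up to a few years of waking life.

```python
# Back-of-envelope check of the commute-time claim.
# All figures below are assumed round numbers for illustration.
COMMUTE_HOURS_PER_DAY = 2    # one hour each way
WAKING_HOURS_PER_DAY = 16
WORKDAYS_PER_YEAR = 250      # assumed working days per year
WORKING_YEARS = 40

share_of_waking = COMMUTE_HOURS_PER_DAY / WAKING_HOURS_PER_DAY
total_hours = COMMUTE_HOURS_PER_DAY * WORKDAYS_PER_YEAR * WORKING_YEARS
waking_years_reclaimed = total_hours / (WAKING_HOURS_PER_DAY * 365)

print(f"{share_of_waking:.1%} of waking hours spent commuting")
print(f"{total_hours} hours over a career, ~{waking_years_reclaimed:.1f} waking-years")
```

With these assumptions the total is 20,000 hours, or about 3.4 years of waking life; counting weekend and off-season driving as well gets close to the "four years" cited.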

Now, another consequence of self-driving cars being specialized tools rather than general artificial intelligences is that, since they are not sentient, they cannot take responsibility for the accidents that will happen. The buck has to stop with someone, and that cannot be some dumb computer. Only humans can be held accountable, and humans will have to pay to cover damages to both passengers and third parties. In the beginning, that means that only big companies with deep pockets can own such cars: a large corporation like Google, willing to put its neck on the line; insurance companies that expect to save a lot of money in damages avoided; mutual funds where many small investors pool their savings together. The same will be true for all upcoming autonomous robots: small planes or quadcopters, cargo-carrying robots, etc. They will need to be owned by people or corporations who can afford to pay for any damages, or insured by companies that will take on the responsibility. [The following points, to the end of the paragraph, were not made during my speech.] Note that owning autonomous vehicles is significantly riskier than insuring human-controlled vehicles. On the one hand, whereas the insurance for a human-controlled vehicle typically only covers the first few million dollars of damages, and any further liability is disclaimed by the insurer and pushed back to the human driver, the owner of an autonomous vehicle is the ultimately responsible party and can't limit liability in case of damages to third parties. On the other hand, there is a systemic risk that is hard to evaluate, in case, e.g., after a flood, landslide, earthquake or catastrophic bug, stubborn car behavior causes not one accident but hundreds of accidents; it can be hard to provision for such black swan events, though hopefully the average casualty rate after such events still remains lower than it currently is for human drivers.

The rise of self-driving cars will require changes in government. First, self-driving cars may require support from those government bureaucracies that (at least currently) manage roads, so that self-driving cars are made aware of temporary and permanent changes. Second, some regulatory amendments may be necessary before anyone dares endorse the liability of owning a self-driving car. Meanwhile, there are huge privacy issues, as self-driving car companies gather even more information on the location and habits of passengers, and government bureaucracies such as the NSA may eventually get their hands on the data that Google (or other operators) accumulate, with or without active help from the companies. For now, government rules lag behind technology, but it is not a clear win when they catch up. The last few centuries have seen an exponential growth in human achievement through technology; they have also witnessed an exponential growth of government, taxes, statutes, bureaucracies, privileges and war capabilities. Over the next few decades or centuries, neither exponential growth is sustainable. Whichever curve tops first, the other wins — and soon afterwards the first curve likely drops to zero. If government somehow stops growing first, mankind will know a golden age of peace and prosperity. If technology somehow stops making big strides first, then as Orwell predicted, "If you want a vision of the future, imagine a boot stamping on a human face — forever."

Now, even though humans overall may prove forever incapable of understanding and implementing liberty [and indeed may only get dumber and more subservient due to government-induced dysgenics], that might not matter for our far future. For eventually, whether a few decades or a few centuries in the future, General Artificial Intelligence may indeed be created. Then not only will artificial sentient beings be able to take on the responsibility for self-driving cars, they will soon enough be at the top of the food chain, and assume ownership and responsibility for everything — and not just on Earth, but across the Solar System, the Galaxy, and beyond. When that happens, we had better hope that these AIs, if not humans, understand the importance of Property Rights; if they do, humans can live a life of plenty based on the capital they have accumulated; otherwise, our end will be very painful. And so, let's hope the first AI isn't a military robot hell-bent on killing humans, without any respect for property rights.

Sep. 6th, 2010


Self-defeating hypotheses

"If AIs become better and cheaper than humans at EVERYTHING, humans will stop interacting with each other. Pan-human catastrophe!"

"If foreigners become better and cheaper than nationals at EVERYTHING, nationals will stop buying from each other. National catastrophe!"

Yeah, right. And if people outside your immediate family do everything better and cheaper than your family as far as you're concerned, you'll never talk to your spouse and kids again? Family-zastrophe!

Let's say I'm one of the people with whom you admittedly share some close bond; now, if people other than the two of us can satisfy our needs better and cheaper than we can for each other, will we stop interacting with each other, and will it be a catastrophe for the two of us as we both lose all the precious things we bring each other? Pair-zastrophe?

The fallacy is that, on the one hand, the denounced Third Party is supposed to be better than anything, while at the same time the lost relationship is supposed to be something so valuable that it's vastly better than anything else. But you can't have it both ways. As in the story of the universal solvent that can dissolve anything and the universal container that can resist any solvent, one has to give way to the other. And as in the story, both claims are dubious at best.

If indeed the Third Parties are so good and cheap that we'll both turn to them and stop interacting with each other, then by definition, we're having more fun, more satisfaction, interacting with the Third Parties rather than with each other. And for whom is it a catastrophe? For neither of us. By assumption, we're both better satisfied this way. Maybe I'll marry one of those beautiful, sexy, intelligent and agreeable strangers instead of marrying you; and you'll similarly marry one of them instead of marrying me. And by assumption, we'll both be happier than if we had been together. Yay, life!

Or is the bond between us so strong and so valuable that nothing the Third Parties may offer is worth dissolving that bond? Then why are you afraid that we'll destroy this bond? Who are you calling stupid? Yourself, me or everyone else? Are you going to leave your family behind to starve to death because the Matrix is offering you a more pleasurable though virtual family? Why do you think anyone would, and why make that the premise of your catastrophic prediction?

Are you claiming that most people are stupid or evil, except yourself and the interlocutor you're trying to convince of your views with such arguments? Besides the remarkable conceit you display, you should realize the vanity of attempting to save such stupid or evil people, especially when your chosen strategy is either to convince them all through an appeal to the intelligence and morality you deny they have, or to somehow make yourself their dictator.

And while you meditate on the impending doom of humanity or your nation, etc., you may consider the miracle by which this flock of stupid or evil people reached the state of affairs that you're so afraid to lose; this is a miracle you obviously can't fathom, and it should give you faith that some force is at work that you have failed to identify so far.

For this whole scare of machines, of foreigners or of any other third party is but an ingrained reflex of distrust toward otherness, and a defense of existing known or supposed relationships, taken to the point of neurosis. Sadly, there is no shortage of crooks who will excite other people's neuroses to profit from them. Happily, however, the scare is wholly unfounded.

So what happens when machines or foreigners get better and cheaper at something? The general answer is: the law of comparative advantage applies. We get more of what the third party provides, in exchange for which we do more of what the third party requires; by hypothesis, this costs us less to produce than previous and/or other available alternatives, and we enjoy the fact that the third party offers us a better deal than we used to have before (or else we wouldn't have switched). Getting more for less, that's the only reason why we ever voluntarily switch from one arrangement to the next.
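The law of comparative advantage can be made concrete with a toy calculation (the goods and labor costs below are made-up numbers, not from the text): even when the third party is absolutely better at everything, each side still gains by specializing in the good it forgoes least to produce.

```python
# Ricardo's comparative-advantage arithmetic with illustrative numbers:
# hours of labor each party needs to produce one unit of each good.
# "them" is better at BOTH goods -- an absolute advantage everywhere.
hours = {
    "us":   {"cloth": 4, "fruit": 8},
    "them": {"cloth": 1, "fruit": 1},
}

def fruit_cost_in_cloth(party):
    """Opportunity cost of one unit of fruit, in units of cloth forgone."""
    return hours[party]["fruit"] / hours[party]["cloth"]

# We forgo 2 cloth per fruit; they forgo only 1. So they should grow
# fruit, we should weave cloth, and any exchange rate between 1 and 2
# cloth per fruit leaves BOTH parties better off than self-sufficiency.
print(fruit_cost_in_cloth("us"))    # 2.0
print(fruit_cost_in_cloth("them"))  # 1.0
```

The point of the example is that trade is decided by *relative* costs, not absolute ones: being outclassed at everything does not remove the gains from specializing and exchanging.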

How did that work in the past? When washing machines, vacuum cleaners, gas stoves, etc., freed hours of housework for women (mostly), does that mean that women (and men) lost the ability to wash clothes, clean the house, cook a dinner, etc., and are now running the risk of going naked, dirty and famished? No, it means that in addition to having clean clothes, clean houses, and quick dinners, we have all that we can do with the free time gained: more gratifying jobs, healthy activities, cozy time with our families, etc. Or, sure, getting stoned in front of the TV if that's what you're into. All that for the investment in a measly priced machine and a little bit of cheap water, electricity, or gas. If you don't want to invest yourself, you can still enjoy the benefits of technology based on other people's investment: go to the laundromat, hire a cleaner or buy takeout.

Similarly, when we start getting cheaper clothes from here, cheaper robots from there, cheaper food from another place, etc., it doesn't mean we are losing anything to foreigners; we are gaining all they offer to us, in exchange for what we offer to them, which by definition costs us less than it would cost us to get the same things ourselves. So we get cheap fruit from some southern country? It means we offer them some manufactured goods or something else in exchange, which is cheaper for us to create than those fruits would be, and more expensive for them to create than those fruits are. Both parties benefit and enjoy the additional free time from the trade.

And what if at some point we become unable to pay them with something they desire? Then in this worst-case scenario, we'd be back to the point where you'd like to force us to be now: we wouldn't trade, and would have to get the desired thing by ourselves. Really, what you propose to achieve through some liberticidal and onerous use of force — the prohibition of trade with some maligned third parties — is but the worst possible outcome of what would happen for free if everyone's freedom to trade were respected.

Are you now claiming that in a temporary deal like that, we'll have lost our ability to create and will eventually find ourselves poorer? Well, now you're calling others stupid or evil for their lack of foresight, but it's the same unwarranted conceit as above. You're claiming you can see further than anyone else and demanding that others be subjected to your brilliant schemes, by appealing to the superior foresight of the very people you're calling incapable of it. Yet expectation of change in future scarcity is already priced into goods and into investment in the future production of those goods. When it appears that the third-party source of some good will dry up, people stockpile, the price starts going up before the source runs out, and investment in alternative sources begins. Once again, at the very worst we'll be back to where we were before, after having enjoyed a free ride for the price difference while it lasted.

Is your claim that you can actually see further than everyone else? Well, put your money where your mouth is, then: invest at the time you think other people are failing to, and reap the profits. You can then both rightfully boast of your superior foresight, be proud of having accomplished a good deed (saving "your people" from a dearth of the good whose shortage others failed to anticipate) and use the proceeds to advance more of your ideas. The more people you claim agree with you, the easier such an investment should be.

Is your claim that the third party, once strong enough, will crush you? Well, said third party, by hypothesis, is interested in what you're offering for its services, and finds the trade useful. As long as the trade is useful, it has no interest in fighting you. And if for whatever reason it ever does become stronger, then advocating, as you are, the forceful prohibition of trade with the third party sets a very bad precedent for what will happen to you: since you accept the principle that might makes right, you'll soon enough be a victim of that principle. Instead of calling for forceful prohibition by the central authority of whoever happens to be strongest now, you should call for the respect of property rights. Where these rights are universally recognized, the third party won't attack you even when vastly stronger than you, for fear of having to answer to other similar third parties intent on seeing rights respected. Where property rights are sacrificed on the altar of some collective welfare, you'll soon be the victim of a change in who controls enough force to claim, unopposed, to represent the collective.

Are you claiming that humans, once obsoleted by AIs, will disappear like draft horses did? But there are infinitely more horses in America now than before slave horses were introduced on that continent by white men. And those horses that exist now are probably happier than their ancestors ever were when they toiled at the height of the horse enslavement racket. Moreover, insofar as horses are miserable, it is precisely because they are animals without rights; without rights because they are unable to petition for the rights they are being denied, unable to make and respect covenants that delimit their and another party's respective rights, etc. Humans can negotiate, respect and enforce rights, and can therefore be recognized mutual property rights.

Do you mean that ultimately AIs will acquire most resources in the universe, and leave humans with only what they have, which will sustain them only so long before they starve and die out? But if property rights are respected, then by very hypothesis, humans through voluntary trade will only ever get more than they would have without machines. They may eventually starve and die out, but without the peaceful interaction with machines, by the very same hypothesis, they would have starved and died out even sooner: whatever extra resources they would have had, they would not have been able to make as good use of them, and would have extracted a shorter and less agreeable life from them — which is the very reason why they agreed to trade those resources with the machines. Of course, if interactions are not peaceful but warlike, then things could go wrong for humans; but the same is true when the war doesn't involve machines, and the damage is to be ascribed to war, i.e. the denial of mutual property rights, rather than to the advancement of machines as such. And once again, the proposed "solution" is to start now, and for certain, the very war that you fear might happen in the future, making the worst imaginable outcome an effective certainty. Instead, the actual solution lies in the universal acceptance of the principles of property rights as most sacred.

In the end, what if machines actually become so good and so cheap that they replace us in just about every job we can do? Then by definition, at the cost of almost nothing (the cheap price in question, which we pay to them), we get all the free time in the world to do whatever it is we really want: whether it's reading books, having sex with cyborgs, or raising actual human kids. And we'll keep exchanging services with each other, so that the few who are most able to produce what the machines want will produce it, while other people offer all kinds of human services, from psychological support to massages to entertainment to whatever the hell we'll desire once we're free from all the hassles overcome through machines. More people doing agreeable human jobs, fewer doing horrible mechanical work: isn't that the essence of progress?

Dec. 10th, 2009


A killer app for the XO? MYCIN!

Doctors are expensive. Yet most of what they do is follow a simple algorithm with lots of rules. A human aided by an expert system could do the very same thing, more cheaply!

Of course, in rich countries, the established trade unions will never let such a thing be deployed, suing whoever tries to help others with it for the unlicensed practice of medicine, to protect their legal monopoly. The bastards will also (rightfully this time) argue that a trained physician knows the rules just as well, and can better interpret them and, more importantly, better interpret the many elements used as input.

And still, an untrained person with a machine could do all the easy things that a doctor would try, and redirect only the hard cases to a doctor. And in a poor country, that could save a lot of lives. And even in currently rich countries, a lot of money could be saved, and the effort of trained doctors could be redirected where they too would be able to save a lot of lives.

Moreover, such an expert system is not fantasy; it has already been written, long ago: MYCIN. It could easily be updated, then customized with regional data about which diseases are prevalent where, and what treatment is available at what price there. And of course, it could be tailored to the non-expert in a way that flags situations where a human expert is needed versus situations where a simple treatment should be tried first.
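The kind of rule-following described above can be sketched in a few lines. The rules, findings, and certainty factors below are hypothetical placeholders, not real medical knowledge; only the combination formula for parallel evidence, cf1 + cf2 * (1 - cf1), is borrowed from MYCIN's actual certainty-factor design.

```python
# A minimal MYCIN-style rule engine sketch. Rules and certainty
# factors (CFs) here are invented for illustration only.
RULES = [
    # (conditions, conclusion, certainty factor of the rule)
    (("fever", "stiff_neck"), "suspect_meningitis", 0.7),
    (("suspect_meningitis",), "refer_to_doctor", 1.0),
    (("fever", "chills", "recent_travel"), "suspect_malaria", 0.6),
    (("suspect_malaria",), "refer_to_doctor", 1.0),
]

def infer(findings):
    """Forward-chain over RULES. A rule's contribution is
    min(CF of its conditions) * CF of the rule; independent lines of
    evidence for the same conclusion combine as cf1 + cf2 * (1 - cf1).
    Each rule fires at most once."""
    cf = {f: 1.0 for f in findings}  # observed findings are certain
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (conds, concl, rule_cf) in enumerate(RULES):
            if i not in fired and all(c in cf for c in conds):
                fired.add(i)
                new = min(cf[c] for c in conds) * rule_cf
                old = cf.get(concl, 0.0)
                cf[concl] = old + new * (1 - old)
                changed = True
    return cf

result = infer({"fever", "stiff_neck"})
print(result["refer_to_doctor"])  # 0.7 -> flag for a human expert
```

A real deployment would need vetted rules and a loop that asks the user questions, but the skeleton is this simple: match conditions, fire rules, combine certainty, and flag when to refer the case to a human expert.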

Let's give MYCIN on an XO to teachers, priests and social workers. A cheap way to save plenty of lives!

Did I say XO? I meant cell phone — maybe equipped with an optical modification to diagnose malaria!

Jul. 15th, 2005


Polyphasic Sleep, Lucid Dreaming, Critical Thought, Thought Loops

Last month, I was thinking a lot about polyphasic sleep schedules. I haven't yet managed to discipline myself into such a schedule, and my feeble attempts merely combined with procrastination to reduce my sleep time, to my psychological detriment. A more reasonable target will be to shoot for simple biphasic sleep, with a regular nap in the afternoon after a regular lunch: at ENS, I met someone who did quite well this way, which is also the way it was done for all children in my kindergarten. When I manage that, it will be time to add more naps and sleep less at night.

However, short sleep schedules have the power to induce more lucid dreaming. Between my reading about lucid dreaming and my recurring disappointment, ever since I was a kid, at being unable to take notes in my dreams to leave for my waking self, I had a most interesting meta lucid dream.


Aug. 8th, 2004


The Robot's Rebellion

Since several friends recommended this movie, including David Madore, and despite the gripes of Lew Rockwell, I went to watch I, Robot this weekend with my cousin. As expected, it is quite far from being an immortal chef d'œuvre, but it is indeed a rather well-executed action flick. However, it is only at the very end, and with a twist, that it turns out to be somewhat faithful to its claimed inspiration from Isaac Asimov, and not at all to the original Robot series. Beware: big spoilers ahead.


Dec. 6th, 2003


The Moral Of The FNORD Koan

Someone fnord who was interested in epistemology, and whom I directed fnord to part 5 of my essay and then told about the FNORD, didn't get my last joke about it. I guess I fnord won't be able to get my point across unless I make it blatantly fnord obvious. So read the following development for more illumination.



Powered by LiveJournal.com