
May. 1st, 2009

Designing Software under the Influence

beach reported on IRC "In fact I agree with a French colleague of mine who works for the water company, that you have more and better ideas when under the influence, but then you need to be totally clean when working out the details. Luckily, grad students work out the details these days :)"

My answer was that I fell in the wine barrel when I was a kid, and the effects on me are permanent. Maybe that's why I have all these ideas, but can never work out the details.

rsynnott remarked that he normally attributes that to laziness, in himself. I suppose that could be another valid name for it. In any case, that's why I'm now working towards getting minions to work for me.

Reisner's Rule of Conceptual Inertia: If you think big enough, you'll never have to do it.

Feb. 19th, 2009

Creationist programming vs Evolutionary programming - Epilogue

You can now find on my web site my essay From Creationism to Evolutionism in Computer Programming, subtitled The Programmer's Story: From Über-God to Underdog. As compared to the previous MSLUG conference (see slides), the essay contains a vastly expanded second part on the prospective future of the evolution of programming, and my vision for TUNES. Enjoy!

Abstract: Programming tools imply a story about who the programmer is; the stories we tell inspire a corresponding set of tools. Past progress can be told in terms of such stories; future progress can be imagined likewise. Making the stories explicit is a great meta-tool... and it's fun!

Jan. 28th, 2009

Why Language Extensibility Matters

If you neglect some aspect of computation in designing your programming language, then the people who actually need that aspect will have to spend a lot of time dealing with it by themselves; and if you don't let them do it by extending your language, they'll have to do it with inarticulate barkings rather than civilized sentences.

Sep. 15th, 2008

Metaprogramming from the ground up: avoid C

Long ago, assembly languages were endowed with expressive macroprocessing facilities. But it still sucked to write in non-portable languages with such incompatible, proprietary metalanguages. And thus, I wanted to metaprogram in something reasonably portable, which at the time pretty much meant C. The first obvious choice was to look at what the standard C preprocessor, CPP, had to offer.

So as to prevent people from shooting themselves in the foot, the language designers made sure the macro expansion algorithm would terminate, by disabling recursion: an already-expanded token is not processed again within its own expansion. Some people, notably hbaker, thought of using #include as a recursion mechanism. Unhappily, this isn't enough, because you cannot store infinite state in a CPP program: there is a finite number of variable-setting clauses in a program, each to a fixed variable known at compile-time, which leads to a finite number of variables usable in tests. There is a finite number of test statements, each combining a finite number of variables into a boolean, in a computation restricted to the operators of modular arithmetic; variables, being expanded as lexical text, can actually expand to something that has more combinations than a fixed-precision integer, but up to the reduction to some arithmetic operations, there is still but a finite number of observable grammatical states that a variable can take.
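To make the termination rule concrete, here is a minimal sketch (the file and macro names are mine, for illustration only); feed it through the preprocessor alone, e.g. with g++ -E:

    // blue.cc -- try: g++ -E blue.cc
    // A macro name is disabled ("painted blue") while its own expansion
    // is being rescanned, so this self-reference expands exactly once.
    #define REC (1 + REC)
    REC   // preprocessed output: (1 + REC) -- the inner REC stays inert

No cleverness in the macro body can get around this: expansion always terminates, which is precisely what makes CPP too weak for serious metaprogramming.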

All in all, it is impossible to write a useful non-trivial metaprogram in CPP. But that doesn't mean it is impossible to write trivial harmful metaprograms in CPP, as is easily demonstrated by my counter-example die_die_stupid_c_compiler.c. So CPP is but one more example of a fascist bondage-and-discipline meta-language. At the same time, the C++ language was slowly extending itself with a meta-programming system, its template language, which soon enough became weakly "Turing equivalent" and allowed wizards to write metaprograms to do all kinds of wonderful things. Except that this metalanguage was a pure functional language completely disconnected from C++ itself, and extremely hard to debug -- you pretty much have to re-develop all meta-level libraries from scratch in a completely new language to do non-trivial metaprograms, and cannot reuse libraries across language levels to bootstrap new functionality. Yet another misguided design.
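For a feel of what that disconnected compile-time language looks like, here is the textbook illustration (a standard example, not taken from this post): factorial computed purely by template instantiation, with a specialization playing the role of the base case.

    // Compile-time factorial: the "program" runs in the type system,
    // a pure functional language disjoint from run-time C++.
    template <unsigned N>
    struct Fact { static const unsigned value = N * Fact<N - 1>::value; };

    template <>   // base case, expressed as a full specialization
    struct Fact<0> { static const unsigned value = 1; };

    int main() { return Fact<5>::value; }   // 120, computed by the compiler

Note how recursion, conditionals and data all have to be re-encoded as template idioms: exactly the "completely new language" problem described above.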

The correct approach was OpenC++, that provides metaprogramming in the same recursively-bootstrappable meta-language as C++. But by the time you get there, you understand that C++ is not a language you want to use for metaprogramming anyway. Like Perl, C++ is a swiss army chainsaw of a programming language. Unlike Perl, it's got all the blades simultaneously and permanently cast in a fixed half-open position. Don't turn it on. If you want to metaprogram a language in itself, you'll do yourself a favor by instead choosing Lisp, OCaml, Oz, Haskell, Erlang, or any other HOT language.

As for C, it's rather bad as a portable assembly language: it doesn't handle continuations or multiple-value returns, doesn't give you the precise access to the memory model and temporary variables required for precise garbage collection, etc. I will spare you my pitiful attempts at metaprogramming it with m4 (don't try it -- m4 really sucks, and being better than CPP is a very low bar; you may however look at ThisForth for a relative success at using it). Tom Lord did interesting things with the hard part of metaprogramming: not just generative but analytic, too. He achieved the automatic verification of some GC invariants in the C layer of his Scheme implementation -- and that convinced me that even when done the best possible way, metaprogramming C still sucks: the CPP layer makes it hard to reason about the actual source, and the language has lots of arbitrary semantics that make it hard to reason about where side effects happen unless you have intimate knowledge of the compiler; but there is no way to access this knowledge unless you re-write your own compiler, at which point, why choose C?

These days, LLVM seems to be the main thing as far as a portable low-level language target for metaprogramming goes (unless you join the dark side and drink the .NET kool-aid). And if you don't care as much for mainstream and portability, you could try to go the way of Factor or your own COLA and build a system from the ground up around sound metaprogramming principles.

Aug. 24th, 2008

GHARF

As previously announced, I've started my own business. If you're looking for a software company that develops distributed reflective systems, look no further.

Our purpose will be to develop the ideas from the TUNES project, starting with the low-hanging fruits in the hope of funding our longer-term goals.

Our long-term goal is to use reflection (internalization of the knowledge and expertise of computing semantics) to improve semantics-based unification between programming languages, and otherwise push ever higher the level of abstraction at which programming may happen.

The low-hanging fruits, we think, still lie in better automation of the development and management of distributed computing systems, particularly as regards the building of systems that consistently manipulate large bodies of evolving persistent data.

We hope to build as our flagship product a database system that scales semi-automatically. Declarativeness and syntactic and semantic abstraction will allow for faster development as well as easier management. Unlike existing "relational" or "object" databases, we will decouple the data model from the transaction model, allowing for enhanced robustness as well as unequalled flexibility in adapting the database to various application domains. Based on a relatively efficient dynamic higher-order typed language, and maintaining an explicit decoupling between the monotonic (purely functional) and linear (stateful) fragments of our computational logic, we'll build a distributed system: a reliable multicast facility on top of which we can journal distributed transactions, which can then be used to construct various queryable views of the data.

Our first version will probably extend and productize Erlang-in-Lisp, reuse Mnesia, and possibly combine these techniques with Elephant, BKNR or another such existing system to persist objects. Further versions may or may not displace Common Lisp with Scheme, Factor, Javascript, or our own COLA, for total semantic control allowing for metaprograms that aid with code-data co-evolution.

All software will of course be Free Software, published under a dual bugroff/MIT license, and available on TUNES.org. You'll be welcome to use it, improve on it, and either give back to the community or stubbornly hoard your improvements. But if you need the world's best specialists to tailor our software to your needs, build robust and usable applications, and provide exceptional service, then you know where to find us.

TUNES.org will continue to pursue its existence as an independent non-profit organization. Our company will be a separate for-profit entity that happens to contribute to the TUNES project. We are open to research and/or development contracts, with the understanding that our core software will be published as free software, although special-purpose customizations might remain an unpublished secret shared with our customers.

The current codename for our company is GHARF (rot13), though this name is neither marketable nor available in .COM. We have several ideas for an actual marketable name, but will refrain from announcing anything until we have secured the internet domain registration and set up an initial web site.

Aug. 17th, 2008

Oink, oink!

It's official: I'm an evil capitalist pig, oppressing a poor wage slave in a third-world country, and dispossessing him of the productive surplus of his work for my own selfish profit!

Sure, I didn't make (and adamantly oppose) the statutes that prevent him from moving to a first-world country, where he could sell his valuable services for more than I can afford. Sure, I have to pay him almost three times what he was previously making in my bid to convince him to serve my multi-national company. Sure, he'll be working on a much more interesting project than he was at his previous job, under much better work conditions. Sure, I'm taking all the risks: in the faint hope of some uncertain future profit, I'm committing to months or years of bleeding money regularly and foregoing any short-term plan of parenthood, while my employee will be spending the same months and years in the certainty and comfort of a regular salary with which to enjoy family life. But you see, I'm the employer, a tyrant giving orders, so of course I'm evil, whether I'm an incompetent bungler, or much worse, a taker of profits.

At least, thus goes the formal discourse of the many factions of socialists. For the real reason why I'm evil in their eyes is not that I'm an employer. Indeed, if I were a goon of the State or of any other emanation of the Collective, embodying The People's Romance (gracias, PLIMO), then my employership would be pardoned, as well as any wealth I generate for myself. Not only would I not be the godawfullest monopolest exploitest of an employest, instead I would be the incarnation of everything that's Good about Society, leading All Of Us towards our Shared Destiny; my arbitrary orders would not be unbearable tyranny, they would be the Sovereign Will of The People, to be enforced with arbitrary ruthlessness upon any reluctant antisocial miscreant. But I'm an individual who dares to follow his own purposes and who refuses to submit to my Social Duties as proclaimed by the anointed representatives of the Common Good. And that is my mortal sin. Forget employership. The reason I'm irredeemably evil is thus: I'm selfish.

As for my venture, it doesn't have a name yet, but if you know me, you already know what it is all about. News at 11.

Apr. 9th, 2008

Creationist programming vs Evolutionary programming, part VI

Here is the sixth installment in this series for my essay Creationist programming vs Evolutionary programming. Previous installments: Part I (Creationist programming, The Devil), Part II (Intelligent Design, Polytheism), Part III (Unintelligent Design, Lamarckism), Part IV (Supernatural Selection, Teleological Evolution), Part V (Natural Selection, Inside Evolution).

The Relevance of Paradigm Evolution

This evolution of programming paradigms is a nice story, but what is its relevance for software developers? After all, the tools described above already exist; they have been created, they have been engineered, they have been selected or they have emerged, without any of these paradigms being explicitly stated, much less used as a conscious guide. Do these paradigms correspond to anything real, or are they but a nice-sounding rationalization? Do we gain anything by spelling them out?

Well, as Daniel Dennett wrote, "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination." This is as true of computer science and computer engineering as of any other human endeavour. Not stating your assumptions doesn't save you from the consequences of following them when they are erroneous, any more than putting your head in the sand would save you from predators you can't see. These paradigms do describe assumptions implicitly followed without a conscious decision, and each step in their evolution describes relevant phenomena to which earlier paradigms are blind. And those who make unconscious decisions are but surer victims of the problems they are blind to.

Realizing that some phenomena are not accidents happening during development, but constitute an essential part of it, is necessary to properly address them. Failing to plan is planning to fail. If you assume, say, the Intelligent Design paradigm, even though you may benefit from tools developed with later paradigms, you will systematically waste resources trying to intelligently design what is beyond the reach of any intelligent design, or aiming at the only solutions reachable by it despite their being inferior to the competition; whereas if you went beyond intelligent design, you would come to better solutions naturally by letting them grow. By embracing a more primitive paradigm, you will introduce a lot of unnecessary nasty bugs by not taking seriously the systematic processes of weeding them out early with dedicated tools; you will systematically fail to consider cheap solutions that are at hand, but that do not lend themselves to a perfect algorithmic description; etc. Those who stay behind in terms of software development paradigm will be incapable of what will appear to them as clever lateral thinking, strokes of genius or unreachable fantasy, whereas those who master further paradigms will casually achieve feats previously deemed impossible, by a simple systematized application of their more evolved paradigms.

To those who understand the relevance of such paradigms, the important open question is: what is the next paradigm, if any? Is the above Inside View of Evolution the be-all, end-all of programming paradigms? Is the refinement of existing tools our only hope? Or will some further paradigm catch on? Can one identify and adopt this paradigm early on, and thus get an edge over competition?

What then, if anything, is next on our road as far ahead of the current paradigm as that paradigm was of previous ones?

Paradigmatic Optimism

The simplest view about future paradigms is that there will be no new ones, at least none that works. Our understanding of software development is mature and as good as it can get as far as the big picture goes, though there may always be a myriad of minor details to get right. This is Present Optimism: the theory that we've already reached the limit of knowledge.

Of course, assuming there is finite understandable information about the big picture of software development, there will be diminishing returns in understanding the field and eventually not enough new relevant information to possibly constitute a new paradigm change for the better. And so we can be confident that this theory of Present Optimism will some day be true about software development paradigms as about many things.

On the other hand, considering how new the field of software development is and how fast it has changed in just the last few years, it seems premature to declare that we fully understand how software is developed and will not find new deep insights. If indeed our understanding of software development was to remain unchanged for, say, five to ten years, and all developers were to settle towards a finite set of well understood unchanging methods, then we could assert with much more confidence that indeed we have reached the acme of software development. But this hasn't nearly happened yet, and the case for Present Optimism is rather slim.

Another kind of optimism, and a common idea about the future of software paradigms, has always been that computers will somehow become more intelligent than men and will take over the menial task of programming, like djinns to whom you will give orders and who will grant your wishes. This is Extreme Future Optimism, or Millenarism: the theory that soon(er or later), we'll reach a Millennium where all our worries will be taken away.

However, this Optimism is based on a misunderstanding of what progress is about, a misunderstanding that is best dispelled by confronting it with the equal and opposite misunderstanding: the claim that such a future is bleak because it means machines will be taking all our jobs away. Hopefully the errors will cancel each other in a collision from which light will emerge.

Yes, computers in many ways have replaced humans for many tasks, and will replace humans for more tasks to come. The building of tools that replace human work in software development is what our whole story of paradigms was about. But competition by computerized tools does not destroy human jobs, it only displaces jobs towards new areas not covered by tools. Useful tools provide some of the same positive satisfactions as before and some more, while reducing the negative efforts; the goal of some previous jobs is fulfilled without the associated costs. The human resources previously used toward that goal are not destroyed but liberated; they are made available to be redirected to new useful endeavours that couldn't previously be afforded.

Furthermore, as long as humans and machines do not have the same relative performance in all activities, the law of comparative advantages ensures that the tasks relatively better done by meatware than by software will remain a domain of human activity. And even if machines do it better than humans, nobody prevents you from programming without machine help, or from choosing to sponsor a human rather than a machine for the programming tasks you need. Just like automation in other industries made these industries vastly more productive and mankind at large vastly more wealthy, so will automation in programming make software a more profitable industry and better serve mankind. Through all the software development tools already mentioned in this article, automation already serves mankind to a tremendous degree. Continuing to program in Java will no more provide job security than did programming in C++, COBOL or Assembly before; it will only guarantee a lot of wasted effort and ultimate failure in the Luddite refusal of automation.

What machines can neither possibly create nor destroy is, on the one hand, the desire for ever more, ever higher satisfactions, and on the other hand, the ability to adapt and work towards these satisfactions: in other words, human life, its drive and its spirit. Machines displace this life for the better, turning feats into chores, chores into menial tasks, menial tasks into assumable commodities. As our past worries are taken away, we worry about new, often loftier tasks that become our focus. Ultimately, the only persons who create human jobs are human parents, and only illness and death destroy jobs; the rest is a matter of organizing existing human resources. The fear of Artificial Intelligence is a lifeformist stance wrapped in the usual protectionist fallacies, and its narrowmindedness should inspire the same spite as racist or nationalist arguments before it.

Conversely, blind faith in Artificial Intelligence is yet another mystic superstition by millenarists dreaming of being saved from having to live their own lives. This blind faith is a cop out, in that it wishes away the very nature of programming and its intrinsic difficulties. Indeed, even if intelligent machines are to replace humans in the activity of programming, said machines won't be able to cop out of a programming paradigm that way; the buck will have to stop somewhere, and the issues will have to be addressed. One of the main features of digital computer software as we know it is that it behaves, combines, and can be understood according to rigorous formal semantics in perfectly well-defined logics, whereas intelligence seems to be about dealing with fluid concepts, incomplete information and creative solutions, under misunderstood external pressures. Bridging that gap, if possible, can't be achieved by hand-waving. It requires a paradigm shift that the cop out precisely prevents one from making.

The legitimate cop out is not to assume knowledge but to admit ignorance: my previous investigations didn't lead to any firm conclusion on this question, and I don't have enough combined care for the matter and trust in the remaining avenues of investigation to afford pursuing it further. But are we reduced to this ignorance? Are there not things we know or can guess about the directions that the future may take?

TO BE CONCLUDED...

Jan. 3rd, 2008

Creationist programming vs Evolutionary programming, part V

Happy new year to you, faithful readers! 2008 brings the continuation of my series, Creationist programming vs Evolutionary programming, of which this is the fifth installment. Previous installments: Part I (Creationist programming, The Devil), Part II (Intelligent Design, Polytheism), Part III (Unintelligent Design, Lamarckism), Part IV (Supernatural Selection, Teleological Evolution).

Natural Selection

As far as paradigms for understanding software development go, the notion of evolution under godly guidance was an improvement over that of direct design by purposeful gods, which was itself an improvement over the notion of immediate creation. But in each case, this was only pushing back one level the assumption of a driving intent external to the world. Real evolutionary theory does away with this assumption. Survival of the fittest does not suppose an external criterion of fitness to which living creatures are submitted; rather, survival itself is the only criterion, tautological and merciless. Survival is its own purpose: those programs that survive, survive; those that don't, don't. Changes that improve the odds that their host software should survive and propagate, thus statistically tend to propagate themselves and colonize their respective niches. Changes that decrease the odds that their host software should survive and propagate, thus statistically fail to propagate themselves and eventually disappear. The cumulative result of this natural selection is an evolutionary process that favors bundles of traits that tend towards their own reproduction. This freewheeling evolution necessitates no godly intervention, neither by an intelligent conscience, nor by madmen. More remarkably, programmers are no gods above it, and their actions are no such interventions. They are but machines like others, bundles of self-reproducing traits competing to exploit the resources of the universe. As compared to other machines in this programming universe, certainly programmers are unique and different -- we're all unique and different; that doesn't exempt them from the laws of natural selection. Programmers are machines trying to survive in a wild machine-eats-machine world; their actions are their attempts to survive and reproduce by gaining an edge in the race for ever more efficient acquisition and use of reproductive resources. If God exists, then ever since He created the world, He has just been relaxing, sitting back and enjoying the show. Evolution is not guided by God, it is God's Spectator Sport. Such is the paradigm of Natural Selection.

With this new understanding of the world of software development emerge new tools to improve our development processes. We think in terms of self-sustaining systems, evolving and competing based on their ability to survive and spread. We understand that the hosts and actors of this memetic competition are humans as well as machines, or even more so. We may then notice that systems are never born big, and that the only big systems that work are those that were born small and evolved and grew in such a way that they kept working at every step. We explain the spread of ideas in terms of generations of humans and machines passing on their forking and mingling traditions. We understand that pieces of hardware, software and wetware survive as parts of ecosystems, with cycles of development and use by various humans, where economic and legal aspects have their importance as well as technical and managerial aspects. We realize that these systems compete on a market ultimately driven by economic costs, of which technical aspects are but a small part, sometimes not decisive, though they are what technicians obsess about.

Because the forces opposing creation are no devil but malicious humans indeed, we make use of computer cryptography and cultivate networks of human trust to achieve security. A Third Wave of Cybernetics attempts to re-create artificial life and life-like phenomena through the emergence of behaviour from many software agents.

Natural Selection provides a big picture that puts haughty programmers down from their godly pedestal and back into the muddy real world. It doesn't offer direct solutions to design problems so much as it dispels our illusions about fake solutions and unearned authorities. No one is a god, above the others, to predict what will work and dictate what to do; our experts' dreams are often but vain obsessions, whereas some rare amateurs' successful experiment may start a revolution. Life is the ultimate judge -- accept no substitute, and respect its sanction.

Inside Evolution

Natural Selection may appear to look down on the world as a soulless marketplace. It will only appear soulless if you imagine yourself in the seat of that laissez-faire God above the world. But face it, you're no god; you're not outside the world and above it. There may be a god, who may or may not be intervening in this World, but you have to come to the realization that He's definitely not you. You're one of us earthworms, trying to make the best out of what you have (or not trying, and thus probably failing and promptly disappearing into irrelevance). Evolution is not something for you to enjoy watching, it is something you are part of, willy-nilly. You can't just let nature decide; you're part of the nature that will decide. Whichever genes and memes you carry may or may not survive -- it is largely through your actions that they will succeed or fail. You're in the experimental set of changes that may or may not work out well, or you're in the control set of the obsolete that will surely be replaced. Such is the view from Inside Evolution.

The tools that matter are those that are available to you. Your resources are limited, and you should invest them wisely. Which tools will make you most productive personally? Opportunities are there to be seized; if not by you now, then by someone else later. On the other hand, it may be too soon to invest in some ideas, and too late to invest in others; timing is key. Specialization will help, and can be a long-term investment that provides compound interest. As for cooperation with other non-gods, you can only go so far on your own efforts, and success lies in being able to leverage the efforts of other people. Which tools allow you to reuse as much as possible of these people's efforts? Tools can be technical, or can be social. Not just software libraries, but software communities, software market niches, software business contracts. Of course, you always need some kind of exclusive resource to ensure a revenue stream; free software or not, your combined proficiency, trustworthiness and time are ultimately the only such resource you have, and ample enough to live well on if you can market it, though it will probably not make you super rich. On the devil side, intellectual frauds will try to have you adopt their bad ideas, and other scammers will try to divert your resources in their favor; you must learn to avoid them.

As you fully grasp the fact that all actors are individuals, not just yourself, you take into account incentive structures. Incentive structures will put you and your associates in a position to productively cooperate at your full potential, or to work at a fraction of it; so carefully watch both your legal and business arrangements. You may see that proprietary software destroys the incentives of anyone who doesn't fully trust the software owner, and that trust can last only until the eventual catastrophe inevitable in any centralized management; any proprietary software has a suspended death sentence. On the contrary, you may see that free software creates insurance against disagreement with associates, and ensures the perennity of software investment. With a systematic view of incentives, you stress the importance of contracts and accountability as a way to structure human interaction, re-uniting liberty of means and responsibility for results in complex software arrangements. For instance, service-level agreements will allow you to robustly build larger, more complex structures than direct command chains. You may recognize the value of free markets as a way to organize people and to evaluate ideas, rewarding those able to invest their resources in the good ones rather than the bad ones. You may celebrate startup companies as light innovation structures with highly motivated personnel.

The Inside view to Evolution restores the soul in the market place for software. This soul is yours. You're the entrepreneur of your own life.

TO BE CONTINUED...

Dec. 15th, 2007

Creationist programming vs Evolutionary programming, part IV

Here is the fourth installment in this series for my essay Creationist programming vs Evolutionary programming. Previous installments: Part I (Creationist programming, The Devil), Part II (Intelligent Design, Polytheism), Part III (Unintelligent Design, Lamarckism).

Supernatural Selection

Though Unintelligent Design helps further the field of software engineering, one may realize that while small parts of software are understood, software at large is not understood, much less designed. Lamarckism, by shifting the spotlight to the change process, leads to asking why and how programmers lacking complete understanding choose to keep or change some parts of the software rather than others. The immediate answer is that as programmer gods write, they stumble upon good or bad features that they winnow, propagating the good and eliminating the bad. The software writing process is thus some kind of artificial selection, under the careful, intelligent guidance of the programmer God. The programmer God impresses upon the process a definite direction, Progress, and otherwise lets software evolve organically in this divine order. This software paradigm is Supernatural Selection.

Under this paradigm, new tools are selected into prominence. Prototyping tools help the programmer God flesh out as many ideas as possible as quickly as possible, so he may select the correct ones. Formal specifications help define what software should be doing, without worry about how it will be doing it. Heuristic search algorithms use intelligently designed strategies to systematically explore spaces of potential solutions too large to be explored by the programmer themselves. The combination of these two approaches leads to declarative programming, where the programmer God focuses on the intent, and delegates the implementation to the machine. From one phase to the next, programs are transformed through systematic metaprograms. To prevent the devil from corrupting software, formal proofs are developed that perfectly exclude undesired behaviour. To coordinate multiple programming gods, software modules separate interface from implementation, allowing for experimentation and adaptation separately in each part; rational developer communities are created, conferences are given, journals are published.

This whole approach has also been called the First Wave of Cybernetics, combining an understanding of the natural dynamics of software with a faith in the ultimate power of an intelligent and purposeful programmer god, culminating with expert systems using explicit knowledge representation in an attempt to solve complex real-world problems.

Teleological Evolution

The paradigm of Supernatural Selection obviously suffers from the same shortcoming as did the theory of Intelligent Design before it, in that it supposes that the programmer Gods (or at least some of them) are supremely intelligent. The only reason this shortcoming was not immediately grasped is that these successive paradigms were adopted without ever being articulated as clear theories. Now, an immediate improvement over the previous paradigm is to stop believing that the programmer Gods are intelligent. Gods may guide the evolution of software, but their contribution to the process is hardly an overall intelligent, coherent purpose; rather, it consists of a number of interventions based on partial knowledge, intuition and randomness, towards a progress that can be felt but not defined. Such is the theory of Teleological Evolution.

With the transition from intelligent guidance to unintelligent guidance, we are led to the appearance of new tools that roughly correspond to the Second Wave of Cybernetics. Genetic algorithms, connectionist neural networks, and probabilistically approximately correct learning methods make it possible to mine information from large databases without any explicitly designed representation of knowledge. Weakly structured computations make it possible to manipulate data despite limited understanding. At a smaller scale, programmers are satisfied with randomized algorithms that have good enough performance in practice despite having dreadful worst-case guarantees. To protect from the devil, checksums and probabilistic proofs can be more useful than unattainable formal proofs. To synchronize multiple gods, user communities come to prominence, as users, though the least proficient, are those who possess the best distributed knowledge of what makes the software useful or not.

The paradigm of Teleological Evolution loosens the strictures of Design or Supernatural Selection, and opens the space for practical software solutions to problems beyond the full grasp of the programmers. While it acknowledges the importance of reasonable endeavor, it de-emphasizes this importance; indeed, even reason can be seen as but a fast-track internal process of random production and selection inside the programmer's mind, as guided by his godly intuition. In the end, Teleological Evolution embraces an unfathomable mystical intuition as the ultimate divine source of creation.

TO BE CONTINUED...

Dec. 14th, 2007

Creationist programming vs Evolutionary programming, part III

Third installment in this series, here is Part III of my essay Creationist programming vs Evolutionary programming. Previous installments: Part I (Creationist programming, The Devil), Part II (Intelligent Design, Polytheism).

Unintelligent Design

Intelligent Design was once a great advance in how to approach software creation, but sooner or later, you must realize that it doesn't describe reality accurately. The design of most software is just really bad; the issue isn't that errors creep in and corrupt a perfect understanding, it is that the understanding was far from perfect to begin with. We must recognize that all too often, the programmer God is just plain stupid. And so, the next stepping stone on the way to better programming paradigms is: Unintelligent Design.

God may have an intent, but he's a blind idiot who doesn't know exactly what it is he wants or how to achieve it. He not only makes gross mistakes, he writes plainly erroneous code that can't possibly work. Tools to help him design programs will thus include helpful messages from his compilers for error diagnostic and recovery: their role isn't to tell an intelligent programmer "the devil crept in while you weren't looking; just have a look, you can obviously see him and chase him", it is to tell the unintelligent programmer "what you did was stupid, and here is the explanation why", for it would be hard for his limited intellect to figure it out all by himself. Syntax checking, type checking and various kinds of advanced semantic checking are invented to catch the more or less obvious errors and converge more quickly towards what the programmer would mean if only he were capable of forming a coherent intent. Interactive help, manuals and hints constantly remind the programmer God of the things about which he should know better. Integrated development environments help God play with the code and get faster answers as to whether or not his ideas make sense. All the software interfaces are made idiot-proof by making languages more abstract and completing them with ample compile-time and run-time checking. Tools do most of the work, and clever interfaces try to present things so that complexity is managed away for the stupid user, and all decisions may be made based on a shallow, limited view of the world, the only kind that fits the programmer God's tiny brain. There is no shortage of imaginable tools and prosthetic devices to help the programmer God cope with his mental disabilities; and these tools are themselves limited mainly by the inability of their own godly program designers.

When a devil adds machine malfunction to operator dysfunction, testing becomes something to take seriously and systematically. When multiple gods are involved, the many resulting processes running at the same time must be protected from each other; the many parts of the software are tested separately, and contracts for what happens at their interfaces are tentatively defined and enforced. Because the programmer gods cannot be trusted to remember all the issues with the software, some software must be used to systematically track those bugs and issues. When some of the programming gods are malicious, you're glad they are idiots too, and you bury them under the weight and complexity of security features that will catch each of the more obvious malicious types of behaviour.

Lamarckism

Whether software is designed by intelligent or stupid gods, or something else altogether, we may importantly understand that software changes to adapt to new circumstances, and focus on the nature of this change. That's Software Lamarckism.

Filesystems may remember many versions of the files they hold, each with a different version number. Software releases are numbered too. Because many gods may be working at a time, a piece of software may exist along many different development branches. To understand the differences introduced, whether they were intelligent, stupid or malicious, and what to do with them, tools computing differences between files are created. To merge the intelligent changes and the fixes to the stupid and malicious ones along the many different branches, tools are created to apply computed differences to branched files. Revision control and change management are born, and continuous backup remembers all previous versions of tracked files.

Lamarckism is not a complete theory of why and how change happens, but it introduces a useful focus on change. It is thus the starting point for more elaborate theories that will explain the development of software in the terms of this incremental process of change.

TO BE CONTINUED...

Dec. 13th, 2007

Creationist programming vs Evolutionary programming, part II

Second installment in this series, here is Part II of my essay Creationist programming vs Evolutionary programming. Previous installment: Part I (Creationist programming, The Devil).

Intelligent Design

Even with the Devil variant, the pure Creationist approach to programming soon proves insufficient to explain how Software comes into existence, and why programming is such a difficult activity. As projects grow bigger, it becomes obvious that a whole software system cannot be completed in one go. The sheer volume of it makes it impractical. But it is possible to create the software in many steps, starting from foundations and building layers upon layers, bootstrapping complex structures from simpler ones, shaping tools and tool-making infrastructures, replacing parts with better ones as the need and opportunity arises, building scaffolding that may get destroyed later possibly leaving fossils along the way, all according to a carefully designed master plan. This programming paradigm has a name: Intelligent Design.

Intelligent Design is a most common paradigm amongst software professionals and amateurs, whether in the industry or in academia, if only because it flatters them. Programmers realize that software problems are big, complex beasts, but have faith in their godly brainiac powers to tame those beasts through the reasonable, intelligent, systematic endeavour of well-trained specialists creating well-designed programs.

Within this paradigm are created and elaborated such tools and concepts as assemblers, formula translators, source code, operating systems, compilers, compiler-compilers, compilation management utilities, etc. Top-down design, flow charts, modelling tools, hierarchically layered systems, the waterfall design process, and all kinds of neat engineering practices follow from this paradigm.

Now of course, this paradigm has a devil variant: Intelligent Design with a Devil. Based on this enhanced paradigm, corresponding tools are engineered into existence to counter the devil's work: editors to modify programs and remove bugs (based e.g. on line numbers), and loggers, tracers and single-steppers to help locate bugs. Some small amount of reviewing and testing is added to the design process.

Polytheism

Another useful mixin for software engineering paradigms is the polytheism mixin. According to this partial theory, there isn't one God, with one Master Intent and consequent actions, but a lot of gods, each with his own intent and actions. It may be that many programmers are each a god partaking in some part of designing the Software; it may be that some God takes multiple roles to address the multi-faceted endeavour of Software design; it may be that God's intent changes with time, that God is moody and has tantrums.

God's ways are impenetrable, but enhanced theories of what God is lead to the design of new tools. To address the multiple programming gods, files are invented; as gods get organized in hierarchies, so are files organized in directories. Machines are time-shared, operating systems grow to manage multiple users, and eventually multiple users at the same time, each running multiple processes. Communication protocols are developed to exchange data between machines. Source code comments and formal documentation serve to convey intent and content between programming gods.

The devil mixin can also be combined with the polytheism mixin. The devil may have multiple aspects that each have to be addressed separately. The apparent or actual polytheism could be explained as one God's multiple personality disorder, and the devil as a symptom of His schizophrenia. The devil may be a God himself -- a malicious programmer. Any of these explanations for errors in God's Design can lead to new techniques to address the identified sources of error. User accounts are protected by passwords; files and other resources have usage restrictions, and will be backed up; redundancy checks are added to communication; errata complete the documentation, and pages are intentionally left blank to prepare for them.

TO BE CONTINUED...

Dec. 12th, 2007

Creationist programming vs Evolutionary programming, part I

I'm writing an essay, Creationist programming vs Evolutionary programming, based on a speech given at the ENS in October 2005, and written up, extended and polished under the purview of the loveliest of lovelies, my wonderful and ever-insightful Lucía. In the hope of pushing myself to finish it quickly, I'll be publishing it as a series, of which this is the first installment.

I was once asked to summarize the main TUNES concept in a few words. My reply was that the central idea behind TUNES is the evolutionary paradigm for programming. What is this evolutionary paradigm? The opposite of the creationist paradigm. (Duh!) Now, what is...? Wait, let's examine paradigms for the appearance of software on earth: start from the initial naive paradigm of software creationism, and watch how it has evolved since.

Creationist programming

At first there was nothing; then, God zapped code into the computer, and Software came into existence.

The belief in this simple story is Software Creationism. The programmer is a God outside and above the machine; the program is his creation, implemented in the machine. The programmer God is perfect, and has a perfect program in mind; the actual program may not be perfect, but that's because it's a rendition of His platonic idea on an imperfect Machine. Though its form might be limited by the limits of the finite physical computer, the program is the expression of a perfect intent.

Software Creationism is not only the naive belief that non-programmers naturally form when confronted with the apparition of software, it is also the programming paradigm taught to students at most schools: in exercises and in tests students are expected to produce from scratch and on paper a perfect solution to a perfect specification; in assignments and projects, they otherwise have to write standalone pieces of code to be run and evaluated once by the teacher, software that should not rely on any code by anyone else or contribute to such, except through the documented interface provided by the system.

No programming tools are necessary in this paradigm; just a switchboard to insert the program into the machine. Who needs tools when you're a perfect god directly playing with memory at the binary level (or base ten, if that's your kind of machine)? Programming the machine is best done in binary, directly from God to Machine, although it can be done in whichever write-only language suits the expression of God's will.

The Devil

The story of software creation by a superior God is beautiful; however, anyone who has ever tried to program soon realizes that he can seldom get his programs to run perfectly at the first try, or even the second. Bad things happen: Bugs, typing mistakes, mismanipulations, cosmic rays, hardware malfunctions, errors in the program, etc. Happily, the simplest explanation suffices to account for it: The Devil.

There is a devil that modifies things in a way that counters God's intent. Whether this Devil is a personality defect within God, an opposing force outside God, or an artefact of the laws of Nature that God created is unclear and might really not matter. What is clear and matters is that the bad things that happen are the symptoms of the presence of dark forces of Evil. This Devil introduces imperfections in the way the machine works, and causes it to fail to perfectly receive the perfect message from the perfect God and thus to fail to embody His perfect platonic idea.

To keep this devil under control, appropriate tools pop into existence: punched cards or display consoles are used that can be read by the programmer God as well as written. Thus programs can be double checked, fixed, retried, stored, despite the attempts by the devil to make them fail. Programs must be read as well as written, decoded as well as coded, and thus come into existence all kinds of languages to express computation. Many software development practices are invented, to be followed religiously.

From a better (or less bad) paradigm for programming, we thus get better (or less bad) tools that allow us to improve software engineering and cope with the difficulties of the endeavour. This will continue to be true as we improve our software engineering paradigms.

Interestingly, anytime we find a new and hopefully better such paradigm, we will always be able to consider a variant of it where some dark forces conspire to undo or corrupt what the creative forces strive to achieve. Thus, the idea of such opposing forces is a universal mixin for software engineering paradigms: the devil mixin.

TO BE CONTINUED...

Oct. 1st, 2005

Coming Soon in Paris

I will be giving a talk at the École Normale Supérieure, on the occasion of the opening of the Séminaire Informatique Des Élèves. Its title will be:

Cybernétique et Réflexion: Une Approche Dynamique de l'Art de Programmer (Cybernetics and Reflection: A Dynamic Approach to the Art of Programming)

The tentative date and time are Thursday, October 13th at 7pm, but this remains to be confirmed, as the room has not yet been booked.


Dec. 30th, 2004

Getting Students To Do Useful Stuff

At the request of an Indian student, I have worked out a formal document containing my term project proposals for students completing a Master's in Computer Science, specializing in Distributed Systems: http://fare.tunes.org/computing/term-project-proposal.html.

For a long time, I've considered it an awful practice that students should have term projects consisting of toy programs and rigged demos following the MWRO principle: many write, run once. Instead, they should be encouraged to systematically participate in actual useful real-world projects, and if possible free software projects, that make academic peer review possible. I never could do anything about it when in French universities, so maybe this is a godsend opportunity to move things in the right direction.

Nov. 29th, 2004

America! America!

Quick update to say I'm alive and well in the Silicon Valley. America is BIG. So far, I have had a great welcome. So many thanks to David and his Dad John, and happy birthday to Betty! Tril, water and I had a TUNES meeting in Seattle. More later.

May. 20th, 2004

Dynamic software development

Here are a few things I'd like to say about dynamic software development, now that I've just had the opportunity to test first-hand, with CTO, what I had been studying in theory and through the experiences of other people.

CTO, reloaded

Now is a good time to announce that the software behind Cliki.Tunes.Org, aka CTO, has been noticeably improved.

Nov. 16th, 2003

Decentralized versioning?
