
Sep. 24th, 2017


A tale of many nests

This short essay will tell you about my favorite macro, nest, discuss the modularity of syntax extension, and use the implementation of that macro as an illustration for how to use defmacro, syntax-rules and syntax-case, providing along the way a comparison between these respective macro definition systems.

Using the nest macro

When I started using Scheme as my main Lisp, the first macro I wrote was the nest macro. What macro? The nest macro. The one that in Common Lisp helps my code avoid drifting hopelessly to the right as I nest binding form inside binding form... by doing the nesting for me. To illustrate the kind of issues that I'm concerned with, consider the Common Lisp code snippet below:
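
Here is a minimal sketch of the shape of the problem and its cure (the helper functions compute-a-and-b, compute-c and process are hypothetical stand-ins):

    ;; Without nest: every binding form indents the rest of the body.
    (multiple-value-bind (a b) (compute-a-and-b)
      (let ((c (compute-c a)))
        (with-open-file (s "input.data" :direction :input)
          (loop :for line := (read-line s nil) :while line
                :collect (process line a b c)))))

    ;; With nest (e.g. uiop:nest), the same code stays flat: each form
    ;; receives the next one as its last argument.
    (nest
     (multiple-value-bind (a b) (compute-a-and-b))
     (let ((c (compute-c a))))
     (with-open-file (s "input.data" :direction :input))
     (loop :for line := (read-line s nil) :while line
           :collect (process line a b c)))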


Apr. 30th, 2017


Design at the confluence of programming languages and build systems

This short article discusses upcoming changes and future challenges for ASDF, the Common Lisp build system. It also draws lessons for a hypothetical successor to ASDF, for build systems in general, for languages in which to write them, and for languages that would have an internal build system that could rival modern build systems.

ASDF, "Another System Definition Facility", is the de facto standard build system for Common Lisp (CL). It is relatively lightweight (13 kloc, over half of which for the portability layer UIOP, the "Utilities for Implementation- and OS- Portability"), quite portable (17 supported implementations), configurable (though importantly it "just works" by default), well-featured (it can create standalone executables), extensible (e.g. with support for linking C code, or for compiling FORTRAN through Lisp, etc.). But it lacks many features of modern build systems like e.g. Bazel: it does not support determinism and reproducibility, distribution and caching, cross-compilation to other platforms, building software written in languages other than CL, integration with non-CL build systems, management of multiple versions of the same software, or scaling to millions of files, etc. Historically, these limitations are due to ASDF being at heart an in-image build system in direct line of the original Lisp Machine DEFSYSTEM: it is designed to build and load software into the current Lisp image. But the challenges in possibly transforming ASDF into a modern build system touch limitations of Common Lisp itself and tell us something about language design in general.

I have essentially two development branches more or less ready to merge in the upcoming ASDF 3.3: the "plan" branch that provides proper phase separation (briefly discussed in my ELS 2017 demo), and the "syntax-control" branch that provides bindings for syntax variables around ASDF evaluation (briefly discussed in my ELS 2014 extended article, section 3.5 "Safety before Ubiquity").

Phase Separation

The first branch solves the problem of phase separation. The branch is called "plan" because I started with the belief that most of the changes would be centered around how ASDF computes its plan. But the changes run deeper than that: 970 lines were added or modified all over the source code, not counting hundreds more that were moved around as the code got reorganized. That's double the number of lines of the original ASDF, and it took me several months (part time, off hours) to get just right. Still, it is up-to-date, passes all tests, and works fine for me.

To understand what this is about, consider that a basic design point in ASDF 1.0 to 3.2 is that it first plans your entire build, then it performs the plan. The plan is a list of actions (pairs of an OPERATION and a COMPONENT), obtained by walking the action dependency graph implicitly defined by the COMPONENT-DEPENDS-ON methods. Performing the plan is achieved by calling the PERFORM generic function on each action, which in turn will call INPUT-FILES and OUTPUT-FILES to locate its inputs and outputs.
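
As a drastically simplified sketch of that protocol (not ASDF's actual code; in particular I pretend here that component-depends-on directly returns a list of (OPERATION . COMPONENT) pairs, which the real protocol does not):

    ;; Sketch only: plan the whole build, then perform it.
    (defun collect-plan (operation component
                         &optional (visited (make-hash-table :test 'equal)))
      "Return a list of (OPERATION . COMPONENT) actions, dependencies first."
      (let ((action (cons operation component)))
        (unless (gethash action visited)
          (setf (gethash action visited) t)
          (append (loop :for (dep-op . dep-comp)
                          :in (component-depends-on operation component)
                        :append (collect-plan dep-op dep-comp visited))
                  (list action)))))

    (defun perform-plan (plan)
      ;; PERFORM consults INPUT-FILES and OUTPUT-FILES for each action.
      (loop :for (op . comp) :in plan
            :do (perform op comp)))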

This plan-then-perform strategy works perfectly fine as long as you don't need ASDF extensions (such as cffi-grovel or f2l). However, if you need an extension, there is a problem: how do you load it? Well, it's written in Lisp, so you could use a Lisp build system to load it, for instance, ASDF! And so people either use load-system (or an older equivalent) from their .asd files, or more declaratively use :defsystem-depends-on in their (defsystem ...) form, which in practice is about the same. Now, since ASDF up until 3.2 has no notion of multiple loading phases, what happens is that a brand new separate plan is computed then performed every time you use this feature. This works well enough in simple cases: some actions may be planned then performed in multiple phases, but performing should be idempotent (or else you deserve to lose), therefore ASDF merely wastes some time rebuilding a few actions that were planned before an extension was loaded that also depended on them. However, the real problems arise when something causes an extension to be invalidated: then the behavior of the extension may change (even subtly) due to its modified dependency, and the extension and all the systems that directly or indirectly depend on it should be invalidated and recomputed. But ASDF up until 3.2 fails to do so, and the resulting build can thus be incorrect.
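
For instance, a hypothetical system "foo" using the cffi-grovel extension would look like this:

    ;; foo.asd (hypothetical system): cffi-grovel must be loaded in an
    ;; earlier phase, or else the :cffi-grovel-file component type below
    ;; is undefined when this form is read.
    (defsystem "foo"
      :defsystem-depends-on ("cffi-grovel")
      :depends-on ("cffi")
      :components ((:module "src"
                    :components ((:file "package")
                                 (:cffi-grovel-file "grovel")
                                 (:file "foo")))))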

The bug is quite subtle: to experience it, you must be attempting an incremental build, while meaningful changes were made that affect the behavior of an ASDF extension. This kind of situation is rare enough in the small. And it is easily remedied by manually building from scratch. In the small, you can afford to always build from scratch the few systems that you modify, anyway. But when programming in the large, the bug may become very serious. What is more, it is a hurdle on the road to making a future ASDF a robust system with deterministic builds.

Addressing the issue was not a simple fix, but required deep and subtle changes that introduce notions neglected in the previous simpler build models: having a session that spans multiple plan-then-perform phases and caches just the right information, neither too little nor too much; having a notion that loading a .asd file is itself an action that must be taken into account in the plan; having a notion of dynamically detecting the dependencies of loading a .asd file; being able to check cross-phase dependencies before deciding whether to keep or invalidate a previously loaded version of a .asd file, without causing anything to be loaded in the process; expanding the state space associated with actions as they are traversed potentially many times while building the now multi-phase dependency graph. And all these things interfere with each other and have to be gotten just right.

Now, while my implemented solution is obviously very specific to ASDF, the issue of properly staging build extensions is a common user need; and addressing the issue would require the introduction of similar notions in any build system. Yet most build systems, like ASDF up until 3.2, fail to offer proper dependency tracking when extensions change: e.g. with GNU Make you can include the result of a target into the Makefile, but there is no attempt to invalidate targets if recipes have changed or the Makefile or some included file was modified. Those build systems that do implement proper phase separation to track these dependencies are usually language-specific build systems (like ASDF); but most of them (unlike ASDF) only deal with staging macros or extensions inside the language (e.g. Racket), not with building arbitrary code outside the language. An interesting case is Bazel, which does maintain a strict plan-then-perform model yet allows user-provided extensions (e.g. to support Lisp). However, its extensions, written in a safe restricted DSL (that runs in the planning phase only, with two subphases, "load" and "analysis"), are not themselves subject to extension using the build system (though, the DSL being a universal language, you could implement extensibility the hard way).

Fixing the build model in ASDF 3.3 led to subtle backward-incompatible changes. Libraries available on Quicklisp were inspected, and their authors contacted if they depended on modified functionality or abandoned internals. Those libraries that are still maintained were fixed. Still, I'd just like to see how compatible it is with next month's Quicklisp before I can recommend releasing these changes upon the masses.

Syntax Control

The current ASDF has no notion of syntax, and uses whatever values of *readtable*, *print-pprint-dispatch*, *read-default-float-format* and many other syntax variables are ambient at the time ASDF is called. This means that if you ever side-effect those variables and/or the tables that underlie the first two (e.g. to enable fare-quasiquote for the sake of matching with optima or trivia), then call ASDF, the code will be compiled with those modified tables, which will produce fasls that are unloadable unless the same side-effects are present. If systems are modified and compiled that do not have explicit dependencies on those side-effects, or worse, that those side-effects depend on (e.g. fare-utils, on which fare-quasiquote depends), then your fasl cache will be polluted, and the only way out will be to rm -rf the contaminated parts of the fasl cache and/or to build with :force :all until all parts are overwritten. Which is surprising and painful. In practice, this means that using ASDF is not compatible with making non-additive modifications to the syntax.
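
A sketch of the failure mode (the bracket syntax and the my-syntax package are made up for illustration):

    ;; Globally install non-additive syntax: [x y z] now reads as a call
    ;; into the (hypothetical) my-syntax package.
    (set-macro-character
     #\[ (lambda (stream char)
           (declare (ignore char))
           `(my-syntax:make-seq ,@(read-delimited-list #\] stream t))))
    (set-syntax-from-char #\] #\))

    ;; Any system built now is read with that syntax; if its sources use
    ;; brackets, its fasls call into my-syntax and won't load without it,
    ;; yet ASDF records no dependency on my-syntax anywhere.
    (asdf:load-system "some-innocent-system")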

Back in the 3.1 days, I wrote a branch whereby each system has its own bindings for the syntax variables, while the default tables are made read-only (where possible, which it is in many implementations). With that branch, the convention is that each system can modify the syntax in whatever way it wants, and that will only affect that system; however, changes to syntax tables must be made after explicitly creating new tables, and any attempt to side-effect the default global tables will result in an error.
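
In spirit, that convention amounts to something like the following sketch (the *default-readtable* and *default-pprint-dispatch* names are made up; the actual branch is more involved):

    ;; Sketch: give each system fresh bindings, seeded from read-only
    ;; default tables, so its side-effects stay local to that system.
    (defmacro with-system-syntax (&body body)
      `(let ((*readtable* (copy-readtable *default-readtable*))
             (*print-pprint-dispatch*
               (copy-pprint-dispatch *default-pprint-dispatch*))
             (*read-default-float-format* 'single-float))
         ,@body))
    ;; Mutations then hit the system's own copies of the tables, while
    ;; attempts to mutate the shared defaults signal an error.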

This was the cleanest solution, but alas it is not compatible with a few legacy systems that explicitly depend on modifying the syntax tables (and/or variables?) for the next system to use, as ugly as that is. My initial opinion was that this should be forbidden, and that these legacy systems should be fixed; however, these were legacy systems at a notable Lisp company, with no one willing to fix them; also, I had resigned from maintainership and the new maintainer is more conservative than I am, so in the end the branch was delayed until after said Lisp company would investigate, which never happened, and the branch was never merged.

A simpler and more backward-compatible change to ASDF would have been to have global settings for the variables that are bound around any ASDF session. Then, the convention would be that you are not allowed to use ASDF again to load regular CL systems after you modify these variables in a non-additive way; and the only additive changes you can make are to add new entries to the shared *readtable* and *print-pprint-dispatch* tables that do not conflict with any default entry or earlier entry (and that includes default entries on any implementation that you may want to support, so e.g. no getting #_ or #/ if you want to support CCL). Even additive changes, if made, must somehow not clash with each other, or they become non-additive; but there is no way to automatically check that this is the case and issue a warning. After you make non-additive changes (if you do), then ASDF can't be used anymore to build normal systems that may conflict with those changes, and if they are modified and you call ASDF on a system that depends on them, you lose (or you must first make all those systems immutable).

Note that because ASDF would already break in those cases, most of these constraints de facto exist, are enforced, and are respected by all ASDF users. There remains the question of whether to bind the variables around the build, which allows normal systems to be built even if a user changes the variables, or not to bind them, which puts the onus on most users to keep these variables bound to reasonable values around calls to ASDF, for the benefit of the few users who would want their own breaking changes to persist after the build. I believe the first option (bind the variables) is cleaner, though the second (basically, do nothing) is more backward-compatible.

In all cases, you can always make non-additive changes to a readtable (such as enabling fare-quasiquote) by locally binding *readtable* to a different value, e.g. using named-readtables:in-readtable. A local binding won't adversely affect the ASDF build; but unless ASDF is changed to enforce its own bindings, you'll have to make sure to manually undo your local bindings before you call ASDF again.
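
For example (my-system is hypothetical, and I assume the :fare-quasiquote named readtable that fare-quasiquote provides via named-readtables):

    ;; Enable quasiquote patterns locally, at the REPL or in a file:
    (named-readtables:in-readtable :fare-quasiquote)
    ;; ... hack with optima/trivia patterns ...
    ;; Restore the standard readtable before invoking ASDF again:
    (named-readtables:in-readtable :standard)
    (asdf:load-system "my-system")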

The problem with not adding any syntax control to ASDF is that it forces Lispers to always be conservative about modifying the readtable and calling ASDF (or having it called indirectly by any function whatsoever that they call, which they can't always predict). In practice this makes hacking CL code hostile to interactive development with non-additive syntax modification; which defeats, through social convention, a technical feature of the language often touted as cool by its zealots. If syntax control is added to ASDF, then you can freely make your syntax modifications and be confident that building code won't be adversely affected.

The current branch implements the simpler option of binding variables around ASDF sessions, and using a mutable shared readtable that should only be modified additively. It has probably bitrotten, and should be updated or rewritten. The current maintainer, Robert Goldman, should probably opine on which change to adopt with what schedule (3.3.0? 3.2.2? 3.3.1? 3.4.0?) and sign off the API.

Vanquishing Language Limitations

These two modifications are ((now) low)-hanging fruits in making ASDF a more robust build tool, one that supports working with non-trivial extensions to the build system or the Lisp syntax. And in both cases, the limit reached by ASDF is ultimately that CL is a hippie language that allows unrestricted global side-effects and disallows disallowing. Therefore extensions necessarily introduce potential conflicts with each other that have to be resolved in wetware via convention, whereby all users are trusted not to go wild with side-effects. The system cannot even detect violations and warn users of a potential mistake; users have to experience subtle or catastrophic failure and figure out what went wrong.

A better language for a build system should be purer: inasmuch as it has "global" side-effects, it should allow "forking" the "global" state in an efficient incremental way. Or even better, it should make it easy to catch side-effects and write this forking support in userland. At the very least, it would then be possible to detect violations and warn the user. Bazel is an example build system whose extension language has local side-effects, but globally has pure forked environments. A successor to ASDF could similarly provide a suitably pure dialect of Lisp for extensions.

Happily, adding better syntax control to ASDF suggests an obvious solution: ASDF extensions could be written in an enforceable subset of a suitable extension of Common Lisp. Thus, ASDF extensions, if not random Common Lisp programs, can be made to follow a discipline compatible with a deterministic, reproducible build.

What would be an ideal language in which to write an extensible build system? Well, I tackled that question in another article, Chapter 9: "Build Systems" of my blog "Ngnghm". That's probably too far from CL to be in the future of ASDF as such, though: the CL extension would be too large to fit ASDF's requirement of minimalism. On the other hand, if such a language and build system is ever written, interest in CL and ASDF might wane in favor of that latter build system.

In any case, in addition to not being a blub language, the features that will make for a great programming language for an integrated build system include the following: making it possible to directly express functional reactive programming, determinism as well as system I/O, laziness as well as strictness, reflection to map variables to the filesystem and/or version control as well as to stage computations in general (including dynamic build plans), hygiene in syntax extension and file reference, modularity in the large as well as in the small, programmable namespace management, the ability to virtualize computations at all sizes and levels of abstraction, the ability to instrument code, etc.

Towards cross-compilation

Now, before we get reproducible builds, we also need to enable cross-compilation for ASDF systems, so the necessarily unrestricted side-effects of compiling Common Lisp code cannot interfere with the rest of the build. Cross-compilation also allows building on a different platform, which would be important to properly support MOCL, but would probably also mesh well with support for building software in arbitrary other languages.

Importantly, instead of the (perform operation component) protocol that specifies how to build software in the current image, a (perform-form target operation component) protocol (or maybe one where the target information has been made part of the operation object) would return forms specifying how to build the software, which could then happen in a separate Lisp or non-Lisp process, on the same machine or on another worker of a distributed build farm.
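
A sketch of what such a protocol could look like (perform-form is hypothetical; compile-op, cl-source-file, component-pathname and output-files are existing ASDF names):

    ;; Hypothetical: return a form to evaluate on TARGET, instead of
    ;; performing the action in the current image.
    (defgeneric perform-form (target operation component)
      (:documentation "Return a form that, evaluated on TARGET, performs
    the action (OPERATION . COMPONENT)."))

    (defmethod perform-form (target (o compile-op) (c cl-source-file))
      (declare (ignorable target))
      `(compile-file ,(component-pathname c)
                     :output-file ,(first (output-files o c))))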

Note however that one essential constraint of ASDF is that it should keep working in-image in the small and not depend on external processes or additional libraries. Any serious effort towards a "deterministic build" should therefore remain an extension indeed (though one that users would load early).

Still, if this extension is to remain compatible with ASDF and its .asd files, providing a backward-compatible path forward, then modifications and cleanups may have to be done to ASDF itself so it behaves well. Even keeping that hypothetical deterministic build separate, I expect non-trivial changes to the ASDF API to enable it, such as the perform-form protocol mentioned above. But backward-compatibility and smooth transition paths have always been the name of the game for ASDF; they are what make possible an ecosystem with thousands of packages.

There is a precedent for an ASDF extension leading to (most positive) changes in ASDF: POIU, the "Parallel Operators on Independent Units", Andreas Fuchs' extension to compile files in forks (but still load them in-image). Making sure that POIU could be expressed as an extension of ASDF, without redefining or breaking the provided abstractions, was instrumental in the evolution of ASDF: it led to many cleanups in ASDF 2, it inspired several of the breakthroughs that informed what became ASDF 3, and it kept influencing ASDF 3.3.

Thus, even though ASDF will stay forever an in-image build system, and even though a deterministic build extension (let's call it FDSA, the Federated Deterministic System Assembler) may ultimately remain as little used as POIU (i.e. because it lacks sufficient benefits to justify the transition costs), I expect the design of the base ASDF to be deeply influenced by the development of such a tool (if it happens).

Looking for new developers

Robert Goldman and I are not getting younger, not getting more interested in ASDF, and we're not getting paid to hack on it. We are looking for young Common Lisp hackers to join us as developers, and maybe some day become maintainers, while we're still there to guide them through the code base. Even without the ambition (and resources) to actually work towards a hypothetical FDSA, our TODO file is full of items of all sizes and difficulties that could use some love. So, whatever your level of proficiency, if you feel like hacking on a build system both quite practical and full of potentiality, there are plenty of opportunities for you to work on ASDF (or a successor?) and do great, impactful work.

Mar. 25th, 2017


Why I haven't jumped ship from Common Lisp to Racket (just yet)

Matthias Felleisen jested: "Why are you still using CL when Scrbl/Racket is so much better? :-)" My response was as follows:

Dear Matthias,

You are right that Racket is so much better in so many dimensions. I use Lisp because I just can't bear programming in a language without proper syntactic abstraction, and that is a dimension where Racket is far ahead of Common Lisp (CL), which sadly also remains far ahead of the rest of the competition. Racket has also managed to grow a remarkable way to mix typed and untyped program fragments, which sets it ahead of most. But I am under the impression that there are still many dimensions in which Racket lags behind other languages in general and CL in particular.

  1. The Common Lisp Object System (CLOS) has multiple-inheritance, multi-methods, method combinations, introspection and extensibility via the MOP, generic functions that work on builtin classes, support for dynamic instance class change (change-class, update-instance-for-changed-class) and class redefinition (defclass, update-instance-for-redefined-class), a semi-decent story for combining parametric polymorphism and ad hoc polymorphism (my own lisp-interface-library), etc. (For a small taste of these features, see the sketch after this list.) Racket seems to still be playing catch-up with respect to ad hoc polymorphism, and is lacking a set of good data structure libraries that take advantage of both functional and object-oriented programming (a good target is Scala's scalaz or its rival cats).
  2. While the ubiquity of global side-effects in CL is very bad, the facts that all objects that matter are addressable by a path from some global namespace and that live redefinition is actively supported make debugging and maintaining long-lived systems with in-image persistent data more doable (see again CLOS's update-instance-for-redefined-class). This is in contrast with the Racket IDE, which (at least by default) drops live data when you recompile the code, which is fine for student exercises, but probably wrong for live systems. CL is one of the few languages that take long-term data seriously (though not quite as seriously as Erlang).
  3. Libraries. CL seems to have many more libraries than Racket, and though the quality varies, these libraries seem to often have more feature coverage and more focus on production quality. From a cursory look, Racket libraries seem to be more ambitious in their concepts, but to often stop at "good enough for demo" in their practice. An effort on curating libraries, homogenizing namespaces, etc., could also help Racket (I consider CL rather bad in this respect, yet Racket seems worse). My recent experience with acmart, my first maintained Racket library, makes me think that writing libraries carries even higher overhead in Racket than in CL, which is already mediocre.
  4. Speedwise, SBCL still produces code that runs noticeably faster than Racket (as long as you don't need full delimited control, which would require a much slower CL-to-CL compiler like hu.dwim.delico). This difference may be reduced (or even reversed) as Racket adopts the notoriously fast Chez Scheme as a backend (or then again not). Actually, the announcement of the new Racket backend really makes me eager to jump ship.
  5. As for startup latency, Common Lisp is also pretty good with its saved images (they start in tens of milliseconds on my laptop), making it practical to write trivial utilities for interactive use from the shell command-line with an "instantaneous" feel. Racket takes hundreds of milliseconds at startup which puts it (barely) in the "noticeable delay" category (though nowhere near as bad as anything JVM-based).
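
As promised in point 1, here is a small taste of those CLOS features: multiple dispatch with methods on builtin classes, and live class redefinition with update-instance-for-redefined-class (a toy sketch):

    ;; Multiple dispatch, with methods on builtin classes:
    (defgeneric combine (x y))
    (defmethod combine ((x integer) (y integer)) (+ x y))
    (defmethod combine ((x string) (y string)) (concatenate 'string x y))
    (defmethod combine ((x list) (y list)) (append x y))

    ;; Live class redefinition: existing instances are lazily upgraded.
    (defclass point () ((x :initarg :x) (y :initarg :y)))
    (defvar *p* (make-instance 'point :x 1 :y 2))
    (defclass point () ((x :initarg :x) (y :initarg :y) (z :initform 0)))
    (defmethod update-instance-for-redefined-class :after
        ((p point) added discarded plist &key)
      (declare (ignore added discarded plist))
      (format t "~&Upgraded ~s to the new POINT definition.~%" p))
    (slot-value *p* 'z) ; => 0 (the instance is updated on first access)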

All these reasons, in addition to inertia (and a non-negligible code-base and mind-base), have made me stick to CL — for now. I think Racket is the future of Lisp (at least for me), I just haven't jumped ship right yet. If and when I do, I'll probably be working on some of these issues.

PS (still 2017-03): Here are ways that Racket is indeed vastly superior to CL, that make me believe it's the future of Lisp:

  • First and foremost, Racket keeps evolving, and not just "above" the base language, but importantly below. This alone makes it vastly superior to CL (which has evolved tremendously "above" its base abstractions, but hasn't evolved "below", except for FFI purposes, in the last 20 years), which itself remains superior to most languages (which tend to not evolve much "above", and not at all "below", their base abstractions).
  • Racket is by far ahead of the pack in terms of syntactic abstraction. It is the best language in which to define other languages and experiment with them, bar none.
  • Racket has a decent module system, including build and phase separation (even separate phases for testing, cross-compilation or whatever you want), and symbol selection and renaming.
  • Racket has typed modules, and a good interface between typed and untyped modules. While types in Racket do not compete with those of, say, Haskell just yet, they are still evolving, fast, and that contract interface between typed and untyped is far ahead of anything the competition has.
  • Racket has lots of great teaching material.
  • Racket has a one-stop shop for documentation, though it isn't always easy to navigate and often lacks examples. That still puts it far ahead of CL and a lot of languages.
  • Racket provides purity by default, with a decent set of pure as well as stateful data structures.
  • Racket has many primitives for concurrency, virtualization, sandboxing.
  • Racket has standard primitives for laziness, pattern-matching, etc.
  • Racket has a standard, portable GUI.
  • Racket has a lively, healthy, user and developer community.

I am probably forgetting more.

PS (2017-08-23): A few months onward, I've mostly jumped ship from Common Lisp... but not to Racket, and instead to Gerbil Scheme.

As ASDF 3.3.0 gets released (imminently), I don't intend to code much more in Common Lisp, except to minimally maintain my existing code base until it gets replaced by Gerbil programs (if ever). (There's also a syntax-control branch of ASDF I'd like to update and merge someday, but it's been sitting for 3 years already and can wait longer.)

What is Gerbil? Gerbil Scheme started as an actor system that vyzo wrote over 10 years ago at MIT, which once ran on top of PLT Scheme. vyzo was dissatisfied with some aspects of PLT Scheme (now Racket), notably regarding performance for low-level system code and concurrency (at the time at least), but loved the module system (for good reasons), so when he eventually jumped ship to Gambit (which had great performance and was good for system programming, with its C backend), he of course first reimplemented the PLT module system on top of Gambit, or at least its essential features. (The two module systems were never fully compatible, and have diverged since, but they remain conceptually close, and I suppose that if and when the need arises, Gerbil could be made to converge towards PLT in terms of features and/or semantics.)

Why did I choose Gerbil instead of Racket, like I intended?

  1. A big reason why I did is that I have a great rapport with the author, vyzo, a like mind whom I befriended back in those days. A lot of our concerns and sense of aesthetics are very much in sync, and that matters both for what there is and what may come to be. Conversely, the bigger features that Racket has and Gerbil lacks (e.g. a GUI) are those that matter less to me at this point.
  2. What there is (the module system, the actor system, the object system, the libraries) is far from as complete as I could wish, but it is all in good taste, and comes with the promise that it can be molded to what we both want in the future.
  3. While the code base is smaller than in PLT, it is also more consistent and with a coherent sense of aesthetics, being implemented by one man (so far). It also happens to cover the kind of domains for which I'm most in need of libraries, and it also has a bias towards industrial applicability that you can't expect from PLT and its legion of academics and interns (see my discussion of PLT above).
  4. Sitting on top of Gambit does not just mean relatively efficient code (as far as Scheme is concerned); it also means enjoying the portability of its GVM, and one of the GVM's properties is especially interesting to me: its observability.

Observability is the property (whose name I coined in my PhD thesis) whereby you can interrupt the execution of the program and observe it at the level of abstraction of your language (in this case, the GVM). This already allows Gambit to migrate processes from one machine to another, even though the machines may be using completely different backends (C on IA-32, C on AArch64, JS, PHP, Java, etc.). For my thesis, I want to generalize observability from the GVM to arbitrary virtual machines written on top of it for arbitrary languages, with plenty of cool implications (including e.g. Erlang-style robustness; see said thesis). Working on the GVM will save me from having to care about the plenty of CPU- or VM-dependent backends that I would have to deal with if I wanted to write a compiler from scratch or reuse an existing one. I notably tried to read the source of Chez Scheme, which PLT Racket is adopting as a new backend (moving from its own horribly complex yet ultimately slow C codebase); but it is largely inscrutable, and with its tens of backends would require much more work before a prototype is achieved.

I therefore have a lot of personal reasons to adopt Gerbil. I understand that Gerbil in its current state, is no rival to either Racket or Common Lisp (or Haskell or OCaml, or even blub languages), for most people in most situations. Yet there are probably other people for whom it is a better fit, as it is for me; and I invite these people to come join forces and enjoy writing in Gerbil. It's still barebones in many ways, yet already quite a pleasure to work with, at least for me.

Jan. 26th, 2015


Programming on valium

Google AutoValue: what in Lisp would take a few hundred lines max is over 10000 lines in Java, not counting many, many libraries. Just WOW!

Thus, Java has macros too; it's just that they are 10 to 100 times more programmer-intensive than Lisp macros. I feel like I'm back in the dark ages.
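
To give an idea of the difference in scale, here is a toy sketch (not Google AutoValue, and nowhere near its feature set) of an AutoValue-like "value type" as a Lisp macro:

    ;; A few lines of macro buy a keyword constructor, readers, a
    ;; predicate, structural equality (via EQUALP) and a readable
    ;; printed form, all from DEFSTRUCT.
    (defmacro define-value-type (name &rest slots)
      `(defstruct (,name (:copier nil))
         ,@(loop :for slot :in slots
                 :collect `(,slot (error "~S is required." ',slot)
                            :read-only t))))

    (define-value-type money amount currency)
    ;; (make-money :amount 100 :currency "USD")
    ;; => #S(MONEY :AMOUNT 100 :CURRENCY "USD")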

Even for "normal" programming without new macros, a program I wrote both in Java and in Clojure was about 4 times bigger in Java (and that's despite using AutoValue). I also took ten times longer to write and debug the Java program (despite having written the Clojure program before, so no hard thinking whatsoever needed), with a frustrating edit-compile-run cycle many orders of magnitude slower. Part of the difference is my being much more experienced in Lisp than in Java, but even accounting for that, Java is slower to develop with.

The Java code is also much harder to read, because you have to wade through a lot of bureaucracy — each line does less, and so may be slightly faster to read, yet takes no less time to write, debug, modify or test, because of all the details that need to be just right. Yet you must read and write more Java, and it's therefore harder to get the big picture, because there is less information available per screenful (or mindful) and much more noise. The limitation on available information is not just per screenful but also per file, and you find you have to constantly jump through so many files, in addition to classes within a file; this is a lot of pain, even after accounting for the programming environments that alleviate the pain somewhat. Thus the very slight micro-level advantage of Java in readability per line is actually a big macro-level handicap in overall program readability.

Lack of both type aliasing and retroactive implementation of interfaces also means that type abstraction, while possible with generics and interfaces (themselves very verbose, though no more than the rest of the language), will require explicit wrappers with an immense amount of boilerplate, if not reimplementation. This strongly encourages programmers to eschew type abstraction, leading to more code explosion and much decreased maintainability.

Also, because function definition is so syntactically cumbersome in Java, programs tend to rely instead on big functions with a lot of side-effects, which yields spaghetti code that is very hard to read, understand, debug, test or modify — as compared to writing small conceptually simple functions that you compose into larger ones, as you would in a functional programming language.

The lack of tuple types is also a big factor against functional programming in Java: you'll need to declare a lot of extra classes or interfaces as bureaucracy just because you want a couple functions to pass and return a few values together (some people instead use side-effects for that — yuck). You could use a generic pair, but that leads to horrible types with many<layers<of<angle,brackets>>>, which are very hard to read or write, and it doesn't scale to larger tuples; of course, the need to declare types everywhere instead of having them inferred by the compiler means that even with tuples of arbitrary size, you'll need to spell out long unwieldy types more often than you'd like. The ignorant complain about the number of parentheses in Lisp, but just because of the size increase, there are a lot more parentheses in my Java program than in my Lisp program, and if we are to include all curly, angle and square brackets, that makes for another many-fold increase.

Java 8 makes the syntax for functional programs slightly easier, and AutoValue makes it slightly less painful to bundle values together, but even with these improvements, Java remains extremely verbose.

The standard library is horrible, with side-effects everywhere, and a relatively poor set of primitives. This leads to the ugly habit of having to resort to "friend" classes with lots of static methods, which leads to a very different style of invocation and forces more bureaucratic wrapping to give things a unified interface. The lack of either CLOS-style generic functions or Clojure-style protocols means you can't add decent interfaces to existing data structures after the fact, making inter-operation with other people's code harder, whether you decide to adopt your own data structure library (e.g. a pure functional one) or just try to extend existing ones. Lack of multiple inheritance also means you have to repeat a lot of boilerplate that could have been shared with a common mixin (aka trait class).
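
For contrast, here is how CLOS retroactively fits a common interface onto existing data structures, no wrappers needed (a minimal sketch):

    ;; Add an interface to types you don't own, after the fact:
    (defgeneric size (collection)
      (:documentation "How many elements COLLECTION holds."))
    (defmethod size ((c list)) (length c))
    (defmethod size ((c vector)) (length c))
    (defmethod size ((c hash-table)) (hash-table-count c))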

All in all, Java is just as heavily bureaucratic as I expected. It was developed by bureaucrats for bureaucrats, mediocre people who think they are productive when they have written a lot of code for a small result, when better tools allow better people to write a small amount of code for a big result. By analogy with programming languages said to be a variant of something "on steroids", I'd say that Java is a semi-decent programming language on valium. As to what template is being sedated, I'd say a mutt of Pascal and Smalltalk. But at least it's semi-decent, and you can see that a lot of intelligent people who understand programming language design and implementation have worked on it and tried to improve upon the joke of a language that Java initially was. Despite the bureaucracy, the sheer amount of talent thrown at the language has resulted in something that manages to not be bad.

This hard work by clever people makes Java so much better than Python, an attractive nuisance with lots of cool features that lead you into a death by a thousand cuts of small bad decisions that amplify each other. Superficially, Python looks like a crippled Lisp without macros and with a nice toy object system — but despite a lot of very cool features and a syntax that you can tell a lot of time was spent on (yet that still ended up with many bad choices), Python was obviously written by someone who doesn't have a remote clue about semantics, resulting in a lot of pitfalls for programmers to avoid (there again with side-effects galore), and an intrinsically slow implementation that requires a lot of compile-time cleverness and runtime bureaucracy to improve upon.

In conclusion, I'd say that Java is a uniformly mediocre language that will drag you down with bureaucracy, which makes it rank well above a lot of overall bad languages like Python — but that's a very low bar.

Does this rampant mediocrity affect all industries? I'm convinced it does — it's not like these industries are fielded by better people than the software industry. Therefore it's an ever renewed wonder to me to see that the world keeps turning, that civilization endures. "A common man marvels at uncommon things; a wise man marvels at the commonplace." — Confucius

Nov. 22nd, 2014


Personality Types for Mathematicians

When I was a kid, I used to believe that mathematics was all about knowing the rules and following them perfectly (which at least, unlike in other endeavours, was possible), and about carefully planning your strategy to attack and vanquish given problems, and that a good mathematician would be one who would primarily think like that. In terms of MBTI (which I didn't know at the time), that would be an INTJ personality, the Mastermind as Keirsey calls it. Since my dad was a math professor, I thought he had to be like that. But as I grew up, I realized to my surprise that wasn't the case. Indeed, while these activities are essential in Mathematics, and any mathematician must be capable of thinking that way, and while some mathematicians do have this personality (including some friends of my father), my father himself was actually an INFP, a Healer in Keirsey's parlance. What made him love mathematics was the abstract aesthetics of it — how to discover and appreciate beautiful proofs, that only involved intrinsic aspects of the mathematical objects (points, lines, planes, curves, functions, etc.) rather than representation-dependent aspects (such as spatial coordinates in some arbitrary basis, or equations, etc.), and what interesting and beautiful properties those proofs told us about the underlying mathematical structure — the best ones being those that show deep connections between different structures. I find that I love computing the very same way. In terms of individual traits, I suppose that "Introversion" is necessary to focus on abstract mathematical objects (whereas "Extroversion" is more useful in collaborative programming settings), and so is "iNtuition" (which I understand as approaching the world in abstract rather than concrete terms). "Thinking" is important to follow the rules, but "Feeling" is important to appreciate the aesthetics, which may be the greatest heuristic guide you can have in Mathematics. And "Judgment" might be useful for planning problem-solving strategies, but "Perception" is useful for getting a good sense of an unfamiliar world. [Interestingly, while I identify as an ENTP, which Keirsey calls an Inventor, my wife is an INFP and most of my past girlfriends were, or were not far from it — it's the personality type I somehow most relate to, even though I obviously didn't relate enough to my father as a kid, since I didn't understand him at all then.]

Oct. 19th, 2014


Whence Creationist Programming?

Where did I get the idea for "Creationist Programming", the belief that Software is created by a Programmer? I was trying to describe the TUNES approach as Evolutionary, and found it easiest to explain what that meant by contrasting with a Creationist approach... and elucidating what that meant led me to write this essay and later give this presentation.

Indeed, I had been asked to make a short presentation on the essential insight behind TUNES; reflecting (ha!) on that, my closest explanation was that the insight was to apply to computing systems what, in a previous political essay, I had called "dynamic thinking" about political systems. Now, this term "dynamic thinking" is one I made up, and one whose meaning people won't readily understand; and I didn't want to refer people to a controversial political essay when I was trying to explain something completely different and already controversial on its own — I aim to reach the union of people interested in either controversy, not the intersection of people interested in both. And so I looked in my essay for the part of my explanation that would be most familiar to my scientific-minded audience and would make a good starting point. And that was the evolutionary aspect.

Oct. 7th, 2014


The Far Future of Programming: Ems

I had the privilege of reading a draft of Robin Hanson's upcoming book on ems: emulated brains, that with specialized hardware could possibly run thousands or millions of times faster than the actual brain they were once templated from. This got me thinking about what kind of programming languages these ems would use — though most arguments would also apply to any AI whether it is based or not based on such ems. And yes, there will be programming languages in such a future: predictable algorithmic tasks aren't going to write and deploy themselves — the buck has to stop with someone, and that someone will be a sentient being who can be held accountable.

When you're going at 10000 times the speed of a human, computers run relatively 10000 times slower in subjective time. Of course, an em could put itself in sleep mode and be swapped out of the compute cluster during any long computation, so that most computer interactions happen subjectively instantaneously even if they are actually very slow. An alarm on timeout would allow the em to avoid being indefinitely swapped out, at which point it could decide to resume, cancel or background the computation. Still, even if subjectively instantaneous, overly slow computations would disrupt the em's social, professional and personal life. Ultimately, latency kills you, possibly literally so: the latency may eat into the finite allowance of time during which your skills are marketable enough to finance your survival. Overly fast ems won't be able to afford being programmers; and there is thus a limit to how fast brain emulation can speed up the evolution of software, or any creative endeavour, really. In other words, Amdahl's Law applies to ems. So does Gustafson's Law, and programming ems will thus use their skills to develop greater artifacts than is currently possible.

Now, if you can afford to simulate a brain, memory and parallelism will be many orders of magnitude cheaper than they are now, in roughly inverse proportion to latency — so for fast ems, the price ratio between parallelism and latency will be multiplied by this factor squared. To take advantage of parallelism while minimizing latency, fast ems will thus use programming languages that are very terse and abstract, minimizing any boilerplate that increases latency, yet extremely efficient in a massively parallel setting, designed for parallelism. Much more like APL than like Java. Since running the code is expensive, bugs that waste programmer latency will be much more expensive than they are now. In some ways programmers will be experiencing echoes of the bad old days of batch processing with punch cards and may lose the fancy interactive graphical interfaces of today — yet in other ways, their development environments will be more modern and powerful than what we use now. Any static or dynamic check that can be done in parallel with respectively developing or running the code will be done — the Lisp machine will be back, except it will also sport fancy static type systems. Low-level data corruption will be unthinkable; and even what we currently think of as high-level might be low-level to fast em programmers: declarative meta-programming will be the norm, with the computer searching through large spaces of programs for solutions to meta-level constraints — machine time is much cheaper than brain time, as long as it can be parallelized. Programmers will be very parsimonious in the syntax and semantics of their programs and programming languages; they will favor both highfalutin abstraction and ruthless efficiency over any kind of fanciness. If you don't grok both category theory and code bumming, you won't be the template for the em programmer of the future. Instead imagine millions of copies of Xavier Leroy or Edward Kmett or the most impressive hacker you've ever met programming in parallel — there will be no place for second-rate programmers when you can instead hire a copy of the very best to use your scarce em cycles — only the best in their own field or combination of fields will have marketable skills.

At high speed, though, latency becomes a huge bottleneck of social programming, even for these geniuses — and interplanetary travel will only make that worse. Bug fixes and new features will take forever to be published then accepted by everyone, and every team will have to develop in parallel its own redundant changes to common libraries: what to us are simple library changes might be, to fast ems, as expensive as agreeing on a standards document is to us. Since manual merges of code are expensive, elaborate merge algorithms will be developed, and programming languages will be modified if needed to make code merging easier. To reduce the number of conflicts, it will be important to canonicalize changes. Not only will each project have an automatically enforced Programming Style; copies of the very same maintenance ems will be present in every geographical zone to canonicalize bug fixes and feature enhancements of a given library. Software may therefore come with copies of the virtual wetware that is supposed to maintain the software — in a ready-to-code (or ready-to-explain) mood, in a fully deterministic internal state and environment, for complete reproducibility and debuggability. Canonicalization also allows for better caching of results when looking for otherwise expensive solutions to often-used problems.

Because programming itself can be parallelized by temporarily multiplying the number of ems, programming languages will be extremely composable. Modularity, separate compilation, advanced type systems and contract systems to specify interfaces, isolation through compile-time proofs, link-time enforcement or run-time virtualization, the ability to view the code as purely functional (with side-effects encapsulated in monads), etc., will be even more important than they are now. Expressiveness will also be very important to maximize what each worker can do; macros, dependent types, the ability to view the code in direct style (using side-effects and delimited continuations), etc., will be extremely important too. Development tools will manage the transformation back and forth between these two dual styles of viewing software. Thus armed with suitable composability, Conway's Law need not constrain software more than the fact that it's ultimately represented as an expression tree. What's more, if the workers on each subexpression are forks of the worker on the top expression, there can be some coherence of design in the overall system, even over a very large system that currently would have required many programmers with different personalities. In this context, comments may be literally "leaving a note to yourself" — a parallel duplicate self instead of a sequential future self.

As programming is recursively divided into tasks, the programmer becomes his own recursive Mechanical Turk. There is an interesting issue, though, when additional requirements appear while trying to solve a subproblem that requires modifying a higher-level problem: if you let the worker who found and grokked the new requirement survive and edit the problem, this may create a bad incentive for workers to find problems so they may survive, and a problem of prioritizing or merging the insights of many parallel copies of the programmer who each found issues. It might be cheaper to have the subproblem workers issue an explanation for use by the superproblem worker, who will either send updates to other workers, or restart them with an updated subproblem specification. Ultimately, large teams of "the same" programmer mean that coordination costs will be drastically lower than they are currently. Software will thus scale in breadth vastly beyond what currently exists, though in depth it will still be limited to how much a single programmer can handle.

Because the same programmer is duplicated many times, personalizations of the development environment that increase productivity have a vastly multiplied effect. Extreme customization, to the point of reimplementing tools in a style that suits the particular programmer, is to be expected. Because new, young copies of the same programmer can replace old copies that retire or graduate, there is no fear that a completely different person will have to be retrained on those quite personal tools. The newcomer will be happily surprised that everything is just where he wished for it (except when some subtle and important constraint prevented it, which might be worth understanding), and that all the source code he has to deal with fits his own personal programming style. Still, deliverables might have to be in a more neutral style if they are to be shared by multiple programmers with different personalities so that each domain is handled by the most proficient expert — or if they have to be assessed, security-checked, proven correct, etc., by a third party as part of accepting the software before it's deployed in a sensitive environment or duplicated zillions of times.

I am sure there is a lot more that can be foreseen about the far future of programming. As for the near future, it won't be quite so different from what we have now, yet I think that a few of the above points may apply as the cost of bugs increases, including the cost of a competent programmer relative to the size of the codebase.

PS: Robin Hanson is interested in reading more ideas on this topic and ems in general. If you share them soon enough, they may make it to the final version of his book.

May. 13th, 2014


The Great ASDF Bug Hunt

With the release of ASDF 3.1.2 this May 2014, I am now officially retiring not just from ASDF maintenance (Robert Goldman has been maintainer since ASDF 3.0.2 in July 2013), but also from active ASDF development. (NB: ASDF is the de facto standard Common Lisp build system, which I took over in November 2009.) I'm still willing to give information on where the code is coming from and advice on where it might go. I'm also still willing to fix any glaring bug that I may have introduced, especially so in UIOP (indeed, I just committed a few simple fixes (for Genera, of all platforms!)). But I won't be writing new features anymore. (However, you will hopefully soon see a bunch of commits with my name on them, of code I have already written that addresses the issue of syntax modularity; the code was completed and is committed in a branch, but is not yet merged into the master branch, pending tests and approval by the new maintainer.)

Before I left, though, I wanted to leave the code base in order, so I made sure there are no open bugs beside wishlist items, I dumped all my ideas about what more could be done in the TODO file, and I did a video walkthrough of the more subtle parts of the code. I also wrote a 26-page retrospective article on my involvement with ASDF, a reduced version of which I submitted to ELS 2014. There, I gave a talk on Why Lisp is Now an Acceptable Scripting Language.

The talk I would have liked to give instead (and probably should have, since I felt like preaching to the converted) was about the great ASDF bug hunt, which corresponds to the last appendix of my paper (not in the reduced version), a traverse across the build. It would have been a classic monster hunt story:

  • The setting is a seemingly peaceful and orderly little village on the (programming) frontier. It is a familiar old place, not a big one, but a good, comfortable one. Though it is not perfect, and monsters roam at night, it looks fundamentally healthy. (That would be ASDF, in daily use by tens or even hundreds of Common Lisp programmers, despite bugs that catch the unwary.)
  • The protagonist is the (bug) hunter. (I should tell the story in the first person, but for now, third person will do.) In the beginning he is young and naïve — but capable (of improvement). When he comes into town, our protagonist kicks out a few baddies that were victimizing the population; soon enough he replaces the ailing sheriff. (That would be me becoming ASDF maintainer in 2009 when Gary King steps down, after fixing some pathname related bugs.)
  • Under the new sheriff, monsters big and small are hunted down. The inhabitants are not afraid anymore, though some of them remain grumpy. (That's me fixing bugs with the help of many other programmers, while the unwary remain blissfully ignorant of having been saved.) The protagonist builds fortifications, and finds he has to extend the city limits to make it easier to defend, adding new buildings along the way. (That would be improving the ASDF design to be more robust, and adding features.) Often he has to hunt monsters that he himself let in, sometimes after they hurt citizens. (That's when I introduce bugs myself, and sometimes fail to fix them before release.) The protagonist feels guilty about it and learns to be a better sheriff. (That's when I get to deeply respect the regression test suite.) But by and large, his endeavor is a success. At long last, he thinks the place is now safe, and that he knows everything about the town and its now former monsters. — My, how wrong he is! (That's me at the time of ASDF 2.26)
  • Now, a monster has been terrorizing innocent citizens for years. No one has seen the monster, but the way he picks his victims and what he does to them is characteristic. (That's the old bug whereby changes in dependencies are not propagated correctly across modules.) The protagonist's best buddy has found a good way to protect homes against the monster, but it still roams in the streets at night. (That's when Robert Goldman fixes the bug and gets dependency changes to trigger rebuild across modules within a system, but dependency changes still fail to trigger rebuild across systems.) Our sheriff, having finally vanquished all other monsters, and having no other foe left in town, sets off to catch this one last monster. And so, he has to enter hitherto unexplored caverns deep below the village, a place abandoned long ago, where the creature lurks. (That would be the ASDF traverse algorithm.) And of course that's when the story turns ugly.
  • Our protagonist thinks the monster will be an easy catch, what with all his experience and technology. But it's actually a long, hard fight to the death. It's the toughest enemy ever. (And that's the story of writing ASDF 2.27, that eventually becomes ASDF 3, after months of struggle.)
  • Along the way, many times, the protagonist thinks he has almost won, but not at all; many times, he thinks he is lost, but he keeps at it. (More code was written in the year or so since ASDF 2.26 was released than in the entire decade before.) Quickly though, he realizes that the monster he was chasing is but a henchman of a bigger monster that has been ruling over the village all along. The apparent orderliness of the village was but a lie, all that he thought he knew was fake! (That's the fundamental algorithm behind ASDF having deep conceptual bugs.) Happily, a mysterious wise man left him cryptic instructions on how to defeat the monster before he even became a sheriff, though he only understands them when comes the confrontation. (That would be Andreas Fuchs and his POIU, the maintenance of which I had also inherited, and that brought all the essential insights just at the right moment.)
  • In the end, the sheriff vanquishes his foes and defeats the great monster for good, but not until he has learned to respect his enemy. And his real prize is in the lessons he learned and the final illumination he reaches. (And I hope you too can enjoy this illumination.)

The final illumination is that inasmuch as software is "invented", it isn't created ex nihilo so much as discovered: Daniel Barlow, who wrote the initial version of ASDF, obviously didn't grok what he was doing, and can't be said to have created the ASDF algorithm as it now stands, since what he wrote had such deep conceptual flaws; instead, he was experimenting wildly, and his many successes overshadow and more than redeem his many failures. I, who wrote the correct algorithm, which required a complete deconstruction of what was done and a reconstruction of what should have been done instead, cannot be said to have created it either, since in a strong sense I "only" debugged Daniel's implicit specification. And so, the code evolved, and as a result, an interesting algorithm was discovered. But no one created it.

An opposite take on the same insight, if you know Non-Standard Analysis, is that Daniel did indeed invent the algorithm, but specified it with a non-standard formula: his formula is simple (a few hundred lines of code), and captures the desired behaviour in simple enough cases with standard parameters (using SBCL on Unix, without non-trivial dependency propagation during an incremental build), but fails in non-standard cases (using other implementations, or dealing with timestamp propagation). My formula specifies the desired behaviour in all cases with all the details correct, and is much more elaborate (a few thousand lines of code), but is ultimately only a Standardization of Daniel's formula — a formal elaboration without any of Daniel's non-standard shortcuts, one that runs correctly in all cases and not just simple ones, but, in the end, a formula that doesn't contain any information not already present in Daniel's version, only making it explicit rather than implicit.

The two interpretations together suggest the following strategy for future software development: there is a lot of untapped potential in doing more, and more daring, experimentation, like Daniel Barlow did, to more quickly and more cheaply discover new interesting designs; and conceivably, less constrained non-standard representations could allow for more creativity. But this potential will remain unrealized unless Standardization is automated, i.e. the automatic specification of a "standard" formal program from a "non-standard" informal one; a more formal standard representation is necessary for robustly running the program. This process could be viewed as automated debugging: as the replacement of informal variables by sets of properly quantified formal variables; as an orthogonal projection onto the hyperplane of typed programs; as the search for a solution to a higher-order constraint problem; as program induction or machine learning; etc. In other words, as good old-fashioned or newfangled AI. This process itself is probably hard to formalize; but maybe it can be bootstrapped by starting from a non-standard informal specification and formalizing that.

Apr. 27th, 2009

eyes black and white

Confusing constants and variables in Computer Programming

I am always amazed when people fail to distinguish between constants and variables. I am all the more amazed when the victims of such confusion are the otherwise brilliant implementers of programming languages. You'd think that if anyone knows the difference between a variable and a constant, it would be a programming language implementer.

Read more...

Feb. 19th, 2009

eyes black and white

Creationist programming vs Evolutionary programming - Epilogue

Read more...

You can now find on my web site my essay From Creationism to Evolutionism in Computer Programming, subtitled The Programmer's Story: From Über-God to Underdog. Compared to the previous MSLUG talk (see slides), the essay contains a vastly expanded second part on the prospective future of the evolution of programming, and my vision for TUNES. Enjoy!

Abstract: Programming tools imply a story about who the programmer is; the stories we tell inspire a corresponding set of tools. Past progress can be told in terms of such stories; future progress can be imagined likewise. Making the stories explicit is a great meta-tool... and it's fun!

Feb. 10th, 2009

eyes black and white

Who's responsible for that moving part?

Common Lisp pathnames have long been a source of frustration for users and implementers alike. I'll argue that the deep reason for this frustration is that the Common Lisp standardization process resulted in declaring as fixed an interface to a moving piece of functionality, preventing users' needs from being addressed, with no one in charge of resolving the discrepancy.

Read an overlong rant on the pitfalls of programming language standardization...

Once you understand programming language documents not as mere technical documents, but as social tools to coordinate people, you can critique them from a new perspective. Who promises to whom to take responsibility for what, in exchange for what? What about the contract brings value to both parties, and what doesn't? These are questions you should be asking when working on programming language design. Contracts are useful when they promote an efficient division of labor, whereby those most competent in a specialized field easily take responsibility for work in said field, to the benefit of others, at low cost. Contracts are harmful when they create unnecessary work, when they assign responsibilities to the wrong people, and when they impede future improvements in the division of labor. Design your contracts carefully; include what works, exclude what doesn't.

Jan. 28th, 2009

eyes black and white

Why Language Extensibility Matters

If you neglect some aspect of computation in designing your programming language, then the people who actually need that aspect will have to spend a lot of time dealing with it by themselves; and if you don't let them do it by extending your language, they'll have to do it with inarticulate barkings rather than civilized sentences.

Read more...

Jan. 21st, 2009

eyes black and white

Curry-Howarding evaluation traces

In the conclusion of his article "When Are Two Algorithms the Same?", Yuri Gurevich (who BTW fails to cite Mitch Wand on FEXPR) notes:

In addition to the Curry-Howard correspondence, there is another connection between proofs and computation. If we take an algorithm for computing a function f and we run it on input x obtaining output y, then the record of the computation can be regarded as a proof, in an appropriate formal system, of the equation f(x) = y.

But as Philip Wadler remarks, for each programming language and type system, there is a Curry-(deBruijn)-Howard isomorphism between that language and the proofs of some system of logical propositions. The connection that Yuri noticed, and that I incidentally used as the basis for my 1998 mémoire de DEA (~ Master's Thesis), is thus not a wholly novel connection, but a particular case of such a generalized Curry-Howard isomorphism: the one for the type system whereby an evaluation trace or tactic T is of type E↓ iff it shows how the evaluation of expression E can terminate.

The computational content of the proof that E terminates is a concrete evaluation trace. In the proper framework, an abstract evaluation tactic or strategy may be of type E↓ if it can be reduced to such a concrete evaluation trace in the "type"-directed context of evaluating E. Thus, if you're only considering expressions from a system that has strong normalization properties, then the trivial strategy "reduce wherever you can" has type E↓ for all E in the system, which isn't very informative. If on the other hand you're considering a richer system where termination is not decidable, then you can have informative proofs that include important information on what choices to make at various points of the evaluation attempt. If the evaluation of an expression E doesn't terminate, there is indeed no terminating evaluation trace and no computational content. To achieve a full intuitionistic logic, you can conceivably extend your computation system with expressions such as E→F that terminate iff termination of E implies termination of F, yielding a function that takes an evaluation trace of E as input and outputs an evaluation trace of F. Use ⊥ for F and you have the usual negation — and of course, just because E doesn't terminate doesn't necessarily imply that E→⊥ terminates, which is the stronger statement that you can actually prove, from within the system, that E doesn't terminate. Hopefully for your system, evaluating the Gödel sentence Y(G↦G→⊥) won't terminate.
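
To make that correspondence concrete, here is a minimal sketch in Lean 4. This is my own illustration, not anything from the mémoire; all the names (Expr, Eval, Terminates, Implies) are made up for the occasion, and the expression language is deliberately trivial:

    -- A toy expression language: numerals and addition.
    inductive Expr where
      | const : Nat → Expr
      | plus  : Expr → Expr → Expr

    -- A big-step evaluation derivation: an inhabitant of `Eval e v` is
    -- exactly a trace recording how `e` evaluates to `v`, i.e. the
    -- "record of the computation" regarded as a proof of f(x) = y.
    inductive Eval : Expr → Nat → Type where
      | const (n : Nat) : Eval (.const n) n
      | plus {e₁ e₂ : Expr} {n₁ n₂ : Nat} :
          Eval e₁ n₁ → Eval e₂ n₂ → Eval (.plus e₁ e₂) (n₁ + n₂)

    -- "E↓": a termination witness, whose computational content is a
    -- concrete value together with a concrete evaluation trace.
    def Terminates (e : Expr) : Type := (v : Nat) × Eval e v

    -- "E→F": a function turning a termination witness for E into one for F.
    def Implies (e f : Expr) : Type := Terminates e → Terminates f

    -- A concrete proof of (1 + 2)↓, carrying its evaluation trace.
    example : Terminates (.plus (.const 1) (.const 2)) :=
      ⟨3, .plus (.const 1) (.const 2)⟩

This toy system is strongly normalizing, so such witnesses always exist and are easy to produce; as noted above, the types only become informative in a richer system where termination is undecidable.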

NB: if you're willing to stretch it a bit more, E itself is of type ⊳V iff some trace or tactic shows it evaluates to value V; but then your proof object is trivial, and you have moved all the intelligence into the opaque evaluation metasystem, which makes the correspondence useless, unlike the previous point of view, which is actually useful in enabling the same tools to do more things (or the same things to be done by fewer tools).

Sep. 17th, 2004

eyes black and white

Worse is Better vs the Right Thing(tm)

Perl, Python, PHP -- to me, they are the Pitiful Programming Perversions. And yet, it moves! As far as running goes, these hacks are certainly better than (e.g. my own) vapourware. Is it that sometimes Worse Is Better? Or maybe rather we perfectionists should overcome the very same paralysis through analysis that makes us more than others aware of the Right Thing(tm). We should get things going, to begin with, so as to someday be able to establish that Right is Right. Seek perfection not in a one-shot static product, but in an incremental dynamic process. Forswear the State, embrace the Tao. Yup, that's it. Just do it. And I mean, NOW!
