A realization biking home: if "Naïve Action Theory" has it right, it is not only correct but plausible to say that practical deliberation culminates in action.
Posted at 07:12 PM | Permalink | Comments (0)
M. Thompson:
This suggests, though, that we know what practices and species are before we come to advance such claims. Do we take the concepts over, maybe, from sociology in the one case, and biology in the other? But we are practicing philosophy, or mean to be, and so if we accept the equation, the 'wider context' of vital description is the species, then we must, in Professor McDowell's phrase, 'enter it on the left side'. Vital description of individual organisms is itself the primitive expression of a conception of things in terms of 'life-form' or 'species', and if we want to understand these categories in philosophy we must bring them back to that form of description.
I find Professor McDowell's phrase, or its application here anyway, confusing. Clearly what Thompson means is: "the species is the 'wider context' of vital description" (or perhaps more explicitly: "the species just is &c"), and what makes it clear that that's what he means is precisely the fact that he thinks that, in the equation as he has it written, the left-hand term should be interpreted and the results assigned to the right-hand term. Conventionally, though, the term being assigned to is put on the left. One could write something like "x + 2 = f(x)" and be understood, and indeed to understand what's going on one's reader would have to enter the equation on the left side. But that would not be because the reader was doing philosophy; it would be because one had expressed oneself confusingly.
And indeed: I was confused.
Posted at 12:03 AM | Permalink | Comments (3)
An interpreter for everyone's favorite obfuscated functional programming language, Unlambda. The interpreter is actually mildly useless in several respects: (a) building the tree for large Unlambda programs takes a surprisingly long time; (b) since Python doesn't eliminate tail calls, your program will either hit the recursion limit if the limit is set too low, or segfault if it's set too high (there can be a "just right", for instance for this quine, but that's really just luck); (c) it's an Unlambda interpreter, so how useful could it be?
But it is admirably short, and was written in an admirably short amount of time.
Or rather, it's not really the lack of tail recursion that's the problem; it's that the stack will grow and grow regardless. An only slightly less elegant alternative doesn't have that problem.
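For what it's worth, one standard way to get that effect in Python is a trampoline: recursive calls return zero-argument thunks instead of calling deeper, and a small driver loop runs the thunks on a flat stack. A sketch of my own (the `trampoline` and `fact` names and the factorial example are illustrative, not taken from the interpreter or the linked alternative):

```python
def trampoline(fn, *args):
    """Call fn, then keep invoking zero-argument thunks until a real value appears."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def fact(n, k=lambda x: x):
    # CPS factorial that returns thunks instead of recursing directly,
    # so the Python call stack never gets more than a few frames deep.
    if n == 0:
        return lambda: k(1)
    return lambda: fact(n - 1, lambda r: lambda: k(n * r))

print(trampoline(fact, 10))         # 3628800
print(trampoline(fact, 5000) % 10)  # 0 -- well past the default limit of 1000
```

The stack stays flat because each "recursive call" immediately returns a thunk to the driver loop rather than pushing another Python frame.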
Posted at 10:05 AM | Permalink | Comments (0)
This and this (follow the embedded SN-monad link) are the two most comprehensible descriptions of monads I've yet read, and I think part of the reason is that, since they're both in Scheme, you avoid Haskell's (to the uninitiated) confusing type-related syntax. Also, the first one explicitly notes that its author has, mercifully, omitted the math.
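For those without Scheme either, the basic monadic shape (a `unit` that injects a plain value, and a `bind` that threads it through functions) can be sketched in Python; this Maybe-style example is my own illustration, not taken from either linked post:

```python
# A Maybe-style monad sketch: None plays "Nothing", any other value is "Just x".
def unit(x):
    return x               # inject a plain value into the monad (trivial here)

def bind(mx, fn):
    # Thread a possibly-absent value through fn, short-circuiting on None.
    return None if mx is None else fn(mx)

def safe_div(a, b):
    return None if b == 0 else a / b

# Chain two divisions; a failure anywhere propagates with no explicit checks.
ok = bind(bind(unit(12), lambda x: safe_div(x, 3)), lambda y: safe_div(y, 2))
bad = bind(bind(unit(12), lambda x: safe_div(x, 0)), lambda y: safe_div(y, 2))
print(ok, bad)  # 2.0 None
```

The point of the pattern is that the plumbing for failure lives entirely inside `bind`, so the chained computation reads as a straight line.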
Moderately relatedly, I had the following thought. Suppose you have a recursively defined function, kind of like the Fibonacci series, thus: f(0) = 1, f(1) = 1, f(n) = f(n-1) + 2*f(n-2). Then you could describe the recurrence relation postfixwise thus: n 1 - f 2 n 2 - f * + [it just occurred to me that the way this is written presupposes that with binary operations first you pop the right operand and then the left operand, and I have no idea if that's the way it's usually done, but oh well]. Then it seems that there's a more or less straightforward way to read a continuation-passing-style version of the computation off the postfix description, if you imagine that the syntax is arg1 arg2 … argn op cont. Go along, pushing the values, until you reach an n-ary operation; pop n values, and instead of pushing the result, pass it to the continuation (obvs. values passed as arguments to continuations will have to count as being on the stack). Then you get: n 1 - (\x -> x f (\x' -> n 2 - (\x'' -> x'' f (\x''' -> 2 x''' * (\x'''' -> x' x'''' + k))))). Moving the functions to the front and replacing the '+', '*', '-' with CPS analogues kp, kt, km (for plus, times, minus) gets you something that actually works:
-- with the CPS analogues defined as:
km a b k = k (a - b); kp a b k = k (a + b); kt a b k = k (a * b)
f 0 k = k 1
f 1 k = k 1
f n k = km n 1 (\nm1 -> f nm1 (\fn1 -> km n 2 (\nm2 -> f nm2 (\fn2 -> kt 2 fn2 (\fn22 -> kp fn1 fn22 k)))))
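The push-and-pop reading described above can also be run directly as a small stack evaluator; here is a Python sketch of my own, following the pop-the-right-operand-first convention from the bracketed aside:

```python
# Evaluate the postfix recurrence "n 1 - f 2 n 2 - f * +" with an explicit stack.
# Binary operators pop the right operand first, then the left, as in the aside.
def f(n):
    if n in (0, 1):
        return 1
    stack = []
    for tok in "n 1 - f 2 n 2 - f * +".split():
        if tok == "n":
            stack.append(n)
        elif tok == "f":
            stack.append(f(stack.pop()))   # unary: recurse on the popped value
        elif tok in "+-*":
            right, left = stack.pop(), stack.pop()
            stack.append({"+": left + right,
                          "-": left - right,
                          "*": left * right}[tok])
        else:
            stack.append(int(tok))
    return stack.pop()

print([f(i) for i in range(8)])  # [1, 1, 3, 5, 11, 21, 43, 85]
```

Both readings compute the same values, which is the correspondence at issue: the evaluator's stack discipline is exactly the order in which the CPS version binds its intermediate results.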
This occurred to me because you always see in talk about CPS the remark that it turns expressions "inside-out" because "the innermost parts of the expression must be evaluated first" (presumably the article means written first, because of course the innermost parts have to be evaluated first in any notation). But that's also the case with postfix notation, which leads me to conjecture that Forth programmers and HP calculator users find CPS natural.
Posted at 07:40 PM | Permalink | Comments (0)
I just saw a throat-singing unicyclist.
Posted at 02:53 PM | Permalink | Comments (3)
In a feat of improbable organization surpassed only by my still having notes from the classes I took from László Babai (Discrete Math and Algorithms & Combinatorics—I may have learned more in those two classes than in any other I took as an undergrad; now, a mere three years after I took the former, I can't tell whether it's gotten significantly harder or whether I've forgotten not just how to do some classes of problems, and the meanings of some terms, but also that I ever knew them—a mixture of both, probably), I still have the syllabus from a class I didn't take, as well as some handouts from the first meeting: The Philosophy of Wilfrid Sellars, taught by Jim Conant and Michael Kremer. It is one thorough syllabus, and one of the handouts is also a doozy.
Now, of course, I consider this an incredible missed opportunity, but at the time I wasn't able to summon up much excitement. Hélas.
Posted at 11:42 PM | Permalink | Comments (5)
Boy, do I hate these clocks! The fine people at Movado know well that you don't need numbers on the face as long as you've got hands, so the idea that this clock expresses some sort of saucy insouciance* about the keeping of time is somewhat absurd. The only thing the numbers on the bottom express is the desire of the clock's possessor to be seen as someone who doesn't care about his worldly obligations and therefore need not be in thrall to the passage of time—but who really is, and wants to be able to track the movement of the hands across the face of the clock.
Since it's the hands, and not the numbers, that enable one to tell time, I propose an alternative clock. It would have numbers, since it must be apparent what is being denied. And would be round, of course. But it would have no hands, no hands at all.
We can, and should, take this further. The clock is to have no hands, but it should have three cylinders at the center, batteries at the back, and a softly whirring motor in between, causing one of the cylinders to rotate a full revolution in 24 hours, one in one hour, and one in one-sixtieth of an hour. It would be as close as possible to being a wall clock, and lack only the one feature that would actually make it possible to tell time with it. Indeed, it would just be tacky—pointless, even—without the battery, motor, and rotations.
*aka "insauciance".
Posted at 06:25 PM | Permalink | Comments (5)