The sitch: there's a fucking gigantic grid implementing a universal Turing machine using Conway's Game of Life, and it's been set up to play chess against itself. So we've got: (a) the Life game, probably featuring (b) various glider-like things, some of which constitute (c) the Turing machine and some of which constitute its tape, which, as it runs, is (d) storing chess-related data and running chess-related algorithms. And more such things, no doubt.
The scale of compression when one adopts the intentional stance toward the two-dimensional chess-playing computer galaxy is stupendous: it is the difference between figuring out in your head what White's most likely (best) move is and calculating the state of a few trillion pixels through a few hundred thousand generations.
Furthermore, "from the perspective of one who had the hypothesis that this huge array of black dots was a chess-playing computer, enormously efficient ways of predicting the future of that configuration are made available"—that is, not only can you figure out some description of what's going to happen more quickly than the person who's going through updating each cell in accordance with the rules of Life, you can also translate that description back into Life terms. And, apparently, this is comparatively, well, I don't know if "easy" is right, since updating a few trillion Life cells is more time-consuming than difficult (why one has to do this in one's head is not really clear to me, unless it's to make the task impossible to carry out), but simple, or fast, or efficient, or something.
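For concreteness, this is all the person updating cell by cell has to work with: the update rule itself. A minimal sketch in Python (the sparse set-of-live-cells representation is my own choice, not anything in the example):

```python
from collections import Counter

def life_step(live):
    """Advance a Game of Life configuration by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation iff it has exactly 3 live
    # neighbors, or has exactly 2 and is already live (B3/S23).
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker: three cells in a row, oscillating with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

Predicting the chess computer at this level means applying something like `life_step` a few hundred thousand times over a few trillion cells; the intentional stance is supposed to be the shortcut past all of that.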
The claim seems wildly implausible to me. Suppose you're confronted with the Life/UTM/chess-playing grid, and you know only that it implements Life. You can't adopt the intentional stance towards it until you have some idea what it's doing beyond just playing Life. I don't see any real reason to grant this, but even if you grant that someone comes along and tells you "oh, that implements a universal Turing machine, and it's playing chess with itself", you still can't do anything with that information to predict what the move will be until you know the state of the board. And how are you going to determine that? You don't have any predictions yet; you have a massive puzzle. You'll have to figure out which parts are the program and which parts are the data (and there's no guarantee that the "tape" part of the UTM will be contiguous or at all tape-like, of course), and how it all works, and simply figuring out whose move it is is likely to be extraordinarily difficult. Consider how many generations of Life might correspond to advancing the UTM tape one cell, how many tape manipulations might correspond to a single move in the game, and how complicated the data structures a Turing machine uses to represent the state of the board might be (I have no idea myself, but I assume the method is not perspicuous). Even if someone came along and told the observer not only that it's playing chess with itself but also what the current state of the board is (in which case it's hard to see what the program itself has to do with anything anymore), he's still not in much of a position to predict future configurations of the Life board, not even those extremely few that do nothing but represent the state of play immediately after the move.
(Think of how many moves, how far into the future of the game, the average chess algorithm considers; think of the scoring algorithm used, how everything's updated, processed, etc.; then think of how many different Life-board configurations it will take to go through all of that. Knowing that White will end up castling queenside does absolutely no good in predicting any of those configurations.)
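To put a shape on "the average chess algorithm": nearly all of them are variations on fixed-depth minimax search over a game tree. A generic skeleton, in negamax form (`moves`, `apply_move`, and `score` are placeholders of mine standing in for a real game's rules and evaluation function):

```python
def minimax(state, depth, moves, apply_move, score):
    """Value of `state` for the player to move, searching `depth` plies.
    `score` must evaluate a position from the mover's perspective."""
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)
    # Negamax: my best value is the best of the negations of the
    # opponent's values one ply down.
    return max(-minimax(apply_move(state, m), depth - 1,
                        moves, apply_move, score)
               for m in legal)
```

At chess's branching factor of roughly 35, even a depth-6 search visits on the order of 35**6 (about 1.8 billion) leaf positions before pruning; and every one of those position evaluations is itself some long stretch of tape manipulation, and so some astronomically longer stretch of Life generations.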
It's sort of like if I told you that this is how Emacs displays some compressed data and expected you to tell me what it is (I've taken off a few characters at the beginning that would immediately identify the compression algorithm used):
h91AY&SYÀ\237^K§^@^@^M^?ÿb^PH@QÁd pBHt^@@^P\200@^H^@^L^@!^@ ^@^B^@^P^@ ^@Tai\223 db^LL\232^Z`À^Z^L\206\215^CG¨^G\210\235\2342A-ìBþI^@-\203å\215?M @\206ì^[òÞ\236"BZf*79ëtÅÜ\221N^T$0'ÂéÀ
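The guessing game is real, by the way: strip the magic bytes from any compressor's output and what's left is byte soup. A toy illustration with zlib (the payload is my own, and I'm not claiming zlib is the algorithm behind the blob above):

```python
import zlib

payload = b"1. e4 e5 2. Nf3 Nc6"   # a chess-flavored payload of my own
blob = zlib.compress(payload)

# The first two bytes are the zlib header (0x78 0x9c at the default
# compression level); they, not the body, identify the format.
header, body = blob[:2], blob[2:]
print(header.hex())  # -> 789c
```

Everything after those two header bytes is exactly the sort of gibberish printed above; with the header gone, even naming the format is guesswork.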
Easy-peasy, right? Or it's like arguing that we can predict behavior on the basis of a complete neurophysiological description of a person, and translate behavior back into such descriptions. Of course we aren't normally confronted with such information; we aren't normally even confronted with uninterpreted movements (what a puzzle that would be! Then we would really have to try to interpret such things as behavior of such and such a kind). We see people behaving in certain ways and want to predict other ways in which they might behave. (I think it's interesting that when Haugeland in Pattern and Being changes the example to allow for interactivity, he also changes it from a cellular automaton to a more congenial system, such as a computer that accepts opponents' moves via keyboard input and continuously displays the current position on a screen (p 61 in Dennett and His Critics). That is a huge change! Even allowing, as Haugeland does, that the representations on the screen may not look like conventional chess pieces, they have the advantage that they'll be stable: you want to know where the black king is, there's a representation of it, right there; you can point at it, the representation is on the board, its position relative to the other pieces can be taken in relatively easily, etc.; in fact, the way these representations are positioned and moved is constitutive of their being chess pieces. There is no reason to imagine that anything of that sort will be true of the Life board. What counts as "the black king"? I don't really see any reason to suppose that anything will, to be honest; certainly not any one thing (where a "thing" might be a geographical region of the Life board, a higher-level construct on the board like a glider, a region of whatever it is that counts as the tape, a pattern on the tape, whatever). The change Haugeland makes might make the example more plausible, and a closer parallel to the situation with people, but it also seems to make it basically different. The design and physical stances don't really make sense with Haugeland's example; or anyway it seems that the physical stance would now concern the creation of the pixels by the monitor, and while it then really is easier to predict what will happen on the monitor via the intentional stance, the physical layer has changed dramatically. The monitor, after all, isn't playing chess. Uh, or something: this part of this post is obviously not really thought through very well. But I do think that something's fishy about that change.)
Haugeland also says in a footnote that "whether anyone could, in fact, recognize them [chess pieces, locations, etc.] as implemented in the Life plane is a separate question; but the essential point could be made as well with a less formidable implementation" (p 68 n 8), and that may well be true, but given that the example was introduced as one in which this stance stuff is supposed to reap big benefits, that is certainly an odd proviso to note. It is anyway desirable that the examples you introduce in service of a point actually seem plausibly to make that point.