Retrograde Dynamic Tablature Assembly

Voluntarily Eschewing the Meta-game
January 9, 2012, 12:18 pm
Filed under: Uncategorized

I see what you did there, and you know what I’m trying to do too, but neither of us calls the other on it. Why not? We both know the rules of the game, and we know which strategies work best within it. To know these things is to know the meta-game.

But neither of us goes there. We feign ignorance of the meta-game because it’s too complicated. Inside the game, we know what we need to do; things are structured. Moving to the meta-game requires new strategies, with no comfortable rules to hang onto. In other words, we don’t want to open that can of worms.

Is this a stable strategy? No, it can’t be. Eventually someone is going to man up and play the meta-game, and if he can develop a strategy (any strategy at all), then he’ll always win against your wild flailing. You know the meta-game exists, but you don’t know how to play it.

Unfortunately, that’s an arms race, and you have to play it if you want to win. That’s the meta-meta-game.


Update to “Things I Don’t Know”
November 10, 2011, 1:55 pm
Filed under: computer science, lists, programming

It has been nearly four years since I wrote a post entitled “Things I Don’t Know”. After all this time, what is my progress on that list? Admittedly, I haven’t gone after the items on the list systematically. As usually happens with a person’s interests, some get followed up on and others fall by the wayside over time. In my case, I got more interested in programming language theory and lost pretty much all interest in algorithms and computational complexity.

That is cool, though, because when you’re doing your degree, the wide world of “Computer Science” at first seems so vast that you’re interested in everything. As your knowledge catches up with the present day, the state of any given field becomes more or less interesting to you, and for me, modern programming language theory is far more interesting than modern computational complexity. And so I tended to lean in that direction in my studies.

But down to the list…

1.) “What monads are” This one was cleared up pretty much when I learned Haskell. It’s of course very easy to understand the basic monad laws and therefore “understand what a monad is”, but if you aren’t actually using them to do some programming, it’s hard to build any intuition for what they are useful for. The basic idea is that a monad is a wrapper type, or environment, for some other data type. The basic operations of a monad are bind and return. Return lets you wrap a value in the monad environment, and bind lets you chain together operations on the underlying data type while preventing that data type from escaping the monad. What bind does to chain the operations together is what makes the different monads interesting, and where they get their power. Monads are good for IO in Haskell because Haskell is lazy, so sequential execution of side effects is not otherwise guaranteed. The IO monad helps by imposing an implicit sequence on actions: in order for the last IO action to be evaluated, the second-to-last must be evaluated first, and so on, so that they are all causally linked and therefore always executed in order. The IO monad also helps by providing no way to pull values out of the IO environment and back into pure code (contrary to my previous understanding, side effects aren’t avoided, just contained to the IO monad).
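The wrapper-plus-bind idea can be sketched outside Haskell too. Here is a minimal Maybe-style monad in Python (the class and method names are my own invention; Haskell’s Maybe is the canonical version): `unit` plays the role of return, and `bind` chains operations while keeping failure from escaping the wrapper.

```python
class Maybe:
    def __init__(self, value, ok=True):
        self.value = value
        self.ok = ok

    @staticmethod
    def unit(value):               # Haskell's `return`: wrap a plain value
        return Maybe(value)

    def bind(self, f):             # Haskell's `>>=`: chain an operation
        if not self.ok:
            return self            # failure propagates; no value escapes
        return f(self.value)

def safe_div(x, y):
    return Maybe(None, ok=False) if y == 0 else Maybe.unit(x / y)

# Any failing step aborts the rest of the pipeline.
bad = Maybe.unit(10).bind(lambda a: safe_div(a, 2)).bind(lambda b: safe_div(b, 0))
good = Maybe.unit(10).bind(lambda a: safe_div(a, 2)).bind(lambda b: safe_div(b, 1))
```

What bind does between steps (here: short-circuiting on failure) is exactly the part that varies from monad to monad.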

2.) “What currying is” I admitted in the original list that a brief trip to Wikipedia would have answered this one for me, but it turns out I only really got around to understanding it when I took a class on programming languages and we were learning the functional language ML. Currying is essentially a way to turn a function of multiple arguments into a function factory that takes the first argument and returns a new function that takes the next argument, and so on. Only once all of the arguments are absorbed does the final result come out. Currying is possible in any language that has closures, but functional languages like Haskell and ML give you some very nice syntactic sugar for creating curried functions. In fact, it’s so easy that it’s rare for a function to be left uncurried in Haskell (ML is a bit more mixed on the issue, with some standard library functions like fold taking uncurried helper functions).
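Since closures are all you need, the idea fits in a few lines of Python (the helper names here are invented for illustration): `curry2` turns a two-argument function into one that absorbs a single argument at a time.

```python
def curry2(f):
    # Return a function that takes x and yields another function awaiting y.
    return lambda x: lambda y: f(x, y)

def add(x, y):
    return x + y

add3 = curry2(add)(3)   # partially applied: a new function awaiting `y`
```

In Haskell or ML this shape is the default, so partial application like `add3` comes for free without any wrapper.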

3.) “What the minimal set of features for a Turing complete language are” It’s interesting that I put it this way, because it would be answered in almost exactly that form when I took the aforementioned programming languages course. Of course, the pedantic answer is that if your language can simulate a Turing machine, then it is Turing complete. That is true, but not useful for evaluating a specific language quickly. It turns out that, basically, you need branching and looping (in an imperative language like C). Branching allows the machine to take different computation paths based on dynamic information, and looping allows the machine to return to a previous state instead of being forced to move ever closer to the end of the program. In paradigms other than the imperative one, instead of ifs and whiles we look for features with similar abilities. In the lambda calculus, these capabilities come from first-class functions and recursion. In object-oriented languages like Featherweight Java (a calculus that strips object-oriented languages to their core), you get branching from method overriding in subclasses, and looping again from recursion. If someone comes up with some radically different programming concept, looking for branching and looping is a good heuristic.
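To make the branching-plus-looping point concrete, here is a toy machine sketched in Python (the instruction format is invented for illustration): all it has is increment, decrement, and a conditional jump, and because jumps can go backwards, it has both branching and looping. Counter machines of this general flavor are known to be Turing complete given unbounded counters.

```python
def run_counter_machine(program, counters):
    pc = 0
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            counters[op[1]] += 1
            pc += 1
        elif op[0] == "dec":                      # decrement, floored at zero
            counters[op[1]] = max(0, counters[op[1]] - 1)
            pc += 1
        elif op[0] == "jz":                       # branch: jump if counter is zero
            pc = op[2] if counters[op[1]] == 0 else pc + 1
    return counters

# Move the contents of counter 0 into counter 1. The final `jz` tests
# counter 2, which stays zero, making it an unconditional backwards jump.
prog = [("jz", 0, 4), ("dec", 0), ("inc", 1), ("jz", 2, 0)]
result = run_counter_machine(prog, {0: 3, 1: 0, 2: 0})
```

Strip out either the conditional jump or the ability to jump backwards and the machine can no longer revisit a state, which is exactly the heuristic described above.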

4.) “How to program in Smalltalk, Haskell, Scheme, Objective-C, OCaml, Tcl/Tk, Brainfuck, Befunge, and Python.” Since the original post, I have learned to program in Haskell and Python quite well, to the point where I consider them both among my top three preferred languages. Smalltalk, Scheme, Objective-C, OCaml, Tcl/Tk, Brainfuck, and Befunge are not really on my reading list, though perhaps another four years will change that.

5.) “How to program x86 with all the neat extensions like MMX, and SSE 1,2,3 etc. “ I still haven’t taken up learning x86, though I have had to dig through assembly dumps of some of my C programs from time to time, so I have a passing familiarity with it. I’ve taken a few intensive architecture courses since the original post, so my fascination with assembly language has waned somewhat, now that it is less mysterious.

6.) “Whether P = NP “ Obviously, I still don’t know this for sure, but I feel a lot more confident that I understand why the experts doubt the two classes are equal. I also learned a bit about the results that depend on the answer, which makes you appreciate that it’s not just “we doubt these classes are equivalent because we can’t come up with a polynomial-time algorithm for any NP-complete problem”; rather, there is a web of interconnected results that all depend in one way or another on the answer, and if P = NP, a large number of unlikely consequences would follow. On its own, no single one is a dealbreaker, but taken together they make P = NP look very unlikely.

7.) “The exact relationship between time complexity and space complexity.” Again, I haven’t learned any new universal equation showing how many extra megabytes of memory are needed to shave a constant factor off of a running time, and it’s very likely no such exact equivalence exists. As I said above, my interest in algorithmics waned over time, but before it did I learned that PSPACE is a huge class, whereas P is comparatively small (P being the problems solvable in polynomial time, and PSPACE the problems solvable in a polynomial amount of space with no constraint on time). The root of this discrepancy is that P is automatically limited to polynomial space, simply because a polynomial number of steps can only ever write to a polynomial number of memory addresses. Space can be reused, however, so polynomial space with unlimited time is a very powerful paradigm. If you could reuse time as well (say, a computer that could send its results back in time to the moment the computation started), then time would become as reusable as space, and the asymmetry would disappear. It’s the difference between the arrow of time and the nature of space that gets us here.

8.) “enough about reversible computation.” Here is one area where I know almost exactly as much now as I did then. I think I perused a Wikipedia article on the topic a year or so ago, but other than that, I still know very little about reversible computing, though I have heard it is gaining traction in architecture circles.

9.) “How to write a compiler.” By contrast, here is a topic I know truckloads more about, mainly because I wrote a compiler as part of a compilers course I took. Lexing, parsing, type checking, optimization, garbage collection, and more are now on my list of things I understand.

10.)  “How to parallelize quicksort in Erlang.”  I still haven’t done it myself, but Wikipedia says it’s possible, although the speedup is only O(log n). Not worth it?
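Not Erlang, but the divide-and-conquer shape of the parallelization can be sketched in Python (a hedged illustration only: in CPython the GIL prevents any real speedup for CPU-bound work, and the depth cutoff is my own choice). The two recursive calls run in their own threads down to a fixed depth, then fall back to sequential recursion.

```python
import threading

def pquicksort(xs, depth=2):
    # Parallel quicksort sketch: spawn threads for the two halves
    # until `depth` is exhausted, then recurse sequentially.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    if depth > 0:
        results = {}
        def run(key, part):
            results[key] = pquicksort(part, depth - 1)
        threads = [threading.Thread(target=run, args=("lo", lo)),
                   threading.Thread(target=run, args=("hi", hi))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results["lo"] + [pivot] + results["hi"]
    return pquicksort(lo, 0) + [pivot] + pquicksort(hi, 0)
```

The limited speedup makes intuitive sense from this shape: the partitioning at each level is still sequential, so the critical path stays long no matter how many workers you spawn.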

With that, I conclude my followup. But it wouldn’t be very interesting if I didn’t include a new top 10 list of “Things I don’t know 2011”.

Things I Don’t Know 2011:

1.) I really don’t understand Arrows all that well.

2.) Continuations give me a headache. I understand what they are, but I don’t have a good intuition about how to use callCC and such, or how the continuation monad gets its special powers under the covers.

3.) How dependent types work their magic. I can understand the basic Vec n type, but more complicated dependent types, and the foundational theory? Not so much.

4.) How to program in Agda, Coq, Epigram, Ωmega, or Scala. Yes, I’m back to being interested in Scala.

5.) Whether type inference can be practical in a language without parametric polymorphism. Specifically, what is the best way to distinguish functionally identical types that have different sizes (int16 versus int64)?

6.) Whether P = BPP. OK, there are a whole ton of these open questions, but this one I might actually be interested in seeing the proof for (and it’s more likely to get one).

7.) Whether there is a way to randomly generate algebras with associative operations. Identity? Easy. Inverses? Easy enough. Operation associative? Aw mannnn… that’s hard.
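A hedged sketch of why associativity is the hard part (my own illustration, not a known construction): even checking it requires a brute-force pass over every triple, and a randomly filled operation table almost never satisfies it, while identities and inverses can be patched in locally.

```python
import itertools

def is_associative(table, elems):
    # Check (a*b)*c == a*(b*c) for all triples: O(n^3) comparisons.
    return all(table[table[a][b]][c] == table[a][table[b][c]]
               for a, b, c in itertools.product(elems, repeat=3))

n = 4
# Addition mod n is associative; subtraction mod n is not.
add_table = [[(a + b) % n for b in range(n)] for a in range(n)]
sub_table = [[(a - b) % n for b in range(n)] for a in range(n)]
```

Associativity is a global constraint tying every triple of entries together, which is why "fill in the table randomly and fix it up" doesn't obviously work.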

8.) Whether there is a way to formalize the effectiveness of programming language features. It’s easy enough to prove correctness and type-safety, but how do you prove ease of use?

9.) The mathematics of quantum mechanics and quantum computing. Sure, we’ve all read the pop-science books about spooky action at a distance and double slit experiments, but how do physicists really model these things mathematically?

10.) What is best? Bears or beets?

Algorithm compression
April 10, 2009, 3:40 am
Filed under: algorithms, complexity, computer science

Information theory has a very developed and precise grasp on what it means for data to be “compressed.” Specifically, the data has to be essentially random. If there is any way to describe a pattern in the data that would allow the overall description to be shorter than the data itself, then it is possible to compress it further. When compressing, however, one must consider both the data and the decompressor. For instance, you could pass a message to your pal that simply said “2”, meaning “Read message number 2 from the code book.” The message could have any arbitrary amount of complexity to it, but this doesn’t mean you’ve compressed the long message down to simply the number 2. You have to consider the entire codebook as part of the overall message. And if you are training a computer to look up these coded messages, you have to include the rules the computer follows to look things up as well.

The point of all this is that it was discovered long ago that there is a theoretical minimum to which a message can be compressed. You can’t keep compressing things down forever; the message eventually reaches some minimum size, and further attempts to compress it do nothing, and can even make it larger. This is why zipping a folder of MP3s does little to reduce its size on disk. The idea is that there is some basic information that can’t really be done away with or squeezed any further.
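You can watch both effects with a stock compressor. This sketch uses Python's zlib: highly patterned data shrinks dramatically, while data that is already random barely shrinks at all (and can even grow by a few header bytes).

```python
import os
import zlib

patterned = b"abc" * 1000            # 3000 bytes of pure pattern
random_data = os.urandom(3000)       # 3000 bytes with no pattern to find

small = zlib.compress(patterned, 9)  # level 9: maximum compression effort
big = zlib.compress(random_data, 9)
```

The patterned input collapses to a few dozen bytes, while the random input stays essentially the same size, which is the information-theoretic floor in action.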

Algorithms are like data in a certain way: they can be efficient or inefficient, similar to the way data can fill space efficiently or inefficiently. Data with a lot of redundancies and patterns that could be abstracted out can be considered an inefficient use of space, because much of the space goes toward storing information that is already stored elsewhere. The same can be said of inefficient algorithms, though here I’ll be referring specifically to inefficiency in running time. I believe some algorithms for a given task are faster than others because they are more efficient in this sense: they do not make redundant calculations, or calculations whose results have already been obtained.

Any given algorithm is composed of many individual steps. Each of these steps can be grouped into one of three large categories:

  1. Steps that obtain information that makes the correct result more likely to be the result of the overall algorithm.
  2. Steps that obtain information that makes the correct result less likely to be the result of the algorithm.
  3. Steps that do not change the likelihood that the correct result will be obtained by the algorithm.

Theoretically, we would like to get rid of any steps that fall into categories 2 or 3. We certainly don’t want steps that make it more likely we will obtain an incorrect answer. And steps that do not help the algorithm progress toward the correct answer we will want to skip, to save time. The steps in category 1 can be broken down even further, into these three categories:

  1. Steps that contain no shared information with any previous result obtained by the algorithm. In other words, the step produces wholly new information towards the result.
  2. Steps that contain some shared information with a previous result. These steps contain some new information and some information that has already been obtained.
  3. Steps that produce information that has already been obtained. These steps are completely redundant.

Again, steps in categories 2 and 3 should be avoided. If a step produces some shared information, then the goal should be to produce that new information without the redundant information. And if a step is producing completely redundant information it should be skipped entirely.
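This taxonomy shows up in a familiar example (my own choice of illustration, not from the post above): naive recursive Fibonacci spends almost all of its steps reproducing information it already obtained, and memoizing removes exactly those redundant category-3 steps.

```python
from functools import lru_cache

calls = {"plain": 0, "memo": 0}

def fib_plain(n):
    calls["plain"] += 1            # counts every step, redundant or not
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    calls["memo"] += 1             # runs once per distinct subproblem
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

v1, v2 = fib_plain(20), fib_memo(20)
```

Both versions obtain the same information; the memoized one simply refuses to recompute anything already known, dropping the step count from tens of thousands to twenty-one.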

Hackers vs. Hacks
March 4, 2008, 10:19 pm
Filed under: programming

There are two different kinds of hacking. Of course, that’s a pretty general statement, and we could certainly divide the category of hacking into two parts many different ways (white hat vs. black hat, hardware vs. software, etc.), but the split I have in mind today is not a functional one. It is a split in the environment in which the action is undertaken, rather than a difference in what task is actually performed. So far I’m being pretty opaque, so let’s get to specifics.

I refer to hacking in probably something like the original sense of the term, which is doing something clever and resourceful with something that wasn’t intended for the purposes it has been put to by the hacker. This could mean infiltrating a remote computer using a buffer overflow attack, or writing new software for your iPod so it can play Doom. The key element here is constraint. The actions of the hacker are constrained in some way, and he must use intelligence and experience and whatever else is at his disposal to accomplish some goal despite those constraints. If my definition is allowed to be stretched, any work done around constraints can be viewed as hacking (For example, see the excellent Dinosaur Comics, in which the author uses the same image but different text each day to make something new).

If hacking is about constraints, how about we divide up hacking into two different types based on whether the constraints are self-imposed or imposed by some external agent. In the case of the hacker trying to gain access to a remote system he is not allowed into, the constraint is imposed by the admins of the remote system. They put into place security measures to prevent the hacker from gaining access, and if he wants to gain access he must twist around these constraints and find weaknesses that can be put to his use. In the case of the hardware hacker putting Doom on his iPod, the constraints are externally determined by the architecture of the device, but the choice of device is at the discretion of the hacker. While the hacker into a remote system may only be able to get what he wants from that specific server, if the hardware hacker decides it’s too hard to work on an iPod, he can do what he wants on some other device.

The far end of the self-constraint/external-constraint spectrum is the hacker who is not constrained externally at all. This is where we come to the crux of the “vs.” mentioned in the title. The hacker who places constraints on himself can be either lauded as a genius, or despised for implementing a terrible solution. The difference is the environment in which the “hack” is executed. If the hacker is on his own time and chooses constraints to make something challenging, and comes up with a solution that is unexpected, it doesn’t affect others. The pejorative “hack” comes when the hacker was simply too lazy to come up with a proper solution, and his only “constraint” was that he didn’t want to take the time to do something the “right way.”

So let’s focus the discussion on professional programmers who are employed for a purpose, or on open-source programmers working with others toward a common goal. On the one hand, the externally constrained hacker is the one who, the night before the expected launch of a program, finds a massive bug that would require rewriting 40% of the code base to fix “the right way.” He implements a hack, and is a hero the next day. On the other hand is the programmer whose code is just one long string of hacks, completely unmaintainable and unreadable by anyone but the original hacker. This man is either dropped for incompetence, or kept on forever because he’s the only one who can maintain the code. Either way, no one (save perhaps himself) likes what he’s done.

In the former example, we may raise questions about how the project was structured if such a fundamental flaw was found only at the last minute, but in general most people would agree that what the hacker did was right: there was no time available, and the constraints were placed on him. In the latter case, it’s a safe bet everyone would blame the “hacker” who couldn’t be bothered to stop hacking. In both cases the hacked code probably looks similarly incomprehensible, or may not scale, or may make the code base more fragile, but the circumstances decide whether these weaknesses are acceptable.

The good hacking of the first example can turn into the bad hacking of the second example if the architectural problems are not rectified. The time constraint of the first example evaporates at some point, and now you just have bad code sitting in there. There might be some other constraints (such as budget), but if the goal is quality, that hack done in the nick of time needs to be rectified. Here is the reason why: Hacking is a response, not a strategy. Hacking is a riposte, not an attack. If you are not backed into a corner, you shouldn’t be risking your life to get out.

If there is no constraint, you shouldn’t be hacking.

How to Get a Perfect Game of Cursor*10 (with time to spare)
January 7, 2008, 1:34 pm
Filed under: Uncategorized

This morning I got addicted to a crazy little flash game called Cursor*10 that is pretty mind-boggling. It’s fun once you start to realize what you need to do. So, in order to suck all the fun out of it for anyone who is stuck, here is my 100% walkthrough of the game to get all the triangles and finish with the maximum of 188 points.

Guide *Spoilers obviously*

  • Cursor #1: Go immediately to floor 4, find the box, then go down and clear out the bottom 3 floors of triangles.[EXTREMELY IMPORTANT! Remember which box the staircase was in since it is randomly placed every game.] If you have time to spare, go back to the box room and memorize where the box with the stairs is.
  • Cursor #2: Go as quickly as possible to floor 8, then hold the button until 400. After that, go back down skipping the triangles on 8, and clear out 7, then 6, then 5.
  • Cursor #3: Go quickly to floor 9, clear it out, then go back down to floor 8 and clear it out.
  • Cursor #4: Go quickly to floor 10, and click on the box with the “100” as fast as you can until you have 200 cursor life. (Don’t worry about finishing off the box; your future cursors will do it.) Then race back down to floor 6 and hold the button for your remaining life. You should be on the button before 75 cursor life.
  • Cursor #5: Go quickly to floor 10, click furiously on the “100” box until it opens, then clear out the triangles from floor 10.
  • Cursor #6: Go to floor 10, click furiously, then clear floor 11 and 12.
  • Cursor #7: Go to floor 10, click like crazy, go to floor 13, clear it of triangles, then go to 15 and hold the leftmost button until the end.
  • Cursor #8: Go to floor 10, click a lot, go to floor 14, clear it of triangles. Then go to 15 and hold the center button until the end.
  • Cursor #9: Go to floor 10, click a little, saunter up to 15, clear it of triangles and hold the rightmost button until the end.
  • Cursor #10: Go to floor 10, it should take care of itself, go to 15 and wait until your score says 88, then click on the staircase to floor 16.

Hope this is useful to somebody! I am friggen exhausted after doing all the research for this little guide…
Cursor*10 Win Screen

Things I don’t know.
December 13, 2007, 7:58 am
Filed under: computer science, lists

How’s that for a post title that gives me a lot to talk about? Obviously, I can only talk about things I know I am ignorant of. Also, I will restrict the topic to the vague category of computer science, so that will narrow down the list. And what the hell, I’ll make it a top ten list, since those seem to be so popular these days. So, without further ado, here is the list of 10 things that finish the sentence, “I don’t know… “

#1 “What monads are.” — I have tried and tried to get a straight answer on this one. I have not tried terribly hard, but that’s how these things happen. I know monads are some kind of tricky way to avoid side effects for certain things like IO in functional languages. I know they’re a transplant from category theory in mathematics, but to date monads are black magic as far as I’m concerned.

#2 “What currying is.” — Listen. I will just tell you that I have not investigated this one. I keep seeing it thrown around as a feature, or mechanism certain languages have, but haven’t looked into it. A quick trip to Wikipedia would probably cure me, but I need to pad this list a bit.

#3 “What the minimal set of features for a Turing complete language are.” — All modern programming languages are Turing complete, sure. SQL is not. Regular expressions are not. I have an intuitive sense of the ways these things are different from real programming languages. Currently though, if you gave me a brand new programming language you just invented, I couldn’t tell you with certainty whether or not it was Turing complete.

#4 “How to program in Smalltalk, Haskell, Scheme, Objective-C, OCaml, Tcl/Tk, Brainfuck, Befunge, and Python.” — This list only includes languages that (a) I could think of off the top of my head, and (b) I’m interested in learning eventually. Scala almost made the list, but I don’t like the JVM enough to get excited. .NET can likewise go away. I won’t list all the reasons I am interested in these languages, but Befunge is worth learning specifically because it is 2D.

#5 “How to program x86 with all the neat extensions like MMX, and SSE 1,2,3 etc. “ — I have programmed assembly for the Motorola 68000 for a class on computer organisation, but the huge instruction set available on a modern Intel machine intrigues and mystifies me.

#6 “Whether P = NP “ — Luckily, I am in good company on this one. Personally, I don’t think they are equal, but the optimist inside me always holds onto hope. The rate at which mankind would progress following a constructive proof that P=NP would be something to marvel at.

#7 “The exact relationship between time complexity and space complexity.” — Everyone seems to have the intuition that there is always a tradeoff between the amount of computation that must be done and the amount of storage space required. Store a bunch of lookup tables and you can reduce your computational load. Increase the complexity of your algorithm and skip making up a bunch of tables. It seems very fundamental, but is there an exact mathematical theory for a law of “conservation of complexity” or something similar? I don’t know. And I promised myself I wouldn’t go to Wikipedia while making this list. (Actually I already cheated, because I looked up Befunge to make sure it was the 2D language I was thinking of. It was, so I claim it doesn’t count!)
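The lookup-table intuition in miniature (a classic trick I'm adding as an illustration, not something from the list): spend 256 table entries of space up front so that counting the set bits of a 32-bit word takes four lookups instead of 32 bit tests.

```python
# Space bought once, up front: popcounts for every possible byte value.
POPCOUNT = [bin(b).count("1") for b in range(256)]

def popcount32(x):
    # Time saved on every call: four table lookups instead of a 32-step loop.
    return (POPCOUNT[x & 0xFF] + POPCOUNT[(x >> 8) & 0xFF] +
            POPCOUNT[(x >> 16) & 0xFF] + POPCOUNT[(x >> 24) & 0xFF])
```

A bigger table (say, 65536 entries covering 16 bits at a time) would halve the lookups again, which is exactly the smooth space-for-time dial the intuition describes, even without a known universal law governing it.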

#8 “enough about reversible computation.” — This one stretches the format a bit, but essentially, I know a little about reversible computing, just not anywhere near as much as I would like to. The basic idea is really neat: theoretically cool processors. Information isn’t destroyed in the process of computing, so in principle the logic itself need not generate heat. The sad part is that I have now told you basically everything I know about reversible computing in this little paragraph.

#9 “How to write a compiler.” — I am sorely lacking in the skills to write compilers of computer languages. I am more specifically lacking in the skills to write compilers for 2D languages…

#10 “How to parallelize quicksort in Erlang.” — In theory, it should be easy to make divide and conquer algorithms parallel, but I cannot for the life of me figure out how to do this one. It is probably more a result of my lack of competency with Erlang than with any real barrier to writing such an algorithm… But there you go: number 10.

There are a lot more things I could write in here, like: “How long sandwich meat keeps in the fridge.” and, “When it’s finally going to cool down in Florida. It’s still 70 in the middle of December.” and “Who let the Dogs out?” But I think I will leave it this length for the time being.