For starters, David Eppstein's Combinatorial Game Theory page is the page I'll try to beat.

Surreal Numbers, by Michael Conrady with the help of Prof. David Joyner.

Hackenstrings by A N Walker.

Surreal Numbers by Tony Smith.

     The following is a nice write-up by Scott Purdy.
     Most games presented as game theory puzzles are of a single type: finite, dynamic, zero-sum, and with complete information.  Finite games have a well-defined end.  Most games are finite, but imagine chess played without stalemate rules, or a version of tic-tac-toe where after a draw the board is cleared and the game begins again; these would be infinite games.  Dynamic games involve the taking of turns, as opposed to static games, where players choose independently and their choices are revealed simultaneously.  The classic Prisoner’s Dilemma is a static game.  Zero-sum means that for one player to win, another has to lose.  Complete information is exactly what it sounds like: complete knowledge of the situation.  Poker is a common example of a game with incomplete information.  Backgammon is often grouped with these as well, though strictly speaking it is a game of chance rather than of hidden information: both players can see the entire board, but neither knows what dice rolls will occur.
      There are good reasons that game theory puzzles are usually this particular type of game.  Payoffs are easy to calculate in a zero-sum game, particularly one with only two players.  Dynamic games are easier to follow than static games.  Consider rock-paper-scissors: no strategy can guarantee a win, so most plays are based on a ‘hunch’.  Played as a dynamic game instead, the strategy is clear and produces a win for the second player every time.  (One assumption of game theory is that players always try to win.)  Most importantly, finite games of complete information are always solvable: it can be mathematically proven that one player can force a win or that, as in the case of tic-tac-toe, a draw can be forced.  This is one of the basic premises behind John Conway’s numbers -- setting up games with simple rules, then determining the winning strategies and generalizing to, for example, different sized game fields.
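The dynamic rock-paper-scissors strategy is simple enough to state as a lookup table.  A minimal Python sketch (the function and variable names here are mine, purely for illustration):

```python
# In the dynamic (turn-taking) version of rock-paper-scissors, the
# second player sees the first player's move and simply plays
# whatever beats it -- a guaranteed win every time.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def second_player_reply(first_move):
    """Return the move that beats the first player's move."""
    return BEATS[first_move]
```

For example, `second_player_reply("rock")` returns `"paper"`, which is exactly why nobody plays the game this way.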
      The most complete, though by no means simplest, way of solving a game is to diagram a game tree.  A game tree lays out every possible move as a branching tree; the solver then repeatedly eliminates the branches a player would not choose until the tree has been pruned back to its base.  The first pruning of a tic-tac-toe tree would remove every branch in which a player who could win failed to do so.  The second would remove those branches in which a player could have blocked such a win, but didn’t.
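This pruning process is, in essence, the minimax algorithm: at each branch point, assume the player to move picks the branch best for them.  A sketch for tic-tac-toe (my own illustrative code, not taken from any of the texts mentioned here):

```python
# The eight winning lines of a tic-tac-toe board, cells numbered 0-8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X (+1 win, 0 draw, -1 loss),
    assuming both players play perfectly from here on."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i in range(9) if board[i] is None]
    if not moves:
        return 0  # board full with no winner: a draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None
    # X picks the branch with the highest value, O the lowest --
    # this is exactly the repeated pruning described above.
    return max(values) if player == "X" else min(values)
```

Running `minimax([None] * 9, "X")` returns 0: with perfect play on both sides, tic-tac-toe is a draw.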
      To comprehend the magnitude of such a feat, consider that a complete game tree for tic-tac-toe contains somewhere between 15 thousand and 400 thousand complete games; the exact count is 255,168.  Much of game theory is about seeking ways to lower this number or to circumvent the need for a tree at all.  For example, the base of a tic-tac-toe tree requires only three branches (for corner, edge and center), rather than the nine I used for my estimate.  Besides its inefficiency, solving a game through a complete tree (rather than a modified tree or another method entirely) is unsatisfying because it is merely a calculation, rather than an insight into the strategy of the game.
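The 255,168 figure is small enough to check by brute force.  A sketch (again my own code) that walks the full tree and counts every distinct complete game, where a game ends as soon as someone has three in a row or the board fills:

```python
# The eight winning lines of a tic-tac-toe board, cells numbered 0-8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def finished(board):
    """True if someone has three in a row or the board is full."""
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return True
    return all(cell is not None for cell in board)

def count_games(board, player):
    """Count the distinct move sequences that form a complete game."""
    if finished(board):
        return 1
    other = "O" if player == "X" else "X"
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player          # try this move...
            total += count_games(board, other)
            board[i] = None            # ...then undo it
    return total

print(count_games([None] * 9, "X"))  # prints 255168
```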
     One of the great strengths of computers in game playing is their ability to do the calculations game trees require quickly and accurately, to a much greater level of complexity than the human mind is capable of following.  The Othello-playing program Logistello has, as I understand it, solved Othello for the final 25 moves of the game.  Since, barring passes, a complete game of Othello lasts 60 moves, this is quite a feat.
      One of the greatest drawbacks of computers in game playing, however, is their general inability to recognize patterns and other clues about potential strategies and then to apply those insights on a broader scale.  Deep Blue, the chess-playing computer that recently defeated Garry Kasparov, presumably learned chess the same way most people do: first the basics of play, then the general consensus of human strategy.  Since it began playing, its strategies have been improved on the basis of what it has learned.  It is not the computer, though, that decides what strategy changes to make; it is the programmers.  Deep Blue, though capable of great levels of calculation and forethought, has difficulty seeing the forest for the trees.  On its own it learns the value of the various moves from a particular position, but it has trouble generalizing that knowledge.  Instead, its programmers watch its play, recognize patterns in it, and then code those patterns into its strategic knowledge.  There was a bit of an uproar from the chess community when they did this in the midst of the Kasparov match.
      Most of us don’t have access to a computer of Deep Blue’s caliber, which makes further examination of this point difficult.  As an example more available to the general public, I recommend finding a commercially available computer version of Connect Four.  In particular, it needs a computer opponent and the option to set the grid size and win length.  I know that the Microsoft-produced Tic-Tac-Drop meets these requirements.
      Set the grid size to the largest available, preferably with a greater width than height, and set the win length equal to the width of the grid.  It doesn’t take much thought to see that a win requires a complete row.  A bit more thought shows that a complete column, or even a single piece in every row, will keep its owner from losing.  A simple mirror strategy by the second player (presuming an even number of columns) will always result in a draw, and I believe a related strategy can also force a draw on an odd number of columns.  The computer seems slow to grasp these points and can nearly always be beaten.
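The mirror strategy itself is a one-liner: the second player answers a drop in column c with a drop in the horizontally reflected column.  A sketch under the setup described above (even width, win length equal to the width; the function name is my own):

```python
def mirror_reply(col, width):
    """Second player's reply under the mirror strategy: drop a piece
    in the horizontally reflected column.  With an even width the
    reflected column always differs from col, and because the position
    is mirror-symmetric after each reply, the reflected column always
    has room for the piece at the matching height."""
    return width - 1 - col
```

The reason this forces a draw: after every second-player reply, each cell the first player occupies has its mirror image occupied by the second player.  A win needs a full-width row, which would require the first player to own both some cell and its mirror image, and that can never happen.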
      Strategies like the mirror strategy mentioned above are the real fundamentals of game theory.  Most of these strategies are difficult to describe, or even to think about, outside the world of the game in which they are played.  The best way to learn about them is to look through Winning Ways for yourself, though it’s not the easiest text to find, which makes the site above a good place to start.  Of course, that will just get you started on the best way of all to learn game theory: trying it on your own.  For that, I recommend David Eppstein’s page (above), a collection of many game theory resources on the Web.

Why is the mathematics of Go interesting?  Mr. Ing Chang-Ki set up a trust fund of 1.5 million dollars for the first Go program to defeat a professional Go player; he also sponsored the Ing Cup and the first International Computer Go Congress in 1985.  The prize remains unclaimed.  By comparison, in chess the Deep Blue computer was able to beat Kasparov, the world champion, and checkers and Othello also have computers as their champions now.
  Go requires a deviousness that computers have trouble grasping.

Feeb's Impartial Games Page has an excellent introduction to the Numbers of Games.

Clever Games for Clever People shows some of the games analyzed in Winning Ways.  These games are discussed in John Conway's On Numbers and Games.