Teaching Your Computer To Play Super Mario Bros. – A Fork of the Google DeepMind Atari Machine Learning Project

For those who want to get right to the good stuff, the installation instructions are below. This is a modification of Google DeepMind’s code: instead of training a computer to play classic Atari games, you train it to play Super Mario Bros.

1. On a Linux-like system, execute the following steps:

sudo apt-get install git
git clone https://github.com/ehrenbrav/DeepQNetwork.git
cd DeepQNetwork
sudo ./install_dependencies.sh

This will grab the necessary dependencies, along with the emulator to run Super Mario Bros. Note that if you want to run training and testing on your CUDA-equipped GPU, you’ll need to install the appropriate CUDA toolkit. If you’ve done this and the script doesn’t automatically install cutorch and cunn (it looks for the presence of the Nvidia CUDA Compiler NVCC which might not be installed on Debian systems for example), uncomment the lines at the end of the script and try again.

2. Obtain the ROM for the original Super Mario Bros. from somewhere on the internet, place it in the DeepQNetwork/roms directory, and call it “smb.zip”.

3. Run the following to start the training from DeepQNetwork/ (assuming you have a CUDA-friendly GPU – if not, run “train_cpu.sh” instead):

./train_gpu.sh smb

Watch Mario bounce around at random at first, and slowly start to master the level! This will run for a very long time – I suggest you at least let it get through 4 million training steps in order to really see some improvement. The progress is logged in the logs directory in case you want to compare results while tweaking the parameters. Once you run out of patience, hit Control-C to terminate it. The neural network is saved in the dqn/ directory with a *.t7 filename. Move this somewhere safe if you want to save it, because it is overwritten each time you train.

If you want to watch the computer play using the network you trained, execute the following (again, use the “_cpu” script if you don’t have a compatible GPU):

./test_gpu.sh smb <full-path-to-the-saved-file>

To help get you started, I posted my network trained through about 4 million steps here (over 1GB). If you want to use it just to see how things work, or as a jumping-off point for your own training so you don’t need to start from scratch, uncomment the saved_network line in the train script and specify the complete path to this file.


Earlier this year I happened upon Seth Bling’s fantastic video about training a computer to play Super Mario Bros. Seth used a genetic algorithm approach, described in this paper, which yielded impressive performance. It was fun to get this working and see Mario improve over the span of a couple of hours to the point where he was just tooling through the level! The approach Seth used, however, was almost 15 years old, and I started wondering what a more modern approach would look like…

Around the same time, I happened upon this video about the Google DeepMind project to train a computer to play a set of classic Atari games (see especially around 10:00). Now this was something! DeepMind used a process that they called a Deep Q Network to master a host of different Atari games. What was interesting about this approach was (i) they only used the pixels and the score as inputs to the network and (ii) they used the same algorithm and parameters on each game. This gets closer to the mythical Universal Algorithm: one single process that can be successfully applied to numerous tasks, almost like how the human mind can be trained to master varied challenges, using only our five senses as inputs. I read DeepMind’s paper in Nature, grabbed their code, and got it running on my machine.

What I especially liked about Google’s reinforcement learning approach is that the machine really doesn’t need to know anything about the task it’s learning – it could be flipping burgers or driving cars or flying rockets. All that really matters are the pixels it sees, the actions it can take at any moment in time, and the rewards/penalties it receives as a consequence of taking those actions. Seth’s code tells the machine: (i) which pixels in the game were nothing, (ii) which were enemies, and (iii) which parts Mario could stand on. He did this by reading off this information from the game’s memory. The number he programmed Mario to optimize was simply how far (and how fast) Mario moved to the right.

What was most exciting to me was unifying these two approaches. Why not use a Deep Q Network to learn to play Super Mario Bros. – a more complex game than the arcade-style Atari titles? Seth called his approach MarI/O, so you might call this version Deep-Q MarI/O.

How It Works

The Google Atari project, like many other machine learning efforts in academia, used something called the Arcade Learning Environment (ALE) to train and test their models. This is an emulator that lets you programmatically run and control Atari games from the comfort of your own computer (or the cloud). So my first task was to create the same sort of thing for Super Mario Bros.

I started with the open source FCEUX emulator for the Nintendo Entertainment System, and ported the ALE code to use FCEUX instead. The code for that is here (though I didn’t implement parts of the ALE interface that I didn’t need). Please reuse this for your own machine learning experiments!

I then modified DeepMind’s code to run Super Mario Bros. using this emulator. The first results were… rather disappointing. Super Mario Bros. is just way more complex than most Atari games, and often the reward for an action comes quite a bit of time after the action actually happens – a notorious problem for reinforcement learning.

Little by little, over the course of several months and probably a couple solid weeks of machine time, I modified the parameters to get something that worked. But even then, Mario would often just kind of stand around without moving.

To correct this, I decided to give him a reward for moving to the right and a penalty for moving to the left. I didn’t originally want to do this since it seems to detract from the purity of using just the score to calculate rewards, but I justified it by thinking about how humans actually play the game… When you first pick up Super Mario Bros, you basically just try to get as far as possible without dying. Score seemed to be almost an afterthought; something you care about after mastering the game. So it definitely is not quite as pure as DeepMind’s approach here because it does get into some unique aspects of the game, but I think it better matches how a human would play. Once I made this change, things started to improve.

The second issue I noticed was that Mario seemed to have a blithe disregard for his own safety – he would happily run right into Goombas again and again…and again…and again. So I added a penalty for dying, also justifying this by saying that this is how a person would actually play.

With these modifications, I spent extensive time tuning the parameters to improve performance. I noticed a few things. First, there would often be an initial peak after a relatively small number of steps, and then scores would decline steadily with further training. I think this might be related to my increasing the size of one of the higher layers of the neural network (the layers that handle the more abstract behaviors): the downside to this additional expressive power is overfitting, which could have contributed to that early spike and then the steady decline with more training…

The second issue I noticed was that there seemed to be little connection between the network’s confidence in its actions and its actual score. I came across another recent paper on something called Double Q Learning, also courtesy of DeepMind, which substantially improved Google’s original results. Double Q Learning counters the tendency for Q networks to become overconfident in their predictions. I changed Google’s original Deep Q Network to a Double Deep Q Network, and that helped substantially.
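To make the difference concrete, here is a sketch in Python (the project itself is Lua/Torch, and all the Q-values below are made up for illustration) of the vanilla DQN target versus the Double DQN target:

```python
# Sketch only -- not the project's actual code. Toy Q-values illustrate how
# Double DQN decouples action *selection* from action *evaluation*.

def dqn_target(reward, gamma, q_target_next):
    # Vanilla DQN: the target network both picks and evaluates the best next
    # action, which tends to overestimate values.
    return reward + gamma * max(q_target_next)

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    # Double DQN: the online network picks the action, the target network
    # evaluates it, damping the overconfidence described above.
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

q_online = [1.0, 3.0, 2.0]   # online net's estimates for the next state
q_target = [0.5, 1.5, 2.5]   # target net's estimates (noisy, disagrees)

print(dqn_target(1.0, 0.99, q_target))                    # 1 + 0.99*2.5 = 3.475
print(double_dqn_target(1.0, 0.99, q_online, q_target))   # 1 + 0.99*1.5 = 2.485
```

When the two networks disagree (as they will, since each carries its own noise), the double target is systematically less inflated than the vanilla one.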

Finally, the biggest improvement of all came when I was just more patient. Even running on a powerful machine with an Nvidia 980 GPU, the emulator could only go so fast. As a consequence, one million training steps took about an entire day, with quite a bit of variance in the scores along the way.

Changes to Google’s Code

So here’s a summary of my changes from Google’s Atari Project:

  • Changed the Deep Q Network into a Double Deep Q Network.
  • Ported the Arcade Learning Environment to the Nintendo Entertainment System using the FCEUX emulator.
  • Granted rewards equal to points obtained per step, plus a reward for moving to the right minus a penalty for moving to the left.
  • Implemented a penalty for dying.
  • Clipped the rewards to +/- 10,000 per step.
  • Scaled the rewards to [-1,1].
  • Doubled the action repeat amount (the number of frames an action is repeated) to 8. I did this because the pace of Super Mario Bros. can be quite a bit slower than many Atari games. Without this, Mario seems a bit hyperactive and in the beginning just jumps around like crazy without going anywhere.
  • Increased the size of the third convolutional layer from 64 to 128. This was an attempt to deal with the increased complexity of Super Mario Bros. but definitely slows things down and I’m sure risks overfitting.
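The reward changes in the list above can be sketched as a single shaping function. This is illustrative Python, not the project's Lua code – the function name and the exact death penalty are my own assumptions; only the clipping bound and the [-1,1] scaling come from the list above.

```python
# Sketch of the reward shaping described above. Constants and names are
# illustrative, not copied from the repo.

CLIP = 10_000  # rewards are clipped to +/- 10,000 per step

def shape_reward(score_delta, x_delta, died):
    """Combine score gained this step, horizontal movement, and death."""
    reward = score_delta
    reward += x_delta          # positive moving right, negative moving left
    if died:
        reward -= CLIP         # illustrative death penalty
    # Clip, then scale into [-1, 1] so one huge reward can't swamp training.
    reward = max(-CLIP, min(CLIP, reward))
    return reward / CLIP

print(shape_reward(100, 5, False))  # small positive reward
print(shape_reward(0, -3, True))    # death dominates: clipped to -1.0
```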

How Deep Q Learning Works

Since I was starting from scratch, it took me quite a bit of time to understand reinforcement learning. I thought I’d try to explain it here in layman’s terms, since this sort of explanation would have really helped me when I embarked on this project.

The idea behind reinforcement learning is that you allow the machine (or “agent” in the machine learning lingo) to experiment with playing different moves, while the environment provides rewards and penalties. The agent doesn’t know anything about the underlying task and starts out simply doing random stuff and observing the rewards and punishments it receives. Given enough time, so long as the rewards aren’t random, the agent will develop a strategy (called a “policy”) for maximizing its score. The learning is “unsupervised” in the sense that the human doesn’t provide any guidance beyond setting up the environment – the machine must figure things out on its own.

Which way is the ball going? We can’t tell looking at just one frame.

Our goal is a magical black box that, when you input the current state of the game, it will provide an estimate of the value of each possible move (up, down, right, left, jump, fire, and combinations of these). It then plays the best one. Games like Super Mario Bros. and many Atari games require you to look at more than a single frame to figure out what is going on – you need to be able to see the immediate history. Imagine looking at a single frame of Pong – you can see the ball and the paddles, but you have no idea which direction the ball is traveling! The same is true for Super Mario Bros., so you need to input the past couple of frames into our black box to allow it to provide a meaningful recommendation. Thus in our case, a “state” actually comprises the frame that is currently being shown on the screen plus three frames from the recent past – that is the input to the black box.
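The frame bookkeeping above can be sketched in a few lines of Python (the project itself is Lua/Torch; the preprocessing to 84×84 grayscale is assumed to happen upstream):

```python
from collections import deque

# Sketch: a "state" is the newest frame plus the few frames before it.
HISTORY = 4  # current frame plus three recent ones, as described above

frames = deque(maxlen=HISTORY)

def observe(frame):
    """Push the newest frame; the deque silently drops the oldest."""
    frames.append(frame)
    # Until a full history exists, pad by repeating the earliest frame.
    while len(frames) < HISTORY:
        frames.appendleft(frames[0])
    return list(frames)  # this stack of frames is the network's input

state = observe("frame-1")
print(len(state))  # always 4 frames, even at the start of an episode
```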

In mathematical terms, our black box is a function Q(s, a). Yes, a really, really big nasty complicated function, but a function nonetheless. It takes this input (the state s) and spits out an output (an estimate of the value of each possible move, each of these called an “action” a). This is a perfect job for a neural network, which at its heart is just a fancy way of approximating an arbitrary function.

In the past, reinforcement learning used a big matrix called a transition table instead of a neural network, which kept track of every possible transition for every possible state of the game. This works great for tasks like figuring out your way through a maze, but fails dramatically with more complex challenges. Given that the input to our model is four grayscale 84×84 images, there are a truly gigantic number of different states. There are 15 possible moves (including combinations of moves) in Super Mario Bros., meaning that each of these states would need to keep track of the transitions to all other possible states, which rapidly becomes utterly intractable. What we need is a way to generalize the states – a Goomba on the right of the screen is the same thing as a Goomba on the left, only in a different position and can be represented far more efficiently than specifying every single pixel on the screen.

Over the past few years, one of the hottest topics in computer science is the use of a type of neural network called a “deep convolutional network”. This type of network was inspired by the study of the animal visual cortex – our minds have a fantastic ability to generalize what our eyes see, and recognize shapes and colors regardless of where they appear in our field of view. Convolutional neural networks work well for visual problems since they are robust against translation. In other words, they’ll recognize the same shape no matter where it appears on the screen.

The networks are called “deep” because they incorporate many layers of interconnected nodes. The lowest layers learn to recognize very low-level features of an image such as edges, and each successive layer learns more abstract things like shapes and, eventually, Goombas. Thus the higher up you go in the network, the greater the level of abstraction that layer learns to recognize.

Each layer consists of a bunch of individual “neurons” connected to adjacent layers and each neural connection has a numeric weight associated with it – these weights are really the heart of the network and ultimately control its output. So if a given neuron has enough inputs that fire and, when multiplied by their respective weights, the result is high enough, that neuron itself fires and the neurons that are connected to it from above receive its signal. Together, this set of weights that define our neural network is called theta (θ).

For this project, I used the following neural network, adapted from Google’s code:

  • Input: Eight 84×84 grayscale frames;
  • First Layer (convolutional): 32 8×8 kernels with a stride of 4 pixels, followed by a rectifier non-linearity;
  • Second Layer (convolutional): 64 4×4 kernels with a stride of 2 pixels, followed by a rectifier non-linearity;
  • Third Layer (convolutional): 128 3×3 kernels with a stride of 1 pixel, followed by a rectifier non-linearity;
  • Fourth Layer (fully-connected): 512 rectifier units;
  • Output: Values for the 15 possible moves.
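You can sanity-check the spatial dimensions flowing through those three convolutional layers with plain arithmetic – no deep learning library needed. This is a sketch against the layer list above, assuming “valid” convolutions with no padding:

```python
# Output width/height of a valid (no padding) convolution.
def conv_out(size, kernel, stride):
    return (size - kernel) // stride + 1

s = 84                      # input frames are 84x84
s = conv_out(s, 8, 4)       # first layer,  8x8 kernels, stride 4 -> 20x20
s = conv_out(s, 4, 2)       # second layer, 4x4 kernels, stride 2 -> 9x9
s = conv_out(s, 3, 1)       # third layer,  3x3 kernels, stride 1 -> 7x7

features = s * s * 128      # flattened input to the fully-connected layer
print(features)             # 6272 values feed the 512 rectifier units
```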

Armed with this neural network to approximate the function I described above using weights θ (written Q(s,a;θ)), we can start attacking the problem. In classic “supervised” learning tasks, the human trains the machine by showing it examples, each with its own label. This is how, for example, the ImageNet competition for classifying images works: the learning algorithms are given a large set of images, each categorized by humans into one of 200 categories (baby bed, dragonfly, spatula, etc.). The machines are trained on these and then set loose on pictures whose categories are hidden to see how well they do. But the issue with supervised learning is that you need someone to actually do all the labeling. In our case, you would need to label the best move in countless different scenarios within the game, which isn’t going to fly… Thus we need to let the machine figure it out on its own.

So how would a human approach learning Super Mario Bros? Well, you’d start by trying out a bunch of buttons. Once you figure out some moves in the game that kind of work, you start to play these. The risk here is that you learn a particular way of playing, and then you get stuck in a rut until you can convince yourself to start experimenting again. In computer science terms, this is known as the Explore/Exploit Dilemma: how much of your time should be spent exploring for new strategies versus exploiting those that you already know work. If you spend all of your time exploring, you don’t rack up the points. If you only exploit, you’re likely to get caught in a rut (in the language of computer science, a local optimum).

In Deep Q Learning, this is captured in a parameter called epsilon (ε): it’s simply the chance that, instead of playing the move recommended by the neural network, you play a random move instead. When the game starts, this is set to 100%. As time goes on and you accumulate experience, this number should slowly ramp down. How fast it ramps down is a key parameter in Deep Q Learning. Try tweaking this parameter and see what difference it makes.

Thus when it comes time for Mario to pick a move, he inputs the current state at time t (called st) into the Q-function and then selects the action at that time (at) that yields the highest value. In mathematical terms, the action Mario selects is:

at = argmaxa Q(st, a; θ)

There is also the probability ε that instead of playing this action, he’ll select a random action instead.
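Put together, the action selection with an annealing ε looks something like the following sketch (Python rather than the project's Lua; the schedule constants and the assumed q_values list are illustrative):

```python
import random

# Sketch of epsilon-greedy action selection with a linear ramp-down.
# EPS_* constants are illustrative, not the repo's actual schedule.
EPS_START, EPS_END, EPS_STEPS = 1.0, 0.1, 1_000_000

def epsilon(step):
    """Linearly anneal epsilon from 1.0 down to 0.1 over EPS_STEPS."""
    frac = min(step / EPS_STEPS, 1.0)
    return EPS_START + frac * (EPS_END - EPS_START)

def select_action(q_values, step):
    """q_values: the network's value estimate for each of the moves."""
    if random.random() < epsilon(step):
        return random.randrange(len(q_values))                  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
```

Early in training nearly every move is random; a million steps in, the agent plays its best-valued move about 90% of the time. How fast ε falls is exactly the knob described above.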

So how does the learning actually work? The Q function starts out with randomly assigned weights, so somehow these weights need to be modified to allow Mario to learn. Each time a move is played, the computer stores several things: the state of the game before the move (st), the action played (at), the state of the game after the move (st+1), and the reward earned (r). Each of these tuples is called an experience, and all of them together are called the replay memory. As Mario plays, every few moves he picks some of these experiences at random from the replay memory and uses them to improve the accuracy of his network (this was one of Google’s innovations).
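A minimal replay memory can be sketched like this (illustrative Python; the capacity and batch size are assumptions, not the repo's values):

```python
import random
from collections import deque

class ReplayMemory:
    """Store (state, action, reward, next_state, done) experience tuples."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences fall off

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Random sampling breaks the correlation between consecutive
        # frames, which is what makes experience replay work.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

memory = ReplayMemory()
for t in range(100):                 # fake experiences for illustration
    memory.add(t, t % 15, 0.0, t + 1, False)
batch = memory.sample(32)
print(len(batch))  # 32
```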

This is the central algorithm of Q-learning: given a state at time t (called st, and remember a state includes the past few frames of gameplay), the value of each possible move is equal to the reward (r) we expect from that move plus the discounted value of the best possible move in the resulting state (st+1). The reason the value of the future state is discounted by a quantity called gamma (γ) is that future rewards should not be valued as highly as immediate rewards. If we set gamma to one, the machine values a reward far into the future just as highly as a reward right now. Setting gamma low makes your computer into a pleasure-seeking hedonist, maxing out immediate rewards without regard to potentially bigger rewards in the future. In mathematical terms, the value of making a move a at state st is expressed as:

r + γmaxa Q(st+1, a; θ)

This gives us a way of estimating the value of making a move a at state st. For each learning step, the machine compares its estimate of this with the actual future rewards observed through experience. The weights of the neural network are then tweaked to bring the two into closer alignment using an algorithm called stochastic gradient descent.
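As a toy illustration of one such update (the real network adjusts thousands of weights by backpropagation; here a single scalar stands in for Q(st, a) so the arithmetic is visible, and the constants are illustrative):

```python
GAMMA = 0.99          # discount factor
LEARNING_RATE = 0.1   # how far each step moves toward the target

def q_target(reward, q_next_values):
    """r + gamma * max_a Q(s_{t+1}, a) -- the value we train toward."""
    return reward + GAMMA * max(q_next_values)

estimate = 0.0                        # network's current guess for Q(s_t, a)
target = q_target(1.0, [0.0, 2.0])    # observed reward 1, best next value 2
# One stochastic-gradient-descent-style step on the squared error:
estimate += LEARNING_RATE * (target - estimate)
print(round(target, 2), round(estimate, 3))  # 2.98 0.298
```

Each sampled experience nudges the estimate a fraction of the way toward its target; repeated over millions of experiences, the estimates (one hopes) converge.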

The rate at which the network changes its weights is called the learning rate. At first, a high learning rate sounds fantastic for those of us who are impatient, since faster learning is best, right? The problem here is that the network could bounce around between different strategies without ever really settling down, kind of like a dieter following the latest fad without ever sticking with a regimen long enough to see results.

As you iterate through enough experiences, hopefully the network learns how to play. By observing enough of these state-action-result memories, we hope that the machine will start to link these together into a general strategy that can handle both short- and long-term goals. That’s the goal at least!

What all this stuff means in practice is that you have a large number of levers to play with. That’s both the fascinating and infuriating thing about machine learning, since you can endlessly tweak these parameters in hopes of getting better results. But each run of the network takes a long time – there’s just so much number crunching to do, even with the rate of play sped up as fast as possible. GPUs help a lot since they can do tons of linear algebra calculations in parallel, but even so there’s only so fast you can go with commonly available hardware. With this project, I tried running it in the cloud using AWS but, even with one of the more powerful instances available, my home machine was faster. And since you pay by the hour with AWS, it quickly became obvious that it would be cheaper just to buy more powerful gear for my home computer. If you are Google or Microsoft and have access to near limitless processing power, I’m sure you can do better. But for the average amateur hacker, you just have to be patient!


So those are the basics of Q learning. Please tweak the code and parameters to see if you can improve on my results – I would be very interested to hear!

    Talk on Linux and Open Source Software

    Here’s a talk I gave at the Bainbridge BARN (our makerspace on Bainbridge Island).

    Slides can be found here.

      San Diego Triathlon Classic

      The last two years I’ve competed in the San Diego TriRock Triathlon, but due to some scheduling conflicts I ended up doing the San Diego Triathlon Classic instead. This has some similarities to TriRock – you get special permission to go on a military base, the swim is in San Diego Bay, and the run is flat. But the bike course for the Classic is really awesome!

      The ride goes up from sea level to the top of Point Loma and out to the Cabrillo Monument. That necessitates a fairly steep climb just up from the Ballast Point sub base. This was a bit challenging, but definitely worth the effort – coming down is pure fun! All the cyclists I saw were very respectful and cautious, which is good on a relatively narrow course going downhill.

      The run, on the other hand, was brutal! It was unseasonably hot and humid on race day, and I gulped down an entire bottle of water (I was carrying it in my hand) just on the first lap. It was similar to the Rocketman Triathlon I did last year in Florida, but my additional water made a big difference…at least I didn’t bonk this time. Nevertheless, it was a slow run for me, and I was happy when I made it through.

        Cycling Hurricane Ridge – The Most Fun One Can Have on a Bike?

        I think this is it – the most fun it’s possible to have on a bike. Granted, I’m not a mountain biker and I haven’t really been riding very long, but as far as I’m concerned, this is it. Going up and down Hurricane Ridge with no cars is just simply awesome.

        I’ve only really been cycling for about 5 years. For me, the most intimidating part of either swimming, running, or cycling was biking up long grades. Since we moved to the Northwest, riding up Hurricane Ridge struck me as being one of the tougher local cycling routes. But since I’ve been biking to work regularly, up quite a number of hills, I’ve felt ready to tackle it for quite some time now, except for one thing: there are really no bike lanes on the route.

        But one day each year, the road is closed to car traffic for the morning – from the entrance to the Olympic National Park all the way to the top. I only found out the day before, but it was just too tempting to pass up! The ride starts from the Peninsula Community College, which is about 5 miles below the Park entrance and is not closed to traffic, but since we got going at 7 AM, this really wasn’t a factor.

        I decided to ride my triathlon bike rather than my commuter road bike to see how it did going up long grades. The gearing is a bit higher and it’s definitely less maneuverable, but it was no problem at all. I think I really benefited from the low weight as well.

        In fact, the most difficult part of the entire ride is getting to the gate – it’s the steepest grade. But here’s the thing – it really wasn’t that bad! I got to the gate feeling great and we started up – 12 or so miles of spectacular scenery up a winding road into the Olympic Mountains. Hurricane Ridge is one of those rare places where you can literally drive up into the alpine environment without enduring the standard hardship going up several thousand feet into the mountains.

        The grade is actually very moderate (though it is pretty much strictly monotonic the entire way). There were a few pit-stops along the way for hydration and snacks, but really it was just an enjoyable, steady ride up to the top. My heart rate spent most of the time in Zone 2, which I think is a testament to all the hill work I do (by necessity) just getting to and from work. One hill right before my office, for example, regularly sends me well into Zone 5. Nothing like that going up Hurricane Ridge! Just pure fun…

        There’s a small gathering at the top with more snacks and drinks – it was a bit strange being up there without tons of cars and tourists! After taking some photos (sadly, the least snow anybody can ever remember seeing up there), we started down.

        That’s really the fun part – I averaged about 30 miles per hour all the way back, which took a mere 37 minutes. It was so unbelievably cool – no cars to worry about, you could go as fast as you wanted, and the scenery was just beautiful! I definitely had to be a bit careful going around the numerous turns at speed – I’m just not used to riding that fast for that long. I was in the aero bars most of the time except for some of the scary turns, where I definitely was riding the brakes a bit. But the ride down alone makes the entire trip worthwhile!

        For me, this was absolutely one of the highlights of the entire cycling season.

          La Jolla Half Marathon Take 2

          This was the second time I raced the beautiful La Jolla Half Marathon. I definitely have a preference for triathlons over straight running events, but I make an exception for this one. The course is just beautiful and there are two challenging hills to contend with.

          My time this year was about 1.5 minutes slower than last year – essentially going from an average 8:55 minute mile to a 9:05. The hills seemed just as hard! Nevertheless, my nutrition strategy really went perfectly this time – one Shot Block every 20 minutes, and water at every hydration station. Because this race is so well supported, there’s no shortage of stations, and I felt very well hydrated throughout. I also experimented with compression leggings, which may or may not have made any difference (my legs felt fine during the race, and were sore afterwards as usual).

          I guess I could be tempted by the America’s Finest City Half, which is part 3 of the “Triple Crown” in this series, and maybe the Carlsbad race as well – we’ll see if my schedule allows it.


            The Napa HITS Triathlon

            Last weekend was the HITS Napa triathlon. I had read about this race somewhere as a beautiful course in a cool location, so I was game. It’s also a very early season triathlon, which helps balance out my year. In the weeks leading up to the event, however, we did two skiing trips to Whistler and I came down with a nasty case of food poisoning, which really screwed up my training schedule. This was also my first race with my new Cervelo P2, so naturally I was a bit apprehensive about riding it in a competitive situation… A week before the race, I asked them if I could switch from the olympic distance to the sprint, which they did, no problem. The HITS people are really well organized and put on a great event!

            I left Seattle Friday night for Sacramento and met my parents at the airport. I was extra careful packing the bike this time given that it’s all carbon fiber – I wrapped everything in bubble wrap and made sure there weren’t any small extra parts that could fall out in transit. From the airport, we drove down to American Canyon (just south of Napa) to our hotel. One mistake I made in planning this trip was not paying enough attention to distances between the various activities. It turns out the race is at Lake Berryessa, over an hour’s drive from where we were staying on small winding roads. Our other destination for that Saturday – Point Reyes – was in the opposite direction.

            Nevertheless, we started out Saturday morning for Point Reyes. The entire area is really spectacular – rolling hills and open fields dotted with oak tree groves – it’s what I think of as “classic” California. The drive to Point Reyes was just really neat – it’s a spectacular place for bike riding (and a good workout with all the hills). The only trouble is that many of the roads lack appreciable bike lanes.

            We did two hikes down to the ocean – it was like seeing California before Western civilization came: you see spectacular flocks of birds (that are increasingly rare), and very few structures other than some historical ranches.

            We got lunch after that and returned to the hotel, where I assembled the bike. Then we got back on the road (now it was about 6 PM) for the drive to the race course for packet pickup. We arrived around 7 and got the materials – again, everything was really organized. I had to find a CO2 canister (since you can’t bring these on airplanes) but fortunately one of the bike shop tents was still open. While we were there, some of the Ironman-distance athletes were trickling in. By then, it was going on 14 hours for them, and they looked about ready to expire. It seemed sad that they should be completing such an epic undertaking with only a handful of people around to see it – if I ever do an Ironman, I want it to be a *big* race with a *big* celebration at the end! These guys were true athletes – they’d probably do the race if nobody was there at all.

            We didn’t get back to the hotel until after 9, so I definitely didn’t get as much sleep as I had hoped. We left at 4:45 the next morning for the race. We drove on in the dark, without really any cars on the road, and then one by one picked up another set of headlights here and there. By the time we turned off onto Lake Berryessa, there was a long convoy of cars as far as you could see in either direction, just as the light was starting to collect in the east.

            Everything went smoothly (in contrast to the Florida race I did last October), and I had plenty of time to get set up. My wave was the first to start, and it was a shock to the system jumping in that cold lake water. This was the coldest swim leg I’ve had so far, and it took a good 10 minutes for me to start warming up, and transition took a bit longer than usual due to my numb hands. But it felt good to get out on the bike and start working again to warm myself up.

            What became quickly apparent this time was that, unlike every other race I’ve done, I wasn’t getting passed on the bike. There were a couple of athletes around me who were my speed, but for the first time ever I felt like I was holding the line after the swim. The bike also handled wonderfully – time-trial bikes love going fast, and they feel very stable with speed even tucked into the aero position. I went upright on the handlebars going uphill and for flat and downhill portions I was aero. I didn’t notice any soreness except for my neck, which was due to having to peek up under the helmet to see.

            In contrast to an olympic race, the bike leg just flew by. I was pulling back into transition feeling great. The transition to the run wasn’t as painful as it usually is either. Maybe this was due to a few more brick workouts than normal, but honestly I think it was just the shorter distance.

            I finished the race feeling strong and not nearly as blasted as usual. The results completely surprised me – I took second in my age group and 39th overall! I *never* expected to do that well and was left trying to figure out what happened. I think the single biggest factor was having a fast bike. Normally, I’m doing pretty well coming out of the swim, but then quickly drop waaaay back on the bike. This time, I only dropped a few places on the bike, which was a massive improvement. I dropped quite a bit on the run, though, which is definitely something I’m going to focus on. Also, this was the first sprint distance I’ve done in over a year, and I think my body had grown accustomed to the longer olympic distance, so that might have had something to do with my times as well. Finally, I was really well rested for this race (other than the preceding night), so I’m sure that was a factor too.

            Whatever it was, I was very pleased with how things went and left the event energized and motivated for the next race. The course as advertised was pretty cool – the bike portion is along the edge of the lake and provides views of the rising sun coming up over the water. It is definitely rolling up and down – there’s very little flat at all. But the grades are moderate – I never got out of the saddle going up any of the hills. The run was very moderate as well – gently up on the way out and gently downhill on the way back. The only hard part about the race is its relative remoteness – I think by far the best strategy is to camp at the race site the night before – there were plenty of places available. If you stay in Napa, it’s still a solid one hour drive on windy roads to get to the transition area.

            The next day we (of course) visited a Napa winery and got a bunch of bottles before heading back to the airport. Like the Florida trip last year, it was a *very* busy weekend but I came back feeling fantastic! The Napa HITS Triathlon certainly isn’t as famous as its cousin the Vineman, but the location is fabulous and it’s a well-supported and organized event.

              Florida Rocketman Triathlon

I’ve always been fascinated with the history of space flight, in particular the early NASA programs: Mercury, Gemini, and Apollo. So when the opportunity came up to actually ride my bike around the launchpads at Kennedy Space Center, I naturally signed up. This would be a quick trip – leave Seattle on Friday morning and return Sunday night. Because my time was so limited, I needed to make sure all the logistics were carefully lined up. Packet pickup was on Saturday, which included a ticket to the Kennedy Space Center Visitor Center, so I planned to spend all day exploring this and Merritt Island Wildlife Reserve, and of course assembling my bike and dealing with any issues that came up. I’ll write another post about Cape Canaveral itself – here’s the race report.

This is the second year the Rocketman Florida race has been run. I understand they changed the course a bit this year to get us more time actually within the Space Center, with a run around the historic aircraft of the Valiant Air Command. Usually mere civilians aren’t permitted to roam around the launch pads (you can take a tour bus there), so this was a really special opportunity. In other words, a recipe for an absolutely awesome race!

              I’d heard from some of the other athletes that the organization last year wasn’t that great, and this year I think it was better but still added needless hassles. For example, for packet pickup you needed to go wait in one line for your numbers, then another one for your swim cap, then another to get marked, then another (not clearly labeled) to pick up the all-important Kennedy Space Center tickets. All this was done in the hot Florida sun. I was wearing my sandals, and every minute or so while I was waiting in line these little black ants would crawl up my feet and sting my ankles, making me progressively crankier and crankier! It’s beyond me why they couldn’t just include all these items in the “packet” that, as indicated by the name, you’re supposed to just pick up. And they didn’t have timing chips, so we had to grab these the morning of the race. I think they’re still working out the bugs since it’s such a new race.

I parked my tiny rental car (shown here next to the bike box for comparison) next to the pickup area. My bike box just barely fit inside, and I had to keep the seats pushed forward a bit. I pulled out the box on the lawn and put everything together – this is always a bit nerve-wracking since TSA inevitably opens up the box and I’m anxious something will be broken or won’t make it at all.

              The morning of the race, I left my hotel in Cocoa Beach in plenty of time to have a relaxed set-up. The transition area was in a field adjacent to the Astronaut Hall of Fame, but I was surprised to learn upon arriving that you couldn’t make the normal left-hand turn into the parking lot. WTF? I kept driving to a gas station, did a U-turn, only to then discover that you couldn’t turn right into the parking lot either! Now I was starting to get a bit anxious. I got back onto the freeway and approached the entire place from the other (the only other) direction. Fortunately, some quick work with Google Maps got me there. But then I drove aimlessly around the parking lot looking for an empty spot – nobody was directing traffic or helping at all. Finally, I found a spot and grabbed my self-contained all-purpose race bag (I put everything in one giant backpack for simplicity), and started speed-walking to the transition area. Of course, on the way I heard the announcer announce 3 minutes until the transition closed, then 1 minute, and then he asked everybody to exit. Ah snap! I hadn’t flown all the way across the country to be DSQ’d because I didn’t get my transition set up in time!

I ran into the transition area, and did the fastest set-up (by a significant margin) that I’ve ever done before. I managed to verify that, yes, my bike still had two tires and at least the front one seemed reasonably full (so much for topping off the pressure). I figured the most important thing at this stage was my swim cap and goggles, so I made sure I had those. I then had to hunt around to find the person with the timing chips. In retrospect, I’d have been better off had I never done this, since my times were so lousy… Oh well.

After all this rushing around, I proceeded to stand around with my wave…for about twenty minutes. The swim was in the Indian River, which is about five feet deep. In the race instructions, they mentioned manatees might be in the water – how cool is that! I didn’t see any during the actual swim, but I did see some the previous morning…

              We all got out to the line and when we started, many people just started running! I can’t say I’ve ever seen this before – I was swimming (damnit, this is my best leg!) but I don’t think I was making any better time. This was the first time in a while that I didn’t have a wetsuit (the water was all of 81F) and I think my trisuit top was a bit baggy and slowed me down some. Nevertheless, the swim felt fine.

              I ran into T1 and happily I’d set up everything I needed and both tires proved reasonably functional. The bike leg is definitely the most awesome part of the course – it’s totally flat and you have these long stretches of straight road so you can just cruise along.

We rode past the Visitor Center entrance, into the gate, and then made a big left turn towards the iconic Vehicle Assembly Building. This is the massive hangar in which various space vehicles are put together. You can see the thing from many, many miles in all directions (did I mention Florida is flat?) and because it’s so huge, it actually takes longer than you’d think to reach it. By that point, the course had really thinned out, and I don’t think I had any cyclists within .25 miles in either direction. So it was just this cool feeling of riding your bike out around the Space Center.

We rode past the Vehicle Assembly Building and next passed the gigantic crawler that transports the spacecraft out to the launch pad. Again, riding a bike next to this thing was just so cool! The turn-around was out at Launch Pad 39A, the site of many famous launches.

The bike leg for this race was 29 miles both for the “classic” and for the “international” distances, which is really unusual (though totally understandable given the location). The half-Iron was 56 miles because of some bonus loops thrown in. Nevertheless, this was the longest bike leg I’ve ridden so far. It was also getting rather hot out, and I’d gone through one bottle of sports drink and one bottle of water with still 6 or so miles to go. The problem with dehydration is that, once you get behind, it’s hard to catch up. I pulled into T2 feeling fine but knowing I had already lost a ton of hydration ground and needed to try everything I could to catch up. I chugged a bunch of water and sports drink as I was getting my shoes on.

The run course is almost all on roads and is largely devoid of protection from the sun. It was also well into the 80s, so I quickly started to suffer. After the first aid station around mile 2, I started alternating running and walking, which was really, really frustrating since I’ve never had such a lousy run. At any rate, I tried to enjoy the course, which wound through some really neat historic aircraft owned and restored by the Valiant Air Command Museum. If only I had more ability to enjoy them! After that, the course wound around the airport a bit and turned around.

              I think many, many people were having a tough race, since there were lots of walkers. I was just so damn hot (by my standards at least) and despite taking water at every aid station I came to, I just couldn’t seem to get my running mojo back. After a long, long couple of miles, you near the finish and (like Black Diamond), they throw in a bonus loop right at the end! AHHHHHH! I pushed as much as I could into the finish but then felt really pretty crummy.

              I parked myself in some shade and started downing sports drink and pouring water over my head. After 10 minutes or so of this, I felt better. But going back to the transition area to retrieve my bike was punishing – in the direct sun and very hot. Something about my physiology doesn’t do well with heat and honestly it was a challenge getting everything packed up and making sure I didn’t leave anything. My legs were cramping pretty badly and I started walking my bike back. At one point I decided to ride a bit to at least spin my legs and get some fresh breeze. Well, I promptly almost fell off in the parking lot as my left calf decided (without my approval) to contract hard. I had to stop and stretch it out before continuing.

              The ride and swim were pure fun – if I could have skipped the run somehow, it would have been an awesome race! I’m not exactly sure what the problem was – probably a combination of being under-trained and racing in weather that I’m really not accustomed to. If I do it again, I think I’ll stick with the Classic distance – man, I was so jealous of those folks turning around on the run while we were only 1/4 of the way into it.

              Nevertheless, it’s a great race and gives you an excuse to be a tourist at the Space Center, which alone is worth the trip.


                Lake Angeles – Klahhane Ridge – Heather Park Loop

On Sunday of last week we were staying out in Port Angeles so I decided to do the Lake Angeles-Heather Park loop trail. I’d done this once before, maybe two years ago, and the weather was spectacular – especially for so late in September. This was partial consolation for the cancellation of our Constance trip, which still remains unclimbed…by me.

                I decided to bring along my new Garmin 910xt along with the heart rate monitor. I wasn’t expecting to feel particularly energetic, given it was my fifth consecutive day of exercise, and that I’d done a swim-bike brick the day before. But I was curious as to how, according to the device, hiking compared to more intense but shorter-duration forms of exercise.

                The trail leaves from (and returns to) the Heart O’ The Hills trailhead. With my trusty Garmin watch functioning, I was able to get precise information on position, distance, heart rate, elevation, etc. The sign at the trailhead indicates that Lake Angeles is 2.7 miles, which is not correct: both my topo map and my watch put it at 3.3 miles. I had forgotten to set the barometric altimeter on the watch, so I believe the elevation data was all off by about 200 feet or so.

The trail ascends somewhat steeply to Lake Angeles, which was gorgeous with the sun just coming up. From there, you follow an unsigned trail even more steeply up to Klahhane Ridge. I felt strong and energetic up to the lake, but after that started to tire a bit. Interestingly, it looks as if my heart rate kept slowly increasing to the point where I took a break, and then this cycle repeated. My pack was 18 pounds (a bit heavy for a day hike, but I wanted the training and, especially since I was by myself, I wanted to ensure I could easily spend the night out there if necessary), and I was wondering if perhaps there’s no way I can move sustainably up a steep trail without having to rest. Maybe it’s a consequence of my SUV-like metabolism (lots of power, but not great on the fuel efficiency).

From the top of the ridge, you get spectacular views of the Strait of Juan de Fuca (which incidentally was swum last week by several friends), Ediz Hook, Dungeness Spit, Orcas Island, Mount Baker, and even up into the Coastal Range of BC. On the other side, Olympus stood majestically among many hundreds of miles of wilderness. It’s rare you can see it so clearly – not a single cloud in the sky! One day I’ll climb it – along with Constance, it’s at the top of my list.

                The entire trail was essentially deserted. I saw one other couple (interestingly, friends of the family) doing the same thing, and a couple of people where the trail intersects with the switchbacks coming up from the road.

I was wearing my leather hiking boots instead of my mountaineering boots (Scarpa Trangos) and I think my feet were happy for it. I started developing hot spots on my heels (typical for me going uphill with a burden) and I applied my new remedy – a grease-like goo that’s supposed to prevent blisters. It seemed to work, though I still had a bit of residual pain.

                The trail goes right under the shadow of Mount Angeles (when I climbed it earlier this summer, it was raining and wet, with 360 degree views…of clouds). There’s quite a bit of up and down as you cross a narrow pass north of the mountain and contour around the other side. There’s one additional steep-ish section leading to Heather Park and then it’s all downhill back to the trailhead.

The total mileage according to the Garmin was 12.5. Although the elevation gain from the trailhead to the highest point is about 4,200 feet, with all the up and down my watch thinks the total vertical was 6,800 feet. That seems like a lot of up and down to me, but maybe it’s correct. I think the relative elevation data should be accurate, so if that’s the case it’s on par with one of the more arduous climbs (like Whitehorse) and thus a great conditioner.

                Without my mountaineering boots, everything actually felt pretty good during the last several miles. But one thing the Garmin does allow you to do, and I’m not sure how I feel about this, is tick off the remaining mileage bit…by…bit. It’s like watching the last few tenths of a mile click by on a treadmill. I think it’s marginally better than not knowing at all, since you feel a psychological satisfaction with every 1/10 mile…but I just couldn’t help glancing at the watch every two minutes or so.

Despite many years of mountaineering, I still don’t have some things totally dialed-in. I ran out of water about 2 miles short of the finish. Not a big deal, and I could always have refilled at a stream (using my iodine tablets) if I started to feel dehydrated, but for whatever reason I just can’t seem to predict my fluid consumption well. Throughout most of the trip, I managed to stay well hydrated and well fed, but I wished I had started with 3 liters instead of 2.5. Carrying water is painful when it essentially doubles the weight on your back (especially since your pack is heaviest right at the beginning of the trip…often steeply uphill), but unless you have access to streams and have a way of purifying it, it’s just a cost of the trip.

Here’s the interesting part – the trip took me almost 7 hours, with plenty of stops for rest, eating, and photos, and burned over 3,000 calories (according to the watch), yet none of it really felt like “working out”. My brick the day before, by comparison, only burned 1,600 (again, assuming the Garmin is accurate here). The lesson is that long, aerobic, fun activities like hiking in some ways get you more bang for your psychological buck. It doesn’t feel like a chore, as running sometimes (OK, most of the time) does. And if you don’t worry about pace and maintain good nutrition and hydration, you can go many, many hours without it feeling like an ordeal that must be gotten through. So rather than hitting the gym on the weekend for an hour or two, you can hike for 7 and get more (aerobic) benefit, and have way more fun.

                  Triathlon Trifecta

                  My race schedule this year ended up being a bit compressed, with three Olympic-distance races within a five week period. This is definitely not such a great idea in terms of performance, since you don’t taper for each event, but it fits with my generally lazy approach to training. Aside from the plain fun of competing in these events, I ended up learning an important lesson about nutrition and hydration.

This was the second year in a row I’ve done both the Black Diamond and TriRock San Diego races, and it was my first go at the Lake Meridian event. Lake Meridian was my second Olympic-ish distance event of the year, after Seafair. The swim, as usual, was fine (though the start took me a bit by surprise, as sometimes happens in the more casual events). The bike was quite enjoyable, with a number of rolling hills and light traffic. It was my first attempt with my clip-on aero bars, which sadly didn’t seem to make much of a difference from just being on the drop bars (though this may be due to my own bad technique).

                  But the run was awful! It’s a good course through a big park, but I was basically just trying to survive and keep going. Not even halfway through, both of my feet went numb. This has occasionally happened to me in the past, and I suspect it has something to do with dehydration – as my fluid volume decreases, it becomes harder to get circulation down there, even with the constant motion and pounding. I also felt sick. It was truly a relief to finish, and it took quite a while to recover.

It was a similar story at Black Diamond – in terms of how it felt, without doubt it was my hardest race yet. I felt strong coming off the bike, but at the end of the run is a cruel, cruel detour right past the finish line and around the lake. It’s a paltry 1.2 miles or so, but it was one of those “When Will This Ever End” situations. I was just totally drained, my muscles were cramping, my feet were numb, and my back muscles were very sore from the bike. For the first time in a race, I actually had to walk – tragically, within probably .25 miles of the finish. Again, it took a long time for me to recover, and I had to consume substantial amounts of snacks and fluids before feeling a bit normal again.


                  A week later, I flew down to San Diego. TriRock San Diego isn’t a pure Olympic distance (it’s an “Intermediate event”), but is a great course. The swim is right off of the San Diego Convention Center in the bay, the bike leg is through a Naval station south of Coronado Bridge, and the run is through Seaport Village, with the turnaround at the USS Midway. The benefit for the bike and run is that they’re almost completely flat. But the swim in the bay introduces an element that you most definitely do not encounter in a lake: tidal flow. I guess this is the equivalent of a hill for cycling or running…

                  I was in the first wave, and we lined up next to a temporary stairway used to get people in and out of the water. But instead of starting everyone in a massive, barbaric horde like last year, they did a time trial start, with three swimmers at a time entering the water separated by about ten seconds each. This is a much better way to do it, since you totally avoid the usual feeding frenzy during the first 5 minutes of the swim. I was near the front, which was great since I had an almost open shot in front of me. But as I was swimming to the initial turn in the bay, I noticed the mark drifting farther and farther to my right (or rather, I was drifting farther and farther left). It quickly became apparent that we were in a bit of a current, which is a new experience for me during a race. Once we rounded the first mark, it was a swim directly against the tide, dramatically decreasing speed over the ground. It was a bit of a slog getting up this leg of the course, but I was pleased since a harder swim leg is actually to my advantage (since swimming is my strongest segment).

                  Once we made the second right-hand turn, we were swimming at a right angle to the current. I angled substantially into the direction of the flow and was able to track more or less on target to the next mark. Those who didn’t do this ended up swimming essentially directly upstream again just to round the buoy. After that, I cruised back down the bay, ran head-first into one of the marks by accident, and exited the water at the stairs.

                  Since I started so near the beginning of the group, I was one of the first actually out of the water, which was super cool. The transition area was almost completely empty, and I had the first several minutes of the bike leg to myself. Like every other triathlon I’ve done so far, I quickly started getting passed. This course, probably more than others, tends to favor road bikes since there are many turns. But nevertheless, I was dragging ass on the bike. Naturally, I attribute this to me racing my Trek 1.1 Alpha road bike, which is what I commute on. I’m sure it has nothing at all to do with my lackadaisical training. Nope…without a doubt, absolutely…nothing…at…all…

                  The course is lined with military personnel in addition to the usual volunteers, which was cool. But as always on races, I don’t end up taking in too much of the scenery since I’m just too damn focused on going fast and not hitting anything. My time for the bike leg was actually slightly slower than last year, which leads me to believe the clip-on aero bars aren’t making too much of a difference, or else I’m using them improperly (a definite possibility, since I’ve yet to get a professional bike fit).

                  The run was pretty uneventful and nice and flat.

I’m convinced my engine is more like a Humvee than a Prius, and requires more fuel to produce a given output. After the preceding two races, I decided to come to grips with my gas-guzzling metabolism. The usual recommendation for Olympic distance races (at least that I’ve heard) is one bottle on the bike, and maybe two gels while riding, followed by maybe one gel and some water on the run. Previously, my strategy was to hydrate before the swim and take one gel at that time, then two partially-filled bottles on the bike (one water and the other sports drink), and then one gel before the run. But I now believe this is totally inadequate for me, especially in hot weather.

So for this race my strategy was as follows: two complete bottles on the bike (one water and one sports drink), one gel before the bike, one gel halfway through the bike, and then one gel before the run. This was followed by taking a gel at every water station on the run (as well as a cup of water). This ended up being seven gels!

                  But what a difference! I’m not sure it actually made me faster, but my mental state was just so much better. I was actually happy to be racing again instead of just slogging along. I didn’t feel nearly as drained after the finish either. I was also happy with my standard recovery bean, rice, and cheese burrito. I don’t know why, but eating Mexican food after a race seems to have a remarkably salutary effect.

                  One should never be too cavalier about nutrition and hydration – you can maybe get away with it for shorter distances, but it rapidly catches up with you as the duration increases. For me, the threshold seems to be about an hour – for any sort of exercise beyond this, I need to be taking in electrolytes and preferably a sports drink. Dehydration is particularly insidious, as you can get away with it for a while without totally crashing and burning. But given enough time, it just kills your performance. It makes you cranky and saps your strength, and it can be tough to recognize the symptoms since you figure you’re just tired and need to tough it out. Not fun!

                  I intend to repeat this gel- and fluid-intensive strategy for my last triathlon of the season: Rocketman Florida! I’ll post the results after the race.



                    Upper Royal Lake Basin Take 2 (The Snow Edition)

This was my second trip to the Royal Basin area, this one much earlier in the season than the first. The ground up there was covered in snow and there were almost no people. The marmots were just waking up and frolicking – marmots have a way of galloping that really must be seen to be understood.

                    We had plans to climb Mt. Deception but canned it after…well, getting lazy. Deception is the second tallest peak in the Olympics and an impressive sight. You can’t even really see it until you turn a corner in the Royal Creek valley. There looked to be a snow-covered scramble route that would have been fine, but of course I can’t say for certain without actually having gone up.

                    This trip was quite different from the first: no bugs, no Park Ranger telling us to move our tent, and no crowds. Of course we couldn’t swim in the lake given its snow-covered condition…

