15 September 2010

Ladies and Gentlemen, we are LIVE!



(Read the title in your best Bruce Buffer voice).

My new game, Clone Wars Adventures, is now LIVE and out of Beta! It's free to play and easy to download so head on over to the website and check it out!

27 August 2010

Open Beta!



My new game, Clone Wars Adventures, is now in Open Beta! Head on over, create an account and let us know what you think on the forums (must sign in with your Station name)!

01 June 2010

Finally!

We just announced the new game that I'm working on. Finally I can talk about it!

So, without further ado:


This is a new online free-to-play PC game targeted at kids (and adults), due out this fall. It is built on the technology first developed for Free Realms. It will be largely mini-game based, featuring everything from Lightsaber Dueling to Speederbike Racing, the obligatory Starfighter missions, and even a 3D tower defense game called Republic Defender. However, there is still a large social aspect with friends, housing, leaderboards and even a kid-friendly, Facebook-inspired profile page.

On a personal note, this is a great project to be on. I am a huge Star Wars geek (over 15,000 LEGO bricks in my office representing different Star Wars ships) and my son loves Star Wars and the Clone Wars (and, incidentally, this game too!).

I hope you enjoy the teaser site and video. I look forward to actually being able to show in-game demos (*ahem* E3 *cough*)!

Update 6/1/2010 1:47PM PST:

15 April 2010

/resurrect iPhone

I broke my iPhone 3G. Again.

A few months ago I dropped it in the supermarket. It landed flat on the screen and when I picked it up I was horrified to discover a spiderweb-shaped crack in the screen. Fortunately, everything still worked. For $70 I had it repaired at Volt Mobile. They did pretty good work, but the touchscreen was intermittent (I'd have to flex the phone to make it work again).

Last week, I dropped it again. This time the touchscreen was completely broken--I couldn't even turn the phone off or enter my passcode to do a backup.


This time, I figured I'd try my own repair. Hey, it's worked in the past.

A quick look online showed several places that sell digitizers (touchpads), LCD screens, batteries, you name it. I ended up going with a digitizer sold by AccessoryOne (which purports to be OEM) for less than $20 shipped and taxed. There is also a three-part video series on cracking open the iPhone and swapping out the digitizer. The videos are made by http://www.directfix.com/, who also sell parts but tend to charge a bit more.

This post basically chronicles my repair. If you're doing this yourself, watch the videos; they're much more informative. Also, there is no guarantee that this will work, and it will void your warranty (mine had already expired). I just figured that I'll be upgrading to the 4th-generation iPhone when it comes out anyway.

The first step was pulling out the screen assembly. Two screws at the base (near the speakers) are all that hold the screen assembly on. Unlike the videos, I used a suction cup on the screen near the home button to pull the screen assembly out. I believe this works better and doesn't disturb the rubber seal around the screen assembly. This picture shows the screen assembly removed from the base:


This picture shows the new digitizer/glass on the left top, the removed LCD screen on the left bottom and the old frame/digitizer assembly on the right bottom:


Removing the LCD screen is tricky and must be done carefully. I've read posts from people who cracked their LCD screens at this step, and the LCD is a much more expensive component. I was very gentle.

The digitizer that I ordered was just the glass/digitizer combo; it didn't include the screen frame or the components that are mounted to it (speaker, skin sensor, home button, etc). Therefore, the frame must be removed from the original glass/digitizer and attached to the new one. This was probably the trickiest part. It involves using a hair dryer to weaken the glue and pry the frame off. Then new two-sided tape must be applied to the frame and attached to the new glass/digitizer.

This picture shows the removed frame (bottom center) and the old cracked digitizer on the bottom right:


After a few minutes of work, the frame was attached to the new glass/digitizer:


The new glass had an appropriate amount of protective film, so it was fairly easy to minimize dust getting in between the screen and the glass. The new screen assembly with LCD installed is now shown on the bottom:


After getting the screen cables connected I powered on the phone and was able to enter my passcode to unlock the phone (note that the protective film is still on the screen, hence the sticker in the middle of the screen):


Yay! All in all, a very successful repair for under $20 and about an hour.

Now... to find a better case...

15 March 2010

Be Careful with Default Arguments

uns16 CRC16::computeMemory( const char* data, unsigned length, uns16 crc = 0 );
uns16 CRC16::computeString( const char* data, uns16 crc = 0 );
This little gem caused quite a headache. Allow me to explain.

EQII's streaming client uses 16-bit CRCs to tell if an asset has changed. All 500,000 individual assets have a 16-bit CRC calculated and stored in the master asset list (manifest). When an asset is downloaded, it is cached on disk and stored with the 16-bit CRC. To save time, the client doesn't calculate the CRC on each asset that it downloads, it just uses the CRC stored in the manifest.

Every time the client runs, we download the manifest and check the CRCs in the manifest against the stored CRCs for our cached assets to see if any changed. If an asset has changed (the CRC is different), we delete the cached copy and request the replacement, even if it's not immediately needed.
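The startup check described above can be sketched as follows. The names and the map-based manifest layout are illustrative assumptions, not EQII's actual code:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Manifest = std::map<std::string, uint16_t>;  // asset name -> 16-bit CRC

// Compare the freshly downloaded manifest against the CRCs stored with our
// cached assets. Any cached asset whose CRC differs is stale: the client
// deletes the cached copy and requests the replacement.
std::vector<std::string> findStaleAssets(const Manifest& server,
                                         const Manifest& cached) {
    std::vector<std::string> stale;
    for (const auto& [name, crc] : server) {
        auto it = cached.find(name);
        if (it != cached.end() && it->second != crc)
            stale.push_back(name);  // CRC differs: cached copy is out of date
    }
    return stale;
}
```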

When the streaming client launched, everything worked as planned. It worked great! Little did we know that a particularly evil bug was lurking.

I discovered a problem when I wrote a utility to convert old PAK files (the game data shipped on the DVD) to cached streaming assets. NOTHING matched the CRCs in the manifest. Every asset was wrong. I looked over the code several times and everything looked fine. This bug didn't make sense.

And then it hit me.

The code that was building the manifests was doing this:
uns16 crc = CRC16::computeString( data, dataLength );
Talk about /facepalm. The function I meant to call was CRC16::computeMemory(). The function I actually called treated the input like a null-terminated string. That means the CRC was only calculated up to the first NUL character, and the dataLength parameter silently bound to the default crc argument and was treated as a starting CRC value. This was a bug that had to be fixed: someday, perhaps many years from now, it would waste a lot of someone's time to hunt down.
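Here is a small, self-contained demonstration of the failure mode. The CRC polynomial and the function bodies are assumptions on my part; only the two signatures mirror the ones in the post:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical CRC16 update step (CCITT polynomial assumed; the real EQII
// implementation is unknown).
static uint16_t crcUpdate(uint16_t crc, unsigned char byte) {
    crc ^= static_cast<uint16_t>(byte << 8);
    for (int i = 0; i < 8; ++i)
        crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                             : static_cast<uint16_t>(crc << 1);
    return crc;
}

// Hashes exactly 'length' bytes, embedded NULs and all.
uint16_t computeMemory(const char* data, unsigned length, uint16_t crc = 0) {
    for (unsigned i = 0; i < length; ++i)
        crc = crcUpdate(crc, static_cast<unsigned char>(data[i]));
    return crc;
}

// Hashes up to the first NUL; the second argument is a starting CRC.
uint16_t computeString(const char* data, uint16_t crc = 0) {
    for (; *data; ++data)
        crc = crcUpdate(crc, static_cast<unsigned char>(*data));
    return crc;
}

// The bug in miniature: computeString(data, dataLength) compiles because
// dataLength binds to the default 'crc' parameter, hashing only up to the
// first NUL and seeding the CRC with the data's length.
```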

Oh, but the fun doesn't end there. I couldn't just change the function to fix the bug. Doing that would mean every streaming client user re-downloading everything they had already downloaded: every CRC would change, and the naïve client would happily delete everything and start over. Fixing this properly would take a highly synchronized effort to fix and push the manifests while deploying a one-time tool to re-calculate the CRCs for all the assets that people had already downloaded.

The moral of the story: be careful with default arguments. They can really hurt if misused.
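One defensive pattern worth noting (my illustration, not necessarily what the EQII codebase did) is to wrap the optional seed in its own type, so an integer length can no longer silently bind to it. The hash bodies below are stand-ins, not a real CRC:

```cpp
#include <cassert>
#include <cstdint>

// Distinct type for the optional starting value: an integer length passed
// by mistake no longer converts, so the bad call fails to compile.
struct Crc16Seed {
    uint16_t value = 0;
};

// Stand-in bodies; a real implementation would run the CRC algorithm.
uint16_t computeMemory(const char* data, unsigned length, Crc16Seed seed = {}) {
    uint16_t crc = seed.value;
    for (unsigned i = 0; i < length; ++i)
        crc = static_cast<uint16_t>(crc * 31u + static_cast<unsigned char>(data[i]));
    return crc;
}

uint16_t computeString(const char* data, Crc16Seed seed = {}) {
    uint16_t crc = seed.value;
    for (; *data; ++data)
        crc = static_cast<uint16_t>(crc * 31u + static_cast<unsigned char>(*data));
    return crc;
}

// computeString(data, dataLength);          // now a compile error
// computeString(data, Crc16Seed{0x1234});   // a seed must be spelled out
```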

09 February 2010

Evolution of a Streaming Client

It's funny how many things in the game industry start out as "I wish..."
... I wish we had Guild Halls.
... I wish we had Shader 3.0 support.
... I wish we had Battlegrounds.

... I wish our game was easier to download.

It's equally funny how many things are started by people in their own time just trying to make the game better. That's how EverQuest II's streaming client started out.

Taking an existing game (with 12GB of client assets no less) and streaming it is no simple task. I started off with a "proof of concept" just to prove that it could be done with EverQuest II. As I got into it, the concept became a full-fledged project. It wasn't officially on the schedule, so it was really a labor of love on my part. After a few weeks of silently working on it, I called the producer into my office and said, "Hey, check this out." Needless to say he was pretty surprised.

There are three major conversion steps for a streaming system.

Serving the assets


EverQuest II has roughly 500,000 client-side asset files: meshes, textures, collision meshes, shaders, data files, sounds, music, you name it. Have you ever tried putting half-a-million tiny files in a directory? Take my word for it: Don't.

My first inclination was to build a custom server. The server would run off of the PAK files that we already ship with the DVD-based game client. I had grandiose plans about how to track files that clients were downloading and automatically send assets to clients that they didn't know they needed.

But alas, it was not to be. A custom server means that every client would have to be talking to our server. We would have to think about where to place the server geographically, handling varying load characteristics, availability, bandwidth, etc. These were all questions that had already been answered; we didn't need to ask them again and try to come up with our own answers.

What else is great at serving files to a large number of clients all over the world? Web servers! Specifically, HTTP servers. We already used a CDN for patching purposes--we just needed to serve all the game assets individually and on-demand now.

This caused another wrinkle. The client needs to know a list of all the assets that are available and whether the assets that it has previously downloaded are out of date. We call this the "manifest." This manifest must be fully up-to-date before the client tries to load ANY assets. My custom server knew how to negotiate a manifest with the client in a fairly bandwidth-friendly way because it was smart. CDNs are less smart--they just serve files. EQII's manifest is about 6MB, which you definitely don't want to download every time you run the game. The solution I developed splits the manifest into parts, available as separate files, with an overarching CRC file that is requested first. The CRC file is always requested, but it's only about 8KB. By comparing against the CRC file, the client reconstructs the full manifest, grabbing only the parts it needs.
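The part-selection step might look roughly like this. The part naming and data layout are assumptions; the real format is EQII's own:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct ManifestPart {
    std::string name;  // one chunk of the full manifest, served by the CDN
    uint16_t crc;      // that chunk's CRC as listed in the small CRC file
};

// Given the freshly downloaded CRC file and the CRCs of the manifest parts
// we already have cached, return only the parts that must be re-downloaded;
// the rest of the manifest is rebuilt from the local cache.
std::vector<std::string> partsToFetch(
    const std::vector<ManifestPart>& crcFile,
    const std::map<std::string, uint16_t>& cachedParts) {
    std::vector<std::string> toFetch;
    for (const ManifestPart& p : crcFile) {
        auto it = cachedParts.find(p.name);
        if (it == cachedParts.end() || it->second != p.crc)
            toFetch.push_back(p.name);  // missing or changed: download it
    }
    return toFetch;
}
```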


Requesting the assets


Compared to everything else, serving the assets is the "easy" part. Requesting the assets is far more difficult. You're essentially replacing file system access with a network connection. That sounds a lot easier than it is. File system access is inherently governed by the Operating System and allows any thread to open nearly any file and read data from it. A network connection is a single pipe (or in our case, a collection of pipes) that must have well-defined and tightly-controlled access. Any thread that could previously just read from a file at any point must now be synchronized with other threads requesting assets from a network resource.

Another major difference is that file system access is synchronous from an application's perspective. This means that while waiting for the Operating System to read data from a file, the thread goes to sleep and allows the system to do other things. Generally this happens so quickly that you barely notice, but network connections aren't nearly as fast as your local hard disk. For this reason, we want most of our asset requests to be asynchronous: we send the asset request and go about doing other things until it finishes at some later time.

Unfortunately, it's much easier to do synchronous reads than asynchronous. The EverQuest II client had many synchronous reads that you didn't even notice because the file system is fast enough. If they weren't made asynchronous, a streaming client would appear to 'lock up' while waiting for an asset to be fetched. Obviously, this is undesirable, though in some cases nearly unavoidable.

Furthermore, network connections in games are usually given time by the main thread to do their work (colloquially referred to as "pumping"). That won't work in this system. What if the main thread needs to synchronously load an asset (which still happens occasionally, especially on client startup)? It would be waiting for an asset to finish loading and wouldn't be able to update the network connection that it is effectively waiting on.

Clearly, a system is needed that can pump itself: any thread can request an asset synchronously or asynchronously, and the network connection continues updating as long as the client is running. The system should be able to determine whether a request for an asset has already been sent so we don't waste bandwidth requesting it again. It should recognize higher-priority requests and send them quickly. And, oh yes, let's not forget about failure cases. This piece of technology is the very heart of the streaming client.
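A toy version of such a self-pumping system might look like the following. The worker thread keeps the "network" moving on its own, duplicate requests for the same asset are coalesced, and higher-priority requests jump the queue. The download itself is faked; all of this is an illustrative sketch, not EQII's implementation:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class AssetRequester {
public:
    using Callback = std::function<void(const std::string& name)>;

    AssetRequester() : worker_(&AssetRequester::pump, this) {}

    ~AssetRequester() {  // drain remaining requests, then stop the worker
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Higher priority values are served first. A request already pending
    // for the same asset just gains another callback (no duplicate fetch).
    void request(const std::string& name, int priority, Callback cb) {
        {
            std::lock_guard<std::mutex> lk(m_);
            auto it = pending_.find(name);
            if (it != pending_.end()) {
                it->second.callbacks.push_back(std::move(cb));
                return;  // coalesced with the in-flight request
            }
            pending_[name] = Entry{priority, {std::move(cb)}};
        }
        cv_.notify_one();
    }

private:
    struct Entry { int priority; std::vector<Callback> callbacks; };

    // The worker pumps itself: no game-loop thread needs to service it.
    void pump() {
        std::unique_lock<std::mutex> lk(m_);
        while (true) {
            cv_.wait(lk, [this] { return done_ || !pending_.empty(); });
            if (pending_.empty()) return;  // done_ set and nothing left
            // Pick the highest-priority pending request.
            auto best = pending_.begin();
            for (auto it = pending_.begin(); it != pending_.end(); ++it)
                if (it->second.priority > best->second.priority) best = it;
            std::string name = best->first;
            Entry entry = std::move(best->second);
            pending_.erase(best);
            lk.unlock();
            // A real client would issue the HTTP fetch and store the asset
            // here; we just notify everyone who asked for it.
            for (auto& cb : entry.callbacks) cb(name);
            lk.lock();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::map<std::string, Entry> pending_;
    bool done_ = false;
    std::thread worker_;  // must be last: starts after the other members
};
```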


Storing the assets

Obviously, once an asset has been downloaded, we don't want to waste bandwidth downloading that asset again. It might take minutes to enter a zone for the first time, but we don't want to take that long every time we enter that zone. Therefore, that asset must be stored locally.

A possibility is to store each asset as its own file, but this fails in practice. Operating Systems are not optimized for hundreds of thousands of tiny files. No, these files must be stored in a larger file, packed together and easily accessible.

EQII already has a packed file format. Unfortunately, the way it's set up does not lend itself to modification. When EQII's packed files are written, they're never intended to change. With new assets being downloaded constantly, these files will be changing, and often.

My solution was to develop a new type of asset database specifically suited to our needs. These database files can store a large number of tiny assets, rapidly add and remove assets and quickly retrieve individual assets.
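In miniature, such a database might look like the following in-memory sketch. The on-disk layout, journaling, and free-space reuse are all omitted; nothing here is EQII's actual format:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

class AssetDatabase {
public:
    // Appends the asset's bytes to the pack and records where they live.
    // Replacing an existing asset just re-points its index entry.
    void add(const std::string& name, const std::vector<uint8_t>& bytes) {
        Record r{pack_.size(), bytes.size()};
        pack_.insert(pack_.end(), bytes.begin(), bytes.end());
        index_[name] = r;
    }

    // Removal only drops the index entry; the bytes become dead space that
    // a real database would eventually compact or reuse.
    void remove(const std::string& name) { index_.erase(name); }

    bool get(const std::string& name, std::vector<uint8_t>& out) const {
        auto it = index_.find(name);
        if (it == index_.end()) return false;
        auto begin = pack_.begin() + static_cast<std::ptrdiff_t>(it->second.offset);
        out.assign(begin, begin + static_cast<std::ptrdiff_t>(it->second.size));
        return true;
    }

private:
    struct Record { std::size_t offset; std::size_t size; };
    std::vector<uint8_t> pack_;            // one big packed blob
    std::map<std::string, Record> index_;  // asset name -> location
};
```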


Other Considerations

The most difficult part of building a streaming system for an existing client has been trying to change synchronous asset requests into asynchronous. Consider the following simple example:
Animation* pAnim = pAssetSystem->LoadAnimation( "animation/player_anim1" );
if ( pAnim )
{
    // Do something with loaded asset
}
The above example would need to fetch player_anim1 synchronously. Changing this to be asynchronous might look like the following example:
Asset<Animation> anim( &myAnimLoadHandler );
pAssetSystem->StartLoad( &anim, "animation/player_anim1" );
...

void AnimLoadHandler::OnLoaded( Asset<Animation>& a )
{
    // Do something with loaded asset
}
There's much complexity missing from the second example, but the point should be clear: making something asynchronous is much more difficult than making something synchronous.


Conclusion

The streaming client was one of the most technically fun projects I've ever worked on. It was challenging, but the payoff was huge.