
Getting started with OpenGL

While the kinds of games I like to play have traditionally used isometric graphics, and those do have a certain retro appeal, I wouldn’t be happy with that as a player today. Even playing OpenTTD now I miss being able to arbitrarily zoom and reorient the camera as I play.

Plus learning 3D programming has been something I’ve wanted to do since I first played Elite, and I even remember working out the math to do perspective when I was supposed to have been studying German in class at school.

So this project will be using OpenGL. I actually started playing around with this a year or so ago on iOS, but got distracted by the removal of Google Maps from the iPhone and wrote Embarcadero instead, so I’ve been revisiting some of the code I wrote back then.

I’m not going to use iOS for this project though, at least not at first. The extra overhead of dealing with a different platform, the simulator, and devices would get in the way of the learning and experimenting. I’m going to use OS X instead, since that’s what runs natively on my laptop. Of course, using OpenGL means the game should be easier to port to other platforms later if it gets that far.

There are more than a few differences between iOS and OS X to get to grips with; for a start, iOS uses OpenGL ES 2.0 while OS X uses OpenGL 3.2. They are similar enough, though:

On iOS, the most useful drawing surface is the GLKView, which we combine with a GLKViewController subclass that also serves as the view’s delegate. When the view is loaded we create an EAGLContext and assign it to the view, and this is made the current OpenGL context whenever a frame needs to be drawn. To draw each frame we implement the glkView:drawInRect: method and fill it with appropriate GLyness.
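
Roughly, that iOS setup looks like the following. This is just a minimal sketch, with GameViewController as a placeholder name and nothing but a clear in the draw method:

#import <GLKit/GLKit.h>

@interface GameViewController : GLKViewController
@end

@implementation GameViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create an OpenGL ES 2.0 context and attach it to our GLKView;
    // the GLKViewController is automatically the view's delegate.
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *view = (GLKView *)self.view;
    view.context = context;
    [EAGLContext setCurrentContext:context];
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // Real drawing goes here; for now just clear to black.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}

@end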

OS X’s approach is similar, but the names and roles are different. The convenient drawing surface is the NSOpenGLView, which takes care of some of the setup for us, in particular the creation of the NSOpenGLContext for drawing. Rather than implementing a delegate model, we subclass the view and override its methods, such as drawRect:, to draw a frame.
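
A sketch of the OS X equivalent, assuming the pixel format (including the 3.2 Core profile) has been set up in Interface Builder or an initializer, and with GameView as a placeholder name:

#import <Cocoa/Cocoa.h>
#import <OpenGL/gl3.h>

@interface GameView : NSOpenGLView
@end

@implementation GameView

- (void)prepareOpenGL
{
    [super prepareOpenGL];

    // One-time GL setup, once the NSOpenGLContext exists.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

- (void)drawRect:(NSRect)dirtyRect
{
    [[self openGLContext] makeCurrentContext];

    // Real drawing goes here; for now just clear and present.
    glClear(GL_COLOR_BUFFER_BIT);
    [[self openGLContext] flushBuffer];
}

@end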

In either case, the core of any game is going to be the rendering loop. But then comes the question: how often do you redraw the screen? There are several different answers to this, each with its own benefits and weaknesses.

The least-effort approach would be to only redraw the screen when something has changed. If the program is largely made of static, unchanging content this should be a huge win, since the CPU usage and thus power requirements would be much lower. But in a game something should always be changing on the screen; nobody likes looking at a static image, so everything has little subtle animations.
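
In AppKit terms that approach is just a matter of marking the view dirty when, and only when, the game state actually changes; gameStateDidChange here is a hypothetical hook for wherever that happens:

- (void)gameStateDidChange
{
    // Only ask for a redraw when something is actually different;
    // drawRect: will be called on the next pass through the event loop.
    [self setNeedsDisplay:YES];
}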

If we’re going to do animations, we know that there will be some amount of time to calculate the animation and, especially if we’re moving the camera, the next frame in general. We could use this calculation time as the delay between redraws, and then immediately begin the calculation for the next frame once redrawing has completed. In other words, a tight loop, updating and rendering as quickly as possible.
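
As a sketch, that tight loop would look something like this, with hypothetical updateScene: and drawFrame methods and ignoring how it would actually be driven alongside the main run loop:

NSTimeInterval lastFrameTime = [NSDate timeIntervalSinceReferenceDate];

while (running) {
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    NSTimeInterval timeSinceLastFrame = now - lastFrameTime;
    lastFrameTime = now;

    [self updateScene:timeSinceLastFrame];   // animations, camera, etc.
    [self drawFrame];                        // render and present immediately
}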

That approach consumes more resources than we really need. On a low-powered device it probably makes no difference, but on a high-powered device we could end up redrawing a hundred times a second, consuming far more power than necessary.

A middle ground would be to use a timer fixed at the desired frame rate, say 30 or 60 frames per second. Each timer fire would update the scene, redraw it, and then sleep until the next fire. If we took longer than 1/30th or 1/60th of a second we’d have to be careful to ensure that we don’t queue up timer firings, perhaps by only setting the timer at the conclusion of the redraw, based on the amount of time the redraw took.
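
A sketch of that timer version, as methods on the view, using a one-shot NSTimer that is only re-armed once the redraw has finished so firings can’t pile up (the updateScene method is a placeholder):

- (void)scheduleNextFrame
{
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                     target:self
                                   selector:@selector(frameTimerFired:)
                                   userInfo:nil
                                    repeats:NO];
}

- (void)frameTimerFired:(NSTimer *)timer
{
    [self updateScene];
    [self display];             // draw synchronously, right now
    [self scheduleNextFrame];   // re-arm only once this frame has finished
}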

All of those approaches, even the last one, suffer from another problem though: screen tearing. This is because the screen itself is being redrawn at its own fixed rate. What we really want is to redraw our scene only as often as the screen itself is refreshed, with our rendered content presented in step with the rest of the screen.

If we take longer than one refresh cycle to render a frame, that frame stays in the buffer we were rendering into until the next screen refresh and is presented then, and our drawing function simply skips however many intermediate frames we missed because we were busy.

Core Video provides this feature on OS X via CVDisplayLink (iOS has the equivalent CADisplayLink). In fact, on iOS it’s completely free, because that’s what GLKit already uses and how it decides when to call the glkView:drawInRect: method you implement.

On OS X we can use this too, but it requires a little more setup work to hook into the view. Fortunately Apple provide a Technical Q&A for this; with this in place the view’s drawRect: method will be called once, but we can ignore it and instead just implement the getFrameForTime: method suggested in the article.
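
The setup from that Q&A boils down to something like this, as a sketch; displayLink here is a CVDisplayLinkRef instance variable on the view, and the C callback just trampolines back into the getFrameForTime: method:

#import <CoreVideo/CVDisplayLink.h>

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    // Bounce back into Objective-C; the view was passed as the context pointer.
    return [(__bridge GameView *)context getFrameForTime:outputTime];
}

- (void)prepareOpenGL
{
    [super prepareOpenGL];

    // Create a display link for the active displays and point it at our callback.
    CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
    CVDisplayLinkSetOutputCallback(displayLink, &DisplayLinkCallback, (__bridge void *)self);

    // Tie the display link to this view's OpenGL context and pixel format.
    CGLContextObj cglContext = [[self openGLContext] CGLContextObj];
    CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
    CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext(displayLink, cglContext, cglPixelFormat);

    CVDisplayLinkStart(displayLink);
}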

For animation we need to know the time since the last frame; the outputTime timestamp passed in can be used to calculate that:

double timeSinceLastFrame = 1.0 / (outputTime->rateScalar * (double)outputTime->videoTimeScale / (double)outputTime->videoRefreshPeriod);

We also need to do a little work in the getFrameForTime: method to set the context and deal with any threading issues that might come up.

NSOpenGLContext *currentContext = [self openGLContext];

// Display Link is threaded, so GL context must be locked.
CGLLockContext((CGLContextObj)[currentContext CGLContextObj]);
[currentContext makeCurrentContext];

renderer->renderFrame(timeSinceLastFrame);

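// Swap the rendered frame onto the screen, then release the context lock.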
[currentContext flushBuffer];
CGLUnlockContext((CGLContextObj)[currentContext CGLContextObj]);

Now we just need something to render!

