NetMission Engine

Not available for download… For now it’s just a personal tool that I use to make other things.

Maybe someday I will hand NetMission out more openly, but since I have total control over it and just want to make it better and better, I often make drastic changes that, as a side effect, break compatibility with older games (like Defend Your Flaahgra, which I have to keep editing just to keep it running exactly the same). With so much community and industry support for Unity, Unreal, Game Maker, and so many other programs that weren’t viable options back when I started, your best bet is probably to grab one of those if you want to make a game. :wink:

Hi again!!

Well, I haven’t posted an update here in a long while, but I have done so much work on my engine that I figure I might as well post something! Let’s take a moment to reflect… literally.

NOTE: This post is about a special effect that you can briefly see in P2D’s 15 Years of Prime video. It’s actually not a core feature of NetMission Engine (NME), but I built it using the tools that NME provides, so I’ll talk about it anyway.


Intro


Let’s say you’re in an ice cave that someone made with MS Paint. There’s a metallic enemy in your way, but if you shoot it with enough energy blasts, it might fall into a pool of water below and short-circuit. The cave is not lit well, so every time you charge your weapon, its glow becomes an important source of light for you to see where you’re aiming. One way to make this effect look particularly awesome would be if the metal, ice, water, and any other shiny surfaces in the room reflected that glow back.


There are many open-ended ways to do this. One is as I’ve done above: scribble in squished yellow dots all around the canvas. Wait, sorry, no, that’s not actually a viable option. (Please forgive my ugly sketches – I am writing this post from an airplane). You could write special code just for this room, which watches for the player’s energy blasts and copies their graphic into specific locations, perhaps distorted and not as bright. If you want this effect to work across many rooms, you could write code for simple light sources interacting with reflective surfaces, then tag the corresponding objects with that code. If you want even more detail, you could render your scene multiple times under different transformations: once for the main display, once squished and upside-down for the ceiling, another upside-down for the water, a flipped and shrunk version for the metal, and so on, then combine them together. This is what 3D games have to do, so these reflected worlds will often look simpler than the real world, to keep the game running smoothly.

But 2D games have a pretty cool shortcut to fudge the effect: when we render a scene, it becomes a finished pixel image that we can just copy and paste around the screen in various ways to create reflections at certain angles.


But hang on… now that the robot is getting reflected, we can see a small issue. The real robot has a proper reflection shining off of its belly, but the reflected robot does not, because that was not part of the original scene we’re copying around. Should we also copy extra reflections there? But now, what if the ice floor is reflected in the reflected robot? Should we also copy reflections there? What if that reflection has another robot in it? How would we keep track of all this and how many layers deep should we go?

For many games this may not be an issue in the slightest, so there would be no need to fix it. But here’s another thought for you: if any object – let’s say a football (yes, imagine one in vivid detail) – passes in front of the robot, how do we keep the reflection behind the football? As it is now, the reflection is copied at the end of our frame, so the football would awkwardly travel right between the robot and its tummy shine. Since we can only stack one graphic on top of another at a time, to fix this the reflection would have to be processed first, then the ball, just for this one scenario. But this means the ball wouldn’t be in the reflections!
NOTE: Some graphics coders here might be thinking about the depth buffer as a solution, which would allow the reflections to be processed after the ball while still appearing behind the ball. However, this would only be possible if our football graphic had no transparent regions, meaning it was pure vector art with no anti-aliasing. It’s not realistic to expect every game object to fall into that category.

The Previous-Frame Approach

There is a quick and dirty solution to both of the above problems: use the previous frame’s finished image as your reflections! Now you can safely stick a full scene’s reflection at any layer with no complicated juggling of space-time, and think about it: they build off of each other, so after only 5 frames you already have reflections of reflections of reflections of reflections. This previous-frame technique is used in many games dating way back – one that comes to mind is Mario Kart 64’s Wario Stadium. You might notice the jumbo-TV on the wall displaying your character ever-so-slightly behind reality. When you’re riding over the long line of bumps down the middle of the stage you can catch glimpses of the TV inside the TV, which is a view even further into the past.

So, conceptually, we need a pair of canvases that we can alternate between. First, paint onto CanvasA using the data from CanvasB, and put the result on the screen. Then for the next frame that gets rendered, paint onto CanvasB using the data from CanvasA, and put that result on the screen. Lather, rinse, repeat!


This is very easy to do in NetMission. There is a function called createCanvas(width,height,{options}) which I use to make CanvasA and CanvasB in the camera object, and there are handy functions pushCanvas(canvas) and popCanvas() which allow us to paint on the correct canvas. For those who don’t know, push and pop are standard coding terms for placing something (like paper) onto a stack (like a stack of papers) and then removing it, with the assumption that we can only use the one on top. Why such violent terminology? I don’t know. No canvases were harmed or exploded in the making of this frame. In the end NetMission lets us wield inactive canvases interchangeably with other images, and it takes care of all the crazy graphics card details involved in making this easy-to-use system work without a hitch.
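In pseudocode-ish Lua, one frame of that dance looks something like this (drawScene and drawImage are made-up stand-ins for the actual rendering calls, and the canvas size is arbitrary):

canvasA = createCanvas(640, 480, {})
canvasB = createCanvas(640, 480, {})

function renderFrame()
    pushCanvas(canvasA)      -- paint this frame onto CanvasA...
    drawScene(canvasB)       -- ...sampling CanvasB wherever a reflection is needed
    popCanvas()
    drawImage(canvasA)       -- put the finished result on the screen
    canvasA, canvasB = canvasB, canvasA   -- swap roles for the next frame
end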

Limitations and Corrections

No solution is perfect, so there were of course many setbacks which I had to address as I was developing my water reflections.

1.) The one-frame lag looks terrible when the camera moves. For some reason it looks totally fine if individual objects get a bit behind, but it’s very unsettling when an entire environment shifts around relative to itself, especially after frame-skips.


The easiest way to fix this is to shift or offset our reflection’s reference coordinates by how much the camera moved. Now everything lines up, so we’re done, right?

Uh oh. Because the camera moved, there is a piece of our reflection that didn’t exist in the previous frame! We could perhaps make our canvases bigger than the screen to fill this gap, but at some point if our camera moves fast enough there will still be a missing region. Plus it’s such a waste of resources to render more than we have to every frame, just for a subtle special effect. We can do better!

I decided to see how it would look if we simply stretched the reflection to fill the whole space… and it was far too obvious. Now when the camera slides left, the reflected objects temporarily stretch right, then return to the correct spot when the camera stops. It’s the opposite effect when the camera goes right. Sure, technically we are filling the full space, but you’ll have to take my word for it that it is unacceptably distracting! However, I figured out a nice balance of this strategy: the reflection can stretch from the center of the screen rather than from the sides, so that objects in the center always line up with their reflection below. This means we have to stretch more than before, so there is now a section that extends off-screen. Not only does this remove the distraction, but it also makes the scene look slightly more 3D!! Since the reflected objects scroll faster than the real objects, they look closer to the camera as they scroll by.

I eyeballed it for the animation above, but in code the coordinate math on this was more annoying than I’d like to admit… For good measure we can pretend the camera is always moving at some minimum speed, so that the stretching is always in place a tiny bit. The reflection will only grow more if the camera is moving at high speed, and at that point you will not notice or care if the reflection isn’t perfect.
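For the curious, here is the flavor of that math for the horizontal axis (the names are made up, and the real code deals with the vertical component and some edge cases too):

-- Map an on-screen x to where we sample the previous frame's canvas.
-- cameraDX = how far the camera moved since last frame, in pixels.
function reflectionSampleX(x, screenW, cameraDX, minSpeed)
    local cx = screenW / 2
    local shift = math.max(math.abs(cameraDX), minSpeed)
    local scale = (cx - shift) / cx          -- sample a slightly narrower slice
    return cx + (x - cx) * scale + cameraDX  -- stretch about the center, then offset
end

At the center of the screen this reduces to plain camera-motion compensation, so centered objects stay glued to their reflections and the stretching only shows up toward the edges.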

2.) Unfortunately there is still missing data. Those missing gaps we had to plug are symptoms of a larger issue. Think about the vertical component of our reflection for a second. It can only reflect what has been rendered above it. As we raise the water higher and higher, we get more room for the reflection to appear, but less of a scene to actually reflect!

The best we can do with water is just fade the reflection out to hide that sharp cutoff where the reflected sky ends. (And yes, the ball’s reflection is still a frame behind). We could put in the effort to render additional views of off-screen areas, but again, that seems like a huge waste of processing power over an optional special effect.

In practice, water doesn’t really suffer from the missing vertical data. If a game designer claimed it was a conscious aesthetic choice to show less reflection when water fills the screen so that you can see what’s happening inside the water, I would totally believe them. But I can imagine scenarios where this limitation could raise an eyebrow, as players might notice an ice cave dramatically change its lighting as the camera slides in and suddenly provides more data to the reflections.

3.) These are not real reflections. What we’re doing here is an interpretation, a simulation of what the character sees from their point of view, twisted to be understood from our point of view. It looks nice, but in the real world you’d never look at a suspended pool of water from the side and see upside-down mountains! You have to be looking into the water from just above the surface to see that. Plus this really only makes any sense with surfaces that run perpendicular to the camera. As you begin to angle a mirror toward the camera, it changes the whole perspective of what’s needed in the reflection, as seen here where you can see 1 eye vs. 2 eyes due to the angle:

Even background parallax can make reflections look a bit fishy if you observe closely and ponder. But hey, where some might see our approach as unavoidably sloppy and inaccurate, others see it as an opportunity! Regardless of how little logical sense they make, these reflections look cool, and nothing stops us from making them look even cooler! Remember that sin function on your graphing calculator?

One of the simplest distortions we can apply is to offset our texture coordinates based on a few of these sine waves. Stack a few together, make them move over time, and you’ve got yourself some wavy water. Check it out – I put this entire planet underwater and now it looks like a space ship!
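If you want to play along at home, the distortion is nothing more exotic than this (the constants are picked out of thin air; t is the current time in seconds):

-- Offset the reflection's sample coordinate by a stack of moving sine waves.
function waveOffset(x, y, t)
    return 3.0 * math.sin(0.011 * x + 1.5 * t)
         + 1.5 * math.sin(0.029 * x - 2.3 * t)
         + 0.8 * math.sin(0.070 * y + 3.1 * t)
end
-- Then sample the reflection at (x + waveOffset(x, y, t), y) instead of (x, y).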

Reality? What’s that? We’re talking about things that look cool. :stuck_out_tongue:

The Results

Of course, water reflections are just the surface (ugghhh) of all the work I have been putting into my game engine, as well as code for P2D. The total time this effect is visible in the two-minute Prime 2D video is… about 6 seconds. And you probably didn’t even notice! Which is fine, because there was actually a wobble due to a rounding error which I fixed right afterward, haha.

You may have heard the stories from our team about how much of that video was thrown together in the final 24-48 hours before its release. Was it really worth it to put effort into such a minor feature? …

Absolutely.

Well, okay, maybe not for the video. But the game that Prime 2D takes inspiration from had a ton of attention put into such environmental embellishments, and these really help immerse players in the world. Another hard-to-catch detail is that the rain is rooted in its absolute position in the world, rather than being attached to the camera – just watch the angle of the downpour as the space ship flies left and right.

None of these individual elements are particularly original or difficult, and you probably wouldn’t consciously notice them when you play a game, but I hope that when everything is combined together, they at least convey some amount of the care that went into the quality of your experience. Plus, reflections will get their chance to shine (UGHGHHG) in later scenes.


Writing updates in this style takes a pretty big chunk of time, and I wish I could do them more often. They are fun to make and force me to think deeper about my code, and I hope other people get some benefit out of them as well. I picked reflections because I felt like this feature would work well as a blog post with images, but there is so much more I could talk about.

If there is anything in particular you want to discuss about building a full game engine from scratch, please ask! I will be happy to answer questions and take feedback, plus you can help me decide what my next long-form post should be about.


I dig the effects, this engine is looking amazing, I’d love to see all the cool weather stuff and splashes you could do heh, lots of potential for lava/heated areas too

Lava/heated areas are fun.

In other news I got upscaling shaders in place.

  1. This is the raw pixel-rounded upscale. It looks really crisp, except that the pixel spacing is uneven (check out that wonky dithering) which is really distracting, especially when the camera moves around.
  2. This is the graphics card’s default “bilinear filtering” algorithm, which you see all the time, such as in browsers. It works well for 3D textures and most objects within our canvas, but it tries to blur away the pixels, which is not what we want for a retro pixel aesthetic!
  3. This is a special upscale shader specifically designed for pixels. You can see where the pixels are supposed to be, at least way more than in no. 2, and they appear evenly-spaced once you zoom out, which is better than no. 1. It looks much better at 1x on your screen; I didn’t capture this at the highest resolution, and I also messed up a setting that would make it ever-so-slightly crisper, but I’m too lazy to take another screenshot. But anyway, this also means it’s now possible to set all those cool filters you find in emulators, like these. Not that I’m going to bother.
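In case anyone’s curious how these pixel-friendly upscalers work, here’s the gist of one common variant (“sharp bilinear”), written as plain Lua math instead of actual shader code – not necessarily the exact filter I used. p is a texture coordinate in texel units, and scale is the upscale factor:

-- Pin coordinates to texel centers, blending only in a thin band at the seams.
function sharpBilinear(p, scale)
    local boundary = math.floor(p + 0.5)           -- nearest texel boundary
    local d = p - boundary                         -- signed distance from it
    d = math.max(-0.5, math.min(0.5, d * scale))   -- compress the blend band
    return boundary + d   -- feed this to the ordinary bilinear sampler
end

At scale = 1 it degenerates into plain bilinear, and as scale grows it approaches crisp nearest-neighbor with even pixel spacing.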

And we’re back!

By popular demand today’s topic is PHYSICS! It also conveniently happens to be what I’m working on at the moment.


Making a game without a physics system is like making a game in PowerPoint. Objects just float there, and somehow you have to slide them around convincingly.

That said, physics systems come with their own complications. Objects just flail and bounce off each other, and somehow you have to slide them around convincingly.

Unfortunately, object movement isn’t something I am ready to talk about yet. After a full decade of off-and-on attempts, I still have not successfully finished putting a physics system into NetMission!

Despite that, there is already way too much to discuss in one post. This is Part 1 of what will be a series of long-form posts on physics, and the goal by the end of the series is to finally, finally be sliding objects around convincingly. :sunglasses:

The Early History of Physics in NetMission

When I first took over the “Net Mission” game in 2006, I built a small arena in Game Maker 6 to test out some of the mechanisms.

Remember, Net Mission was originally going to be an online multiplayer mouse-controlled shooter. Maybe someday I’ll have a chance to revisit that idea!

Game Maker 6’s built-in physics lacked any substance, so I’d been studying this popular interactive N tutorial on the Separating Axis Theorem for boundary detection, in the hopes that I could spin my own system together. “Wow, real math!” I exclaimed with sparkling eyes. That sparkle disappeared once it turned out that Game Maker, at the time, was too slow to handle much scripted math. “But I need good physics!!” With no clue what I was getting myself into at age 14, I announced that I was switching from Game Maker to C++ on March 1, 2007. Thus began the “NetMission Engine” as we know it today.

In early 2008, I finally caved and decided to trust someone else’s physics code for once. It was Super Mario Galaxy of all things that convinced me I didn’t stand a chance on my own. I researched available 2D physics libraries and narrowed my options down to this highly promising one called Box2D which was still in the works. I tinkered and made some random experiments, like this bouncing red box:

(Pretend the red box is bouncing. The colorful space ship is unrelated)

Or this eye-burning demonstration, which played loud sounds for every bounce:

Or this destructible stack of blocks for guess-whom on guess-what-day:

By mid-2009 I managed to whip together a clunky Meta Ridley ragdoll: (click to play video)

That last example was the first time I had combined all my isolated bits of code into one single “engine” package, which was so exciting for me that naturally the next thing I did was remove Box2D from my engine and work on everything else instead.

Wait, what?

Game physics in general is an awfully intimidating and cumbersome topic, and I didn’t have the tools to make anything significant out of it. Moving my mouse through a picture point-by-point and typing coordinates into code was a grueling process. Awesome programs like R.U.B.E didn’t exist yet. I had no money, and I was in the mindset that someday I would make my own super-duper fancy NetMission Engine Editor all on my own. My attempts at that had been pretty limited and buggy…

So I gave up and didn’t bother with physics for 5 years. In that time Box2D developed into somewhat of an industry standard. It was picked up by Angry Birds (Dec 2009) followed by a ton of other games (even Shovel Knight (June 2014) which you’d never expect!), and Google expanded it into LiquidFun (Dec 2013). Legit game engines started integrating it natively, like Unity Engine under the name RigidBody2D (Nov 2013), and before that it was even built right into Game Maker (late 2012)!! Dang it! Guess I could’ve waited it out and saved myself writing a whole C++ game engine from scratch…

But in the summer of 2014, I decided it was time to give physics another go. I discovered the Tiled Map Editor and went nuts with integration and new features. Now I can draw the boundary lines around my scenes! I can draw complex shapes and load them in as objects! Surely that was the only thing I was missing before.

The Second Attempt at Physics

NetMission uses Lua scripts for its games, so I have to do what’s called “binding” or “wrapping” to enable access to Box2D’s C++ interface through Lua. Such bindings already exist, but they are more of a literal translation. My coding philosophy won’t let me inflict raw vanilla interfaces of any kind upon my Lua scripts, when I have this perfect opportunity to make things better and leverage Lua’s strengths!

The main technique I use when designing interfaces is the “Every time you X you also have to Y” pattern. There is probably a better name for it, but programmers constantly sniff around for that smelly phrase in our thought process. It’s our bread and butter. After all, that is precisely what computers are here to fix! If you X a lot, and the computer can automatically Y for you when you X, then we need to make it happen.

As you carry out your next day, see if you can notice the “Every time you X you also have to Y” pattern. It’s all over the place! Even something as simple as having to close out of a program before the newer version will continue installing, or always typing your name at the bottom of emails, or writing the same info here, here, and here on a form. It’s especially prevalent in the physical world, where automating something can imply an enormous engineering challenge or even an entire field of research. Maybe someday you’ll find yourself hooking up motion sensors and facial recognition components just to log into your phone faster! Wait a second…


With Box2D, here is one of the first obstacles I had to overcome. Check out this shape:

It’s a bit funky, but not unheard of to find something like this in a game. Maybe it’s a UFO on its side, or the end of a large weapon, or a cardboard cutout of a lizard monster wearing a shoe, and for whatever reason you need its collision to be this precise.

In NetMission, you would create this object like this:

body = createBody({position={x,y}})
shapes = body:AddShapes({SHAPE_POLYGON, 0,0, 40,40, 84,4, 44,12, 60,-24, 24,-8, 32,-32,
                                        52,-40, 16,-48, 14,-28, 12,-46.8, -4,-46},
                        {friction=0.5,restitution=0.2,density=1.0})

Those coordinates are specified in no particular direction. Looks like clockwise I guess.
Okay actually, more likely what you’d do is visually draw that shape in a TMX file, call it “funkyPolygon”, and grab it from there instead:

body = createBody({position={x,y}})
shapes = body:AddMapShapes(objLayer, "funkyPolygon", {})

Ah yes, this is how it should be. Draw it in an editor. You don’t even need to override the friction, restitution, or density settings anymore since you can specify those and other properties in the TMX file, too.

On the flip side, here is how you would create this object in Box2D:

b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
b2Body* body = world->CreateBody(&bodyDef);
b2FixtureDef fixtureDef;
fixtureDef.friction = 0.5f;
fixtureDef.restitution = 0.2f;
fixtureDef.density = 1.0f;
b2PolygonShape polygonShape;
fixtureDef.shape = &polygonShape;
b2Vec2 vertices[b2_maxPolygonVertices];
vertices[0].x = 8.4; vertices[0].y = 0.4;
vertices[1].x = 4; vertices[1].y = 4;
vertices[2].x = 4.4; vertices[2].y = 1.2;
polygonShape.Set(vertices,3);
body->CreateFixture(&fixtureDef);
vertices[0].x = 5.2; vertices[0].y = -4;
vertices[1].x = 3.2; vertices[1].y = -3.2;
vertices[2].x = 1.4; vertices[2].y = -2.8;
vertices[3].x = 1.6; vertices[3].y = -4.8;
polygonShape.Set(vertices,4);
body->CreateFixture(&fixtureDef);
vertices[0].x = 6; vertices[0].y = -2.4;
vertices[1].x = 4.4; vertices[1].y = 1.2;
vertices[2].x = 2.4; vertices[2].y = -0.8;
polygonShape.Set(vertices,3);
body->CreateFixture(&fixtureDef);
vertices[0].x = 4.4; vertices[0].y = 1.2;
vertices[1].x = 4; vertices[1].y = 4;
vertices[2].x = 0; vertices[2].y = 0;
vertices[3].x = 2.4; vertices[3].y = -0.8;
polygonShape.Set(vertices,4);
body->CreateFixture(&fixtureDef);
vertices[0].x = 3.2; vertices[0].y = -3.2;
vertices[1].x = 2.4; vertices[1].y = -0.8;
vertices[2].x = 1.4; vertices[2].y = -2.8;
polygonShape.Set(vertices,3);
body->CreateFixture(&fixtureDef);
vertices[0].x = 2.4; vertices[0].y = -0.8;
vertices[1].x = 0; vertices[1].y = 0;
vertices[2].x = -0.4; vertices[2].y = -4.6;
vertices[3].x = 1.4; vertices[3].y = -2.8;
polygonShape.Set(vertices,4);
body->CreateFixture(&fixtureDef);
vertices[0].x = 1.4; vertices[0].y = -2.8;
vertices[1].x = -0.4; vertices[1].y = -4.6;
vertices[2].x = 1.2; vertices[2].y = -4.68;
polygonShape.Set(vertices,3);
body->CreateFixture(&fixtureDef);

WHOHOHOH! What’s the deal? There are waaaay too many coordinates and fixtures in there! Isn’t this all just one shape?

Well, no. The deal is that polygons in Box2D must be convex (no dents), the vertices must be specified in counter-clockwise order (left on top), and there can be no more than 8 points by default. These limitations are wonderful from a low-level technical standpoint. It means that the calculations can be greatly simplified because they can make useful assumptions about the vertices, so your game can run faster with more things flying around. The simple pieces can always be combined into more complex shapes as needed. This image shows how our shape is actually stored internally, regardless of which above method you used:

Remember “Every time you X you also have to Y”? Here’s our first real example! Every time you create a polygon, you have to make sure you’ve broken it into well-mannered bite-size chunks. Seems like a missed opportunity there for a computer to do some work – NetMission to the rescue! To be fair, something as mundane as polygon partitioning is likely not within the scope of Box2D’s mission to provide a pure kernel of physics. But I do think that it’s a shame that every time a programmer wants to use arbitrary polygons in Box2D (X), they have to figure out how to solve this geometric problem first (Y).
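The convex decomposition is the meaty half of that problem, but the winding half is a one-formula fix worth showing: the shoelace formula gives a polygon’s signed area, so the engine can detect clockwise input and silently reverse it before Box2D ever sees it. Something like:

-- pts = {x1,y1, x2,y2, ...}; positive signed area = counter-clockwise (y-axis up).
function isCounterClockwise(pts)
    local n = math.floor(#pts / 2)
    local area2 = 0                    -- twice the signed area (shoelace formula)
    for k = 1, n do
        local j = k % n + 1            -- wrap around to the first vertex
        area2 = area2 + pts[2*k-1] * pts[2*j] - pts[2*j-1] * pts[2*k]
    end
    return area2 > 0
end
-- If it returns false, reverse the vertex order before building fixtures.
-- (Flip the test if your pixel coordinates grow downward.)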

NOTE: An observant eye may have noticed that the scale of coordinates is different, too. We have 4.6 in Box2D and 46 in NetMission, leading to the same location. When you’re designing a 2D game, you want to work and think in pixels, but physics calculations work best at a much smaller scale, such as meters (which are bigger, yes, but I mean there are fewer of them). NetMission automatically converts your units, so you can think in pixels or whatever unit you want.
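The conversion layer itself is tiny – the value is in never having to think about it again. Roughly, using the 10-pixels-per-meter scale implied by the numbers above:

PIXELS_PER_METER = 10   -- 46 px <-> 4.6 m in the examples above

function toPhysics(px) return px / PIXELS_PER_METER end
function toPixels(m)   return m * PIXELS_PER_METER  end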

The Third Attempt at Physics

Welp, that second attempt in 2014 didn’t last long. I patched up some of the immediate holes I could find, but soon enough my attention drifted over to more appealing things. Like… resource dependency resolution. And… input buffering.

So now we’re in 2018. Lack of physics is by far the biggest thing holding my engine back. I am working at it and making great strides, this time for real, and I’m excited to share all the clever ways I’ve been integrating physics into my engine. Two weeks ago I wrote a much different blog post, longer than this one, about all the work I had done, except I noticed something fishy when I was reading it back.

“NetMission now handles this for you, too!” I claimed in writing. Yes it does! I proudly thought, having already tested it several times.
Wait, unless you…

Okay but I can fix it really quick!
I couldn’t.

Creating and Destroying Things

Your player aims at an enemy. They shoot a beam of energy and land the shot! KAPOW! The enemy takes damage, the blast is destroyed, some extra debris bounces away, and the game crashes.

NetMission is totally fine if all you want to do is destroy that beam. It secretly buffers objects being added or deleted into a command list, and executes those commands at the end of the frame, not even aware that this also helps Box2D out. But let’s say your beam is made of ice and you want it to stick onto the enemy for a second. The simple act of attaching the two together as soon as they collide is enough to crash Box2D, so every time you want to do this, you have to write down “I want these two objects to be attached at this point” and mail it to a later part of your code that isn’t inside a callback.

To me this is a serious design flaw. Every time you want to change the physics simulation during a collision (X), you have to store that action in a list and perform it later (Y). No matter how any of the underlying interfaces work, I want my engine to be as easy to use as possible. If you’re making a Snake clone, you should be able to type

function snakeBody:BeginCollide(other, contact)
    if (other.isFood) then
        self:attachToTail(NewTailPiece())
        other:teleportElsewhere()
    end
end

without worrying about crashing the entire program, because you forgot to mark for later that your Update() function is where the tail piece should be added, not here.

This turned out to be an intricately thorny issue. I scratched my head over all the possible ways to fix it for days. “Maybe PreSolve() can be renamed to ShouldSolve() so you aren’t tempted to make structural side effects?” I proposed. “Maybe I store all the callback data in a list and call them later? Or just every edited property of every possible new data structure?” I nauseated. “Maybe I edit Box2D’s object interface to allow Bodies, Shapes, and Joints to exist on their own before being added to the world?” I twitched.


There hasn’t been a picture for a while, so here is some abstract art in case you weren’t bored yet

I was going to make a case with the creators of Box2D that this (and many other things I haven’t mentioned) is an important next step, but then I found this post from 2009:

I think now is a good time to say that I have a lot of respect for Box2D, if that wasn’t already evident. First of all, it’s just plain cool, and always quite a spectacle to watch and play with. There is a significant degree of beauty and brilliance to its design, and good reasons why it became such an adopted standard. It was the first of its kind and remains the primary source of inspiration for its competitors. Its low-level workings are rock-solid and fully capable of whatever crazy physics scenes and behaviors you can imagine. It has been pristinely polished in so many aspects… except for user-friendliness.

It was a very sad day for me to come to the conclusion, but after a lot of careful research and prototyping, I began switching my physics code over to Chipmunk2D instead. This is going to be just as much of an uphill struggle as before, but there are some key differences in its interface design that make it possible to do what I want. My apologies to Box2D – we’ve known each other for so long… we’ve been through so much together… went so far… I didn’t think it would have to end this way… :cry:

To see what happens next, stay tuned for Part 2 of this magical adventure


And we’re back!
By popular demand today’s topic is PHYSICS!

NETMISSION PHYSICS PART 2: RESCUE RANGERS

Welcome to Chipmunk2D! A physics system inspired by a 2006 prototype of Box2D, Chipmunk2D is open-source and available for free, now including all of its “Pro” features. While its repertoire of games is not as outlandishly impressive as Box2D’s, Chipmunk2D is built into the popular Cocos2D engine among other things, and supposedly it runs faster than Box2D in most scenarios. One big Chipmunk2D game that comes to mind is Contraption Maker, made by the same folks behind that amazing old The Incredible Machine series. If you play mobile games, chances are you’ve tried something powered by Chipmunk without realizing it.

That said, I’m not here to market either of these physics libraries to anyone. I just want to keep a sort of journal of my experiences working on this integration. As you saw in NETMISSION PHYSICS PART 1: THE BREAK-UP (lol), I just want physics to be an easy-to-use part of my Lua-scripted game engine, and apparently that’s a pretty ambitious thing to ask for. Again, Lua bindings already exist, but they are more of a literal translation of low-level C functions. I have high usability standards for my engine, and this great opportunity to do some “interface redesign” and take advantage of Lua’s strengths.

I’ve put a lot of work into this project recently, and I can’t wait to show off some nifty developments. But since my switcheroo from Box2D to Chipmunk2D came up so suddenly in the last post, I’m going to restrain myself and stay in (mostly) chronological order. Unfortunately that means this post is more of a reactionary opinionated compare-and-contrast between two background pieces of software, so I can’t promise any entertainment value. I’ll try to keep it short and draw lots of images in MS Paint. Let’s get started!

A House of Trampolines

One of the first and tiniest details I noticed was that Chipmunk2D handles friction and bounciness differently from Box2D. Each is on a scale of 0 (none) to 1 (full). For example, ice would have a friction close to 0 because it’s slippery, while sandpaper would have a friction closer to 1 because it’s rough and stops things from sliding on it. A steel ball would have an “elasticity” or “restitution” closer to 1 so that it can keep bouncing, while a wooden ball would be closer to 0. None of this is necessarily physically accurate, but for videogames it makes for a good approximation and can be overruled as needed.

Here’s where things get iffy: when two objects collide, Chipmunk2D does no more than multiply each object’s values together (frictionA*frictionB=frictionAB) to determine the result of that collision. Think about that for a second… If you throw a perfect “bouncy ball” (elasticity=1) at a wall, the only way it would bounce back at around the same speed would be if the wall was also close to elasticity=1 (1*1=1); otherwise the ball would stop in its tracks and fall as if it hit sludge (1*0=0). Most materials are closer to 0, since they don’t bounce much, so this doesn’t seem right to me. I shouldn’t have to build my house out of trampolines just to throw some basketballs around!

Luckily, these physics libraries provide ways to customize many of their features. So when I find things I think are quirky or incorrect, it’s usually just one call to tweakThisThing() or replaceTheDefaultBehaviorWithYourOwnHere(), which is wonderful. I found some alternative formulas for smoothing over this exact scenario, so NetMission overrides Chipmunk’s default decisions with some that made more sense to me. And if my opinion turns out to suck, they can be overridden further by the game scripts as needed.
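For comparison, Box2D mixes friction as the geometric mean and restitution as the max (if memory serves), which handles the basketball-in-a-house case much more gracefully. Swapping in formulas like these is just a couple of lines, applied from a pre-solve collision handler:

-- Box2D-style mixing: one bouncy object keeps the collision bouncy,
-- and friction still blends multiplicatively, just less harshly than a*b.
function mixFriction(fA, fB)
    return math.sqrt(fA * fB)
end

function mixElasticity(eA, eB)
    return math.max(eA, eB)
end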

Anyone Have a Broom?

Box2D has a special feature called “swept collision detection” or “bullet mode”, which pretty much guarantees that your objects won’t squeeze out of bounds or pass through each other. Chipmunk2D doesn’t have this guarantee. They say you don’t really need to worry about it, but I encountered this issue pretty much immediately. In my small test room, if I dropped a circle from top to bottom it’d continue straight through the floor!

If you don’t already know, game objects typically don’t actually “slide” around: they teleport in tiny increments. When you hear a term like “60 frames per second”, that means they’re teleporting 60 times per second, which looks smooth enough from your perspective. But if objects are speeding fast enough, they might teleport past the floor in the span of a single frame, never having a chance to collide! This is called “tunneling”.

One way around this is through “raycasting”: you can scan ahead along an object’s path to see if it’s about to clip through a boundary, and then stop it from doing so. You can also just limit an object’s speed so that it never travels more than its own radius (measured along its direction of motion) in a single frame. I went with the speed-limiting method first, but it felt too artificial, so I switched to raycasting and was able to randomly zip that circle around a complex scene at literally 1,000,000 pixels/second without any tunneling. As you can imagine, this seems to be one of the biggest bullet points that scares people away from Chipmunk2D, but it can be kept under control…
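Here’s the flavor of the raycast version for a simple circle, with a hypothetical rayCast(x1,y1, x2,y2, radius) helper standing in for the physics library’s segment query:

-- Before each step: if this frame's travel could skip past a wall, shorten it.
function preventTunneling(body, dt)
    local dx, dy = body.vx * dt, body.vy * dt
    local dist = math.sqrt(dx*dx + dy*dy)
    if dist <= body.radius then return end   -- too slow to skip past anything
    local hit = rayCast(body.x, body.y, body.x + dx, body.y + dy, body.radius)
    if hit then
        -- stop at the surface instead of beyond it
        local s = math.max(hit.fraction - body.radius / dist, 0)
        body.vx, body.vy = body.vx * s, body.vy * s
        -- (the real version restores the full velocity after the step)
    end
end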

Soup and Pills

Of course, my switch to Chipmunk2D involved more than just overcoming various shortcomings. It does have a few new bells and whistles that make it stand out.

I’ve had a test room for years where you can click to add MS Paint Metroids and watch them bounce around awkwardly. Because of the complex geometry, if you accidentally placed just 2-3 of these Metroids inside of each other, Box2D would always slow down dramatically, even with the number of physics iterations set to a minimum. I’ve read that it’s something to do with the way it shuffles contact point data around, but I never studied the underlying cause. I switched the scene over to Chipmunk2D, and lo and behold, it doesn’t break a sweat! Not only that but it successfully pushes them out of each other right away. I can stack like 100 Metroids on top of each other and the simulation keeps running as before, now with a room full of Metroid soup. This is just one particular scenario and not a scientific test by any means, but it definitely got me jumping for joy.

Weirdly enough, Chipmunk2D lets you bevel (a.k.a. round off) the surfaces of objects, by giving anything a “radius”. In general, circles are some of the fastest collisions to detect in 2-D space (a few multiplications and you’re done), so I assume there is minimal performance penalty for what is quite an interesting and useful feature. Box2D has an old unsupported patch for capsule shapes, which if restored have their own special interface, but here they are a freebie just by adding a radius to a line :smiley:

Also, Box2D has its own share of tiny oddities and messes which by design don’t exist in Chipmunk2D. The most obvious one to me is that there’s one type of joint awkwardly missing from the “goodbye” dependency tree for destruction callbacks…

To me that’s the kind of bug where I’d rather fix it on the spot than write it into the manual. It’s one of the most straightforward cases of the “every time you X you also have to Y” pattern from Part 1. But hey, at least it’s documented clearly :stuck_out_tongue:

Nervous First Impressions

Both of these physics libraries have a lot of room for improvement and, I’ll admit, have left me grumpy and sour about things from time to time, but as a fan I would be thrilled to see them grow and develop further. I’m still rooting for both, but since Chipmunk2D is the underdog and the final choice for powering my engine, I’m especially rooting for Chipmunk now! That said, it isn’t as marketable as it could be, at least to me.

Box2D has a lot more community support, especially with this absolutely incredible iforce2d Tutorial Website which explains every concept so clearly and goes above and beyond into more and more advanced features. Chipmunk2D doesn’t have that kind of backing, at least not as prominently as Box2D – heck, sometimes I would search the web for a specific Chipmunk2D detail, and those iforce2d Box2D tutorials would be at the top!

Then Chipmunk’s online manual doesn’t always match the source code, sometimes even for functions that have been around for five years. Pretty early on you’ll find some TODO example filler messages, and there are broken internal links all around. If I were still on the fence, these early observations could make a huge difference.

Then one of the first things you’ll want to set up with any physics library is the “debug drawing”, a.k.a. displaying the wire-frames on the screen so you can see what’s happening. In my experience this was significantly more complicated in Chipmunk, with gaps in the documentation. On top of that, there are no special helpers for building chains of line segments together, such as the room boundaries in a TMX map, so I had to write the code for hooking them together myself (sketched below).
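That glue code boils down to one segment shape per pair of polyline points, with each segment told about its neighbors so objects slide across the seams instead of catching on them. Roughly (SHAPE_SEGMENT and setNeighbors here are stand-ins for the actual calls; Chipmunk’s real one is cpSegmentShapeSetNeighbors):

-- points = {x1,y1, x2,y2, ...} straight from the TMX polyline.
function addBoundaryChain(body, points)
    local n = math.floor(#points / 2)
    for k = 1, n - 1 do
        local ax, ay = points[2*k-1], points[2*k]
        local bx, by = points[2*k+1], points[2*k+2]
        local seg = body:AddShapes({SHAPE_SEGMENT, ax,ay, bx,by}, {})
        -- neighbors = the endpoints just before and after this segment
        local px, py = points[2*k-3] or ax, points[2*k-2] or ay
        local qx, qy = points[2*k+3] or bx, points[2*k+4] or by
        seg:setNeighbors(px,py, qx,qy)
    end
end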

“Geez, quit your complaining, Troid!”
“Why did you even switch to Chipmunk2D?”
“Next thing you’ll start yelling at us to get off your lawn!”

Okay okay. :stuck_out_tongue: But none of the above actually matters to me. Every missing feature I’ve mentioned in this entire post can be fully overcome, so I did just that and am chronicling it here. Box2D? Chipmunk2D? In the end they look the same on the outside. Truthfully, there is just one interface subtlety that makes all the difference to me, one difference to rule them all…

Synchronized Dancing

Because the physics simulation runs separately from the game scripts, we need to link the two systems together. So when a creature disappears into the “other dimension”, it has to tell the physics simulation to destroy whatever shapes and joints were there. A few moments later, when our same creature “returns to this realm”, all those shapes and joints need to be added back to the physics simulation. This association should happen automatically, to save our game developer a constant headache.

Box2D made this easy: you can disable a physics body, which keeps all its associated shapes and joints in memory but removes them from the simulation, then enable it later to get everything back.

In Chipmunk2D, this is way more complicated: bodies, shapes, and joints can each be enabled or disabled independently, and there are peculiar side effects for each. Philosophically speaking… what does it mean to connect two active bodies with an inactive joint? Or connect two inactive bodies with an active joint? What if your body is inactive but all its shapes are active, connected to an active body with inactive shapes?

It’s a head-scratching set of combinations, and Chipmunk2D’s answers to these scenarios (most mismatches are not allowed but some are encouraged!) are not necessarily self-evident. Even worse is that when joints and shapes go inactive, their bodies completely forget about their existence, so you have to track them separately if you want to hook them back up later. There is a lot of stuff that Chipmunk2D expects us to juggle here that Box2D took care of just fine!
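My fix is plain bookkeeping: the moment a body leaves the simulation, stash everything attached to it, and restore the stash when it returns. Roughly (getShapes/getJoints and space:add/remove are stand-ins for the actual calls):

function deactivateBody(body)
    body.stash = {shapes = body:getShapes(), joints = body:getJoints()}
    for _, j in ipairs(body.stash.joints) do space:remove(j) end
    for _, s in ipairs(body.stash.shapes) do space:remove(s) end
    space:remove(body)
end

function activateBody(body)
    space:add(body)
    for _, s in ipairs(body.stash.shapes) do space:add(s) end
    for _, j in ipairs(body.stash.joints) do
        -- (the real version first checks that the joint's other body is active)
        space:add(j)
    end
    body.stash = nil
end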

However, sorting out all of the above properly gives us one single, enormous advantage over Box2D. Let’s return to this hypothetical Snake script I presented in Part 1, which is hopefully understandable even if you don’t code:

function snakeBody:BeginCollide(other, contact)
    if (other.isFood) then
        self:attachToTail(NewTailPiece())
        other:teleportElsewhere()
    end
end

Both Box2D and Chipmunk2D warn that you should never alter the structure of the physics simulation during a “collision callback”, the moment when they tell you about two things colliding (like BeginCollide). In other words, the above script would crash. What you are supposed to do is record your structural changes (self:attachToTail(NewTailPiece())) in some kind of list, and then once the collisions are processed you are free to finally apply those changes.

In Box2D this would be an excruciatingly frustrating and ugly problem to reverse, which is literally the only reason I switched to Chipmunk2D. Even in Chipmunk, your program will abort immediately if you attempt a physics command you’re not supposed to, so they at least help you bubble your code into that list for later. But with a little extra effort, using Chipmunk2D as-is, we can pretend to completely break the rules. I’m happy to report that NetMission lets you make structural changes to the physics simulation during collision callbacks, something neither of these physics systems promise. Welcome to 2018.
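For the curious, the core of that pretend-rule-breaking is a deferral wrapper that the game scripts never see. Stripped way down, the idea is:

local pending = {}        -- structural changes recorded mid-callback
local inCallback = false

function deferOrRun(action)
    if inCallback then
        pending[#pending + 1] = action   -- not safe yet: queue it
    else
        action()                         -- safe: do it right away
    end
end

function runCollisionCallbacks()
    inCallback = true
    -- ...invoke each object's BeginCollide/EndCollide here...
    inCallback = false
    for _, action in ipairs(pending) do action() end   -- flush after the step
    pending = {}
end

Every structural command (attaching joints, adding shapes, destroying bodies…) funnels through deferOrRun, which is why the Snake script above works as written.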

See you in Part 3. :slight_smile:


NETMISSION PHYSICS PART 3: PLAY A GAME

Less talk, more show. Here is a new game I posted the other day. Enjoy!!

Conclusion

I was planning to talk about “joints” or “constraints” as the final topic, then release this demo to showcase all the work I’d done.

But…

As I built the demo, I wanted to include every basic physics feature in some form, and the list of topics I could talk about grew, and grew, and grew, and grew…

Rather than writing and illustrating an entire technical book, I figured I would just open this up to questions. Are there any technical aspects of the demo you are interested in hearing about? Even the sounds and graphics make heavy use of physics features. I’d be happy to talk about anything.

Lastly, if you’re curious to see some of the inner workings of Swing Swing Swing, there is a super-secret right near the end. *wink wink*


Forum semi dead :frowning:

Obligatory “what’s new” post. How’s NME development been going?

Mainly bug fixes this year. Nothing too spectacular.
I took the summer off completely, but this fall I’ll sit down and see what would be most beneficial to work on.

EDIT: Wow, is Photobucket really adding giant thumbnails to all their images and ruining my previous posts here? That’s an easy way for a site to destroy itself. Time to switch elsewhere.

They’ve been getting worse for a loooooong time. I got fed up and left them circa 2009.

Any updates on NME since last year?

Yeah! I fixed some bugs.

I’m sorry about all the above posts in which I simply say I was fixing some bugs. I’m sure everyone’s figured out by now that I was certainly doing a lot more than just that over the years. :slight_smile: (although fixing bugs is also very important).

In case anyone still reads this old forum thread, I’ll announce here that I’ve officially moved my updates about NetMission Engine over to a blog! I’d already posted the link here in a couple places, but there wasn’t anything substantial to read on the blog yet…

…until today! NetMission Development: NetMission Engine in 2023

I tried writing a few other topics first, but nothing made sense without the full context. So, I am kicking things off with a general overview of the engine before I do any deeper dives. Think of it as a continuation of what I used to do here, before I went secretive. (Again, sorry about that.)
