My Aim Is True


When last we left Project Catalan, I had refashioned my boxy humanoid avatar in Blender from a Unity-compliant form to an Unreal-compliant form, and had got some basic movements going on via Animation Retargeting. Things were looking good as far as a straightforward walk cycle, but I knew I would need more. I had vague thoughts of a character who could pivot at the waist to aim while walking or strafing from side to side. The vagueness of these thoughts turned out to be a problem in itself; I’ll try to explain what I mean later on in this post.

In the meantime, I knew I wanted my character to fire that rifle, and I had no idea how to make it happen, so I went in search of examples. The first site I found turned out to be an encyclopedic behemoth of a knowledge trove (if not particularly searchable or well-organized): The Shooter Tutorial website, where a benevolent and mysterious dev lays out a series of methods and examples for making FPS content. What kinds of content? I found posts about implementing shotgun spread, boss fights, bullet time, and… ninjas? Do I need or want to implement any of these things? Not at the moment, no, but I’ll admit it does ease my mind to know someone has at least pointed the way down so many intriguing paths. Sometimes the hardest part of an implementation is figuring out your initial high-level approach to the problem. In fact, thinking about it some more, I’d say that’s the case most of the time. Once you know what it is you’re trying to do, the actual doing often turns out to be surprisingly simple.

Of all the many gems and jewels collected on the shooter tutorial website, I think I benefited most from the examples and explanations of the various blueprint communication techniques. This is a subject that I gather trips up a lot of Unreal newbies, and I alluded to it last post in terms of Unreal’s emphasis on good hygienic OOP. In Unity, it’s all too easy to just make all your variables public, and not muck around with getters and setters, despite the future problems that approach opens you up to. Unreal’s Blueprint node system wants you to get valid references to other game objects by using built-in nodes (like “Get Player Character”), then casting those results to your own derived custom types. In addition, my casual Unity workflows often involved having the object I’m working on (say a crate) get whatever references it needs (say to the player’s control script) directly, so I quickly ended up in a circular morass where every script had its hands in the pockets of every other, and the boundaries defining who was responsible for what got blurry and confusing. Unreal has a number of other ways it would prefer you do things, and after some experience I’m inclined to think it has a point.
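To make the shape of that pattern concrete outside the engine, here’s a tiny plain-C++ sketch of the “get a base reference, then cast it to your derived type” dance. The class names and the stand-in GetPlayerCharacterNode function are hypothetical, purely for illustration; this is not Unreal API code.

```cpp
#include <cassert>

// Engine-style base class: the type a node like "Get Player Character" hands back.
struct Character {
    virtual ~Character() = default;  // polymorphic, so downcasting is possible
};

// Your derived class, with the project-specific state you actually care about.
struct MyCharacter : Character {
    float Health = 100.0f;
};

// The engine only knows about the base type...
Character* GetPlayerCharacterNode(Character& pawn) { return &pawn; }

// ...so, as with Blueprint's Cast node, you downcast to reach your own members.
float ReadHealth(Character* c) {
    if (auto* mine = dynamic_cast<MyCharacter*>(c)) {
        return mine->Health;   // cast succeeded: safe to use derived members
    }
    return -1.0f;              // cast failed: the pawn wasn't our type
}
```

The failed-cast branch is the analogue of the Cast node’s “Cast Failed” pin: you always have to account for the possibility that the object isn’t what you hoped.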

The key to understanding blueprint communication, at least for me, was to understand the importance of the Game Mode class. By default, the base Game Mode type contains references to a bunch of other files and variables that populate the game, such as the default Player Pawn, the default Player Controller, and more abstract entities like the Player State, which can be used as a container for various information about the player, and has the advantage that other classes can look into it and read that information. The beauty of it is, if you create a class called “MyGameMode” or whatever, as long as it’s derived from the base Game Mode class, you can then set that as the default mode, and fill it up with your own versions of everything. So then, when you hit play, the game will for instance automatically spawn and activate your custom Player Character class, which has all the components you need on it, so you don’t have to script out a bunch of tedious startup logic to get all your variables and things ready to go. You just tell the engine in advance, “use my custom versions of all these game architecture files on startup, and run whatever logic I happen to have in there”. As a way of focusing your development efforts on gameplay rather than housekeeping, I’ve got to say this is a really enjoyable and intuitive system, once you get a handle on the core concepts.
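Stripped of all the engine machinery, the idea reduces to something like this toy C++ sketch, where the “mode” just records which pawn class to spawn when play begins. All the names here are made up for illustration; the real engine does the equivalent with class references you set in the editor.

```cpp
#include <functional>
#include <memory>
#include <string>

// Stand-ins for the engine's pawn hierarchy.
struct Pawn {
    virtual ~Pawn() = default;
    virtual std::string Name() const { return "DefaultPawn"; }
};
struct BoxmanPawn : Pawn {
    std::string Name() const override { return "Boxman"; }
};

// A toy Game Mode: a bundle of defaults the engine consults at startup.
struct GameMode {
    std::function<std::unique_ptr<Pawn>()> DefaultPawnFactory =
        [] { return std::make_unique<Pawn>(); };
};

// "Hitting play": the engine asks the active mode for its default pawn,
// so your custom class comes to life with no startup scripting on your part.
std::unique_ptr<Pawn> StartGame(const GameMode& mode) {
    return mode.DefaultPawnFactory();
}
```

Swapping in a custom mode is then just a matter of pointing the factory at your own class, which is roughly what the “default Game Mode” project setting does for you.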

So that’s one easy and straightforward way to handle communication between blueprints: make a custom Player State object that gathers information every tick, like the player’s current velocity, location, whatever you might need. Then, if you need that information elsewhere, use the “Get Player State” node, cast the result to your custom Player State class, and browse that sweet buffet of knowledge like a hobo at a Sizzler. You can do similar things with the “Game State” and “Game Instance” classes; the former, I gather, is designed to assist multiplayer games by replicating shared information out to every connected client, while the latter sticks around across level loads. I find it all a lot cleaner than what I had going on before… again, once the initial learning curve flattened out.
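In miniature, the pattern is just “one writer per tick, many readers”. A hypothetical sketch, with invented names and a made-up sprint threshold:

```cpp
// Toy stand-in for a custom Player State: a per-tick snapshot of player data.
struct MyPlayerState {
    float Speed = 0.0f;          // current movement speed
    float X = 0.0f, Y = 0.0f;    // current location
};

// The player character writes into the state once per tick...
void TickUpdate(MyPlayerState& s, float speed, float x, float y) {
    s.Speed = speed;
    s.X = x;
    s.Y = y;
}

// ...and any other system reads from the snapshot instead of reaching
// into the player object directly. (600 here is an arbitrary threshold.)
bool IsSprinting(const MyPlayerState& s) { return s.Speed > 600.0f; }
```

The win is that the reader never needs a direct reference to the player character, only to the state object, which keeps the dependency graph from turning circular.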

If you want to get even more sophisticated and compact with your inter-blueprint communications, there’s the Struct variable type. Structs are like enums in that I didn’t give much thought to either when I encountered them in C#, as the concepts were just abstract enough to evade my understanding of their usefulness. However, blueprint nodes make the power of Structs clear right away. I think of a Struct as a sort of gun cabinet, or if you prefer, an ice cream truck, packable with a dazzling variety of wonders. In your Player Character blueprint, for example, you might have a Struct containing Vector variables for locations, floats for values, some bool switches, maybe even some strings. Then, in your player state, you get a reference to that Struct and use the Break Struct node. Suddenly you’re like a child catching a piñata with a massive uppercut: a multicolored smorgasbord of delights (in the form of valid output pins) is showered upon you.
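C++’s structured bindings make a decent stand-in for the Break Struct node, if you want a textual picture of what’s happening. A little sketch with a made-up WeaponInfo struct:

```cpp
#include <string>

// A grab-bag struct, like one you might define in the Blueprint editor:
struct WeaponInfo {
    std::string Name;
    float Damage;
    int Ammo;
    bool Automatic;
};

// Passing the struct hands over the whole bundle through a single reference,
// instead of four separate parameters (or four separate wires).
float DamagePerClip(const WeaponInfo& w) {
    return w.Damage * w.Ammo;
}
```

The Break Struct moment is the structured binding: `auto [name, damage, ammo, autofire] = weapon;` cracks the piñata open and spills every member out as its own named variable, much like the node spills out one pin per member.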

Getting still more abstract with blueprint communications, we have Interfaces and Event Dispatchers, and there my education on that front ended. I may have used one Event Dispatcher, but for whatever reason it didn’t seem like I needed the hard stuff this time out. I’m sure there will be some impetus to learn those properly somewhere down the road.


Unreal’s documentation is interesting to me because it incorporates so many versions and has so many authors. Like an ancient Greek palimpsest, it shows signs of forgotten layers, and a wrong turn can quickly take you into the woods of the old C++ API pages (although a helpful full-screen popup quickly appears to basically ask “are you sure you intended to come in here?”). As new features roll out, various pages may have different kinds of information about them. For instance, many internal and external tutorials make use of the word “Persona” as a name having something to do with Unreal animations, but it took some forum sleuthing to figure out: Persona is a multi-window, multi-interface suite, represented by the four-box icon set you see at the upper right when looking at a Skeleton, Skeletal Mesh, Animation, or Animation Blueprint:

Each of these interconnected views has its own flavor, and they work together to control animations of various kinds. As I touched on previously, many of these interfaces are the same as you’d find anywhere else: there’s a window for hooking sockets up to bones, and a 2d variable-driven animation graph, but in Unreal’s case the preview window was much more robust, showing the results you’ll see in-game, which allows for better debugging. The Anim Graph is, to me, the real divergent area where Unreal goes off to do its own thing. It’s sort of a combination of a traditional state machine graph and the event node system, where instead of passing flow control you’re passing information about a pose along various wires and through various permutating function nodes, until hopefully the effect you were trying to get emerges from the far end. As is becoming a theme: this was a brain-buster to deal with at first, but once I started to understand it, I started to see why a visual representation of this data has its own distinct advantages.

For instance, if you wanted a character to be able to aim from both the hip and the traditional iron-sights positions, you would generally need to construct multiple state machines, probably with some redundancies, to handle switching between the “arms up” and “arms down” versions of walking, running, crouching, etc. at any given time. With the Anim Graph, you can “cache” sets of animations and then recall them when needed, using a bool (flipped by the player in-game at will) to determine which pose to use.
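Reduced to a single joint angle, that bool-driven blend looks something like the sketch below. The names and the smoothing rate are invented for illustration; a real pose is of course a full set of bone transforms, not one float.

```cpp
#include <cmath>

// A one-joint "pose" boiled down to a single angle, just to show the blend shape.
struct Pose { float ShoulderPitch; };

// Linear blend between two cached poses; alpha 0 = arms down, 1 = arms up.
Pose BlendPoses(const Pose& down, const Pose& up, float alpha) {
    return { down.ShoulderPitch + (up.ShoulderPitch - down.ShoulderPitch) * alpha };
}

// The bool flipped by the player picks the blend target; stepping alpha
// toward it each frame gives a smooth transition instead of a snap.
float StepAlpha(float alpha, bool aiming, float dt, float speed = 5.0f) {
    float target = aiming ? 1.0f : 0.0f;
    float step = speed * dt;
    if (std::fabs(target - alpha) <= step) return target;
    return alpha + (target > alpha ? step : -step);
}
```

The cached animation sets are the `down` and `up` inputs here: computed once, then recalled and blended as the bool dictates.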

Of course, this blending of animations via a single yes/no bool is just about the crudest use case, and Unreal offers a battery of “Blend Poses by” nodes (by Bool, by Int, and so on), which seem to me like they could support all sorts of complicated animation behaviors. Fortunately I didn’t need to dig much deeper than this, although I did start to get involved with the concept of Animation Montages, thinking I might use one for recoil, but I bounced off this concept a bit and may need to return to it later. It’s connected with the idea of “animation slots”, which are one way to constrain animation to a subset of body parts. As in most engines, there are quite a few different ways to implement any given thing you need, and I managed to find a way to separate the Boxman’s upper and lower halves without delving into Montages, although I look forward to giving them more attention at some later date.

During this process, as I poked around at various parts of the system, not getting much of anything good to happen, I came to a familiar revelation, one that I often have multiple times during these sorts of projects. When I’m not getting a good result, it’s not generally that the problem is hard, or even that I’m going about it the wrong way, but instead it’s that I haven’t fully and completely articulated what it is I’m actually trying to achieve; this is what I meant when I talked about “vague thoughts” at the top of this post. In a situation where you’re prototyping with vague goals, it’s very unlikely for anything to come out perfectly just by accident, and while a kind of hand-wavy “I’ll know it when I see it” attitude can be quite useful in other kinds of creative efforts, here I’ve found it’s not the best play. You’re not likely to reach a satisfying result when you can’t judge any two attempts on whether either is closer to or further from your desired goal… because you haven’t sufficiently defined one.

Once I realized what was happening I took a step back and made some clear choices about what I wanted the arms, legs, and camera to be doing, and then tried step by step to achieve those specific results. Progress, unsurprisingly, sped up.

I initially supposed that I would be limited entirely to pre-made animations imported from a 3D modeling program, and wouldn’t be able to animate a recoil action by hand. Instead I had the bright (if somewhat oddball) idea of creating the recoil through a physics impulse, as though every time the player fired a gun, an invisible fist punched the avatar on the gun shoulder, in other words simulating the recoil “for realsies”, as it were.

I did find a few mentions on the boards of people having tried this same thing, but thankfully not many because it turned out to be completely unnecessary. Instead, I discovered in a related search the wonder of Skeletal Controls, a set of nodes only available in the Anim Graph that grant you the ability to pluck and pull at individual bones, which in theory allows you to construct whatever animation you like out of whole cloth, although in practice that would be unbearably tedious. These movements can also take advantage of something called Two-Bone IK, which I will probably need to investigate soon enough, as right now I just have a placeholder recoil animation, enough to prove to myself I can do it this way. Before polishing the recoil, I wanted to get the upper and lower body to coordinate in the manner I had finally decided to try for. As luck would have it, I was looking through some training resources at about this time and discovered the Content Examples project, which may be the single best Unreal learning resource in existence.
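I haven’t actually implemented Two-Bone IK yet, but the core math behind it is just the law of cosines: given the two bone lengths and how far away the target is, there’s exactly one elbow bend that reaches it. Here’s my own toy sketch of that solve, in a 2D plane, with hypothetical names; it’s not Unreal’s node, just the geometry underneath.

```cpp
#include <cmath>

// Two-Bone IK in a plane: given upper/lower bone lengths and the distance
// from the root (shoulder) to the target, find the interior elbow angle
// that makes the chain reach. Returns radians; pi means fully straight.
float SolveElbowAngle(float upper, float lower, float target_dist) {
    // Clamp the target distance to the reachable ring around the shoulder.
    float d = std::fmax(std::fabs(upper - lower),
                        std::fmin(upper + lower, target_dist));
    // Law of cosines: d^2 = u^2 + l^2 - 2*u*l*cos(elbow)
    float c = (upper * upper + lower * lower - d * d) / (2.0f * upper * lower);
    // Guard acos against floating-point drift outside [-1, 1].
    return std::acos(std::fmax(-1.0f, std::fmin(1.0f, c)));
}
```

With equal bone lengths of 1, a target at distance 2 gives a straight arm (pi), and a target at distance √2 gives a right-angle elbow, which matches the triangle you’d draw on paper.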


The Content Examples project can be found in the “Learn” tab of the Epic Launcher, and like the other projects there it can be downloaded and installed, so that you can open each map and blueprint in order to study how it works in detail. The content on display in the user-friendly interactive “booths” that make up the project is impressive in its breadth, showcasing just about every corner of the engine’s capabilities. It took me hours to just go through and look at everything, an exercise I highly recommend to anyone learning the engine. Much like the Shooter Tutorial site, or a secret agent’s cyanide pill, it provides comfort even when not in use. I don’t currently have a need to make a normal-mapped decal, or realistic-looking skin with sub-surface scattering, or a working water fountain, but I sleep easier at night knowing that, should the need arise, there’s a working implementation which I can take completely apart only a few clicks away.

What I was actively interested in was animation, and what I found was extremely interesting: a rigged, skinned, animated controllable 3rd person character, apparently called Owen (based on the filenames). He wears a trench-coat, is rigged to carry a gun (not included), and has a face that I originally figured was some kind of lizard-like scales or chitin (hence my nickname “Gator Guy”), but on closer inspection looks like some kind of seamed, chrome rock? All in all, he looks a bit like a C-list comic book vigilante, someone who might deliver exposition or supplies to an Avenger or X-Man. I pondered over his origins for some time, supposing that the tutorial developers just had a laundry list of things to demonstrate (free-moving cloth, odd multi-material monster skins, etc) and Owen was just the result of their cramming all of those bullet points together.

Before finishing this post, I did a little research, and apparently there’s a bit more to it than that, as detailed in a fascinating post I dug up: Owen showed up as the protagonist of Epic’s 2011 tech demo “Samaritan”, and before that was an iconic pre-production character for “Nano”, a proposed open-world co-op shooter that Epic kicked around internally from 2008 until late 2012, when both CliffyB and Epic president Mike Capps departed for greener pastures, following the sale of a large minority stake in the company to Chinese games megacorp Tencent. One assumes that these events helped evaporate any collective will that was left to rescue Nano from development hell, and the whole thing died on the vine, as such games often do (another fascinating tidbit: one of Nano’s other development code names was “Blueprint”… make of that what you will). Through this lens, Owen makes a lot more sense: Epic wanted to recoup a few pennies by re-using some assets in something less public-facing than a real game, while at the same time they weren’t eager to talk about that particular chapter of company history, so the Owen art and logic were unceremoniously slipped into the file cabinet of the Content Examples project, for future delvers such as myself to pore over in consternation. Not such a bad afterlife, if you ask me. One could do a lot worse.

Owen had the ability to walk forwards and backwards, strafe left and right, and swivel his upper torso as desired, and as far as I could tell he was doing it without the use of Montages. It took me quite a bit of trial and error to rebuild the individual parts, but here’s how my version of it works (minus the Transform Bone recoil stuff, which I discussed above):

1. Base Locomotion State and Blend Space

When the player moves the controller, it passes two normalized floats to the animation blueprint, and these drive a 2d animation matrix, which I went into a bit last post. The advantage here is that Owen is built so that the right combination of inputs will produce a unique strafing animation (and through the magic of Retargeting, Boxman can now do the same).

2. Aim Offsets

An aim offset is basically the same as a blend space: a variable-driven matrix of animation poses. The main difference is that the poses are blended additively on top of the base animation, so as long as the aim-offset poses leave the legs and feet alone, you get an upper-body pivot layered over whatever the lower body is doing, provided you have proper vertex groups and bone weights and the like in your mesh.
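Boiled down to a single number, “additive” just means the offset pose is stored relative to a reference pose and summed onto the base animation, roughly like this sketch (names and the alpha parameter are my own, for illustration):

```cpp
// Additive blending in one number: the aim pose is stored as a delta from a
// reference pose, then applied on top of whatever the base animation outputs.
float ApplyAimOffset(float base_pose, float aim_pose, float reference_pose,
                     float alpha = 1.0f) {
    return base_pose + (aim_pose - reference_pose) * alpha;
}
```

This is why the legs stay untouched: wherever the aim pose matches the reference pose, the delta is zero, and the base animation passes through unchanged.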

3. The “Leg Trick”

I had pivoting and strafing functional, but it was easy to get into a situation where the character’s various rotations were not ideal. After some extended head-scratching, I implemented a small trick to keep the upper and lower halves in harmony: When the player pivots at the waist, we slowly rotate the lower half so that it eventually matches the rotation of the upper half. In a perfect world, I would craft some kind of “step and turn in place” animation for the legs, so it doesn’t look so floaty, but for now I’m not unhappy enough with the current behavior to care that much.
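The “slowly rotate the lower half” part is just a constant-rate interpolation of the legs’ yaw toward the torso’s yaw, in the spirit of Unreal’s FMath::RInterpConstantTo. A sketch with made-up names and rates:

```cpp
#include <cmath>

// Wrap an angle difference into (-180, 180] so the legs take the short way round.
float WrapDegrees(float a) {
    while (a > 180.0f) a -= 360.0f;
    while (a <= -180.0f) a += 360.0f;
    return a;
}

// Each frame, ease the lower-body yaw toward the upper-body yaw at a fixed
// degrees-per-second rate, snapping to the target once we're within one step.
float CatchUpLegs(float legs_yaw, float torso_yaw, float dt, float speed = 90.0f) {
    float diff = WrapDegrees(torso_yaw - legs_yaw);
    float step = speed * dt;
    if (std::fabs(diff) <= step) return torso_yaw;
    return legs_yaw + (diff > 0 ? step : -step);
}
```

The wrap step matters: without it, a torso at -170° and legs at 170° would spin the long way around instead of taking the 20° shortcut.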

4. Object and component settings

Here’s where things got a little hairy, but just as a side-effect of Unreal’s component system being so full-featured. It turns out there are a bunch of different switch positions that can change, for instance, how the controller affects a camera attached to the player by a spring arm. Do we inherit values? Does the arm itself have rotational properties or a preferred angle? Does the movement component attempt to orient its rotation based on the velocity of the pawn? It’s a trial-and-error morass of combing through the Details panels for various objects, but I can’t fault Epic for offering such an exhaustive range of options. Probably for some other project I’ll need to set all those switches differently. And after all, there’s no rule saying you can’t just roll your own implementation of any component that’s giving you trouble by offering too many options. Much like growing your own food or making your own clothes: I’m not tempted to do it, but I’ve seen it suggested for sensible reasons and I understand what would drive others to do so. Let’s hope things don’t get that desperate.


  • Further employment adventures take us on a detour to Flatland, that fabled place where there are only two dimensions! Are the Paper 2D tools worth using? Is Unreal overkill for 2D? Will I ever get the hang of Montages? Find out!