Imagine the following scenario: you have a list of objectives you want to show in the game HUD. Sounds simple enough to implement in Unity’s built-in UI system, right? Just throw a HorizontalLayoutGroup (plus a ContentSizeFitter) on a panel, insert your elements, and the panel will automatically size to fit. Instantiate new children as new objectives come in.
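For concreteness, here’s roughly what that setup looks like in code. The ObjectivePanel class and entryPrefab field are placeholder names for this sketch, not anything from a real project:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ObjectivePanel : MonoBehaviour
{
    // Prefab with a Text component; positioned by the HorizontalLayoutGroup.
    [SerializeField] private Text entryPrefab;

    public void AddObjective(string description)
    {
        // Parenting under this transform is all it takes - the layout group
        // repositions the children, and the ContentSizeFitter resizes the panel.
        Text entry = Instantiate(entryPrefab, transform);
        entry.text = description;
    }
}
```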
However, when you add new objectives, it doesn’t look especially nice to have the UI expand immediately. Plus, you want to draw the player’s attention to the objectives list when it is updated. So, you decide to animate the objectives list.
Using Unity’s built-in UI components, this turns out to be a bit difficult to do. There is no built-in support for animation in the UI system. How, then, do we go about animating this list?
There are remarkably few resources around the internet for creating an audio mix in Unity, and the ones that do exist seem to always fall into one of two categories:
Get started playing sounds in Unity with an Audio Source!
Here’s how to integrate Wwise into Unity!
I needed something in the middle. Playing sounds using sources and listeners wasn’t going to cut it, and I didn’t have time to learn Wwise, as we were submitting our game to an expo in a month and we had zero sounds.
Since our game was an RTS, there were a few specific challenges I needed to solve:
Some sounds are less important than others (a gun firing versus “You are under attack!”). Unity does support priorities on audio sources, but a higher priority only ensures that the sound gets played at all; we needed important sounds to play louder relative to other sounds.
There are a ton of sounds going off at once. This is especially a problem when a bunch of units all begin firing simultaneously; the amplitudes add up into a very unpleasantly loud sound.
The sounds are all over the map, so we need positional audio. But since the camera is overhead, we can’t attenuate based on 3D distance to the listener; we need to attenuate based on the units’ 2D positions instead (sketched below).
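To illustrate that last point, here’s one way to attenuate by distance on the ground plane. The camera focus-point parameter and the linear falloff are assumptions for this sketch, not the exact scheme from our project:

```csharp
using UnityEngine;

public class FlatAttenuation : MonoBehaviour
{
    [SerializeField] private AudioSource source;
    [SerializeField] private float maxDistance = 40f; // hypothetical falloff range

    // Called each frame with the point on the ground the camera is looking at.
    public void UpdateVolume(Vector3 cameraFocusPoint)
    {
        // Project both positions onto the XZ plane and measure distance there,
        // ignoring the camera's height above the battlefield.
        Vector2 unitXZ = new Vector2(transform.position.x, transform.position.z);
        Vector2 focusXZ = new Vector2(cameraFocusPoint.x, cameraFocusPoint.z);
        float distance = Vector2.Distance(unitXZ, focusXZ);

        // Linear falloff; a real mix would probably want a curve instead.
        source.volume = Mathf.Clamp01(1f - distance / maxDistance);
    }
}
```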
In desperation, I turned to Unity’s AudioMixer. Though most of what I’d read around the internet said it wasn’t any good, I managed to make it work.
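If you haven’t touched the AudioMixer before, the basic hookup is routing each AudioSource into a mixer group and then controlling whole groups via exposed parameters. The “Combat” group and “CombatVolume” parameter below are made-up names for illustration:

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class MixerRouting : MonoBehaviour
{
    [SerializeField] private AudioMixer mixer;
    [SerializeField] private AudioSource source;

    void Start()
    {
        // "Combat" is a hypothetical group defined in the mixer asset.
        AudioMixerGroup[] groups = mixer.FindMatchingGroups("Combat");
        if (groups.Length > 0)
            source.outputAudioMixerGroup = groups[0];

        // Duck the whole group at once via an exposed parameter
        // (hypothetical name), in decibels.
        mixer.SetFloat("CombatVolume", -6f);
    }
}
```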
First, a note of caution: I am not an audio engineer. I’m not even particularly good at audio stuff. This is just what I found works for our project - there are no doubt better ways to do it. With that in mind, below is a high-level outline of my solution.
I’ve been using Unity’s free networking solution, UNET, in an RTS. On the whole, it works, but it doesn’t work especially well. Since UNET has to support many different types of games, the choices made by the developers lean towards versatility and flexibility, rather than efficiency.
Case in point: the NetworkTransform. It supports many things out-of-the-box, including interpolation, rigidbodies, and variable send rate, but it makes tradeoffs in the efficiency department. Every time it syncs a transform, it sends position and rotation uncompressed. With 3 floats for position and 3 for rotation, at 4 bytes per float, that’s 24 bytes every time a sync happens. The entire Unity networking library is open source, so you can analyze the NetworkTransform yourself.
There are two reasons for wanting to reduce the bandwidth of the NetworkTransform:
The Unity Matchmaking Service enforces a per-second bandwidth limit of 4 KB (4096 bytes) in pre-production mode. 4096 / 24 is about 171 syncs per second. Assuming 10 updates a second, that means only 17 NetworkTransforms syncing at once - with no other traffic at all.
Reducing the bandwidth per sync lets us push the send rate higher than 10 times a second, which reduces both the amount of interpolation needed and the perceived latency.
A couple of notes before we get started. First, a lot of this article is based on Glenn Fiedler’s (Gaffer’s) snapshot compression article, which applies whether or not you’re using Unity. I’ll explain a few concepts from that article for completeness, but you should familiarize yourself with it before reading on.
Second, you should be familiar with bitwise operations. Since we’re trying to save as much bandwidth as possible, we’ll be hand-packing bits.
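To make “hand-packing” concrete, here’s the kind of helper I mean: quantize a float into a fixed number of bits, then write those bits into a buffer. This BitWriter is a simplified sketch, not the exact code from the project:

```csharp
using System;
using System.Collections.Generic;

// Minimal bit-level writer: appends values bit by bit into a byte buffer.
public class BitWriter
{
    private readonly List<byte> bytes = new List<byte>();
    private int bitPosition; // index of the next free bit

    public void WriteBits(uint value, int numBits)
    {
        for (int i = 0; i < numBits; i++)
        {
            if (bitPosition % 8 == 0)
                bytes.Add(0); // grow the buffer one byte at a time
            if ((value & (1u << i)) != 0)
                bytes[bitPosition / 8] |= (byte)(1 << (bitPosition % 8));
            bitPosition++;
        }
    }

    public byte[] ToArray()
    {
        return bytes.ToArray();
    }
}

public static class Quantizer
{
    // Map a float in [min, max] onto an integer that fits in numBits.
    // The receiver reverses this using the same min/max/numBits constants.
    public static uint Quantize(float value, float min, float max, int numBits)
    {
        float normalized = (value - min) / (max - min); // now in [0, 1]
        uint maxInt = (1u << numBits) - 1;
        return (uint)Math.Round(normalized * maxInt);
    }
}
```

Quantizing each position component to, say, 18 bits over the map’s bounds cuts 12 bytes of position down to under 7 - and rotation compresses even further with the smallest-three trick from Gaffer’s article.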
I’ve been on an art kick working on my most recent side project, an RTS made in Unity. In this post, I want to share some neat things I’ve been doing with shaders.
Note that this is an area I’m still exploring and learning in. These are just the results of some of my first experiments. If you have ideas on how to improve them, please leave me a comment at the end of the post.
Some knowledge of Swift and thread synchronization is assumed in what follows.
Normally, on this blog, I write about games and game development. However, most of my time is spent working on something else entirely; by day, I’m an iOS app developer at a big company you’ve probably heard of.
Recently, our team ran into a really interesting deadlock. I was writing a threadsafe cache - a cache only one thread can read from, or write to, at a time. Seems like by-the-book multithreading. Except… well, it wasn’t, of course.
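The original implementation was in Swift; purely to illustrate the shape of such a cache, here’s the naive single-lock version in C# terms (not our actual code):

```csharp
using System.Collections.Generic;

// A dictionary guarded by one lock: only one thread can read or write at a time.
public class ThreadsafeCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> storage = new Dictionary<TKey, TValue>();
    private readonly object gate = new object();

    public bool TryGet(TKey key, out TValue value)
    {
        lock (gate)
        {
            return storage.TryGetValue(key, out value);
        }
    }

    public void Set(TKey key, TValue value)
    {
        lock (gate)
        {
            storage[key] = value;
        }
    }
}
```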
Lately, I’ve been working on a multiplayer dogfighting game in Unity. While I could have just had players fight over a flat blue ocean, I felt like the levels needed something more. Inspired by my previous experiments in terrain generation, I generated a Perlin noise heightmap, and then created a mesh using regularly spaced points.
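That first version was the standard approach: sample noise on a grid and lift each vertex by the result. A minimal sketch, where the grid size and scale values are arbitrary:

```csharp
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class GridTerrain : MonoBehaviour
{
    [SerializeField] private int size = 64;          // vertices per side
    [SerializeField] private float noiseScale = 0.1f;
    [SerializeField] private float heightScale = 8f;

    void Start()
    {
        var vertices = new Vector3[size * size];
        var triangles = new int[(size - 1) * (size - 1) * 6];

        // Sample the heightmap at regularly spaced points.
        for (int z = 0; z < size; z++)
            for (int x = 0; x < size; x++)
            {
                float y = Mathf.PerlinNoise(x * noiseScale, z * noiseScale) * heightScale;
                vertices[z * size + x] = new Vector3(x, y, z);
            }

        // Two triangles per grid cell, wound clockwise so they face up.
        int t = 0;
        for (int z = 0; z < size - 1; z++)
            for (int x = 0; x < size - 1; x++)
            {
                int i = z * size + x;
                triangles[t++] = i;
                triangles[t++] = i + size;
                triangles[t++] = i + 1;
                triangles[t++] = i + 1;
                triangles[t++] = i + size;
                triangles[t++] = i + size + 1;
            }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```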
However, I felt like the terrain was… well, rather bland. I went searching for inspiration around the internet, and the game that really stood out to me was Woodbot Pilots. In some places, their triangles are huge, suggesting slabs of rock and towering cliffs. In other places, small triangles hint at crevices and finer detail. I didn’t fool myself into thinking I could achieve such a detailed result with procedural generation, but perhaps I could get close by using irregularly-spaced points, rather than points on a grid.
Back in February 2015, I used amitp’s tutorial on Voronoi cells to create terrain for a small tactics game. The terrain in that game looked kinda like what I needed, but in 2D. I set about trying to use Voronoi cells to procedurally generate a mesh in Unity. In this post, I’ll go over the initial part of generating the triangulation and translating that into a mesh.
In a previous post, I covered the guts of the dropdown console I wrote for my very simple FPS. However, I didn’t mention how to actually render the console. When I was first implementing this, it didn’t seem too bad - just throw up a quad with some text on it, right?
In actuality, text rendering turns out to be a fairly non-trivial task. My first attempt loaded each character as its own texture and drew one quad per character. However, since each quad meant its own draw call, I quickly ran into performance problems - even just a few hundred characters on screen caused noticeable lag.
The solution to my problem was to pack all of the characters into one texture, and then batch the draw calls together by line. This reduced hundreds of draw calls to just ten or twenty. In this post, I’ll cover the algorithm I implemented to pack multiple textures into one.
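To give a flavor of the kind of algorithm involved, here’s a bare-bones “shelf” packer: rectangles go left to right in rows, and a new row starts when the current one fills up. This is a generic sketch in C#, not the exact algorithm from the post:

```csharp
public struct PackedRect
{
    public int X, Y, Width, Height;
}

// Shelf packing: fill rows left to right, open a new row when one fills up.
// Assumes the atlas is tall enough to hold everything.
public class ShelfPacker
{
    private readonly int atlasWidth;
    private int cursorX, cursorY, shelfHeight;

    public ShelfPacker(int atlasWidth)
    {
        this.atlasWidth = atlasWidth;
    }

    // Returns where the next width x height rectangle should be copied
    // into the atlas texture.
    public PackedRect Pack(int width, int height)
    {
        // Start a new shelf if this rectangle won't fit on the current one.
        if (cursorX + width > atlasWidth)
        {
            cursorX = 0;
            cursorY += shelfHeight;
            shelfHeight = 0;
        }

        var rect = new PackedRect { X = cursorX, Y = cursorY, Width = width, Height = height };
        cursorX += width;
        if (height > shelfHeight)
            shelfHeight = height;
        return rect;
    }
}
```

Sorting the glyphs by height before insertion keeps each shelf tight, which is the usual first improvement on this scheme.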
Back when I was writing a terrain generator in C++, I used a library called glConsole to put a Quake-style dropdown console into the project. By a “dropdown console”, I mean the thing in the screenshot above - a place to execute scripts from inside the game. Basically, I wanted to type a command in the console and have it call a function in the running game - any function, including methods on live objects.
However, this wasn’t possible in glConsole. While it did most things pretty well, the one thing it could not do was bind member functions - functions which are part of a class instance. You could bind free functions and static functions, but not methods. Lately, I’ve been working on a very simple first-person shooter in C++ with no game engine. I wanted something like glConsole, but without that glaring downside. Plus, I was writing the game from scratch for a reason - why not write my own console?
In this post, I’ll explain my approach to implementing a callback system which stores functions and executes them at a later date (a sketch of the idea follows the list below). My goals for this system were:
Member functions should be supported.
You should be able to pass in any method or callable object to be called at a later date.
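The actual implementation in the post is C++; purely to show the shape of the goal, here’s the equivalent in C#, where a delegate bound to an instance method carries its target object along for free - exactly the ergonomics I wanted to replicate:

```csharp
using System;
using System.Collections.Generic;

// A console-style command registry: stores callables by name and
// invokes them later with whatever arguments the console parsed.
public class CommandRegistry
{
    private readonly Dictionary<string, Action<string[]>> commands =
        new Dictionary<string, Action<string[]>>();

    public void Bind(string name, Action<string[]> command)
    {
        commands[name] = command;
    }

    public void Execute(string name, params string[] args)
    {
        Action<string[]> command;
        if (commands.TryGetValue(name, out command))
            command(args);
    }
}

// Usage: binding a member function is no different from a free function.
// var registry = new CommandRegistry();
// registry.Bind("spawn", enemyManager.SpawnFromArgs); // instance method
// registry.Execute("spawn", "grunt", "3");
```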
Devil Daggers is a mystery of a game: one big contradiction. It’s simple but complex, chaotic but calm, beautiful but horrifying. It’s a game that, at first glance, isn’t worth writing about - but here we are.
For the uninitiated, here is a brief description of Devil Daggers.
It is a first-person shooter.
You have two weapons, a rapid-fire spray and a shotgun-like blast.
The entire game takes place on one circular platform, which shrinks over time.
There are several types of enemies. They spawn at preset times, in preset configurations.
The goal of the game is to live as long as you can.
Touch one enemy, and you die.
If you die, you start all over again.
Let’s lay out Devil Daggers’ crimes, the reasons it should absolutely not capture anyone’s interest. First off, the enemies are too predictable. Circle-strafe enough, and it seems as if they’ll never hit you. The stages, too, are dreadfully repetitive. The same enemies spawn at the same times, every time. Then, every time you die, you have to do it all over again! The same repetitive thing, over and over. It’s only made worse, of course, by those leaderboard replays; if you can watch others do it, what’s the point? You can just reproduce exactly what they do!
Well, all that’s true. So why’s it so damn compelling?