Recently I ran into a problem with Unity’s UI system. The problem is simple: How do I close a menu using a controller?
To give an example, a main menu hierarchy will look something like this:
- Main menu
  - New game button, options menu button, etc.
- Options menu
  - Resolution dropdown, VSync options, etc.
- … other menus, like level select
If the player opens the options menu from the main menu, then hits the “B” button on their controller, they should be taken back to the main menu. Easily achieved by polling for Input.GetButtonDown in some component on the menu, right? Well, this works for most situations, but it falls over when some component needs to handle the cancel itself. For example, if a dropdown is open and cancel is pressed, then only the dropdown should close, not the menu.
The Unity UI InputModule already has a field which takes a cancel axis - can we leverage that? There is some limited support for doing so, but it turns out to be rather more complicated than it appears at first glance.
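To make the problem concrete, here is a minimal sketch of the naive polling approach described above. The component and field names (`OptionsMenu`, `mainMenu`) are placeholders for illustration, not from an actual project:

```csharp
using UnityEngine;

// Sketch of the polling approach: a component on the options menu that
// watches for the cancel button every frame.
public class OptionsMenu : MonoBehaviour
{
    [SerializeField] private GameObject mainMenu;

    private void Update()
    {
        // This works for the simple case, but it fires even when a child
        // control (e.g. an open dropdown) should consume the cancel press.
        if (Input.GetButtonDown("Cancel"))
        {
            gameObject.SetActive(false);
            mainMenu.SetActive(true);
        }
    }
}
```

The dropdown problem is visible right in the sketch: `Update` has no way of knowing whether some other UI element already wanted that cancel press.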
If you’ve played Morrowind, you probably know exactly where this is - just outside Ald Ruhn in the Ashlands. Go north and you’ll find the small town of Gnisis, home to both a temple and an Imperial guard post. To the east is forbidding Red Mountain, forever surrounded by the mystical Ghostfence. Just to the south you’ll find the Bitter Coast with its mushroom trees and dense marshes.
There are many things that make Morrowind special to me, but one of the foremost is the sense that I know the place. I’ve walked Morrowind from north to south. Its roads feel like home. I can still tell you, years later, the best routes to travel from Vivec to Tel Mora.
I can already see some of you rolling your eyes. “Oh yeah, this old argument. Fast travel is bad, yadda yadda, but seriously - who wants to spend fifteen minutes in a video game just walking?” And you’re right! I love Morrowind, but even I am not immune to the tedium. These days, it’s the big thing that puts me off from playing it again. And yet, something always makes my mind turn back to Morrowind, and think - why did that make me feel the way it did?
Note that this post contains spoilers for Morrowind, Dark Souls, and Subnautica.
Imagine the following scenario: you have a list of objectives you want to show in the game HUD. Sounds simple enough to implement in Unity’s built-in UI system, right? Just throw a HorizontalLayoutGroup on a panel, insert your elements, and the panel will automatically size to fit. Instantiate new children as new objectives come in.
However, when you add new objectives, it doesn’t look especially nice to have the UI expand immediately. Plus, you want to draw the player’s attention to the objectives list when it is updated. So, you decide to animate the objectives list.
Using Unity’s built-in UI components, this turns out to be a bit difficult to do. There is no built-in support for animation in the UI system. How, then, do we go about animating this list?
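As a starting point, one common workaround is to tween a `LayoutElement` on the new entry so the layout group re-flows every frame as the element grows. This is only a sketch - the component name, target width, and duration below are all assumptions for illustration:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Sketch: animate a LayoutElement's preferredWidth from zero so the
// parent HorizontalLayoutGroup expands smoothly instead of snapping.
public class ObjectiveEntry : MonoBehaviour
{
    [SerializeField] private LayoutElement layoutElement;
    [SerializeField] private float targetWidth = 160f; // illustrative value
    [SerializeField] private float duration = 0.25f;   // illustrative value

    private IEnumerator Start()
    {
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            layoutElement.preferredWidth = Mathf.Lerp(0f, targetWidth, t / duration);
            yield return null; // the layout group re-sizes to fit each frame
        }
        layoutElement.preferredWidth = targetWidth;
    }
}
```

Driving the layout through `preferredWidth` keeps the layout group in charge of positioning, which avoids fighting it with direct `RectTransform` manipulation.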
There are remarkably few resources on the internet for creating an audio mix in Unity, and the ones that do exist seem to fall into one of two categories:
- “Get started playing sounds in Unity with an Audio Source!”
- “Here’s how to integrate Wwise into Unity!”
I needed something in the middle. Playing sounds using sources and listeners wasn’t going to cut it, and I didn’t have time to learn Wwise, as we were submitting our game to an expo in a month and we had zero sounds.
Since our game was an RTS, there were a few specific challenges I needed to solve:
- Many sounds are less important than others (a gun firing vs. “You are under attack!”). Unity does support priorities on audio sources, but a higher priority only ensures that the sound gets played; we needed important sounds to play louder relative to other sounds.
- There are a ton of sounds going off, all at once. This especially presents an issue when a bunch of units all begin firing at the same time, producing a very unpleasantly loud spike as the amplitudes add up.
- The sounds are all over the map, so we need 3D audio. But since the camera is overhead, we can’t use the 3D position to attenuate; we need to use the 2D position relative to the units.
In desperation, I turned to Unity’s AudioMixer. Though most sources I’ve read around the internet say it’s not good, I managed to make it work.
A note of caution, first, that I am not an audio engineer. I’m not even particularly good at audio stuff. This is just what I found works for our project - there are no doubt better ways to do it. With that in mind, below is a high-level outline of my solution.
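To illustrate the 2D-attenuation idea mentioned above, here is a rough sketch: ignore the camera’s height and attenuate by horizontal distance from the point the camera is focused on. The component name, `maxDistance`, and the linear falloff are all assumptions, not values from my project:

```csharp
using UnityEngine;

// Sketch: fake "2D" attenuation for an overhead camera by measuring
// distance on the ground plane only, bypassing Unity's 3D spatialization.
[RequireComponent(typeof(AudioSource))]
public class FlatAttenuation : MonoBehaviour
{
    [SerializeField] private Transform listenerGroundPoint; // e.g. where the camera looks
    [SerializeField] private float maxDistance = 50f;       // illustrative value

    private AudioSource source;

    private void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 0f; // treat the source as 2D; we attenuate manually
    }

    private void Update()
    {
        Vector2 a = new Vector2(transform.position.x, transform.position.z);
        Vector2 b = new Vector2(listenerGroundPoint.position.x, listenerGroundPoint.position.z);
        source.volume = Mathf.Clamp01(1f - Vector2.Distance(a, b) / maxDistance);
    }
}
```

A linear falloff is the simplest choice; a curve (e.g. via `AnimationCurve`) would give finer control over how quickly distant battles fade out.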
I’ve been using Unity’s free networking solution, UNET, in an RTS. On the whole, it works, but it doesn’t work especially well. Since UNET has to support many different types of games, the choices made by the developers lean towards versatility and flexibility, rather than efficiency.
Case in point: the NetworkTransform. It supports many things out-of-the-box, including interpolation, rigidbodies, and variable send rate, but it makes tradeoffs in the efficiency department. Every time it syncs a transform, it’s sending position and rotation uncompressed. With 3 floats for position and 3 floats for rotation, that’s 24 bytes every time a sync happens. The entire Unity networking library is open source, so you can analyze the NetworkTransform yourself.
There are two reasons for wanting to reduce the bandwidth of the NetworkTransform:

- The Unity Matchmaking Service enforces a bandwidth limit of 4 KB per second in pre-production mode. 4096 / 24 is about 171, so assuming 10 updates a second, that means only 17 NetworkTransforms syncing at once - with no other traffic at all.
- Reducing the amount of bandwidth allows us to push the send rate higher than 10 times a second, reducing the amount of interpolation needed along with the perceived latency.
A couple of notes before we get started. First, a lot of this article is based on Glenn Fiedler’s (Gaffer’s) snapshot compression article, which applies whether or not you’re using Unity. I’ll explain a few concepts from the article for completeness, but you should familiarize yourself with it before reading on.
Second, you should be familiar with bitwise operations. Since we’re trying to save as much bandwidth as possible, we’ll be hand-packing bits.
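As a warm-up for the kind of hand-packing involved, here is a sketch of one classic trick: quantize a position to fixed-point and pack the three components into a single integer. The range (each axis in [0, 512)) and the 16 bits per component are assumptions for illustration, not values from this project:

```csharp
using UnityEngine;

// Sketch: quantize a Vector3 in [0, 512) per axis to 16-bit fixed point,
// then pack all three components into one ulong.
// 3 * 16 = 48 bits, versus 3 * 32 = 96 bits for raw floats.
public static class PositionPacker
{
    const float Max = 512f;                          // assumed world size
    const int Bits = 16;
    const float Scale = ((1 << Bits) - 1) / Max;     // ~128 steps per meter

    public static ulong Pack(Vector3 p)
    {
        ulong x = (ulong)(p.x * Scale);
        ulong y = (ulong)(p.y * Scale);
        ulong z = (ulong)(p.z * Scale);
        return x | (y << Bits) | (z << (2 * Bits));
    }

    public static Vector3 Unpack(ulong packed)
    {
        const ulong mask = (1ul << Bits) - 1;
        return new Vector3(
            (packed & mask) / Scale,
            ((packed >> Bits) & mask) / Scale,
            ((packed >> (2 * Bits)) & mask) / Scale);
    }
}
```

The tradeoff is precision for bandwidth: at this scale each axis is accurate to under a centimeter, which is more than enough for interpolated remote transforms.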
I’ve been on an art kick working on my most recent side project, an RTS made in Unity. In this post, I want to share some neat things I’ve been doing with shaders.
Note that this is an area I’m still exploring and learning in. These are just the results of some of my first experiments. If you have ideas on how to improve them, please leave me a comment at the end of the post.
Some knowledge of Swift and thread synchronization is required ahead.
Normally, on this blog, I write about games and game development. However, most of my time is spent working on something else entirely; by day, I’m an iOS app developer at a big company you’ve probably heard of.
Recently, our team ran into a really interesting deadlock. I was working on writing a threadsafe cache implementation - a cache only one thread can read from, or write to, at a time. Seems like by-the-book multithreading. Except… well, it wasn’t, of course.
Lately, I’ve been working on a multiplayer dogfighting game in Unity. While I could have just had players fight over a flat blue ocean, I felt the levels needed something more. Inspired by my previous experiments in terrain generation, I generated a Perlin noise heightmap, and then created a mesh using regularly spaced points.
However, I felt like the terrain was… well, rather bland. I went searching for inspiration around the internet, and the one that really stood out to me was Woodbot Pilots. In some places, their triangles are huge, suggesting slabs of rock and towering cliffs. In other places, small triangles hint at crevices and finer detail. I didn’t fool myself into thinking I could achieve such a detailed result with procedural generation, but perhaps I could get close by using irregularly-spaced points, rather than points on a grid.
Back in February 2015, I used amitp’s tutorial on Voronoi cells to create terrain for a small tactics game. The terrain in that game looked kind of like what I needed, but in 2D. I set about trying to use Voronoi cells to procedurally generate a mesh in Unity. In this post, I’ll go over the initial part: generating the triangulation and translating it into a mesh.
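For orientation, the last step - turning a triangulation into a Unity mesh - is the simple part. Assuming a triangulation library has already produced vertex positions and triangle indices (the inputs below are placeholders), the sketch looks like this:

```csharp
using UnityEngine;

// Sketch: build a Unity Mesh from triangulation output.
// "triangles" holds vertex indices in groups of three; Unity expects
// clockwise winding for front faces.
public static class MeshBuilder
{
    public static Mesh Build(Vector3[] vertices, int[] triangles)
    {
        var mesh = new Mesh();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.RecalculateNormals(); // flat/smooth shading from shared vertices
        mesh.RecalculateBounds();
        return mesh;
    }
}
```

The interesting work is everything before this call: generating well-distributed points and producing a valid triangulation from them.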
In a previous post, I covered the guts of the dropdown console I wrote for my very simple FPS. However, I didn’t mention any techniques on how to actually render the console. When I was first implementing this, it didn’t seem too bad - just throw up a quad with some text on it, right?
In actuality, text rendering turns out to be a fairly non-trivial task. My first attempt loaded each character as a separate texture, and drew one quad per character. However, I quickly ran into performance problems with this - even with just a few hundred characters on screen, there was noticeable lag.
The solution to my problem was to pack all of the characters into one texture, and then batch the draw calls together by line. This reduced hundreds of draw calls to just ten or twenty. In this post, I’ll cover the algorithm I implemented to pack multiple textures into one.
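One simple way to implement such a packer - not necessarily the exact algorithm from the post - is a “shelf” packer: place glyphs left to right, and start a new row when the current one fills up. The type and method names below are illustrative:

```csharp
// Sketch of a shelf packer: each row ("shelf") is as tall as its tallest
// glyph; when a glyph won't fit horizontally, we move down to a new shelf.
public struct PackedRect { public int X, Y, Width, Height; }

public static class ShelfPacker
{
    public static PackedRect[] Pack(int atlasWidth, (int w, int h)[] sizes)
    {
        var result = new PackedRect[sizes.Length];
        int x = 0, y = 0, rowHeight = 0;
        for (int i = 0; i < sizes.Length; i++)
        {
            if (x + sizes[i].w > atlasWidth) // no room left: start a new shelf
            {
                x = 0;
                y += rowHeight;
                rowHeight = 0;
            }
            result[i] = new PackedRect { X = x, Y = y, Width = sizes[i].w, Height = sizes[i].h };
            x += sizes[i].w;
            rowHeight = System.Math.Max(rowHeight, sizes[i].h);
        }
        return result;
    }
}
```

Sorting the input rectangles by height before packing noticeably reduces wasted space on each shelf, since similar-height glyphs end up on the same row.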