We are making AccelByte Development Toolkit (ADT) available to all game developers, large and small. It’s been a long and challenging journey to get to this point, but we are really excited to have more developers using ADT and helping to shape its future. We are just getting started, and we have many plans for the future that we will share in the coming months.
With this, I wanted to take some time to share some of my thoughts on what personally drives me to build ADT. It all started with a love of games; I’ve loved games since I was a child, so it was natural that I wanted to make them after I graduated. And so I spent the next 20 years of my life making games: big games, small games, AAA games, indie games, console games, mobile games, and everything in between. Bringing new and joyful experiences to players constantly motivated and excited me.
Along my game development path, one of my ongoing frustrations was the lack of great tooling and the constant reinventing of the wheel. Tools were hastily built, game-specific, and poorly maintained, and they were often thrown away at the end of a project because they were unfit for anything beyond their original purpose. With the advent of engines like Unreal and Unity, the situation improved considerably, but there was still a gap in tools focused purely on helping developers be productive. Creating effective tooling became as much of a passion for me as making fun games.
When the opportunity came along two years ago to be involved with ADT (BlackBox, as it was called back then), it was too good to pass up. Here was a tool whose mission was to deliver many of the things I’d always wanted as a developer. In my mind, creating powerful and effective tools is very similar to making games: tools have to be not only useful but also enjoyable to use, something that is all too often overlooked.
ADT began its life four years ago as a crash reporting tool with the unique ability to capture video of the moments leading up to a crash, giving developers extra context about its cause. Since then it has grown organically to encompass a number of key areas like build distribution, playtesting, and performance profiling, but at its heart, the original vision remains fully intact. That vision is to help developers be more productive, or as we express it: to help developers make better games faster; to allow them to focus on the what and not the how; to provide a robust set of tools that are helpful early in development and throughout the entire development cycle.
Core to this is the idea of shortening the iteration loop. As anyone who has made games for a long time knows, the faster and more often you can iterate, the better chance you have of your game being successful. This applies not only to gameplay iteration but also to raising the quality bar of a game across the board. In the past, iteration was not so difficult: teams were small, they shared the same physical space, games were offline, and builds were usually small. It was typically as easy as shouting down the corridor for a designer to come and check your most recent changes directly on your machine.
When COVID came along it shook this up dramatically, but it really only accelerated trends that were already happening: games were getting bigger and bigger, studios were growing larger and spreading across multiple internal and external teams, and games were moving online at an ever-increasing rate. Suddenly many of the basics of how we put games together were called into question and found to be lacking. Something as simple as getting builds out to an entire distributed team became a logistical nightmare involving huge volumes of data. Tracking the health of a game in terms of crashes, errors, and other feedback meant collecting and organizing large amounts of information. The challenge became one of workflows and of data collection, management, and organization. Even getting team members together to test gameplay changes became difficult, since most games these days have a multiplayer component and so can’t be tested realistically in isolation the way a single-player game can.
ADT is designed to step into these gaps: to be the glue that binds the stages of iteration seamlessly together, from a developer making a change, to getting that change into other team members’ hands, to collecting qualitative and quantitative feedback on it, to acting on that feedback. The goal is to make this as frictionless as possible. ADT focuses on three key areas to make this happen: health, distribution, and playtesting.
ADT started with crash reporting, the most basic form of quantitative feedback on the health of a game. Knowing how and when your game crashes, or throws exceptions, is the most straightforward measure of its health. From there we built out additional feedback mechanisms, with support for crash videos and, most recently, our in-game issue reporting tool.
Going forward, we intend to dramatically expand the ways ADT gathers and acts on feedback, with a focus on providing as much contextual information as possible when problems occur in your game, whether they are gameplay related, such as players getting stuck on a particular level, or performance related, such as finding and diagnosing frame-rate drops in certain areas of your game. For example, want to know why players keep dying on a particular level? Gather all the places they die and display them on a heat map. Want to pinpoint exactly where your frame rate is dropping? Detect hitches in the frame rate and record them. These are the kinds of features we intend to add to ADT’s health tools.
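To make those two examples a little more concrete, here is a minimal sketch of the underlying ideas: bucketing player death positions into a coarse grid for a heat map, and flagging frames that blow past a time budget as hitches. This is purely illustrative Python, not ADT’s API, and every name, threshold, and function in it is hypothetical.

```python
# Purely illustrative and hypothetical -- not ADT's API. It sketches the two
# ideas above: bucketing player death positions into a coarse grid for a heat
# map, and flagging frames that exceed a time budget as hitches.
from collections import Counter

CELL_SIZE = 10.0        # world units per heat-map cell (assumed value)
HITCH_BUDGET_MS = 33.3  # flag any frame slower than ~30 fps (assumed value)

death_heatmap: Counter = Counter()

def record_death(x: float, y: float) -> None:
    """Bucket a death position into a grid cell for later visualization."""
    cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
    death_heatmap[cell] += 1

def check_frame(frame_time_ms: float, context: dict) -> None:
    """Flag a hitch along with whatever context is cheap to capture."""
    if frame_time_ms > HITCH_BUDGET_MS:
        report_hitch({"frame_time_ms": frame_time_ms, **context})

def report_hitch(event: dict) -> None:
    # Placeholder: a real tool would ship this to a backend together with the
    # level name, player position, recent log lines, and so on.
    print("hitch detected:", event)
```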
If a user can’t get a version of your game quickly and efficiently, then nothing else really matters. As a result, distribution has become a central pillar of ADT. On the surface this seems a straightforward problem to solve, but distributing builds of modern games involves huge volumes of data. It’s not uncommon for a modern AAA game to be in the range of 100GB, and this will only get bigger. Combine that with the fact that many studios want to make builds from every changelist, and you end up with terabytes of data every day.
Early in ADT’s life, we experimented with binary diffing as a way to reduce data transfer and storage. This method has proven very effective in user-facing distribution channels like mobile stores and Steam. For ADT, however, the approach did not work well: with so many builds across so many channels, the sheer number of diffs required became prohibitively expensive in its own right.
This led us to rethink the problem and to develop what we call smart builds. Smart builds take a file-based approach to diffing, which lets us know, at the file level, exactly what has changed in a build and upload and download only those changes. This not only produces optimal diffs but also opens up avenues for future work on distribution. For example, want to know why the latest build has suddenly ballooned in size? Diff the two builds and list exactly what changed. Want to know why your server build is unnecessarily large and how to shrink it? Break down the files and categorize them by type. These are not capabilities we have today, but smart builds make them possible.
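As a rough illustration of the file-based idea (and not of ADT’s actual smart-build implementation), the sketch below hashes every file in two build directories and classifies files as added, removed, or changed; only the added and changed files would need to be transferred, and the same listing can answer questions like why a build suddenly grew.

```python
# Purely illustrative -- not ADT's smart-build implementation. It compares two
# build directories at the file level using content hashes.
import hashlib
from pathlib import Path

def hash_files(build_dir: str) -> dict[str, str]:
    """Map each file's path (relative to the build root) to a content hash."""
    root = Path(build_dir)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in root.rglob("*")
        if path.is_file()  # a real system would hash large files in chunks
    }

def diff_builds(old_dir: str, new_dir: str) -> dict[str, list[str]]:
    """Classify files as added, removed, or changed between two builds."""
    old, new = hash_files(old_dir), hash_files(new_dir)
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(p for p in old.keys() & new.keys() if old[p] != new[p]),
    }

# Only the "added" and "changed" files would need to be uploaded for the new
# build, and the same listing answers "why did this build suddenly grow?"
```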
Between health and distribution sits playtesting. Playtesting is the reason we distribute builds and the purpose of gathering feedback on a game’s health. For something so critical to a game’s success, the lack of tooling in this area is surprising. Every studio seems to do it differently, using an ad-hoc assortment of tools to gather players, get builds out to them, and ultimately collect feedback. ADT aims to be a single tool for this purpose, one that makes organizing and executing a playtest trivial.
We added playtesting support to ADT in alpha form last year. The focus of this initial alpha was on letting you define a playtest and then collate all of the resulting feedback in one place so you could assess how it went. Based on the feedback we’ve had from the alpha, we are now ready to move into the next phase.
Playtesting is probably the area in which we have the biggest plans, and the second half of 2023 will be largely dedicated to delivering them. This will involve a complete redesign of the ADT user experience that tightly integrates playtesting, distribution, and health. The goal is for ADT Hub to be a single place where you can create a playtest, coordinate and invite players to join, distribute the required builds, and then summarize, collate, and triage the resulting feedback. We will have more to share on these plans in the coming months.
As I mentioned at the start of this post, ADT is just getting started. We see what we have today as a foundation, with many more exciting features to come. We’d love for you to try out ADT and help shape its future. We strongly believe in building something genuinely useful for developers, and we are constantly looking for guidance and feedback to make that a reality. You can get started with ADT on a 30-day free trial. Welcome aboard!