Multiplay currently operates two different scaling models: Allocations and Reservations.

This document pertains to the Allocation system, detailing the basic usage and operation of the Allocation scaling model and its subtypes.

The Multiplay Allocations API is asynchronous, so that in the unlikely event an Allocation takes a long time to complete it does not consume resources. The Multiplay Allocations API complies with REST architectural constraints and is idempotent.

What is an allocation?

A simple definition of an allocation would be: a proactive request to obtain a Game Server Instance from Multiplay, which can then be used by players.

In more technical terms, an Allocation is an API request for a Game Server Instance. Upon receipt of the API request, our orchestration system finds an unused Game Server Instance and flags it as used; a second API request is required to return the Game Server's information.
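The two-step flow above can be sketched as follows. This is a minimal, illustrative simulation, not the real Multiplay API: the class, field names, and instant fulfilment are all assumptions made for the sake of the example.

```python
import uuid

class ToyOrchestrator:
    """Illustrative stand-in for the Multiplay orchestration platform."""

    def __init__(self, instances):
        self.free = dict(instances)   # unused instances: id -> connection info
        self.used = {}                # allocation UUID -> instance id
        self.info = {}                # allocation UUID -> connection info

    def allocate(self):
        """First call: flag an unused instance as used; returns an allocation UUID."""
        alloc_id = str(uuid.uuid4())
        instance_id, conn = self.free.popitem()
        self.used[alloc_id] = instance_id
        self.info[alloc_id] = conn
        return alloc_id

    def allocations(self, alloc_id):
        """Second call: fetch the allocated server's information (None while pending)."""
        return self.info.get(alloc_id)

orch = ToyOrchestrator({"gs-1": {"ip": "203.0.113.10", "port": 7777}})
alloc_id = orch.allocate()
print(orch.allocations(alloc_id))   # {'ip': '203.0.113.10', 'port': 7777}
```

In the real system the second call may return nothing for a short while, which is why the API is asynchronous; the toy model fulfils the request instantly.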

What is a deallocation?

A De-Allocation is the removal of an Allocation from our orchestration platform, completed via a single API call. A De-Allocation does not directly shut down a Game Server Instance; however, if the capacity is no longer required and the instance was a Cloud instance, it can be shut down and removed at a later date. Once deallocated, a Game Server Instance can be reused by a subsequent allocation request.
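A sketch of that behaviour, continuing the toy bookkeeping from the allocation description (the function and dictionary names are hypothetical):

```python
def deallocate(free, used, info, alloc_id):
    """Single-call sketch: remove the allocation so the instance can be reused.
    The instance itself is NOT shut down; it simply returns to the unused pool."""
    instance_id = used.pop(alloc_id)
    conn = info.pop(alloc_id)
    free[instance_id] = conn      # reusable by the next allocation request
    return instance_id

free, used = {}, {"alloc-123": "gs-1"}
info = {"alloc-123": {"ip": "203.0.113.10", "port": 7777}}
deallocate(free, used, info, "alloc-123")
print(free)   # {'gs-1': {'ip': '203.0.113.10', 'port': 7777}}
```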

What is a Fleet?

Multiplay uses Fleets to encapsulate a set of regions, locations, providers and profiles. This allows Multiplay to manage multiple environments with unique settings for each region, while maintaining a core template.

In the event there is no Cloud provider available in a defined region, a Fleet can contain only Bare Metal machines; it is equally possible for a Fleet to contain only Cloud VMs if required.

How do we scale based on Allocations?

Allocation scaling choices are based on multiple criteria. Our orchestration platform makes decisions based on cost, velocity, locations, spin-up times, instance types, wider internet issues, and location reliability.

When utilising our API there is no delineation between a Bare Metal machine and a Cloud VM when it comes to an Allocation; our orchestration platform makes intelligent decisions about which capacity to return based on the criteria outlined above. Multiplay ensures instance type, density, fragmentation, and availability are all taken care of when a successful allocation is returned.

Matchmaking with Allocations

Enabling a matchmaker with allocations is a relatively simple process. Understanding player flow, i.e. how you expect players to be placed into a match from matchmaker to game server, is key, while also considering regional placement, profile requests, and UUID tracking.

To this end, there are a few questions to consider when selecting or designing a matchmaker:

The example below outlines a player flow where players are matchmade into a game server instance supporting a lobby. Players are grouped together on the game server instance before play begins.
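The grouping step of that flow can be sketched as below; the function name and lobby size are hypothetical, and each resulting lobby would correspond to one allocation request.

```python
def group_into_lobbies(matched_players, lobby_size):
    """Group matched players so each full (or final partial) lobby
    maps to a single allocation request."""
    return [matched_players[i:i + lobby_size]
            for i in range(0, len(matched_players), lobby_size)]

lobbies = group_into_lobbies(["p1", "p2", "p3", "p4", "p5"], 4)
print(lobbies)   # [['p1', 'p2', 'p3', 'p4'], ['p5']]
```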

Additional scenarios to consider:

  - Multiple Game Modes
  - De-Allocation Scenarios

Now that we have the basic information on how the system works, as well as some error handling, we can start to map the actual functions to our API calls:

Allocation flow Diagram

Multiple Allocations calls

In a production matchmaking environment you should not issue an allocations call for each UUID. The best approach is to have a thread that constantly loops over all pending UUIDs, as they can be batched together into one API call.
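One iteration of such a loop might look like the sketch below. The batch endpoint is stubbed out and all names are assumptions; the point is that every pending UUID goes into a single call rather than one request each.

```python
def poll_allocations(pending, fetch_batch):
    """One loop iteration: poll every pending allocation UUID in a single
    batched call, rather than issuing one API request per UUID."""
    results = {}
    ready = fetch_batch(sorted(pending))   # one hypothetical batched API call
    for alloc_id, conn in ready.items():
        results[alloc_id] = conn
        pending.discard(alloc_id)          # resolved; stop polling this UUID
    return results

# Demo with a stubbed batch endpoint: "a1" is ready, "a2" is still pending.
server_state = {"a1": {"ip": "203.0.113.10", "port": 7777}}
def fetch_batch(alloc_ids):
    return {i: server_state[i] for i in alloc_ids if i in server_state}

pending = {"a1", "a2"}
ready = poll_allocations(pending, fetch_batch)
print(sorted(ready), sorted(pending))   # ['a1'] ['a2']
```

A real implementation would run this on a timer in a background thread, feeding newly requested UUIDs into `pending` and handing resolved connection details back to the matchmaker.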

Allocations and Zero Downtime patching

Zero-downtime patching with allocations is very simple. Your matchmaker needs to support multiple profile IDs; a profile ID contains all configuration data and version information. When a patch is released and deployed, your matchmaker can start allocating the new profile ID for players with the matching new client version. This method can support multiple versions across many profiles.

If the profiles being used for allocations are running version A, you would install version B, confirm the roll-out of the patch, then start allocating profiles using version B. This gracefully moves capacity without dropping players.
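On the matchmaker side, this can be as simple as a version-to-profile lookup; the mapping and names below are hypothetical.

```python
# Hypothetical mapping of client build version -> Multiplay profile ID.
# Version A stays live while version B rolls out; both can be allocated.
PROFILES = {"version-A": "profile-A", "version-B": "profile-B"}

def profile_for(client_version):
    """Choose which profile ID to allocate for a connecting client."""
    if client_version not in PROFILES:
        raise ValueError(f"unsupported client version: {client_version}")
    return PROFILES[client_version]

print(profile_for("version-B"))   # profile-B
```

Once no clients on version A remain, its entry can be removed and the old profile's capacity deallocated.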

Why the game server process should not exit itself after a session

If you decide to self-terminate the game server at the end of a match (which can be called a destructive clear-down), it will prevent optimal use of resources, as process initialisation is typically more expensive than cleanup. For example, rather than restarting the game server, return it to a lobby state without self-termination.
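The non-destructive pattern amounts to a loop like the sketch below; the callbacks and shutdown signal are stand-ins for however your game server receives matches.

```python
def run_game_server(next_match, play_match):
    """Non-destructive clear-down: after each match, reset to a lobby state
    and wait for the next one instead of exiting the process."""
    matches_played = 0
    while True:
        match = next_match()      # e.g. block until players arrive; None = stop
        if match is None:
            break
        play_match(match)
        matches_played += 1
        # Per-match state would be reset here; the expensive process
        # initialisation is paid once, not once per match.
    return matches_played

# Demo: two matches, then a shutdown signal.
queue = iter([["p1", "p2"], ["p3", "p4"], None])
played = run_game_server(lambda: next(queue), lambda m: None)
print(played)   # 2
```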

Extra options and features

API References (in order of use):

  1. Game Servers#ServerAllocate
  2. Game Servers#ServerAllocations
  3. Game Servers#ServerDeallocate

Useful API calls for game server validation:

What is a UUID?

A UUID (Universally Unique Identifier) is a 128-bit value used to uniquely identify information; Multiplay uses UUIDs to track individual allocations. You can check the Wikipedia entry here: