While doing a PhD that aims to improve the accessibility of game live streaming by generating audio description in real time, I have been thinking about what the future of gaming accessibility should look like. Since my first GameToSpeech prototype was designed around an API, I thought I might at some point need to justify this design and, in the meantime, reply once and for all to those asking why I focused my PhD on live streamed games and why I did not simply start with audio description for games.

This article will not be scientific. I might cover the same topic with scientific writing in the future, but I thought, for various reasons, that it should first be stated in a blog article, without a solid literature review and for a wider audience. I also think this article goes beyond my PhD project. Please note that, in the following, when the user or the player needs a pronoun, « she » will be used.

A few words about APIs

An API is an Application Programming Interface. It is software that outputs structured data which developers can use to build an implementation for end users (my own definition). APIs are nowadays a major design pattern used widely in the web industry. Every app and every social media platform you know is API-based.
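To make this more concrete, here is a minimal sketch, in TypeScript, of the kind of structured data such an API could output. The field names are hypothetical, not taken from any real game.

    // Hypothetical shape of one event output by a game's accessibility API.
    interface GameEvent {
      timestamp: number;                            // milliseconds since the game started
      type: "dialogue" | "action" | "scene_change"; // what kind of event this is
      description: string;                          // text a consumer can render however it wants
    }

    const example: GameEvent = {
      timestamp: 12400,
      type: "action",
      description: "The player picks up the rusty key.",
    };

The game only commits to the shape of the data; what is done with it is entirely up to the software that consumes it.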

There are a lot of guidelines to follow when designing an API. I will just drop a link about this topic here: https://swagger.io/blog/api-design/api-design-best-practices/.

APIs in gaming accessibility

I think that every accessibility feature which needs to output extra information would benefit from an API. One of the first features I thought of that would use an API is audio description for games (obviously, since I am doing a PhD in this field).

Using an API means that the game has to output raw data that another piece of software will use to provide the audio description. The audio description software could be, for example, an online service or another program installed on the same computer. It could even be the game itself. Yes, the game could send data to itself. Then what is the point, you may ask. The goal is to give the user the power to choose which software she wants to use. If she wants to use the game’s internal audio description software, she can do so, just as she can use an external one. APIs make software speak the same language. As the structured data provided by the API can be understood and processed by any software, this removes a lot of barriers. Below is a rough sketch of this idea, followed by an overview of the benefits.
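The sketch reuses the hypothetical GameEvent type from above: the game hands each event to whatever consumer the user has chosen, and the built-in narrator is just one consumer among others.

    // The game does not care who consumes the event: built-in narrator,
    // local third-party program, or online service.
    type Consumer = (event: GameEvent) => void;

    function emitDescription(event: GameEvent, consumer: Consumer): void {
      consumer(event);
    }

    // The "game sends data to itself" case: the internal audio description
    // module is registered exactly like an external one would be.
    const builtInNarrator: Consumer = (e) => console.log(`[built-in] ${e.description}`);

    emitDescription(example, builtInNarrator);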

APIs will help adapt the solution to the needs of the user. The goal of using external software is to let the user customise her experience based on her own needs. If the user cannot stand the text-to-speech provided by the game, she can use another program with her own settings. I think customisation is important for all users, but I see in accessibility another chance to advocate for it, as every user who has one or several (dis)abilities is unique and has unique needs.
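For instance, a consumer could apply the user’s own speech settings. A minimal sketch, assuming the consumer runs in a browser and uses the standard Web Speech API:

    // The game only provides the text; rate and pitch stay under the
    // user's control, inside her chosen consumer software.
    function speak(description: string, settings: { rate: number; pitch: number }): void {
      const utterance = new SpeechSynthesisUtterance(description);
      utterance.rate = settings.rate;   // e.g. 1.5 for faster speech
      utterance.pitch = settings.pitch;
      window.speechSynthesis.speak(utterance);
    }

    speak("The player picks up the rusty key.", { rate: 1.5, pitch: 1.0 });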

APIs will help use accessibility features without game design related issues. For party games, for example! If one player needs audio description, it could be a bit annoying for every player to hear audio description that is not describing their own gameplay (in a split screen game played on a television, for instance). It would even be impossible to play with several players needing audio description at the same time. We may also encounter the situation where the information transmitted by audio description should not be heard by the other players, because the game hides information from some of them (in the case where the game view is not the game state, as described by B. Brathwaite and I. Schreiber in Challenges for Game Designers (2008), p. 25). To resolve this situation, the player using audio description could set the audio description API calls to be sent to an audio description app on her phone, for example, as in the sketch below.
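Here is what that per-player routing could look like. The TV and phone functions are hypothetical stand-ins for real transports such as shared speakers or a push notification to a companion app:

    // Each player decides where her audio description events go.
    type PlayerEvent = { playerId: number; description: string };

    const playOnTv = (text: string) => console.log(`[TV] ${text}`);
    const sendToPhoneApp = (text: string) => console.log(`[phone] ${text}`);

    const destinations: Record<number, (text: string) => void> = {
      1: playOnTv,        // player 1 is fine with the shared speakers
      2: sendToPhoneApp,  // player 2 keeps her hidden information private
    };

    function route(event: PlayerEvent): void {
      destinations[event.playerId]?.(event.description);
    }

    route({ playerId: 2, description: "Your secret role is the traitor." });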

APIs will help audio description users be included in various forms of social activities involving gaming. If a game company decides to implement audio description for one of their games (which would be super cool), let’s say a narrative single-player game, just to remove the issues previously mentioned, it might add an audio description option alongside the other accessibility features (subtitles, color blind mode, game speed, etc.). With that, the company will think that blind or visually impaired players will be able to enjoy audio description triggered by the game. But what happens if a person wants to watch a live stream of the game, or just spectate a friend playing it? The streamer or the friend may not want audio description, and since it is a feature locked into the game, it is impossible to let someone have audio description without forcing it on everyone else. With an API, the game company has nothing more to do to let users plug in a third-party tool for their own use. In theatre, only the people who want live audio description wear a headset for it. As esports and other forms of entertainment media built on video games become more and more popular, APIs will resolve a lot of the issues these industries will face.
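On the spectator side, a consumer could subscribe to the stream’s audio description events without touching the streamer’s setup. A sketch, assuming a hypothetical WebSocket endpoint that relays the game’s API events:

    // Only this spectator hears the descriptions; the streamer's audio
    // and the other viewers are unaffected.
    const socket = new WebSocket("wss://example.com/stream/ad-events");

    socket.onmessage = (message) => {
      const event = JSON.parse(message.data) as { description: string };
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(event.description));
    };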

APIs will foster innovation. Audio description for live streamed games is one example, but opening up a game’s accessibility data will let developers build tools that game companies may not have the time for. If an accessibility feature lets the game export haptic feedback through an API, as a user I may want to receive vibrations on my smartwatch or other IoT devices. I doubt game companies will invest time in developing real-time vibration for your smartwatch… or for other assistive technologies, if we focus on accessibility.
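As a sketch of what a third party could build on top of such an API, here is a consumer mapping a hypothetical haptic event to the browser’s Vibration API; a smartwatch app would do the same mapping with its own platform calls:

    // Hypothetical haptic event: alternating on/off durations in milliseconds.
    type HapticEvent = { pattern: number[] };

    function relayHaptics(event: HapticEvent): void {
      if ("vibrate" in navigator) {
        navigator.vibrate(event.pattern); // e.g. vibrate 200 ms, pause 100 ms, vibrate 200 ms
      }
    }

    relayHaptics({ pattern: [200, 100, 200] });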

That includes assistive technologies in general. If you do not want audio description but want to use braille, for example, and the game exposes an API, only the implementation changes, not the API. So it is fairly easy to do. APIs will thus expand the possibilities for assistive tech.
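A sketch of this point: two renderers consuming the same event shape, one speaking it and one (hypothetically) driving a braille display. Swapping one for the other changes nothing on the game’s side.

    // Same API, two implementations.
    type AdEvent = { description: string };

    interface AccessibilityRenderer {
      render(event: AdEvent): void;
    }

    // Hypothetical stand-in for a real braille display driver.
    const sendToBrailleDisplay = (text: string) => console.log(`[braille] ${text}`);

    const speechRenderer: AccessibilityRenderer = {
      render: (e) => window.speechSynthesis.speak(new SpeechSynthesisUtterance(e.description)),
    };

    const brailleRenderer: AccessibilityRenderer = {
      render: (e) => sendToBrailleDisplay(e.description),
    };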

Conclusion

In this article, I took the example of audio description because it is my area of research, but all accessibility features could use APIs. Closed captions are another good example (you can basically replace every « audio description » above with « closed captions »). The same goes for cinema, TV shows, theatre and more. I am actually wondering if anybody has tried to build software for watching a movie at the cinema with an AR headset displaying closed captions… (which may only be possible thanks to a closed caption API).

As we saw in this article, using APIs for gaming accessibility will improve the user experience in a lot of different ways. From customization to game design issues and vicarious play, building new accessibility features with APIs is about speaking the same language for the good of all. However, game companies (with the help of researchers?) need to define what that language is.

To conclude, APIs are about interoperability. And interoperability is about inclusion. A common gaming accessibility API will include the diverse software and tools that fit users’ needs. And there is something else that is about inclusion: accessibility. Infinite loop!

And, about the focus of my PhD: one of the things that pushed me to choose this subject is, among others, that live streamed games will definitely need an API for transmitting audio description to users over the web. Then, by studying that, games will naturally be able to offer audio description to players as well (even though the player’s interaction will not be studied in the PhD).

That’s all! If you are interested in this subject, do not hesitate to send me a message, for example on Twitter.