AI Business looks to La Forge and Test Automation for examples of how AI can help large video game publishers

Sebastian Moss

September 17, 2021

11 Min Read

Cutting-edge artificial intelligence research can seem fascinating, groundbreaking, even transformational in an academic setting.

But developers working on commercial video game titles, with all the hard realities that the process involves, often say the same thing: ‘that’s cool, but it just doesn’t work in the real world.’

To bridge the gap between the forward-focused academic environment and the risk-averse corporate world, game publisher Ubisoft created ‘La Forge,’ a specialized team of students, professors, and employees working to find out how AI can improve game development.

“The mission of La Forge is to facilitate technical prototyping based on the latest academic research,” Yves Jacquier, head of the department, told AI Business.

“In other words, we examine the potential use cases for emerging techniques such as deep learning, biometrics, etc.”

“When we have a solid proof of concept or a prototype, and we understand its limitations, constraints, and potential usage, we apply that knowledge where it will have the greatest impact at the company.”

The group looks for projects that “serve both academic interests (create public knowledge) and have a concrete impact on game production,” Jacquier said.

“Some topics are obviously long-term, such as autonomous agents or procedural creation of assets like animations, but we always aim toward smaller, more tangible objectives.

“You don’t climb Everest in one go, you set your sights on getting to the next camp where you can revise your plans. When we tackle new topics, we start small and try to fail fast, should we fail.”

La Forge works with researchers, helping fund promising projects, and giving them access to the company’s internal data and tools.

“Over the years, we have developed an expertise in maintaining this balance between research that can be used to solve real problems while also being of academic interest,” Jacquier said. “We started working on deep learning more than ten years ago, before it was a buzzword, and we had our fair share of failures and successes.”

Simulation conflation

For La Forge to be successful, it is crucial that the team works with the wider Ubisoft business.

“I do not want La Forge to become an ivory tower or a silo, and we are constantly challenging ourselves to make sure we offer the best of both worlds: protecting the research and taking risks while having an impact on the organization in the long term, punctuated with short-term victories,” Jacquier noted.

Despite the commercial motivation, the group does look at projects that don’t have an immediate link to video games; for example, those involved with climate change and self-driving vehicles.

This is vital if you want to get researchers from other backgrounds involved, Jacquier argued, and helps innovation in other ways. “For example, at Ubisoft, we have often wondered how realistic our engines – especially the rendering part – were,” he said. “Are the light reflections on water good enough? This is highly subjective.”

At one point, the group was contacted by a team trying to simulate the flooding of familiar landscapes to educate people on the effect of climate change.

Users were asked to upload a picture of their neighborhood, so the team could envision how it would look in 30 years. “To do this, the team needed to train an AI model with a lot of before/after examples and there is not a lot of available data, so they had the idea to simulate it in a video game environment,” Jacquier said.

“By contributing to this project, we learned a lot about new techniques called GANs – Generative Adversarial Networks – but we were also able to measure that our images significantly improved the performance of the training AI, by around 300 percent, while images from another game engine worsened its performance.

“In other words, by applying this research to a context that has nothing to do with video games, we were able to answer the rendering quality question.”

The group has been involved in research into e-learning, mental health treatments, realistic avatars, and AI-based methods for computer science, developed for the programming community.

“We are not specialists in these areas and do not pretend we’ll ever be,” Jacquier said, “but we think that working on such projects creates ‘triple win contexts’ for academia/Ubisoft/other areas.”

Of course, the team also works on research that could have a clear and noticeable impact on game development. One of the most interesting projects is Clever Commit, which is now in joint development with Mozilla.

“It all started with a student – Mathieu Nayrolles – who needed industry experience to finish his PhD,” Jacquier explained.

“We gave him an offer to work with La Forge and provided him access to lines of code, bug history, patches, etc. from more than 10 years of AAA games.

“The idea was to train an AI using our history of code, bugs, and the way we corrected them, so it would learn what a potential bug generally looks like, exactly like an experienced programmer would.”
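In broad strokes, this resembles what the research community calls just-in-time defect prediction: train a classifier on features of past commits that did or did not later need a bug fix. The sketch below is purely illustrative, using synthetic data and made-up features, and is not Ubisoft’s actual pipeline.

```python
# Illustrative sketch of commit-level bug prediction (not Ubisoft's actual pipeline).
# Assumes a labelled history of commits: numeric features per commit plus a flag
# marking whether that commit later had to be fixed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features: lines added, lines deleted, files touched,
# prior bug count in the touched files, hour of day the commit was made.
n_commits = 5_000
X = np.column_stack([
    rng.poisson(40, n_commits),      # lines added
    rng.poisson(15, n_commits),      # lines deleted
    rng.poisson(3, n_commits),       # files touched
    rng.poisson(2, n_commits),       # prior bugs in touched files
    rng.integers(0, 24, n_commits),  # hour of day
])
# Synthetic label standing in for "this commit later needed a bug fix".
y = (X[:, 0] + 5 * X[:, 3] + rng.normal(0, 20, n_commits) > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flags risky commits before they reach testing, much like an experienced reviewer.
print(classification_report(y_test, model.predict(X_test)))
```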

This led to the creation of a prototype called ‘Commit Assistant’ that was able to spot 70 percent of the bugs before the code was submitted for testing.

“These were great results, but in the process of building it, the prototype also raised important questions about what data we wanted to use to train an AI,” Jacquier said.

“If we trained such models with identifiable data, such as the gender of the programmer along with his/her history of bugs, we might increase the overall efficiency of the AI system, but also create biases in the prediction.

“The possibility that the AI could eventually discriminate against groups of people convinced us not to include this type of data.”

A lot of work had to be done to bring the prototype to a point where it could be used in a production environment.

Jacquier explained: “When you take any prototype to this next step, it raises important questions and new challenges: who will access bug predictions? How do you explain the process, so programmers trust the result? Are some of the tasks obsolete? How do we train people?”

The original author, Nayrolles, was put in charge of a team tasked with working out the kinks.

The project grew to become Clever Commit, which now has a success rate of 85 percent in spotting bugs and is deployed in more than 25 productions worldwide, Jacquier revealed.

“At the same time, we are continuing to do research on collateral topics to predict online failures, or the load on game servers.”

SmartBots assemble

Another fascinating research avenue at La Forge is ‘SmartBots,’ which aims to solve the challenges facing the systems that drive the actions of in-game characters (confusingly, also known as AI) in increasingly large and complex virtual worlds.

“While traditional AI is a lot of ‘IF this THEN that,’ machine learning based AI relies on data to generalize and predict, whether you provide this data (in what is called a training dataset), or it creates its own data,” Jacquier explained.

“When you have in-game agents, such as vehicles, NPCs, opponents, crowds, or wildlife, the systems become increasingly complex. Each agent (or class of agents) needs to navigate in the world and interact with the environment, but also, with other agents. So when you add new elements, like new characters, new weapons, new vehicles, or new world behavior like the weather, each agent needs to take those new elements into account. Programming them with “if this then that” statements is not convenient, since that does not scale.”

SmartBots uses reinforcement learning to try and solve this challenge.

For example, instead of manually programming the trajectory of a car, you give the agent actions (steering wheel, brakes, accelerator) that lead to different results, on which you build rewards: say, 100 points if the car can get from point A to point B, with points deducted every time it deviates from the center of the road.

“Then the agent does a lot of trial and error to maximize its reward, and with time, converges to build an optimization strategy,” Jacquier said. “The programmer doesn’t tell it how to drive, it learns how to drive based on reinforcement techniques.”
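A toy version of that reward shaping might look like the sketch below; the state fields, point values, and helper names are illustrative assumptions, not Ubisoft’s implementation.

```python
# Toy reward shaping for a driving agent, in the spirit of the example above.
# The environment, state fields, and point values are illustrative only.
from dataclasses import dataclass

@dataclass
class CarState:
    distance_to_goal: float    # metres remaining to point B
    offset_from_centre: float  # metres from the centre of the road

def step_reward(state: CarState, reached_goal: bool) -> float:
    """Reward given to the agent at each simulation step."""
    reward = 0.0
    if reached_goal:
        reward += 100.0                                 # big bonus for reaching point B
    reward -= 0.5 * abs(state.offset_from_centre)       # penalty for drifting off-centre
    return reward

# A reinforcement learning algorithm then adjusts the driving policy over many
# trial-and-error episodes to maximise the sum of these rewards.
print(step_reward(CarState(distance_to_goal=0.0, offset_from_centre=1.2), reached_goal=True))
```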

This could have a huge impact on game development.

Say you add a new feature, like snow, late in production; you don’t have to reprogram all the ‘if this then that’ statements. Instead, the agents simply retrain themselves in the new environment, still trying to maximize their rewards.

“The ‘art’ is defining the rewards, which is way more difficult than you would imagine, as there are sometimes conflicts between long-term rewards and short-term rewards, or exploration of strategies versus exploitation of tactics,” Jacquier said.

It’s still early days, but the same techniques that are employed for vehicle, drone swarm, and crowd navigation are now featured in Ubisoft middleware.

SmartBots has also found applications in game balancing: “When you have complex and rich gameplay, some specific combinations of moves can always win unfairly: we call them exploits,” Jacquier explained. “With complex gameplay, such sequences in specific contexts, or with specific opponents, are difficult to find.”

It can take a large gaming community millions of matches to unearth the exploits – or you could have bots playing thousands of matches, trying thousands of different approaches.

“This is what we did on For Honor three years ago,” Jacquier said.

“The community signaled some issues with the gameplay, so we introduced an approach based on bots, where designers would parameterize a new character or weapon, let bots play against this new character overnight, and get a report showing how statistically balanced the matches were.”
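In outline, such an overnight run amounts to pitting the new character against the existing roster for thousands of automated matches and summarising win rates the next morning. The sketch below is a hypothetical illustration: the roster names, match simulator, and numbers are placeholders rather than Ubisoft’s actual tooling.

```python
# Illustrative overnight balance report: bots pit a newly parameterized character
# against the existing roster, and win rates are summarised per matchup.
# simulate_match() is a stand-in for a real bot-vs-bot game simulation.
import random
from collections import defaultdict

ROSTER = ["Warden", "Raider", "Kensei", "Orochi"]  # placeholder opponents
N_MATCHES = 1_000

def simulate_match(new_char_power: float, opponent: str) -> bool:
    """Placeholder for a full match; returns True if the new character wins."""
    opponent_power = random.uniform(0.9, 1.1)
    return random.random() < new_char_power / (new_char_power + opponent_power)

def balance_report(new_char_power: float) -> dict:
    wins = defaultdict(int)
    for opponent in ROSTER:
        for _ in range(N_MATCHES):
            if simulate_match(new_char_power, opponent):
                wins[opponent] += 1
    # A win rate far from ~50% against any opponent flags a potential balance issue.
    return {opp: wins[opp] / N_MATCHES for opp in ROSTER}

print(balance_report(new_char_power=1.15))
```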

This proved incredibly useful for 1v1 arenas, but at the time, the bots struggled with more complex scenarios.

The hope is that a similar system will one day be able to play thousands of iterations in open world titles with different gameplay environments, but “there is still a lot of research to be done,” Jacquier admitted.

An army of bots

Over in India, a different team at Ubisoft is trying an alternative approach to game testing.

“We are a team of 22 engineers well versed in C++, C#, and Python programming, spread between Ubisoft Pune Studio and Ubisoft India Research Unit @ IIT-Bombay Research Park,” Test Automation architect Madhukar Joshi explained.

“The automation team in Pune works with Production teams to develop automation frameworks and cater to the automation and tools needs of both Production and [quality control (QC)] teams across HD and mobile games,” Joshi said.

“The Ubisoft team at IIT-Bombay works on developing solutions involving Computer Vision and Machine Learning that can be integrated with QC and Production needs in the future.

“Our team members at IIT-Bombay Research Park, as well as at the Pune Studio, also develop productivity tools that automate time-consuming manual workflows, or advanced tools that aid in faster and well-informed production and QC activities.”

The plan is to develop a solution that eventually becomes a standard, “capable of testing the most common use cases executed by QC teams and dev testers across any Ubisoft games,” Joshi said.

Ubisoft is best known for its sprawling open world titles, which makes adequate testing an immense task. “The most common use cases are open world exploration, 3D testing, performance testing, character customization, Navmesh testing, etc.,” Joshi said.

One system can explore procedurally generated worlds to spot mistakes. Another can try out the millions of different character, gear, costume, and weapon combinations that a game may have, while yet another looks for improper textures or other issues in game rendering.
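In spirit, the customization tests amount to enumerating, or sampling from, the combination space and running a check on each result. The sketch below is a hypothetical illustration under assumed option lists and a stand-in render check, not Ubisoft’s tooling.

```python
# Sketch of automated customization testing: enumerate (or sample) character, gear,
# costume, and weapon combinations and run a rendering check on each.
# The option lists and render_and_check() are placeholders.
import itertools
import random

characters = ["char_a", "char_b", "char_c"]
gear = ["gear_1", "gear_2"]
costumes = ["costume_x", "costume_y", "costume_z"]
weapons = ["sword", "bow", "rifle"]

def render_and_check(combo) -> bool:
    """Stand-in for loading the combination in-engine and checking for broken textures."""
    return random.random() > 0.01  # pretend roughly 1% of combinations fail

all_combos = list(itertools.product(characters, gear, costumes, weapons))
# Real catalogues are far too large to test exhaustively by hand, so a bot can
# either walk the full space or sample from it:
sample = random.sample(all_combos, k=min(10, len(all_combos)))

failures = [combo for combo in sample if not render_and_check(combo)]
print(f"Tested {len(sample)} combinations, {len(failures)} failures: {failures}")
```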

“The solution aims to save time that is lost in repetitive activities performed by the testers on multiple versions of a game. It also targets customization tests, as giving complete coverage is an impossible task for human testers.”

Looking further ahead, some have suggested the idea of a system that not only identifies issues but fixes them.

Here, we head back to La Forge. Combining Levels of Bug Prevention and Resolution Techniques (CLEVER) is a project part-financed by the Natural Sciences and Engineering Research Council of Canada that seeks to do just that.

Led by Mathieu Nayrolles as part of the Clever Commit project, the early-stage research was applied to 12 Ubisoft systems, detecting code likely to cause bugs with 79 percent accuracy.

It was then able to recommend adequate fixes in 66.7 percent of cases. In a 2018 research paper, Nayrolles admitted the effort was still in its infancy and had several limitations, but Joshi told AI Business that La Forge expected to “utilize that system in our roadmap soon.”

This would help game development become just a bit more, well, clever.

To find out more about the emerging applications of artificial intelligence in video game development and publishing, download our eBook, ‘How AI will revolutionize video games.’
