EA Develops Self-Training AI Players To Throw Down Versus Humans In Battlefield 1


Artificial intelligence is a concept that has existed for decades and is often used to characterize computer opponents in video games. The effectiveness of that AI varies greatly, however, as some schemes are simply scripted instructions for how to act and react in certain situations. Electronic Arts is kicking things up a notch by training self-learning AI agents to play Battlefield 1, and the results are pretty impressive.

The effort is part of a project at SEED (Search for Extraordinary Experiences Division) at EA, which tasks itself with exploring the future of interactive entertainment. Founded two years ago, SEED takes a practical approach to predicting what gameplay will look like years down the line, focusing on technology the division believes will shape interactive entertainment in the next three to five years.

"Upon learning how an AI created by DeepMind had taught itself how to play old Atari games, I was blown away. This was back in 2015, and it got me thinking about how much effort it would take to have a self-learning agent learn to play a modern and more complex first person AAA game like Battlefield. So when I joined SEED, I set up our own deep learning team and started recruiting people with this in mind," SEED's technical director Magnus Nordin said.


The recent project started with building out a barebones 3D first-person shooter to test the division's algorithms and train its network. After seeing some positive results, SEED teamed up with DICE to integrate its agents into a Battlefield 1 environment.

According to Nordin, the AI agents are "pretty proficient" at basic gameplay in Battlefield 1, and have taught themselves to change their behavior based on certain triggers, such as being low on ammunition or health. In the video above, the agents can be seen playing like a human player would, at least for the most part. Everything they do is the result of previous gameplay experience—SEED only provides encouragement for playing the objective.
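The article doesn't describe SEED's actual training setup, but the idea it sketches — agents that learn purely from gameplay experience, with a reward ("encouragement") given only for playing the objective — is the core of reinforcement learning. Here is a minimal, hypothetical illustration using tabular Q-learning on a toy one-dimensional map; all names and parameters are invented for this sketch and bear no relation to EA's system:

```python
import random

# Toy illustration (not EA's actual system): tabular Q-learning on a 1-D
# "map" where the agent receives a reward only for reaching the objective.
N_CELLS = 6          # positions 0..5; the objective sits at cell 5
ACTIONS = (-1, +1)   # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # hypothetical hyperparameters

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_CELLS - 1:
            # Epsilon-greedy: mostly exploit learned values, occasionally explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_CELLS - 1)
            # Sparse reward: only "playing the objective" is encouraged
            r = 1.0 if s2 == N_CELLS - 1 else 0.0
            best_next = max(q[(s2, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the preferred action in each non-terminal cell
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)]
print(policy)
```

After training, the policy moves right toward the objective in every cell, even though the agent was never told how to get there — only that arriving is rewarded. Real game agents replace the lookup table with a deep neural network and the toy map with raw game state, but the learning signal works the same way.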

"After the playtests, a few participants asked us to clearly mark the agents so that they could be properly distinguished, which to me is a good testament to how well the agents perform and how lifelike they are," Nordin said.

That said, there is still more work to be done. Human players can still outperform the agents, though not by a wide margin, and the agents are not very good at planning ahead. As a result, they sometimes run around in circles looking silly, which typically happens when there is nothing in sight and seemingly nothing to do. A better strategy would be to seek out opponents, but the AI agents haven't figured that out yet.

In the short term, the goal is to help DICE scale up its quality assurance and testing, which in turn would help the studio collect more crash reports and root out more bugs. Looking further down the line, Nordin sees self-learning agents becoming part of the gameplay itself as deep learning technology matures. And at some point, Nordin thinks it's reasonable to expect that an AI agent could even beat a professional player.