The level of detail in this new video is almost eerie. It’s just a yellow man in a tricorn hat running about some checkered terrain, but it comes as quite a surprise that none of it was done via motion capture.
Yep, this is all the work of an AI, something comparable to what you might find in a videogame. And this video may very well represent what we’ll be able to do within the next ten years with our virtual avatars.
The animation you see was created by researchers from the University of Edinburgh and Method Studios, and was made in part using a complex machine learning system. ‘Machine learning’ is a hot-button term that seems to pop up everywhere these days; in essence, it means teaching a machine to do something by feeding it a series of example inputs and letting it work out the pattern, rather than programming the behaviour by hand. Take this MarI/O video, for instance, where an AI learned to ‘play’ Mario not by copying a human player’s hands on a controller, but by generating its own controller inputs through trial and error, keeping whatever behaviour carried it further through the level.
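To make the idea concrete, here is a minimal sketch of that kind of learning-from-examples loop. This is an illustration only, not the researchers’ system: a single “machine” with two adjustable numbers is shown human-made input/output pairs and nudges itself until its guesses match them.

```python
# A toy supervised-learning sketch (illustrative, not the paper's method):
# the machine is just "guess = w * x + b", and learning means adjusting
# w and b a little after every example until the guesses stop being wrong.
def train(examples, steps=2000, lr=0.01):
    w, b = 0.0, 0.0  # the machine starts knowing nothing
    for _ in range(steps):
        for x, y in examples:
            guess = w * x + b
            error = guess - y
            # nudge the parameters in the direction that shrinks the error
            w -= lr * error * x
            b -= lr * error
    return w, b

# Human-made training pairs, secretly following the rule y = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(examples)
print(w, b)  # after training, w and b land very close to 2 and 1
```

Nobody ever told the program “the rule is 2x + 1” — it recovered that purely from the examples, which is the core trick behind everything discussed here.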
The result is a gameplay video that has little to suggest it was made by AI.
Which brings us back to the video made by the University of Edinburgh and Method Studios researchers. Their method is a step up from MarI/O: rather than just learning which inputs lead to success, the system also learns how each individual movement relates to the ones around it.
The MarI/O AI has no idea that it’s supposed to jump at the end of a platform, or that jumping is how enemies get defeated; it only knows which button presses happened to carry it further. This new system, by contrast, is able to piece movements together based on the environment it encounters and the character’s position within it, choosing for each obstacle the motion its training says will deal with it. With data points taken from real-life movement, this kind of neural-network-based learning can fabricate motion that is extremely life-like.
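One way the published technique keeps motion smooth is a “phase function”: instead of one fixed network, the network’s weights themselves change with the phase of the walk cycle, so the character effectively uses slightly different reflexes at each point in its stride. The toy below is an assumption-laden sketch of that blending idea only — the real system uses a smoother cubic interpolation over its weight sets, and far larger networks; linear interpolation keeps the sketch short.

```python
# Toy illustration of phase-based weight blending (not the authors' code):
# a handful of "expert" weight sets sit around the walk cycle, and the
# weights actually used are a smooth blend of the two nearest experts.
def phase_blend(weight_sets, phase):
    """Blend between weight sets arranged around a cycle.

    phase is in [0, 1); weight_sets is a list of equal-length weight lists.
    """
    n = len(weight_sets)
    pos = phase * n
    i = int(pos) % n          # the weight set just behind the current phase
    j = (i + 1) % n           # the one just ahead of it
    t = pos - int(pos)        # how far between the two we are
    return [(1 - t) * a + t * b
            for a, b in zip(weight_sets[i], weight_sets[j])]

# Four hypothetical expert weight sets, one per quarter of the walk cycle.
experts = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
print(phase_blend(experts, 0.0))    # exactly the first expert
print(phase_blend(experts, 0.125))  # halfway between the first two
```

Because the blend varies continuously with phase, the character never snaps between behaviours mid-stride — the same reason the motion in the video looks so fluid.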
This new video is significant not just because it looks pretty, but because it shows an AI taking on a role previously reserved for human animators. Not only are the results better than anything we’ve seen before, they also demonstrate a way of cutting out the middleman. That may be a scary prospect for human animators, but it’s also very exciting for the development of videogames, and it goes a long way towards more faithfully recreating the worlds we might dream of being a part of.
If you’re a technophile and would like to learn more about this new video, feel free to check out the article written by Ars Technica on the subject. It goes into much more detail about neural networks, phase functions, and how the whole process works.
If you’d like to read the original research report, click here.