Computer game bot Turing Test
The Computer Game Bot Turing Test is a variant of the Turing Test, where a human judge viewing and interacting with a virtual world must distinguish between other humans interacting with the world and game bots that interact with the world. This variant was first proposed in 2008 by Associate Professor Philip Hingston[1][2] of Edith Cowan University, and implemented through a tournament called the 2K BotPrize.[3]
History
The Computer Game Bot Turing Test was proposed to advance the fields of Artificial Intelligence and Computational Intelligence with respect to video games. The reasoning was that a poorly implemented bot implies a subpar game, so a bot capable of passing this test, and therefore indistinguishable from a human player, would directly improve the quality of a game. It also served to debunk the flawed notion that "game AI is a solved problem."[2]
Emphasis is placed on a game bot that interacts with other players in a multiplayer environment. Unlike a bot that merely needs to make optimal decisions to play or beat a game, this bot must make those decisions while also convincing another in-game player that it is human.
Implementation
The Computer Game Bot Turing Test was designed to test a bot's ability to interact with a game environment in comparison with a human player; simply 'winning' was insufficient. This evolved into a contest designed with a few important goals in mind:[2]
- There are three participants: a human player, a computer-game bot, and a judge.
- The bot needs to appear more human-like than the human player. Judge scores are not binary: both the human and the bot can be scored anywhere on a scale from 1 (not humanlike) to 5 (humanlike). A sketch of how such ratings might be aggregated follows this list.
- All three participants are to be indistinguishable in the arena, with the exception of a randomly generated name tag, so as to reduce the chance of random elements such as name or appearance influencing the judges.
- Chat is disabled throughout the match.
- Bots were not given the omniscient powers they may have in other games; they must react only to data that would reasonably be available to a human player.
- Human participants were of a moderate skill range, with no participant either new to the game or capable of playing at a professional level.
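The exact way judge ratings were aggregated varied between tournament years, so the following is only a minimal sketch in Python: it assumes each judge assigns one 1-5 rating per participant, and that a participant's 'humanness' is the mean rating rescaled to a percentage. The example data and the `humanness` helper are illustrative, not the official BotPrize scoring code.

```python
# Minimal sketch of aggregating BotPrize-style judge ratings.
# Assumption: each judge rates a participant once on a 1-5 scale
# (1 = not humanlike, 5 = humanlike); 'humanness' is the mean rating
# rescaled to 0-100%. Illustrative only, not the official scoring.

def humanness(ratings):
    """Mean 1-5 rating rescaled to a 0-100% humanness score."""
    mean = sum(ratings) / len(ratings)
    return (mean - 1) / 4 * 100  # 1 -> 0%, 5 -> 100%

# Hypothetical ratings from five judges.
scores = {
    "human_player": [3, 2, 4, 2, 3],
    "bot": [4, 3, 4, 3, 4],
}

for name, ratings in scores.items():
    print(f"{name}: {humanness(ratings):.1f}% humanlike")
```

Under this reading, a bot 'passes' when its humanness score exceeds that of the human players it was judged alongside.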
In 2008, the first 2K BotPrize tournament took place.[4] The contest used Unreal Tournament 2004 as its platform. Contestants created their bots in advance using the GameBots[5] interface. GameBots was modified to adhere to the above conditions; for example, data about vantage points or weapon damage was removed, since it unfairly informed the bots of strengths and weaknesses that a human would otherwise need to learn.
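GameBots exposes the game to an external program as a line-based text protocol over a TCP socket. The sketch below shows the general shape of such a client; the port number, message names (`READY`, `INIT`, `SLF`, `RUNTO`), and handshake order are assumptions based on the GameBots family of interfaces, and should be checked against the GameBots documentation rather than taken as a verified transcript.

```python
# Sketch of a GameBots-style bot client. Message names, port, and
# handshake order are assumptions, not a verified protocol transcript.
import socket

HOST, PORT = "localhost", 3000  # assumed default bot-connection port

def run_bot():
    with socket.create_connection((HOST, PORT)) as sock:
        rfile = sock.makefile("r", encoding="utf-8")

        def send(command):
            sock.sendall((command + "\r\n").encode("utf-8"))

        send("READY")                    # request handshake information
        send("INIT {Name SketchBot}")    # spawn the bot into the match

        for line in rfile:               # one message per line
            msg_type = line.strip().split(" ", 1)[0]
            if msg_type == "SLF":        # periodic self-state update
                # Per the contest rules, react only to data a human
                # could perceive; no omniscient map knowledge.
                send("RUNTO {Target SomeNavPoint}")

if __name__ == "__main__":
    run_bot()
```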
Tournament
The first BotPrize tournament was held in Perth, Australia, on 17 December 2008, as part of the 2008 IEEE Symposium on Computational Intelligence and Games.[4][6] Each competing team was given time to set up and adjust its bot to the modified game client, although no coding changes were allowed at that point. The tournament was run in rounds, each a 10-minute deathmatch. Judges were the last to join the server, and every judge observed every player and every bot exactly once, although the pairing of players and bots changed between rounds. When the tournament ended, no bot had been rated as more human than any player.
In subsequent tournaments, run from 2009 to 2011,[7][8][9] bots achieved increasingly human-like scores, but no contestant won the BotPrize in any of these contests.
In 2012, the annual 2K BotPrize was held once again, and two teams programmed bots that achieved scores greater than those of human players.[3]
Successful bots
To date, two bots have passed the Computer Game Bot Turing Test.
- UT^2, from a team at the University of Texas at Austin, adjusted its behaviour using neuroevolution together with traces of previously observed human play. The team has made the bot available,[10] although a copy of Unreal Tournament 2004 is required. A short video of the bot is available on YouTube.[11]
- Mihai Polceanu, a doctoral student from Romania, focused on a bot that mimics opponent reactions, in a sense 'borrowing' the human-like nature of its opponent. Sketches of both approaches follow this list.
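Neither winning bot's code is reproduced here; the two sketches below only illustrate the ideas named above. First, a toy neuroevolution loop in the spirit of UT^2: a population of controller weight vectors is mutated and selected against a fitness score. The controller size, population parameters, and placeholder fitness function are all assumptions; a real bot would be scored by playing matches, for example by similarity to recorded human traces.

```python
# Toy neuroevolution loop (illustrative only): evolve a small
# controller's weight vector against a placeholder fitness score.
import random

WEIGHTS = 8          # assumed size of a tiny controller
POP, GENS = 20, 50   # assumed population size and generation count

def fitness(weights):
    # Placeholder: a real bot would be evaluated in-game, e.g. by
    # similarity of its play to recorded human traces.
    return -sum(w * w for w in weights)

def mutate(weights, rate=0.1):
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(WEIGHTS)]
              for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP // 2]                # keep the fitter half
    population = elite + [mutate(random.choice(elite)) for _ in elite]

print("best fitness:", fitness(max(population, key=fitness)))
```

Second, the mirroring idea: record an opponent's observed movement and replay it after a short delay, so the bot's motion is literally human-generated. The delay length and position format here are assumptions.

```python
# Sketch of movement mirroring: replay an opponent's observed
# positions after a fixed delay. Delay and data format are assumed.
from collections import deque

DELAY_TICKS = 30                      # assumed ~1 second of game ticks

class MirrorMover:
    def __init__(self):
        self.buffer = deque()

    def observe(self, opponent_pos):
        """Record the opponent's position each game tick."""
        self.buffer.append(opponent_pos)

    def next_move(self):
        """Return a delayed copy of the opponent's motion, if ready."""
        if len(self.buffer) > DELAY_TICKS:
            return self.buffer.popleft()
        return None                   # fall back to default behaviour
```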
Comments from the winners can be found in detail at the BotPrize website.[3] Notably, these victories came in 2012, the centenary of Alan Turing's birth.
Aftermath
The significance of a bot appearing more human-like than a human player is possibly overstated, since in the tournament in which the bots succeeded, the average 'humanness' rating of the human players was only 41.4%.[12] This exposes a limitation of this Turing Test: the results suggest that human behaviour is more complicated and harder to quantify than the test accounted for.[13] In light of this, the BotPrize competition organizers plan to increase the difficulty in coming years with new challenges, forcing competitors to improve their bots.[14]
It is also believed that methods and techniques developed for the Computer Game Bot Turing Test will be useful in fields other than video games, such as virtual training environments and human-robot interaction.[15]
Contrasts with the Turing Test
The Computer Game Bot Turing Test differs from the traditional Turing Test in a number of ways.[2]
- Unlike in the traditional Turing Test (for example, the chatterbot-style contest held annually by the Loebner Prize competition), the humans playing against the computer game bots are not actively trying to convince the judges that they are human; rather, they want to win the game (i.e., achieve the highest kill score).
- Judges are not restricted to labelling one participant in a match as the 'human' and the other as the 'non-human.' This emphasizes qualitative rather than polarized findings.
- A successful computer game bot is not to be confused with a claim that the bot is 'intelligent', whereas a machine that 'passed' the traditional Turing Test would arguably provide some evidence of its 'intelligence.'
- The game Unreal Tournament 2004 was chosen for its commercial availability and its interface for creating bots, GameBots. This limitation on medium is a sharp contrast to the Turing Test, which emphasizes a conversation, where possible questions are vastly more numerous than the set of possible actions available in any specific video game.
- The information available to the participants, humans and bots, is not equal: humans perceive the game through vision and sound, whereas bots receive data and events.
- The judges cannot introduce new events (e.g., a lava pit) to aid in differentiating between human and bot, whereas in a chatterbot-based test, judges may theoretically ask any question in any manner.
- The two participants and the judge take part in a three-way interaction, unlike, for example, the paired two-way interaction of the Loebner Prize Contest.
See also
- Virtual reality
- Turing test
- Graphics Turing Test
- The Loebner Prize, a contest that implements the 'traditional' Turing Test
- Rog-O-Matic, a 1984 bot that plays the 1980s dungeon crawler Rogue
References
- ↑ http://philiphingston.com/Homepage/Homepage.html
- ↑ 2.0 2.1 2.2 2.3 Hingston, Philip (2009). "A Turing Test for Computer Game Bots". IEEE Transactions on Computational Intelligence and AI in Games 1 (3): 169–186.
- ↑ 3.0 3.1 3.2 http://botprize.org
- ↑ 4.0 4.1 http://botprize.org/2008.html
- ↑ http://gamebots.sourceforge.net
- ↑ http://www.csse.uwa.edu.au/cig08/
- ↑ http://botprize.org/2009.html
- ↑ http://botprize.org/2010.html
- ↑ http://botprize.org/2011.html
- ↑ http://nn.cs.utexas.edu/?ut2
- ↑ http://www.youtube.com/watch?v=VwIrZ3X4b6c
- ↑ http://botprize.org/result.html
- ↑ (citation details missing)
- ↑ (citation details missing)
- ↑ (citation details missing)