
An Appeal for Fair Play – Part 2

Engine-assistance is the form of cheating against which the 4NCL directs the majority of its effort. Whatever your take on its three wise monkeys view of other forms of cheating, it’s clearly got its number one priority dead right.

Other forms of cheating require a certain level of preparation, skill, even dedication. It’s all very well having a moral flexibility which permits you to consult opening manuals during the game: to take full advantage of this, you’ve got to have at least troubled yourself to glance at them beforehand. However, a novice armed with a current version of Komodo or Houdini could effortlessly blow away a World Champion, current or past.

Engine-assistance is the weapon of mass-destruction in the arsenal of the chess-cheat.

Yet the very accessibility of chess-engines, and their amenability to software-integration, suggests a possible counter-measure – in effect, reverse-engineering the process. Just as a player can use a chess-engine to determine which moves to make, it is equally possible to compare the moves of a suspected cheat with the recommendations of that same chess-engine. If, move after move, the player’s choices agree 100% with the engine’s, clearly he is not thinking for himself. Set a thief to trap a thief.

Put like that, it sounds as if a decent anti-cheating bot is something that could be knocked out over a weekend by a competent computer-science grad student. Indeed, if our novice cheat confines himself to a single market-leading engine and slavishly follows its first-choice recommendation, he will get picked up reasonably quickly by even a rudimentary anti-cheating algorithm.
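For the technically minded, here is roughly what that rudimentary check might look like – a minimal sketch in Python using the python-chess library and a single UCI engine. The engine path, the fixed search depth and the 90% flagging threshold are illustrative assumptions of mine, not anything the 4NCL actually runs.

```python
# Minimal sketch of a first-choice match-rate check, assuming python-chess
# and a UCI engine binary ("stockfish") available on the system PATH.
import chess
import chess.pgn
import chess.engine

ENGINE_PATH = "stockfish"   # illustrative: any UCI engine binary
SEARCH_DEPTH = 18           # illustrative fixed depth
FLAG_THRESHOLD = 0.90       # flag if 90%+ of the suspect's moves match

def first_choice_match_rate(pgn_path: str, suspect_color: chess.Color) -> float:
    """Fraction of the suspect's moves that coincide with the engine's top choice."""
    with open(pgn_path) as handle:
        game = chess.pgn.read_game(handle)
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    matches, total = 0, 0
    board = game.board()
    try:
        for played in game.mainline_moves():
            if board.turn == suspect_color:
                info = engine.analyse(board, chess.engine.Limit(depth=SEARCH_DEPTH))
                if info["pv"][0] == played:
                    matches += 1
                total += 1
            board.push(played)
    finally:
        engine.quit()
    return matches / total if total else 0.0

if __name__ == "__main__":
    rate = first_choice_match_rate("suspect_game.pgn", chess.WHITE)
    print(f"First-choice match rate: {rate:.0%}")
    if rate >= FLAG_THRESHOLD:
        print("Refer to a human arbiter for review.")
```

A real check would also skip the opening moves (book theory matches the engines for entirely innocent reasons) and aggregate over many games, but the principle really is that simple.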

But please credit our tyro cheat with a little ingenuity. Firstly, the cheat doesn’t have just one engine at his disposal but a plethora, and may well switch from one engine to another between games or even intra-game. A routine web-search reveals that there are 18 distinct engines with a current rating over 3000. [1] Now, while top engines agree almost unanimously as to which moves are rubbish, there is a fair level of variation as to which move in any given position is the best. That’s how you get product-differentiation between chess-software developers, how engines acquire distinct styles, and why engines end up with different ratings from one another.

A recommendation from any 3000+ engine is going to be good enough for the cheat. However, for the anti-cheating bot, the multiplicity of engines (indeed of versions of the same engine) is only going to make it harder to single out which engine (or engines) is being used: or, indeed, whether an engine is being used at all.
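The only honest answer, in code terms, is to run the same comparison against every engine and version you can lay your hands on – something like the hypothetical extension below, where the engine paths are placeholders for whichever binaries you happen to have installed:

```python
# Hypothetical multi-engine variant: the same game is scored against each
# engine in turn, since the cheat may switch engines between or within games.
import chess
import chess.pgn
import chess.engine

ENGINE_PATHS = ["stockfish", "komodo", "houdini"]   # placeholders only

def per_engine_match_rates(pgn_path: str, suspect_color: chess.Color, depth: int = 15) -> dict:
    """Map each engine to the fraction of the suspect's moves matching its first choice."""
    with open(pgn_path) as handle:
        game = chess.pgn.read_game(handle)
    rates = {}
    for path in ENGINE_PATHS:
        engine = chess.engine.SimpleEngine.popen_uci(path)
        matches, total = 0, 0
        board = game.board()
        try:
            for played in game.mainline_moves():
                if board.turn == suspect_color:
                    info = engine.analyse(board, chess.engine.Limit(depth=depth))
                    matches += int(info["pv"][0] == played)
                    total += 1
                board.push(played)
        finally:
            engine.quit()
        rates[path] = matches / total if total else 0.0
    return rates
```

Even this only covers the engines you thought to test; and if the cheat switches engine mid-game, no single entry in that table ever needs to approach 100%.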

And secondly, you weren’t forgetting, were you, that the cheat does not always need to select the engine’s top choice? The 2nd or 3rd best move, or even the 10th, may be almost as good: it just depends on the nature of the position. How will your anti-cheating bot deal with a strategy in which the cheat has the subtlety to select an occasional sub-optimal move?
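One partial answer is to stop asking “did he play the engine’s move?” and ask instead “did he play a move the engine considers nearly as good?” – say, anything within its top three lines and within a few centipawns of the best. A sketch of such a test follows; the three-line window and the 30-centipawn tolerance are numbers I have plucked out of the air:

```python
# Sketch of a "near-best" test: accept any move within the engine's top N
# lines and within a small centipawn loss of its best line. The window (3)
# and tolerance (30 cp) are illustrative, not calibrated, values.
import chess
import chess.engine

def move_is_near_best(engine: chess.engine.SimpleEngine, board: chess.Board,
                      played: chess.Move, top_n: int = 3,
                      cp_tolerance: int = 30, depth: int = 15) -> bool:
    """True if the played move appears in the top N lines and loses little."""
    infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=top_n)
    best_cp = infos[0]["score"].pov(board.turn).score(mate_score=100000)
    for info in infos:
        if info["pv"][0] == played:
            move_cp = info["score"].pov(board.turn).score(mate_score=100000)
            return best_cp - move_cp <= cp_tolerance
    return False
```

The trouble, of course, is that every widening of the tolerance also waves through more perfectly honest moves.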

Thirdly, if our cheat is a reasonably strong player, he needn’t refer to an engine for every move, or indeed for most of them. This is what John Nunn writes in a section entitled “When the tactics have to work” [“Secrets of Practical Chess”, Gambit Publications Ltd, 2007, p.29]:

If you initiate tactics which involve a large commitment and no safety-net, then you have no margin of error… Thus you have to be absolutely sure that your idea works.

The strongest players can sense the crisis-points in their games, and reserve their energy and time on the clock for “double-checking everything” when these are reached. Likewise, your truly expert-level cheat will only turn on his engine when he realizes he is about to reach such a position. How will your algorithm deal with a cheat who has both the self-restraint and the chess-playing ability to refer to the engine only a handful of times in a game?
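About the only handle an algorithm has on this strategy is to concentrate on precisely those crisis-points: positions where, in the engine’s opinion, one move is vastly better than everything else. A crude sketch of such a “criticality” test is below; the 100-centipawn gap is, once again, an arbitrary figure of my own:

```python
# Crude "criticality" test: a position counts as critical if the engine's
# best move is much stronger than its second best. The 100 cp gap is an
# arbitrary, illustrative threshold.
import chess
import chess.engine

def is_critical(engine: chess.engine.SimpleEngine, board: chess.Board,
                gap_cp: int = 100, depth: int = 15) -> bool:
    """Only one move keeps the evaluation: the moment a selective cheat waits for."""
    infos = engine.analyse(board, chess.engine.Limit(depth=depth), multipv=2)
    if len(infos) < 2:
        return True   # only one legal move: trivially "critical"
    best = infos[0]["score"].pov(board.turn).score(mate_score=100000)
    second = infos[1]["score"].pov(board.turn).score(mate_score=100000)
    return best - second >= gap_cp
```

Compute the match rate only over the positions this test flags and the selective cheat becomes visible again – but you are now drawing conclusions from a handful of moves per game, and the statistics get thin very quickly.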

The reader will readily grasp that, in order for an algorithm to trap a wide range of engine-assistance strategies, it will need to build in a fair degree of flexibility in the relationship between engine preferences and the move-set of the suspected cheat. Computer scientists call this sort of approach “fuzzy matching”, whereby a program searches for a condition across a broad range of criteria without requiring any single criterion to be met exactly.
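To make that concrete, the sort of thing I have in mind is a graded per-move score rather than a pass-or-fail test – the sketch below, whose weights and flagging threshold I have simply invented for the purpose of illustration:

```python
# Sketch of a graded ("fuzzy") suspicion score per game: each of the suspect's
# moves contributes partial evidence, and a game is only flagged when the
# weighted average crosses a threshold. All weights here are invented.
from dataclasses import dataclass

@dataclass
class MoveEvidence:
    matched_first_choice: bool    # move was some engine's top choice
    within_top_three: bool        # move was in some engine's top three lines
    position_was_critical: bool   # only one move kept the evaluation

def fuzzy_game_score(evidence: list) -> float:
    """Weighted average per-move suspicion in [0, 1]; critical positions count double."""
    total_weight, score = 0.0, 0.0
    for e in evidence:
        weight = 2.0 if e.position_was_critical else 1.0
        per_move = 1.0 if e.matched_first_choice else (0.5 if e.within_top_three else 0.0)
        score += weight * per_move
        total_weight += weight
    return score / total_weight if total_weight else 0.0

FLAG_THRESHOLD = 0.8   # invented; this is where the false-positive danger discussed below lives
```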

However, this more subtle approach to cheat-identification carries with it a degree of danger. How so?

Flexible criteria for identifying cheats will pick out players in whose games the majority of moves are among the top choices of leading engines (or of recent versions of leading engines). The problem here is that human players can quite legitimately play games which satisfy such criteria.

What sort of games am I thinking about here? I don’t just mean brilliancies: games in which not only is every move one plays good, but one’s opponent plays well too – the outcome being decided by an imaginative sacrifice or a deep strategic plan. Every player can aspire to play such games, but a player like myself (a mid-range 2200) will count himself fortunate to produce even a handful per decade.

A more common category is where intelligent (or lucky) preparation results in an overwhelming advantage for one player. A good example of this is the OTB game Arnott-Thomas, 4NCL 2020. [See Jon Arnott, “The Numbers Game”, Chess, September 2020, pp.22-25]

A third category is where one player clearly does not have their mind on the game, performs below their normal standard, and allows a position in which a well-motivated opponent requires only routine moves to realize their advantage. This situation of course occurs in OTB games, but I’d suggest it is especially prevalent in online chess, where the two opponents’ respective focus on the game is hugely affected by their very different home playing environments. Geoff Moore’s game [2] is a probable instance of this.

So where does this leave us? With a dilemma.

It certainly seems possible to write an anti-cheating bot which could accurately identify those using a specific engine-assistance strategy. If your suspected cheat is found to have played several hundred games which evince such a strategy, you can be quite certain that you’ve got your man. Meanwhile, your cheat will have pocketed First Prize in a number of the tournaments you were supposedly monitoring, and morphed off onto his next strategy.

Alternatively, you could flex ’n’ fuzz your anti-cheating criteria to include multiple versions of multiple engines and moves below the engine’s first choice, and reduce the number of games required for cheat-identification. Then you will certainly pick up a number of cheats. But you risk labelling as “cheats” players who are not actually cheating at all.

Which, funnily enough, is what actually happened to Anglia Avengers in 4NCL Online Season 1!

[to be continued]

(c) AP Lewis 2020
