Question
I'm looking for some help modelling this machine learning problem.
A hand consists of three rows (containing 3, 5, and 5 cards respectively). Your goal is to build a hand that scores the most points. You receive the cards in intervals called streets: five cards in the first street, then three in each of the next four streets, of which you must discard one. Cards can't be moved once you place them. More details on scoring are available.
My goal is to build a system that, given a set of streets, plays the hand similarly to our best players. It seems pretty clear that I'll need to build a neural network for each street, using features based on the existing hand and the set of cards in the street. I've got plenty of data (streets, placements, and final score), but I'm a little unsure how to model the problem given that the possible outputs depend on the particular set of cards dealt (although there are fewer than 3^5 valid placements in the first street, and at most 3^3 in each later street). I've previously only dealt with classification problems with fixed categories.
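To make the combinatorics concrete, here's a rough sketch of how I'm counting the placement patterns (the function names and row indexing below are just my own illustration, not part of the game):

```python
# Hypothetical sketch of the output space, rows indexed 0-2 (top, middle, bottom).
# Street 1: each of 5 cards goes to one of 3 rows -> 3**5 = 243 patterns
# (fewer are actually legal once the 3/5/5 row capacities are enforced).
# Later streets: pick 1 of 3 cards to discard, then place the remaining
# 2 cards in one of 3 rows each -> 3 * 3**2 = 27 = 3**3 patterns.
from itertools import product

def first_street_placements():
    """All row assignments for the 5 cards of the first street."""
    return list(product(range(3), repeat=5))  # tuples like (0, 2, 1, 1, 0)

def later_street_placements():
    """(discard_index, row_for_card_a, row_for_card_b) for a 3-card street."""
    return list(product(range(3), range(3), range(3)))

print(len(first_street_placements()))  # 243
print(len(later_street_placements()))  # 27
```

Indexing into fixed lists like these would turn each street into an ordinary classification problem over fixed categories, with illegal placements masked out, but I'm not sure that's the right framing.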
Does anyone have an example of a similar problem, or suggestions on how to prepare the training data when you have unique outputs?
Answer 1:
A vague question gives a vague answer (which is my excuse for being too lazy to code ;-).
You wrote that you have a lot of data, and it seems you want to map the game onto experience gained with supervised learning. But that is not how game optimization usually works. One usually does not perform supervised learning, but rather reinforcement learning. The differences are subtle, but reinforcement learning (with Markov decision processes as its theoretical basis) offers a more local view: it optimizes the decision given a specific state. Supervised learning instead corresponds to optimizing several decisions at once.
Another showstopper for the usual supervised learning approach is that even if you have a lot of data, it will almost surely be too little, and it will not cover the "required paths" through the game.
The usual approach, at least since Tesauro's backgammon player, is rather: set up the basic rules of the game, possibly introduce human knowledge as heuristics, and then let the program play against itself as often as possible -- this is how Google DeepMind built its master Go player, for example. See also this interesting video.
In your case, the task should in principle not be that hard, as there is a comparatively small number of game states and, importantly, any issues involving psychology, like bluffing or consistent play, are completely absent.
So again: build a bot which can play against itself. One common basis is a function Q(s, a) which assigns a value to each game state s and possible action a -- this is called Q-learning. The function is often implemented as a neural network, although I would think it does not need to be that sophisticated here.
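To make that concrete, here is a minimal tabular Q-learning sketch. The `env` object and its `reset`/`step`/`legal_actions` methods are hypothetical stand-ins for your game rules, not an existing library; once the state space gets large, the table would be replaced by a neural network approximating Q(s, a).

```python
# Minimal tabular Q-learning sketch, assuming a hypothetical environment
# `env` with reset() -> state, step(action) -> (next_state, reward, done),
# and legal_actions(state) -> list of valid moves. States must be hashable.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit current Q estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def train_episode(env):
    """Play one hand against the environment, updating Q after every move."""
    state = env.reset()
    done = False
    while not done:
        action = choose_action(state, env.legal_actions(state))
        next_state, reward, done = env.step(action)
        # Q-learning update: move Q(s, a) toward the bootstrapped target.
        best_next = 0.0 if done else max(
            Q[(next_state, a)] for a in env.legal_actions(next_state))
        target = reward + GAMMA * best_next
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state
```

Running `train_episode` many times over self-played hands is the "play against itself" loop; the final score of the hand is the natural reward signal.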
I'll stay that vague for now. But I would be glad to assist you further if necessary.
Source: https://stackoverflow.com/questions/36366318/modelling-card-game-for-machine-learning