2048 game - AI average score does not exceed 256


I'm trying to implement a 2048 AI based on the snake strategy (see this paper) using MiniMax with alpha-beta pruning, which seems to be the best of the single-heuristic approaches.

Unfortunately, in most games the AI only reaches 256, which is barely better than the empty-cells heuristic alone. I've read the related topics but couldn't solve the problem on my own.

Here is the code:

import math
from BaseAI_3 import BaseAI

INF_P = math.inf

class PlayerAI(BaseAI):
    move_str = {
        0: "UP",
        1: "DOWN",
        2: "LEFT",
        3: "RIGHT"
    }

    def __init__(self):
        super().__init__()
        self.depth_max = 4

    def getMove(self, grid):
        move_direction, state, utility = self.decision(grid)
        moves = grid.getAvailableMoves()
        return move_direction if moves else None

    def get_children(self, grid):
        grid.children = []
        for move_direction in grid.getAvailableMoves():
            gridCopy = grid.clone()
            gridCopy.path = grid.path[:]
            gridCopy.path.append(PlayerAI.move_str[move_direction])
            gridCopy.move(move_direction)
            gridCopy.depth_current = grid.depth_current + 1
            grid.children.append((move_direction, gridCopy))
        return grid.children

    def utility(self, state):

        def snake():
            poses = [
                [
                    [2 ** 15, 2 ** 14, 2 ** 13, 2 ** 12],
                    [2 ** 8, 2 ** 9, 2 ** 10, 2 ** 11],
                    [2 ** 7, 2 ** 6, 2 ** 5, 2 ** 4],
                    [2 ** 0, 2 ** 1, 2 ** 2, 2 ** 3]
                ]
                ,
                [
                   [2 ** 15, 2 ** 8, 2 ** 7, 2 ** 0],
                   [2 ** 14, 2 ** 9, 2 ** 6, 2 ** 1],
                   [2 ** 13, 2 ** 10, 2 ** 5, 2 ** 2],
                   [2 ** 12, 2 ** 11, 2 ** 4, 2 ** 3]
                ]
            ]

            poses.append([item for item in reversed(poses[0])])
            poses.append([list(reversed(item)) for item in reversed(poses[0])])
            poses.append([list(reversed(item)) for item in poses[0]])

            poses.append([item for item in reversed(poses[1])])
            poses.append([list(reversed(item)) for item in reversed(poses[1])])
            poses.append([list(reversed(item)) for item in poses[1]])

            max_value = -INF_P
            for pos in poses:
                value = 0
                for i in range(state.size):
                    for j in range(state.size):
                        value += state.map[i][j] * pos[i][j]

                if value > max_value:
                    max_value = value

            return max_value

        weight_snake = 1 / (2 ** 13)

        # utility is returned as a 1-tuple; the search code sums these tuples,
        # so more weighted heuristics can be appended later
        value = (
            weight_snake * snake(),
        )

        return value

    def decision(self, state):
        state.depth_current = 1
        state.path = []
        return self.maximize(state, -INF_P, INF_P)

    def terminal_state(self, state):
        return state.depth_current >= self.depth_max

    def maximize(self, state, alpha, beta):
        # terminal-state check
        if self.terminal_state(state):
            return (None, state, self.utility(state))

        max_move_direction, max_child, max_utility = None, None, (-INF_P, )
        for move_direction, child in self.get_children(state):
            _, state2, utility = self.minimize(child, alpha, beta)
            child.utility = utility

            if sum(utility) > sum(max_utility):
                max_move_direction, max_child, max_utility = move_direction, child, utility

            if sum(max_utility) >= beta:
                break

            if sum(max_utility) > alpha:
                alpha = sum(max_utility)

        state.utility = max_utility
        state.alpha = alpha
        state.beta = beta

        return max_move_direction, max_child, max_utility

    def minimize(self, state, alpha, beta):
        # terminal-state check
        if self.terminal_state(state):
            return (None, state, self.utility(state))

        min_move_direction, min_child, min_utility = None, None, (INF_P, )
        for move_direction, child in self.get_children(state):
            _, state2, utility = self.maximize(child, alpha, beta)
            child.utility = utility

            if sum(utility) < sum(min_utility):
                min_move_direction, min_child, min_utility = move_direction, child, utility

            if sum(min_utility) <= alpha:
                break

            if sum(min_utility) < beta:
                beta = sum(min_utility)

        state.utility = min_utility
        state.alpha = alpha
        state.beta = beta

        return min_move_direction, min_child, min_utility
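For reference, the snake() weighting above can be exercised on its own. Below is a standalone sketch over a plain 4x4 list-of-lists board; the board values are made up for illustration, and the eight weight orientations mirror the `poses` construction in the code above:

```python
def snake_score(board):
    # base snake weighting: highest weight in the top-left corner,
    # decreasing along a boustrophedon (snake) path
    base = [
        [2 ** 15, 2 ** 14, 2 ** 13, 2 ** 12],
        [2 ** 8,  2 ** 9,  2 ** 10, 2 ** 11],
        [2 ** 7,  2 ** 6,  2 ** 5,  2 ** 4],
        [2 ** 0,  2 ** 1,  2 ** 2,  2 ** 3],
    ]
    col = [list(r) for r in zip(*base)]  # the column-wise snake (transpose)
    poses = []
    for p in (base, col):
        poses.append(p)
        poses.append(list(reversed(p)))                             # vertical flip
        poses.append([list(reversed(row)) for row in p])            # horizontal flip
        poses.append([list(reversed(row)) for row in reversed(p)])  # both flips
    # best alignment of the board with any of the 8 snake orientations
    return max(
        sum(board[i][j] * pos[i][j] for i in range(4) for j in range(4))
        for pos in poses
    )

# example board with the big tiles anchored along the top-left snake path
board = [
    [64, 32, 16, 8],
    [2,  4,  0,  0],
    [0,  0,  0,  0],
    [0,  0,  0,  0],
]
score = snake_score(board)
```

Because the maximum is taken over all eight symmetries of the weight matrix, the heuristic rewards any corner-anchored snake arrangement equally, which is why only the top-left orientation needs to be hand-written.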

grid is an object and grid.map is a 2D array (a list of lists).

Do I have any errors? How can I improve the code?
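To make the snippet above testable outside the course framework, here is a hypothetical minimal stand-in for that grid object, written purely from the calls the PlayerAI code makes (clone(), move(), getAvailableMoves(), plus the .map and .size attributes). The real class ships with BaseAI_3, so the move encoding (0=UP, 1=DOWN, 2=LEFT, 3=RIGHT, matching move_str above) is an assumption:

```python
import copy

class MockGrid:
    def __init__(self, size=4):
        self.size = size
        self.map = [[0] * size for _ in range(size)]

    def clone(self):
        g = MockGrid(self.size)
        g.map = copy.deepcopy(self.map)
        return g

    def _slide(self, row):
        # slide-and-merge one row to the left (standard 2048 rule:
        # each tile merges at most once per move)
        tiles = [v for v in row if v]
        merged = []
        i = 0
        while i < len(tiles):
            if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
                merged.append(tiles[i] * 2)
                i += 2
            else:
                merged.append(tiles[i])
                i += 1
        return merged + [0] * (self.size - len(merged))

    def move(self, direction):
        m = self.map
        if direction in (0, 1):                 # UP/DOWN: work on columns
            m = [list(c) for c in zip(*m)]
        if direction in (1, 3):                 # DOWN/RIGHT: mirror, slide left, mirror
            m = [list(reversed(r)) for r in m]
        m = [self._slide(r) for r in m]
        if direction in (1, 3):
            m = [list(reversed(r)) for r in m]
        if direction in (0, 1):
            m = [list(c) for c in zip(*m)]
        changed = m != self.map
        self.map = m
        return changed

    def getAvailableMoves(self):
        # a move is available if it would change the board
        return [d for d in range(4) if self.clone().move(d)]
```

For example, a grid whose only tiles are `[2, 2]` at the top-left can move DOWN, LEFT, or RIGHT but not UP, and moving LEFT merges them into a 4.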

Added a game log: https://pastebin.com/eyzgU2dN


When I play manually, I mostly swipe up and left to keep the highest-value tiles in the top-left. I've found that swiping down or right scrambles the tiles faster, which leads to lower scores. When I can't swipe up or left, I only swipe right. If none of those three is a viable option, I swipe down and then immediately back up. ...That's just my strategy. - Glenn Ferrie
If this code works and you just want to improve it, codereview might be a better fit. - depperm
@depperm, the code is fine, I believe - I've checked it several times. - Keeper
@GlennFerrie, I understand your strategy; the snake strategy is the problem here. Maybe I haven't implemented it quite right? - Keeper
@depperm - I completely understand your thinking, and I suppose this wouldn't be wrong on Code Review; but this is probably one of those questions that legitimately fits on several sites (e.g. gamedev, which is mentioned in the 2048 tag). The reason it fits as a Stack Overflow question is that maybe the code doesn't work as expected. Maybe the algorithm isn't implemented correctly, or is being applied to the problem incorrectly. - John Y
1 Answer

Over the weekend I realized the algorithm wasn't implemented correctly. The mistake was in the minimize() function, where I was generating the children the wrong way - it should be:
def get_opponent_children(self, grid):
    grid.children = []
    for x in range(grid.size):
        for y in range(grid.size):
            if grid.map[x][y] == 0:
                for c in (2, 4):
                    gridCopy = grid.clone()
                    gridCopy.path = grid.path[:]
            gridCopy.depth_current = grid.depth_current + 1
                    gridCopy.map[x][y] = c
                    grid.children.append((None, gridCopy))

    return grid.children

and the corresponding change:

for move_direction, child in self.get_opponent_children(state):

Now it reaches 1024 and 2048 most of the time.
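The fix matches how expectiminimax is usually set up for 2048: the minimizing "player" is the tile spawner, so its children are boards with a 2 or a 4 placed in each empty cell rather than the four slide moves. A standalone sketch of that enumeration over a plain list-of-lists board (a hypothetical 2x2 example for brevity):

```python
import copy

def opponent_children(board):
    # one child per (empty cell, tile value) pair
    children = []
    n = len(board)
    for x in range(n):
        for y in range(n):
            if board[x][y] == 0:
                for tile in (2, 4):
                    child = copy.deepcopy(board)
                    child[x][y] = tile
                    children.append(child)
    return children

board = [
    [2, 4],
    [0, 0],
]
children = opponent_children(board)  # 2 empty cells -> 4 children
```

With the original get_children(), the min player was choosing among the four slides - effectively modeling an opponent that plays moves, not tile placements, which is why the search never defended against bad spawns.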
