How does lru_cache (from functools) work?

113
Especially when working with recursive code, lru_cache can greatly improve performance. I understand that a cache is a space that stores data that has to be served fast, saving the computer from recomputing.
How does Python's lru_cache from functools work internally?
I'm looking for a specific answer: does it use dictionaries, like the rest of Python does? Does it only store the return value?
I know Python is heavily built on dictionaries, but I couldn't find a specific answer to this question.
3 Answers

71

The source of functools is available here: https://github.com/python/cpython/blob/master/Lib/functools.py

lru_cache uses the _lru_cache_wrapper decorator (a Python decorator with an arguments pattern), which keeps a cache dictionary in its context where it saves the return values of the function it decorates (every decorated function gets its own cache dictionary). The dictionary key is generated from the call arguments by the _make_key function. Some uppercase comments added below:

# ACCORDING TO PASSED maxsize ARGUMENT _lru_cache_wrapper
# DEFINES AND RETURNS ONE OF wrapper DECORATORS

def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
    # Constants shared by all lru cache instances:
    sentinel = object()      # unique object used to signal cache misses

    cache = {}                                # RESULTS ARE SAVED HERE
    cache_get = cache.get    # bound method to lookup a key or return None
    hits = misses = 0        # COUNTERS READ BY cache_info()

    # ... the wrapper below is the one chosen when maxsize is None:

    def wrapper(*args, **kwds):
        # Simple caching without ordering or size limit
        nonlocal hits, misses
        key = make_key(args, kwds, typed)     # BUILD A KEY FROM ARGUMENTS
        result = cache_get(key, sentinel)     # TRYING TO GET PREVIOUS CALLS RESULT
        if result is not sentinel:            # ALREADY CALLED WITH PASSED ARGS
            hits += 1
            return result                     # RETURN SAVED RESULT
                                              # WITHOUT ACTUALLY CALLING FUNCTION
        misses += 1
        result = user_function(*args, **kwds) # FUNCTION CALL - if cache[key] empty
        cache[key] = result                   # SAVE RESULT

        return result
    # ...

    return wrapper
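
For illustration, the wrapper's bookkeeping can be observed from the outside through cache_info() (the function name `square` below is just an example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(n):
    return n * n

square(3)
square(3)   # same key -> served from the cache dictionary
square(4)

# One repeated call was a hit; two distinct argument keys were misses.
print(square.cache_info())   # CacheInfo(hits=1, misses=2, maxsize=None, currsize=2)
```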

1
fib.cache_info() prints something like this: CacheInfo(hits=28, misses=16, maxsize=None, currsize=16). What does 'miss' mean here? - Ambareesh
3
These counters are created in the closure at the start of the decorator, both equal to zero: 'hits = misses = 0'. Each time the decorated function is called, one of them is incremented: 'hits += 1' if the data is already in the cache, and 'misses += 1' if it is not. Check the source code. You can use these counters to analyze how useful the cache is. - ndpu
So the miss counter is incremented by 1 the first time a number is seen, and the hit counter goes up on every subsequent lookup of that same number? - Ambareesh
In my tests, @lru_cache was clearly faster than using my own dictionary as a cache. In my own version I looked values up with cache.get(args, my_function(args)) and stored the result at the end with cache[args] = my_res. Why is that? - onepiece
1
@onepiece lru_cache() is implemented in terms of _lru_cache_wrapper(), which has a C implementation that is most likely faster than any Python implementation. - David Foster
1
Hey @npdu, I know it's been a long time since I asked this question. It got a lot of upvotes, and it would be great if you could extend the answer with a use-case example. I found that even typing lru_cache into Google puts this among the top 2 or 3 results, so many people probably come straight to this post and would benefit from it. - innicoder

26

Python 3.9 source code of the LRU cache: https://github.com/python/cpython/blob/3.9/Lib/functools.py#L429

Fibonacci example code

@lru_cache(maxsize=2)
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

The lru_cache decorator checks a few base cases and then wraps the user function with the wrapper _lru_cache_wrapper. Inside the wrapper sits the logic for adding items to the cache, i.e. the LRU logic: adding a new item to the circular queue and removing items from the circular queue.

def lru_cache(maxsize=128, typed=False):
    ...
    if isinstance(maxsize, int):
        # Negative maxsize is treated as 0
        if maxsize < 0:
            maxsize = 0
    elif callable(maxsize) and isinstance(typed, bool):
        # The user_function was passed in directly via the maxsize argument
        user_function, maxsize = maxsize, 128
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)
    elif maxsize is not None:
        raise TypeError(
         'Expected first argument to be an integer, a callable, or None')

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)

    return decorating_function
The function normalizes maxsize (when it is negative), adds the CacheInfo details, and finally adds the wrapper and updates the decorator's documentation and other details via update_wrapper.
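
Both call styles of the decorator can be checked via cache_parameters() (available since Python 3.9); the function names below are illustrative:

```python
from functools import lru_cache

# Called with arguments: maxsize arrives as an int.
@lru_cache(maxsize=32)
def with_args(x):
    return x + 1

# Called bare: the function itself arrives via the maxsize parameter,
# so the callable(maxsize) branch applies and maxsize falls back to 128.
@lru_cache
def bare(x):
    return x + 1

print(with_args.cache_parameters())   # {'maxsize': 32, 'typed': False}
print(bare.cache_parameters())        # {'maxsize': 128, 'typed': False}
```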

_lru_cache_wrapper

  • The LRU cache wrapper has a few bookkeeping variables.

     sentinel = object()          # unique object used to signal cache misses
     make_key = _make_key         # build a key from the function arguments
     PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields
    
     cache = {}
     hits = misses = 0
     full = False
     cache_get = cache.get    # bound method to lookup a key or return None
     cache_len = cache.__len__  # get cache size without calling len()
     lock = RLock()           # because linkedlist updates aren't threadsafe
     root = []                # root of the circular doubly linked list
     root[:] = [root, root, None, None]     # initialize by pointing to self
    
  • The wrapper acquires the lock before performing any operation.

  • A few important variables: the root list contains all the items, adhering to the maxsize value. The key concept to remember is that root references itself (root[:] = [root, root, None, None]) in both the previous (0) and next (1) positions.
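
The self-reference can be reproduced standalone (a minimal sketch using the same constants as the source):

```python
PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields

root = []
root[:] = [root, root, None, None]     # prev and next both point back at root

# An empty ring: following NEXT or PREV from root leads straight back to root.
assert root[NEXT] is root
assert root[PREV] is root
assert root[KEY] is None and root[RESULT] is None
```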

Three high-level checks

  • The first case, when maxsize is 0, means no cache functionality: the wrapper wraps the user function without any caching ability. The wrapper increments the cache miss count and returns the result.

     def wrapper(*args, **kwds):
         # No caching -- just a statistics update
         nonlocal misses
         misses += 1
         result = user_function(*args, **kwds)
         return result
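
This no-caching behaviour is observable from the outside (illustrative function name):

```python
from functools import lru_cache

@lru_cache(maxsize=0)
def uncached(n):
    return n * 2

uncached(5)
uncached(5)   # recomputed: with maxsize=0 nothing is ever stored

print(uncached.cache_info())   # CacheInfo(hits=0, misses=2, maxsize=0, currsize=0)
```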
    
  • The second case, when maxsize is None: there is no limit on the number of elements stored in the cache. The wrapper checks for the key in the cache (a dictionary). When the key is present, the wrapper returns the value and updates the cache hit info. When the key is missing, the wrapper calls the user function with the user-passed arguments, updates the cache, updates the cache miss info, and returns the result.

     def wrapper(*args, **kwds):
         # Simple caching without ordering or size limit
         nonlocal hits, misses
         key = make_key(args, kwds, typed)
         result = cache_get(key, sentinel)
         if result is not sentinel:
             hits += 1
             return result
         misses += 1
         result = user_function(*args, **kwds)
         cache[key] = result
         return result
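
With maxsize=None the dictionary grows without bound, one entry per distinct argument, which is exactly what makes recursive Fibonacci fast (a sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(10)

# 11 distinct arguments (0..10) were each computed once; the repeated
# recursive lookups were served straight from the cache dictionary.
print(fib.cache_info())   # CacheInfo(hits=8, misses=11, maxsize=None, currsize=11)
```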
    
  • The third case, when maxsize is the default value (128) or a user-passed integer value: this is the actual LRU cache implementation. All the code in the wrapper runs in a thread-safe way; before performing any operation (read/write/delete on the cache), the wrapper obtains the RLock.

LRU cache

  • The value in the cache is stored as a list of four items (remember root). The first item is the reference to the previous item, the second is the reference to the next item, the third is the key for the particular function call, and the fourth is the result. Here is an actual value for Fibonacci function argument 1: [[[...], [...], 1, 1], [[...], [...], 1, 1], None, None]. [...] means a reference to the list itself.

  • The first check is for a cache hit. On a hit, the value found in the cache is such a list of four values.

     nonlocal root, hits, misses, full
     key = make_key(args, kwds, typed)
     with lock:
         link = cache_get(key)
         if link is not None:
             # Move the link to the front of the circular queue
             print(f'Cache hit for {key}, {root}')
             link_prev, link_next, _key, result = link
             link_prev[NEXT] = link_next
             link_next[PREV] = link_prev
             last = root[PREV]
             last[NEXT] = root[PREV] = link
             link[PREV] = last
             link[NEXT] = root
             hits += 1
             return result
    

    When the item is already in the cache, there is no need to check whether the circular queue is full or to pop an item from the cache; only the positions of the items in the circular queue change. Since the most recently used item is always at the top, the code moves the recently used value to the top of the queue, and the previous top item becomes its neighbour: last[NEXT] = root[PREV] = link, link[PREV] = last, and link[NEXT] = root. NEXT and PREV are initialized at the top and point to the appropriate positions in each link (PREV, NEXT, KEY, RESULT = 0, 1, 2, 3 # names for the link fields). Finally, the code increments the cache hit info and returns the result.
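
The unlink-and-reinsert dance can be replayed in isolation (a simplified sketch without the lock; `append` is a hypothetical helper mirroring the insert step):

```python
PREV, NEXT, KEY, RESULT = 0, 1, 2, 3

root = []
root[:] = [root, root, None, None]

def append(key, result):
    """Insert a new link just before root (the most-recently-used slot)."""
    last = root[PREV]
    link = [last, root, key, result]
    last[NEXT] = root[PREV] = link
    return link

a = append('a', 1)
b = append('b', 2)                 # order, oldest first: a, b

# Cache hit on `a`: unlink it from its current position ...
link_prev, link_next, _key, result = a
link_prev[NEXT] = link_next
link_next[PREV] = link_prev
# ... then re-insert it just before root, making it most recently used.
last = root[PREV]
last[NEXT] = root[PREV] = a
a[PREV] = last
a[NEXT] = root

assert root[NEXT] is b             # oldest is now b
assert root[PREV] is a             # most recently used is a
```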

  • When it is a cache miss, the wrapper updates the misses info and the code checks three cases. All three operations happen after obtaining the RLock. The three cases appear in the source code in this order: the key is found in the cache after acquiring the lock, the cache is full, and the cache can take new items. For demonstration, let's follow the order: when the cache is not full, when the cache is full, and when the key is found in the cache after acquiring the lock.

When the cache is not full

    ...
    else:
        # Put result in a new link at the front of the queue.
        last = root[PREV]
        link = [last, root, key, result]
        last[NEXT] = root[PREV] = cache[key] = link
        # Use the cache_len bound method instead of the len() function
        # which could potentially be wrapped in an lru_cache itself.
        full = (cache_len() >= maxsize)

  • When the cache is not full, prepare the recent result (link = [last, root, key, result]) so that it holds the reference to the root's previous item, the root itself, the key, and the computed result.

  • Then point the recent result (link) to the top of the circular queue (root[PREV] = link), point the previous top item's next at the recent result (last[NEXT] = link), and add the recent result to the cache (cache[key] = link).

  • Finally, check whether the cache is full (cache_len() >= maxsize, where cache_len = cache.__len__ is declared at the top) and set the full status.

  • For the Fibonacci example, when the function receives its first value 1, the root is empty and its value is [[...], [...], None, None]. After the result is added to the circular queue, the root value is [[[...], [...], 1, 1], [[...], [...], 1, 1], None, None]. Both previous and next point to the result of key 1. For the next value 0, after insertion the root value is

    [[[[...], [...], 1, 1], [...], 0, 0], [[...], [[...], [...], 0, 0], 1, 1], None, None]. The previous is [[[[...], [...], None, None], [...], 1, 1], [[...], [[...], [...], 1, 1], None, None], 0, 0] and the next is [[[[...], [...], 0, 0], [...], None, None], [[...], [[...], [...], None, None], 0, 0], 1, 1]

When the cache is full

    ...
    elif full:
        # Use the old root to store the new key and result.
        oldroot = root
        oldroot[KEY] = key
        oldroot[RESULT] = result
        # Empty the oldest link and make it the new root.
        # Keep a reference to the old key and old result to
        # prevent their ref counts from going to zero during the
        # update. That will prevent potentially arbitrary object
        # clean-up code (i.e. __del__) from running while we're
        # still adjusting the links.
        root = oldroot[NEXT]
        oldkey = root[KEY]
        oldresult = root[RESULT]
        root[KEY] = root[RESULT] = None
        # Now update the cache dictionary.
        del cache[oldkey]
        # Save the potentially reentrant cache[key] assignment
        # for last, after the root and links have been put in
        # a consistent state.
        cache[key] = oldroot

  • When the cache is full, reuse the root as the slot for the new entry (oldroot = root) and update its key and result.
  • Then make the old root's next item the new root (root = oldroot[NEXT]) and copy out the new root's key and result (oldkey = root[KEY] and oldresult = root[RESULT]).
  • Set the new root's key and result to None (root[KEY] = root[RESULT] = None).
  • Delete the old key's item from the cache (del cache[oldkey]) and add the computed result to the cache (cache[key] = oldroot).
  • For the Fibonacci example, when the cache is full and the key is 2, the root value is [[[[...], [...], 1, 1], [...], 0, 0], [[...], [[...], [...], 0, 0], 1, 1], None, None], and the new root at the end of the block is [[[[...], [...], 0, 0], [...], 2, 1], [[...], [[...], [...], 2, 1], 0, 0], None, None]. As you can see, key 1 was replaced with key 2.
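
The eviction is visible from the outside through cache_info() (illustrative function name):

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def double(n):
    return n * 2

double(1)
double(0)
double(2)    # cache was full; the least recently used key (1) is evicted

double(1)    # a miss again: key 1 has to be recomputed
print(double.cache_info())   # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```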

When the key is found in the cache after acquiring the lock

    if key in cache:
        # Getting here means that this same key was added to the
        # cache while the lock was released.  Since the link
        # update is already done, we need only return the
        # computed result and update the count of misses.
        pass

When the key is found in the cache after the lock is acquired, another thread may have already queued the value while the lock was released. So there is not much left to do: the wrapper simply returns the result.

Finally, the code returns the result. Before executing the cache-miss part, the code updates the cache miss info and calls the make_key function.

Note: I could not get the nested-list indentation to work, so the formatting of the answer may be slightly degraded.


Note that the Fibonacci example here is essentially incorrect: I ran some tests with cache_info() and inferred that the order needs to be swapped. Then I found this answer, which confirms my inference: https://dev59.com/Vbvoa4cB1Zd3GeqPyCVH#61953324 - Zack Light

12

You can check out the source code here.

Essentially, it uses two data structures: a dictionary mapping function arguments to their results, and a linked list to keep track of the function-call history.

The cache is essentially implemented with the following, which is fairly self-explanatory.

cache = {}
cache_get = cache.get
....
make_key = _make_key         # build a key from the function arguments
key = make_key(args, kwds, typed)
result = cache_get(key, sentinel)

The gist of updating the linked list is:

elif full:

    oldroot = root
    oldroot[KEY] = key
    oldroot[RESULT] = result

    # update the linked list to pop out the least recent function call information        
    root = oldroot[NEXT]
    oldkey = root[KEY]
    oldresult = root[RESULT]
    root[KEY] = root[RESULT] = None
    ......                    
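
A rough functional equivalent of those two structures combined is sketched below (this is not the CPython implementation: an OrderedDict plays both roles at once, the hash lookup and the recency order; names and the `calls` probe are illustrative):

```python
from collections import OrderedDict

def lru_cached(maxsize):
    """A toy LRU memoizer (positional args only, no locking)."""
    def decorator(user_function):
        cache = OrderedDict()
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)       # mark as most recently used
                return cache[args]
            result = user_function(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)     # evict the least recently used
            return result
        return wrapper
    return decorator

calls = []

@lru_cached(maxsize=2)
def add_one(n):
    calls.append(n)
    return n + 1

add_one(1); add_one(1)    # second call is served from the cache
add_one(2); add_one(3)    # inserting 3 evicts the least recently used key (1,)
add_one(1)                # recomputed
print(calls)              # [1, 2, 3, 1]
```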
