
gh-106865: Improve difflib.find_longest_match performance with cache #106877


Closed

Conversation


@gaogaotiantian gaogaotiantian commented Jul 18, 2023

difflib.SequenceMatcher does not perform well on large inputs. This patch tries to improve it with some micro-optimizations on the hot path.

When dealing with large inputs, for example, large files, almost all the time is spent in find_longest_match, and the hottest code is the nested for loop:

      for i in range(alo, ahi):
          # look at all instances of a[i] in b; note that because
          # b2j has no junk keys, the loop is skipped if a[i] is junk
          j2lenget = j2len.get
          newj2len = {}
          for j in b2j.get(a[i], nothing):
              # a[i] matches b[j]
              if j < blo:
                  continue
              if j >= bhi:
                  break
              k = newj2len[j] = j2lenget(j-1, 0) + 1
              if k > bestsize:
                  besti, bestj, bestsize = i-k+1, j-k+1, k

The body of the inner loop executes roughly (ahi - alo) * len(b2j[a[i]]) times in total (once for every i and every j in b2j[a[i]]), so any per-iteration saving in the loop can add up to a large improvement.

The first time a given a[i] is seen, the patch runs the same loop, but it also builds a cached list of the valid entries of b2j[a[i]] - those j inside the range [blo, bhi). If the same a[i] appears again, the cached list is used instead, and the two if checks can be dropped.

Also, if a[i] has appeared before and its cached list has at least one valid entry, bestsize is already at least 1 - so the if k > bestsize check only needs to run when k is greater than 1, which we already know if we first check whether j-1 is in the dict.

In this way, plenty of work is eliminated on the cached path. The optimization is significant when the cached path is hit frequently, i.e. when the items of a repeat often. This is the case for any large file (especially files with a small vocabulary, such as ASCII text).
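
Roughly, the resulting loop looks like this (a sketch of the idea rather than the exact diff; range_cache is an illustrative name for a dict created once per find_longest_match call, and the bestsize refinement described above is omitted for brevity):

for i in range(alo, ahi):
    j2lenget = j2len.get
    newj2len = {}
    js = range_cache.get(a[i])
    if js is None:
        # First occurrence of a[i]: run the original loop, but also remember
        # which j's fall inside [blo, bhi) for later occurrences of a[i].
        js = range_cache[a[i]] = []
        for j in b2j.get(a[i], nothing):
            if j < blo:
                continue
            if j >= bhi:
                break
            js.append(j)
            k = newj2len[j] = j2lenget(j-1, 0) + 1
            if k > bestsize:
                besti, bestj, bestsize = i-k+1, j-k+1, k
    else:
        # Cached path: every j is already known to be in range, so the two
        # range checks disappear.
        for j in js:
            k = newj2len[j] = j2lenget(j-1, 0) + 1
            if k > bestsize:
                besti, bestj, bestsize = i-k+1, j-k+1, k
    j2len = newj2len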

I tried this by hand on a couple of files - a C source file and a README file.

benchmark code
import difflib
import time

with open("../viztracer/src/viztracer/modules/snaptrace.c") as f:
    code = f.read()

with open("../viztracer/README.md") as f:
    readme = f.read()

s = difflib.SequenceMatcher(None, code, code)

start = time.time()
s.get_opcodes()
print(time.time() - start)

s = difflib.SequenceMatcher(None, code, readme)

start = time.time()
s.get_opcodes()
print(time.time() - start)

s = difflib.SequenceMatcher(None, readme, code)

start = time.time()
s.get_opcodes()
print(time.time() - start)

The improvement in time spent was significant. Before the patch:

0.6531200408935547
0.21375417709350586
0.7097032070159912

After the patch:

0.20792889595031738
0.11065959930419922
0.28839802742004395

This patch has a small negative performance impact on smaller inputs and on cases where no item in a is duplicated. For example, it is significantly slower on test_recursion_limit:

def test_recursion_limit(self):
    # Check if the problem described in patch #1413711 exists.
    limit = sys.getrecursionlimit()
    old = [(i%2 and "K:%d" or "V:A:%d") % i for i in range(limit*2)]
    new = [(i%2 and "K:%d" or "V:B:%d") % i for i in range(limit*2)]
    difflib.SequenceMatcher(None, old, new).get_opcodes()

However, in real-world scenarios it rarely makes sense to diff/match two sequences with a giant vocabulary, so in almost all actual use cases this should improve performance, especially on large sequences.

We could, however, check the vocabulary ratio (len(set(a)) / len(a)) and use it to decide whether to apply this optimization - extra overhead, but potentially better performance in the worst case.
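
As a rough illustration of that idea (purely hypothetical, not part of this patch), the check could be something like:

# Hypothetical heuristic: the cache only pays off when items of `a` repeat,
# i.e. when the vocabulary is small relative to the sequence length.
# The 0.5 threshold is a made-up example value.
use_cache = bool(a) and len(set(a)) / len(a) < 0.5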

P.S. Hoisting the bound method (cacheappend = cache.append) instead of calling cache.append directly does not have an observable impact, but I can do that if needed.

@gaogaotiantian changed the title from "Improve difflib.find_longest_match performance with cache" to "gh-106865: Improve difflib.find_longest_match performance with cache" on Jul 18, 2023
@gaogaotiantian (Member, Author) commented:

@tim-one I know this is probably ancient code to you :) git blames you for working on the code 23 years ago. Could you maybe share some thoughts on the optimization? Thanks!


tim-one commented Jul 19, 2023

However, in real-world scenarios it rarely makes sense to diff/match two sequences with a giant vocabulary,

To the contrary, comparing lists of lines is by far the most common - and important intended - use case, where a "letter in the alphabet" is the entire content of a line. I don't know why you would want to compare text files as sequences of characters instead, but "so don't do that" comes to mind 😉.

Two common uses for comparing sequences of characters come to mind:

  1. When a higher-level diff (by lines) wants to find the character-level differences between two mismatching lines. Indeed, lots of code inside difflib does that, to support its higher-level file-differencing functions. In that case, the sequences are typically quite short (under a hundred characters each).
  2. Newbies comparing multi-megabyte DNA sequences. They'll never be happy with difflib, or with anything else short of highly specialized differencing algorithms designed for their domain.

So the patch looks good, but I'm skeptical that it addresses a "real problem" common enough to be worth any slowdown in the actual common cases. The autojunk gimmick was added to address "very common" repeated lines, and it's good at speeding up those cases, but it was A Big Mistake to enable it by default.

What would be welcome is a different algorithm specifically designed for "long sequences from a small alphabet" cases. SequenceMatcher works OK for those, provided the sequences aren't very long, but the core algorithm was really designed for "giant alphabet" cases (viewing text files as sequences of lines).


tim-one commented Jul 19, 2023

BTW, if you want to pursue this, there's an "optimization" I never bothered with because I didn't expect it would pay in the common cases (which I apparently have a different idea of than you have 😉): the one-at-a-time comparisons of the j's to blo and bhi are obviously magnets for contrived bad cases. Instead, something along the lines of:

from bisect import bisect_left  # import added for context; b2j's lists are in ascending order

js = b2j[a[i]]  # this is assuming we know a[i] is in the dict
lo = bisect_left(js, blo)
hi = bisect_left(js, bhi, lo)
if lo or hi < len(js): # or maybe faster to always do the slice; it's correct either way
    js = js[lo : hi]

That finds all the j's in range efficiently regardless of how many are out of range, and, regardless of how many are in range, materializes them in a list "at C speed" instead of appending one at a time "at Python speed". The result is the value of your cache, but computed in advance without Python-level looping.
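
For context, here is a sketch of how that could slot into the inner loop (illustrative only, not code from this PR; it relies on the lists stored in b2j being in ascending order, which is how SequenceMatcher builds them):

from bisect import bisect_left

for i in range(alo, ahi):
    j2lenget = j2len.get
    newj2len = {}
    js = b2j.get(a[i], nothing)
    if js:
        # Two bisections locate the block of j's inside [blo, bhi), and the
        # slice materializes it at C speed.
        lo = bisect_left(js, blo)
        hi = bisect_left(js, bhi, lo)
        for j in js[lo:hi]:
            # a[i] matches b[j], and j is already known to be in range.
            k = newj2len[j] = j2lenget(j-1, 0) + 1
            if k > bestsize:
                besti, bestj, bestsize = i-k+1, j-k+1, k
    j2len = newj2len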

@gaogaotiantian (Member, Author) commented:

Okay, I might have misunderstood the intended usage of SequenceMatcher. I agree that using lines would be a better way to do diffs.

If we do not expect duplicate items, then even bisection may not compensate for the extra cost of creating the sliced lists and doing the bisection, right? We might still slow down the cases where there are no duplicates.

I'll leave the decision to you; I'm okay with keeping things the way they are. If you think this could be helpful when users pass strings as the sequences instead of lists of strings, I'll do the bisect optimization.
