The Boyer–Moore string search algorithm is a particularly efficient string searching algorithm, and it has been the standard benchmark for the practical string search literature. It was developed by Bob Boyer and J Strother Moore in 1977. The algorithm preprocesses the target string (the key) being searched for, but not the string being searched in (unlike some algorithms, which preprocess the string to be searched and can then amortize the expense of the preprocessing by searching repeatedly). The execution time of the Boyer–Moore algorithm can be sub-linear: it does not need to check every character of the string being searched, but rather skips over some of them. Generally, the algorithm gets faster as the key being searched for becomes longer. Its efficiency derives from the fact that with each unsuccessful attempt to find a match between the search string and the text, it uses the information gained from that attempt to rule out as many positions of the text as possible where the string cannot match.
The X in position 8 excludes all 8 of the possible starting positions shown.
What people frequently find surprising about the Boyer–Moore algorithm when they first encounter it is that its verifications—its attempts to check whether a match exists at a particular position—work backwards. If it starts a search at the beginning of a text for the word ANPANMAN, for instance, it checks the eighth position of the text to see if it contains an "N". If it finds the "N", it moves to the seventh position to see if that contains the last "A" of the word, and so on until it checks the first position of the text for an "A".
Why Boyer–Moore takes this backward approach is clearer when we consider what happens if the verification fails—for instance, if instead of an "N" in the eighth position, we find an "X". The "X" does not appear anywhere in "ANPANMAN", which means there is no match for the search string at the very start of the text, nor at the next seven positions after it, since those would all fall across the "X" as well. Having ruled out all eight of those placements of the word "ANPANMAN" against just one character "X", we are able to skip ahead and start looking for a match ending at the sixteenth position of the text.
This explains why the average-case performance of the algorithm, for a text of length N and a fixed pattern of length M, is N/M: in the best case, only one in M characters needs to be checked. This also explains the somewhat counter-intuitive result that the longer the pattern we are looking for, the faster the algorithm will usually be able to find it.
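The exclusion argument can be made concrete with a short sketch in Python (the text and variable names below are ours, chosen purely for illustration):

```python
pattern = "ANPANMAN"
m = len(pattern)                     # m = 8

# Illustrative text: an "X" sits at index m - 1 = 7, exactly where the
# last character of the pattern lands on the very first alignment.
text = "ABCDEFGXANPANMAN"

assert text[m - 1] == "X" and "X" not in pattern

# "X" occurs nowhere in the pattern, so every alignment overlapping
# text[7] is impossible: starting positions 0 through 7 are excluded
# by this single comparison.  The next attempt can therefore end at
# text[2 * m - 1], the sixteenth character of the text.
next_attempt_end = 2 * m - 1         # index 15

# And indeed the match begins right after the excluded region:
match_start = text.index(pattern)    # index 8
```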
The algorithm precomputes two tables to process the information it obtains from each failed verification: one table calculates how many positions ahead to start the next search based on the identity of the character that caused the match attempt to fail; the other makes a similar calculation based on how many characters were matched successfully before the match attempt failed. (Because these two tables return results indicating how far ahead in the text to "jump", they are sometimes called "jump tables", which should not be confused with the more common meaning of jump tables in computer science.) When a mismatch occurs, the algorithm shifts by the larger of the two jump values.
The first table answers the following question: if there is a mismatch at a given position and the text being searched contains the character X there, how far can we shift the pattern without losing potential matches? If there is no X in our pattern, we can shift so that the start of the pattern is at the first character after the X. If X does occur somewhere inside the pattern, we can shift it to the right until the last X in the pattern is aligned with the previously mismatched X in the text.
This table is easy to calculate: Start at the last character of the sought string and move towards the first character. Each time you move left, if the character you are on is not in the table already, add it; its Shift value is its distance from the rightmost character. All other characters receive a count equal to the length of the search string.
Example: For the string ANPANMAN, the first table would be as follows (for clarity, entries are listed in the order they would be added to the table). Note that the shift for N is 3 rather than 0: it is derived from the second N from the right, because the final character of the pattern is skipped when building the table.

| Character | Shift |
| A | 1 |
| M | 2 |
| N | 3 |
| P | 5 |
| all other characters | 8 |
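The recipe above translates directly into code; the following Python sketch (the function name is our own) builds the table and reproduces the values shown:

```python
def bad_char_table(pattern):
    """Build the first ("bad character") table: walk from the
    next-to-last character towards the first, and record each
    character's distance from the rightmost character the first time
    it is encountered.  Characters absent from the table implicitly
    shift by the full pattern length."""
    m = len(pattern)
    table = {}
    # The last character itself is skipped, which is why N receives 3
    # (from the second N from the right) rather than 0.
    for i in range(m - 2, -1, -1):
        c = pattern[i]
        if c not in table:
            table[c] = m - 1 - i
    return table

bad_char_table("ANPANMAN")
# {'A': 1, 'M': 2, 'N': 3, 'P': 5}; every other character shifts by 8
```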
The amount of shift calculated by the first table is sometimes called the "bad character shift".
The algorithm that uses only this first table is called the Boyer–Moore–Horspool algorithm.
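A minimal sketch of that variant, using only the bad-character table, might look as follows (this is our illustration, not the published pseudocode):

```python
def horspool_search(pattern, text):
    """Return the index of the first occurrence of pattern in text, or
    -1 if there is none.  Each attempt compares right to left; on any
    failure, the shift is looked up for the text character aligned with
    the last position of the pattern."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # Bad-character table, built as described above.
    table = {}
    for i in range(m - 2, -1, -1):
        if pattern[i] not in table:
            table[pattern[i]] = m - 1 - i
    pos = 0
    while pos <= n - m:
        j = m - 1
        while j >= 0 and text[pos + j] == pattern[j]:
            j -= 1
        if j < 0:
            return pos                                  # full match
        pos += table.get(text[pos + m - 1], m)          # jump ahead
    return -1
```

For the running example, `horspool_search("ANPANMAN", "ABCDEFGXANPANMAN")` returns 8 after only a handful of character comparisons: one comparison against the "X", one jump of 8, and then the verification of the match itself.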
The second table answers the following question: if a mismatch occurs at position p in the pattern (counting from the right), where else in the pattern does the substring matching the current suffix occur?
The mismatch "A" in position 5 (3 back from the last letter of the needle) excludes the first 6 of the possible starting positions shown.
To build the second table: for each value of i less than the length of the search string, first form the partial pattern consisting of the last i characters of the search string, preceded by a mismatch for the character before them; then line it up with the search pattern and determine the least number of positions the partial pattern must be shifted left before the two patterns match (characters that fall off the left end of the pattern match anything). For instance, for the search string ANPANMAN, the table would be as follows (¬N signifies any character that is not N):

| i | Partial pattern | Shift |
| 0 | ¬N | 1 |
| 1 | ¬A N | 8 |
| 2 | ¬M AN | 3 |
| 3 | ¬N MAN | 6 |
| 4 | ¬A NMAN | 6 |
| 5 | ¬P ANMAN | 6 |
| 6 | ¬N PANMAN | 6 |
| 7 | ¬A NPANMAN | 6 |
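The construction just described can be carried out mechanically. The brute-force Python sketch below (our own illustration, not an efficient implementation) slides each partial pattern left until it is consistent with the search string, treating characters that fall off the left end as matching anything:

```python
def good_suffix_table(pattern):
    """table[i] = how far to shift when the last i characters of the
    pattern matched and the character just before them mismatched."""
    m = len(pattern)
    table = []
    for i in range(m):
        for shift in range(1, m + 1):
            ok = True
            # The matched suffix occupies positions m-i .. m-1; after
            # shifting left, each of its characters must agree with
            # the pattern (or fall off the left end).
            for j in range(i):
                p = m - i + j - shift
                if p >= 0 and pattern[p] != pattern[m - i + j]:
                    ok = False
                    break
            # The character preceding the suffix must MISmatch after
            # the shift; this is what makes the rule "strong".
            p = m - 1 - i - shift
            if ok and p >= 0 and pattern[p] == pattern[m - 1 - i]:
                ok = False
            if ok:
                table.append(shift)
                break
    return table

good_suffix_table("ANPANMAN")    # [1, 8, 3, 6, 6, 6, 6, 6]
```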
The amount of shift calculated by the second table is sometimes called the "good suffix shift" or the "(strong) good suffix rule". The originally published Boyer–Moore algorithm uses a simpler, weaker version of the good suffix rule, in which the entries in the above table do not require a mismatch for the left-most character. This is sometimes called the "weak good suffix rule", and it is not sufficient for proving that Boyer–Moore runs in linear worst-case time.
In the worst case, finding all occurrences in a text requires approximately 3N comparisons, hence the complexity is O(N), regardless of whether the text contains a match or not. This proof took some years to establish: in 1977, the year the algorithm was devised, the maximum number of comparisons was shown to be no more than 6N; in 1980 this was tightened to no more than 4N; the 3N bound is due to Cole's result of September 1991.
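Putting the two rules together gives the complete search. The sketch below (again our own code, under the same illustrative assumptions as the earlier snippets) takes the larger of the two shifts at every mismatch, as described earlier; after a full match it falls back on the good-suffix entry for the longest proper suffix, which is always a safe shift:

```python
def boyer_moore_search(pattern, text):
    """Find all occurrences of pattern in text, combining the
    bad-character rule and the (strong) good-suffix rule."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Rightmost position of each character in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    # Good-suffix table, built brute-force as in the section above:
    # gs[i] = shift when the last i characters matched.
    gs = []
    for i in range(m):
        for shift in range(1, m + 1):
            ok = all(
                m - i + j - shift < 0
                or pattern[m - i + j - shift] == pattern[m - i + j]
                for j in range(i)
            )
            p = m - 1 - i - shift
            if ok and not (p >= 0 and pattern[p] == pattern[m - 1 - i]):
                gs.append(shift)
                break
    matches, pos = [], 0
    while pos <= n - m:
        j = m - 1
        while j >= 0 and text[pos + j] == pattern[j]:
            j -= 1
        if j < 0:
            matches.append(pos)
            pos += gs[m - 1]          # safe shift after a full match
        else:
            matched = m - 1 - j       # characters matched before failure
            bad_char = j - last.get(text[pos + j], -1)
            pos += max(1, bad_char, gs[matched])
    return matches
```

Note the design choice: the bad-character shift can be zero or negative when the mismatched character's last occurrence lies to the right of the mismatch position, which is exactly why the algorithm takes the maximum of the two rules (and at least 1).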
This web page reuses material from the Wikipedia page http://en.wikipedia.org/wiki/Boyer_moore under the terms of the CC-BY-SA license. As a result, the content of this page is, and will remain, available under the terms of that license regardless of the restrictions that apply to other pages of this website.