211. Design Add and Search Words Data Structure (Medium)

Problem

Design a data structure supporting:

  • addWord(word), add a word.
  • search(word), returns true iff some previously added word matches word. word may contain '.', which matches any single letter.

Example

wd = WordDictionary()
wd.addWord("bad"); wd.addWord("dad"); wd.addWord("mad")
wd.search("pad")  # false
wd.search("bad")  # true
wd.search(".ad")  # true
wd.search("b..")  # true

LeetCode 211 · Medium

Approach 1: Brute force, list of words, regex match

Store words in a list; on search, compile . as regex.

import re

class WordDictionary:
    def __init__(self):
        self.words = []

    def addWord(self, word):
        self.words.append(word)                             # L1: O(1) list append

    def search(self, word):
        pattern = re.compile("^" + word + "$")              # L2: O(L) compile regex
        return any(pattern.match(w) for w in self.words)    # L3: O(W · L) scan

Where the time goes, line by line

Variables: L = len(word), W = total words added.

| Line | Per-call cost | Times executed | Contribution |
| --- | --- | --- | --- |
| L1 (addWord) | O(1) | 1 per call | O(1) per addWord |
| L2 (compile) | O(L) | 1 per search | O(L) |
| L3 (scan + match) | O(L) | W | O(W · L) per search ← dominates |

Complexity

  • addWord: O(1).
  • search: O(W · L), driven by L3 (scanning all words). Degrades as words accumulate.
  • Space: O(total chars).

Approach 2: Hash map bucketed by length + per-position scan

Group words by length; on search, compare character-by-character only against same-length words. Faster pruning but still linear in group size.

from collections import defaultdict

class WordDictionary:
    def __init__(self):
        self.by_len = defaultdict(list)

    def addWord(self, word):
        self.by_len[len(word)].append(word)                         # L1: O(1) amortized append

    def search(self, word):
        for w in self.by_len.get(len(word), []):                    # L2: scan same-length words, O(W_L)
            if all(p == '.' or p == c for p, c in zip(word, w)):    # L3: O(L) char compare
                return True
        return False

Where the time goes, line by line

Variables: L = len(word), W_L = number of stored words of length L.

| Line | Per-call cost | Times executed | Contribution |
| --- | --- | --- | --- |
| L1 (addWord) | O(1) amortized | 1 per call | O(1) per addWord |
| L2 (group scan) | O(1) | W_L | O(W_L) |
| L3 (char compare) | O(L) | W_L | O(W_L · L) per search ← dominates |

Bucketing by length prunes same-length candidates immediately, but within the bucket we still scan linearly.

Complexity

  • addWord: O(1) amortized (list append; hashing the integer length key is O(1)).
  • search: O(W_L · L), driven by L3 (per-word character comparison within the same-length bucket).

This stays reasonable as long as L and W_L are small.
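To see the pruning concretely, here is a small sketch using the Approach 2 implementation above: a query only ever inspects the bucket for its own length, so words of other lengths are never compared character-by-character.

```python
from collections import defaultdict

class WordDictionary:
    def __init__(self):
        self.by_len = defaultdict(list)   # length -> list of words of that length

    def addWord(self, word):
        self.by_len[len(word)].append(word)

    def search(self, word):
        for w in self.by_len.get(len(word), []):
            if all(p == '.' or p == c for p, c in zip(word, w)):
                return True
        return False

wd = WordDictionary()
for w in ("bad", "dad", "mad", "apple", "grape"):
    wd.addWord(w)

# Two buckets: three 3-letter words and two 5-letter words.
# A 3-letter query scans only the 3-letter bucket; a 4-letter query scans nothing.
wd.search(".ad")    # True  (checked against bad/dad/mad only)
wd.search("gr.pe")  # True  (checked against apple/grape only)
wd.search("bade")   # False (no 4-letter bucket, zero comparisons)
```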

Approach 3: Trie with DFS wildcard (canonical)

Store words in a trie. On search, follow the trie; for ., try every child.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class WordDictionary:
    def __init__(self):
        self.root = TrieNode()

    def addWord(self, word):
        node = self.root
        for ch in word:                         # L1: iterate L characters
            if ch not in node.children:
                node.children[ch] = TrieNode()  # L2: O(1) create node
            node = node.children[ch]            # L3: O(1) descend
        node.is_end = True                      # L4: O(1) mark end

    def search(self, word):
        def dfs(i, node):
            if i == len(word):                  # L5: base case
                return node.is_end
            ch = word[i]
            if ch == '.':                       # L6: wildcard: try all children
                return any(dfs(i + 1, child) for child in node.children.values())  # L7: up to 26 branches
            if ch in node.children:             # L8: O(1) exact match
                return dfs(i + 1, node.children[ch])  # L9: recurse one path
            return False
        return dfs(0, self.root)

Where the time goes, line by line

Variables: L = len(word), k = number of distinct child characters per node (≤ 26).

| Line | Per-call cost | Times executed | Contribution |
| --- | --- | --- | --- |
| L1-L4 (addWord loop) | O(1) per char | L | O(L) per addWord ← dominates for addWord |
| L5-L6/L8 (exact match path) | O(1) per level | L levels | O(L) avg search |
| L7 (wildcard branch) | O(k) per level | up to L levels | O(k^L) worst case ← dominates for all-wildcard |

For exact character matches, each DFS call moves one level deeper and is O(1). For . wildcards, each level branches to up to k = 26 children, giving O(26^L) in the worst case (all wildcards, dense trie).

Complexity

  • addWord: O(L), driven by L1-L4.
  • search: O(L) average (no wildcards), O(26^L) worst (all wildcards in a dense trie).
  • Space: O(total chars).

The wildcard branches are where performance can degrade; a query like "......" against a dense dictionary is the pathological case.
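To make the branching cost visible, here is a small instrumented sketch of the Approach 3 trie. The `calls` counter and the `CountingWordDictionary` name are additions for illustration; the search logic is unchanged. Note that `any()` short-circuits, so a *successful* wildcard search can stay cheap — the blow-up happens when the wildcard query must explore every branch before failing.

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class CountingWordDictionary:
    """Approach 3 trie plus a DFS-call counter (added for illustration)."""
    def __init__(self):
        self.root = TrieNode()
        self.calls = 0  # DFS invocations during the most recent search

    def addWord(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def search(self, word):
        self.calls = 0
        def dfs(i, node):
            self.calls += 1
            if i == len(word):
                return node.is_end
            ch = word[i]
            if ch == '.':
                return any(dfs(i + 1, c) for c in node.children.values())
            return ch in node.children and dfs(i + 1, node.children[ch])
        return dfs(0, self.root)

wd = CountingWordDictionary()
for w in ("bad", "dad", "mad"):
    wd.addWord(w)

wd.search("bad")   # exact match: one call per level → 4 calls
wd.search("...")   # wildcard hit: any() short-circuits on the first branch → 4 calls
wd.search("....")  # wildcard miss: all 3 root branches fully explored → 10 calls
```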

Summary

| Approach | addWord | search | Notes |
| --- | --- | --- | --- |
| List + regex | O(1) | O(W · L) | Naive |
| Length-bucketed list | O(1) amortized | O(W_L · L) | Better pruning |
| Trie + DFS wildcard | O(L) | O(L) avg, O(26^L) worst | Canonical |

The trie approach generalizes to 212 Word Search II (the next problem) where we DFS a grid against a trie of candidate words.

Test cases

# Quick smoke tests - paste into a REPL or save as test_211.py and run.
# Uses the canonical Approach 3 implementation (trie + DFS wildcard).

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class WordDictionary:
    def __init__(self):
        self.root = TrieNode()

    def addWord(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                node.children[ch] = TrieNode()
            node = node.children[ch]
        node.is_end = True

    def search(self, word):
        def dfs(i, node):
            if i == len(word):
                return node.is_end
            ch = word[i]
            if ch == '.':
                return any(dfs(i + 1, child) for child in node.children.values())
            if ch in node.children:
                return dfs(i + 1, node.children[ch])
            return False
        return dfs(0, self.root)

def _run_tests():
    wd = WordDictionary()
    wd.addWord("bad")
    wd.addWord("dad")
    wd.addWord("mad")
    assert wd.search("pad") == False
    assert wd.search("bad") == True
    assert wd.search(".ad") == True   # matches bad, dad, mad
    assert wd.search("b..") == True   # matches bad
    assert wd.search("...") == True   # matches any 3-letter word
    assert wd.search("....") == False # no 4-letter words
    print("all tests pass")

if __name__ == "__main__":
    _run_tests()
Tags: tries, trie with wildcard DFS