The Master Theorem is a mathematical tool for determining the time complexity of a certain class of recursive algorithms, and it is typically used to analyze divide-and-conquer algorithms. It has three cases. Here is an explanation of the theorem, followed by 10 examples:

The Master Theorem: Suppose we have a recursive algorithm with the following recurrence relation:

T(n) = a T(n/b) + f(n)

where a ≥ 1 is the number of subproblems, n/b (with b > 1) is the size of each subproblem, and f(n) is the cost of dividing the problem and combining the results. The Master Theorem states that the time complexity of the algorithm can be determined by comparing f(n) to n^(log_b(a)) in the following three cases:

Case 1: If f(n) = O(n^(log_b(a) - ε)) for some ε > 0, then T(n) = Θ(n^(log_b(a))).

Case 2: If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) log n).

Case 3: If f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and if a f(n/b) ≤ c f(n) for some constant c < 1 and sufficiently large n, then T(n) = Θ(f(n)).
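The case analysis above can be sketched in code. The helper below is a simplified sketch under an assumption not in the theorem statement: the driving function is exactly polynomial, f(n) = Θ(n^k). For such f, the Case 3 regularity condition a·f(n/b) = (a/b^k)·n^k ≤ c·n^k holds automatically whenever k > log_b(a), so the classification reduces to comparing k with log_b(a):

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the Master Theorem.

    Simplified sketch: only polynomial driving functions f(n) = Theta(n^k)
    are handled, so the Case 3 regularity condition is automatic.
    """
    crit = math.log(a, b)  # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Theta(n^{crit:g} * log n)"  # Case 2: f matches the critical exponent
    if k < crit:
        return f"Theta(n^{crit:g})"          # Case 1: leaves dominate
    return f"Theta(n^{k:g})"                 # Case 3: root dominates
```

For instance, `master_theorem(4, 2, 1)` classifies Example 1 below as Case 1 and returns "Theta(n^2)".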

Example 1: T(n) = 4T(n/2) + n 
a = 4, b = 2, f(n) = n 
log_b(a) = log_2(4) = 2 
f(n) = n = O(n^(log_b(a) - ε)) for ε = 1 (Case 1)
Therefore, T(n) = Θ(n^2).
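This classification can be sanity-checked numerically (a sketch; the base case T(1) = 1 is an assumption, since the example does not specify one). For a Θ(n^2) function, doubling n should multiply the value by roughly 2^2 = 4:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4*T(n/2) + n, with assumed base case T(1) = 1
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n

# The ratio T(2n)/T(n) should approach 4 for a Theta(n^2) function
for e in (10, 15, 20):
    n = 2 ** e
    print(n, T(2 * n) / T(n))
```

The printed ratios approach 4 as n grows, consistent with T(n) = Θ(n^2).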

Example 2: T(n) = 2T(n/3) + n^2 
a = 2, b = 3, f(n) = n^2 
log_b(a) = log_3(2) ≈ 0.63 
f(n) = n^2 = Ω(n^(log_b(a) + ε)) for ε = 1, since 2 > log_3(2) + 1 (Case 3) 
Regularity: 2 f(n/3) = (2/9) n^2 ≤ c f(n) for c = 2/9 < 1 
Therefore, T(n) = Θ(n^2).

Example 3: T(n) = 3T(n/4) + n log n 
a = 3, b = 4, f(n) = n log n 
log_b(a) = log_4(3) < 1 
f(n) = n log n = Ω(n^(log_b(a) + ε)) for ε = 0.2, since n log n = Ω(n) and log_4(3) + 0.2 < 1 (Case 3) 
Regularity: 3 f(n/4) = (3/4) n log(n/4) = (3/4)(n log n - n log 4) ≤ (3/4) f(n), so c = 3/4 < 1 
Therefore, T(n) = Θ(n log n).

Example 4: T(n) = 2T(n/2) + n/log n 
a = 2, b = 2, f(n) = n/log n 
log_b(a) = 1 
f(n) = n/log n is o(n) but not O(n^(1 - ε)) for any ε > 0, so it falls into the gap between Case 1 and Case 2: the Master Theorem does not apply here. 
Unrolling the recursion tree instead, level i costs n/(log n - i), and summing over the log n levels gives n(1/log n + 1/(log n - 1) + ... + 1) = Θ(n log log n). 
Therefore, T(n) = Θ(n log log n).

Example 5: T(n) = 3T(n/3) + n/log n
a = 3, b = 3, f(n) = n/log n 
log_b(a) = 1 
f(n) = n/log n again falls into the gap between Case 1 and Case 2 (it is o(n) but not O(n^(1 - ε)) for any ε > 0), so the Master Theorem does not apply. 
Summing the recursion-tree levels (level i costs n/(log n - i log 3), a harmonic-type sum) gives T(n) = Θ(n log log n).
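Whether such a gap-case recurrence grows like n log n or n log log n can be probed numerically. The sketch below evaluates the recurrence at powers of three, using base-2 logarithms (the choice of base only shifts constant factors) and an assumed base case T(n) = 1 for n < 3:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 3*T(n/3) + n/log(n); base-2 logs, assumed base case T(n) = 1 for n < 3
    if n < 3:
        return 1.0
    return 3 * T(n // 3) + n / math.log2(n)

for e in (8, 12, 16):
    n = 3 ** e
    print(e,
          T(n) / (n * math.log(math.log(n))),  # stays bounded
          T(n) / (n * math.log2(n)))           # keeps shrinking
```

The ratio against n log log n stays bounded while the ratio against n log n keeps shrinking, consistent with Θ(n log log n) rather than Θ(n log n).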

Example 6: T(n) = 2T(n/4) + √n
a = 2, b = 4, f(n) = √n
log_b(a) = log_4(2) = 1/2 
f(n) = √n = Θ(n^(1/2)) = Θ(n^(log_b(a))) (Case 2) 
Therefore, T(n) = Θ(√n log n).
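Case 2 can be checked numerically the same way (a sketch; the base case T(1) = 1 is an assumption). For a Θ(√n log n) function, the ratio T(n)/(√n log n) should settle near a constant:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2*T(n/4) + sqrt(n), assumed base case T(1) = 1
    if n <= 1:
        return 1.0
    return 2 * T(n // 4) + math.sqrt(n)

for e in (5, 10, 20):
    n = 4 ** e
    print(e, T(n) / (math.sqrt(n) * math.log2(n)))
```

The printed ratios drift toward the constant 1/2, consistent with T(n) = Θ(√n log n).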

Example 7: T(n) = 5T(n/3) + n^3 
a = 5, b = 3, f(n) = n^3 
log_b(a) = log_3(5) > 1 
f(n) = n^3 = Ω(n^(log_b(a) + ε)) for ε = 1, since 3 > log_3(5) + 1 (Case 3) 
Regularity: 5 f(n/3) = (5/27) n^3 ≤ c f(n) for c = 5/27 < 1 
Therefore, T(n) = Θ(n^3).

Example 8: T(n) = 3T(n/2) + n^2/log n 
a = 3, b = 2, f(n) = n^2/log n 
log_b(a) = log_2(3) > 1 
f(n) = n^2/log n = Ω(n^(log_b(a) + ε)) for ε = 0.4, since log_2(3) + 0.4 ≈ 1.98 < 2 (Case 3) 
Regularity: 3 f(n/2) = (3/4) n^2/log(n/2) ≤ c f(n) for c = 4/5 and sufficiently large n 
Therefore, T(n) = Θ(n^2/log n).

Example 9: T(n) = 4T(n/4 - 1) + n 
a = 4, b = 4, f(n) = n 
log_b(a) = 1 
The -1 in the subproblem size does not change the asymptotics; like the floors and ceilings usually ignored in these recurrences, such lower-order perturbations are absorbed, so we analyze T(n) = 4T(n/4) + n. 
f(n) = n = Θ(n^(log_b(a))) (Case 2) 
Therefore, T(n) = Θ(n log n).

Example 10: T(n) = 8T(n/2) + n^2 log n
a = 8, b = 2, f(n) = n^2 log n 
log_b(a) = log_2(8) = 3 
f(n) = n^2 log n = O(n^(log_b(a) - ε)) for ε = 1/2, since n^2 log n = O(n^2.5) (Case 1) 
Therefore, T(n) = Θ(n^3).
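A final numerical sanity check (a sketch; the base case T(1) = 1 is an assumption) distinguishes Θ(n^3) from Θ(n^3 log n): the ratio T(n)/n^3 should level off near a constant, while T(n)/(n^3 log n) should keep shrinking.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 8*T(n/2) + n^2 * log(n); base-2 log, assumed base case T(1) = 1
    if n <= 1:
        return 1.0
    return 8 * T(n // 2) + n * n * math.log2(n)

for e in (8, 13, 18):
    n = 2 ** e
    print(e,
          T(n) / n**3,                   # levels off near a constant
          T(n) / (n**3 * math.log2(n)))  # keeps shrinking
```

The first ratio settles near a constant while the second keeps falling, consistent with Θ(n^3) and ruling out an extra log factor.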