Fine-grained reduction
In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine. If problem A can be solved in time a(n) and problem B can be solved in time b(n), then the existence of an (a,b)-reduction from problem A to problem B implies that any significant speedup for problem B would also lead to a speedup for problem A.
Definition
Let A and B be computational problems, specified as the desired output for each possible input. Let a and b both be time-constructible functions that take an integer argument n and produce an integer result. Usually, a and b are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as n².
Then A is said to be (a,b)-reducible to B if, for every real number ε > 0, there exist a real number δ > 0 and an algorithm that solves each instance of problem A by transforming it into a sequence of instances of problem B, taking time O(a(n)^(1−δ)) for the transformation on instances of size n, and producing a sequence of instances whose sizes n_i satisfy the inequality Σ_i b(n_i)^(1−ε) < a(n)^(1−δ).
An (a,b)-reduction is given by the mapping from ε to the pair consisting of this algorithm and δ.
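The accounting in this definition can be checked numerically for a single instance size. The following sketch is not from the source: it uses a hypothetical reduction that turns one size-n instance into √n instances of size √n each, with a(n) = b(n) = n², and verifies the required inequality for ε = 0.1 with δ = 0.25.

```python
def accounting_ok(n, sizes, a, b, eps, delta):
    """Check the fine-grained reduction inequality
    sum_i b(n_i)^(1-eps) < a(n)^(1-delta) for one instance size n."""
    return sum(b(m) ** (1 - eps) for m in sizes) < a(n) ** (1 - delta)

# Hypothetical reduction: one size-n instance becomes sqrt(n) instances
# of size sqrt(n) each, with a(n) = b(n) = n^2.
a = b = lambda n: n ** 2
n = 10_000
sizes = [int(n ** 0.5)] * int(n ** 0.5)   # 100 instances of size 100

print(accounting_ok(n, sizes, a, b, eps=0.1, delta=0.25))  # → True
```

Here Σ_i b(n_i)^(1−ε) = √n · n^(1−ε) = n^(3/2−ε), which is below a(n)^(1−δ) = n^(2−2δ) whenever δ < 1/4 + ε/2, so δ = 1/4 works for every ε > 0 in this hypothetical case.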
Speedup implication
Suppose A is (a,b)-reducible to B, and there exists ε > 0 such that B can be solved in time O(b(n)^(1−ε)). Then there also exists δ > 0 such that A can be solved in time O(a(n)^(1−δ)): namely, let δ be the value given by the (a,b)-reduction, and solve A by applying the transformation of the reduction and using the fast algorithm for B on each resulting subproblem.
Equivalently, if A cannot be solved in time significantly faster than a(n), then B cannot be solved in time significantly faster than b(n).
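As a concrete illustration of using B as a subroutine, here is a minimal Python sketch (illustrative, not from the source) of one direction of an equivalence noted in the History section below: a graph has a negative-weight triangle exactly when some entry of the (min,+)-product of its weight matrix with itself, plus the closing edge, is negative. The transformation costs O(n²) beyond a single product call, so any O(n^(3−ε)) algorithm for the (min,+)-product would give an O(n^(3−ε)) negative-triangle algorithm; the naive product used here takes cubic time.

```python
INF = float("inf")

def min_plus_product(A, B):
    """(min,+)-matrix product: C[i][j] = min over k of A[i][k] + B[k][j].
    This naive version runs in O(n^3) time; it is the core subroutine
    of the classical all-pairs shortest paths algorithms."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_negative_triangle(W):
    """Return True if some triangle i -> j -> k -> i of distinct vertices
    has negative total weight.  Beyond one (min,+)-product call, the
    transformation costs only O(n^2)."""
    n = len(W)
    # Set the diagonal to infinity so the product never picks j = i or j = k.
    M = [[INF if i == j else W[i][j] for j in range(n)] for i in range(n)]
    D = min_plus_product(M, M)     # D[i][k] = min over j of W[i][j] + W[j][k]
    return any(D[i][k] + W[k][i] < 0
               for i in range(n) for k in range(n) if i != k)
```

For example, `has_negative_triangle([[0, -1, 1], [-1, 0, -1], [1, -1, 0]])` finds the triangle of weight −1 + −1 + 1 = −1, while the same matrix with all off-diagonal weights set to 1 yields no negative triangle.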
History
Fine-grained reductions were defined, in the special case that a and b are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of (n³, n³)-reductions between several problems, including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.
The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.
Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.