Heuristics
A bite-sized post on heuristics.
Heuristics are shortcuts that solve problems quickly, delivering a result that is good enough to be useful under given constraints (e.g. time, cost)^{1}. In computing, a heuristic algorithm finds an approximate solution by loosely defined rules, and is typically used to tackle complex problems.
Heuristic algorithms may not guarantee 100% correctness, but they converge toward the ground truth (a close-to-optimal solution). Most importantly, the solution is cost-effective. In contrast, non-heuristic algorithms (i.e. exact solutions) for highly complicated problems are usually very expensive (i.e. they require very long runtimes and massive computing power), to the point that aiming for 100% accuracy is no longer worth it.
A few complex problems that commonly leverage heuristic algorithms: chess, the Hamiltonian cycle problem, and the travelling salesman problem. An example of a heuristic algorithm: A* search. Learn more: wikipedia.
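To make A* search concrete, here is a minimal sketch of it on a small grid. The function name, the toy grid, and the choice of Manhattan distance as the heuristic are all illustrative assumptions, not taken from any particular library; the point is that the heuristic estimate `h` steers the search toward the goal instead of exploring blindly.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid; 0 = free cell, 1 = wall.

    Manhattan distance is the heuristic: a cheap, admissible estimate
    of the remaining cost that guides which node to expand next.
    """
    def h(p):  # Manhattan-distance heuristic (assumed for this example)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Each entry: (f = g + h, g = cost so far, node, path taken)
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(
                    open_set,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
print(len(path) - 1)  # number of moves along the shortest path
```

With a consistent heuristic like Manhattan distance, A* finds a shortest path while usually expanding far fewer nodes than an uninformed search such as Dijkstra's algorithm with no heuristic.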
Heuristics in mathematics
We use heuristics in many other areas as well. For example, in mathematics there is a heuristic called the auxiliary problem. An auxiliary problem is a problem which we consider, not for its own sake, but because we hope that its consideration may help us to solve another problem, our original problem^{2}.
For example, consider the following problem: find \(x\) satisfying the equation \(x^4 - 13x^2 + 36 = 0\) (i.e. our original problem). If we observe carefully, \(x^4 = {(x^2)}^2\). In that case, we may see an advantage in introducing another equation, \(y = x^2\), to solve our original problem.
Now, from the additional equation, we obtain a new problem (i.e. our auxiliary problem): find \(y\) satisfying the equation \(y^2 - 13y + 36 = 0\), which is easier to solve than our original problem (i.e. quadratic vs. quartic). Once we have \(y\), each positive root gives two solutions for \(x\) via \(x = \pm\sqrt{y}\).
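The substitution above can be carried out end to end in a few lines. The sketch below solves the auxiliary quadratic with the standard quadratic formula, then maps each root \(y\) back to \(x = \pm\sqrt{y}\); the variable names are illustrative.

```python
import math

# Original problem: x^4 - 13x^2 + 36 = 0.
# Auxiliary problem (after y = x^2): y^2 - 13y + 36 = 0.
a, b, c = 1, -13, 36
disc = b * b - 4 * a * c            # discriminant: 169 - 144 = 25
ys = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]  # roots of the quadratic

# Map each auxiliary root y back to the original unknown: x = ±sqrt(y).
xs = sorted({s * math.sqrt(y) for y in ys for s in (1, -1)})
print(xs)  # [-3.0, -2.0, 2.0, 3.0]
```

Each value of \(x\) can be checked directly: substituting \(\pm 2\) or \(\pm 3\) into \(x^4 - 13x^2 + 36\) gives zero, confirming that solving the easier auxiliary problem solved the original one.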

Adapted with modifications from Investopedia ↩

Polya, G. (n.d.). How to Solve It: A New Aspect of Mathematical Method. Princeton University Press. ↩