A very general method for proving inequalities. Too good to be true?

Update: I ‘repaired’ this method, but it changed a lot, and I have some different questions, so I posted it separately here.

As training for the olympiad, I have to solve a lot of inequalities. Recently, I found a very general method to solve inequalities. I think it’s too good to be true, but if it is true, I think an implementation of the algorithm in a computer program could be used to very quickly prove all kinds of inequalities.

First, I’ll explain the method. Then I’ll ask my questions. At the very bottom, I proved a simple inequality using my method as sort of a demonstration.

The method / algorithm

Say we have to prove a certain inequality of the form

$$f(a,b,c,\ldots)\ge g(a,b,c,\ldots)$$

for all $a,b,c\ldots\in \Bbb{R}$. Here $a,b,c\ldots$ are simply the independent variables. There might be just one, there might be $21$. It doesn’t matter.

First, we rearrange:

$$f(a,b,c,\ldots)-g(a,b,c,\ldots)\ge 0.$$
Now, in order to prove our inequality, we have to prove that the LHS has a minimum value and that this minimum value is greater than or equal to $0$. Suppose the inequality is true, and let $a_0,b_0,c_0\ldots$ be the values of $a,b,c\ldots$ for which the LHS is minimal. Then we must have

$$\frac{\partial\,\mathrm{LHS}}{\partial a}(a_0,b_0,c_0,\ldots)=0,$$
because otherwise, we could increase or decrease $a$ and get a lower value for the LHS, contradicting our definition of $a_0$. In fact, we have

$$\frac{\partial\,\mathrm{LHS}}{\partial x}(a_0,b_0,c_0,\ldots)=0\quad\text{for every variable }x\in\{a,b,c,\ldots\}.$$
So we get as many equations as variables. Now we have a nice system of equations to work with. There are three options.

1. The system has no solutions. This means a minimum does not exist and the inequality doesn’t hold for all values of $a,b,c\ldots$.
2. The system has exactly one solution. If we plug this in, along with some other random values for $a,b,c\ldots$, we can see whether it’s a maximum or a minimum. If it’s a maximum, there are no minima, because the system had only one solution. If it’s a minimum, simply plug in the values to check whether the inequality holds. If it does, it holds for all $a,b,c\ldots\in\Bbb{R}$.
3. The system has infinitely many solutions. In this case, we can express the solutions using fewer variables than the original inequality had. Plugging those solutions in gives a new inequality with fewer variables, so if we repeat the process, we eventually reach a system of the first or second type, or an inequality in just one variable, and those are generally pretty easy to solve.
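To make the three cases concrete, here is a minimal sketch in plain Python, applied to the standard inequality $a^2+b^2+c^2\ge ab+bc+ca$ (an illustrative stand-in of my choosing, not the example from this post), with the partial derivatives computed by hand. The stationarity system turns out to be of the third type:

```python
# Sketch of the method on an illustrative inequality (not the one from this
# post): prove a^2 + b^2 + c^2 >= ab + bc + ca for all real a, b, c.

def h(a, b, c):
    """LHS minus RHS after rearranging; the claim is h >= 0 everywhere."""
    return a*a + b*b + c*c - a*b - b*c - c*a

def grad(a, b, c):
    """Hand-computed partial derivatives of h with respect to a, b, c."""
    return (2*a - b - c, 2*b - a - c, 2*c - a - b)

# The stationarity system 2a-b-c = 2b-a-c = 2c-a-b = 0 has infinitely many
# solutions: every point with a = b = c (the third case above).
for t in (-5, 0, 3):
    assert grad(t, t, t) == (0, 0, 0)

# On that solution family the inequality reduces to the one-variable
# statement h(t, t, t) >= 0, which holds with equality.
assert all(h(t, t, t) == 0 for t in (-5, 0, 3))

# Sanity check at a few off-family points: h is strictly positive there.
for p in [(1, 0, 0), (2, -1, 3), (0.5, 0.25, -4)]:
    assert h(*p) > 0
```

Here the reduction step can be read off directly; for messier systems, a computer-algebra system would be needed to solve the stationarity equations symbolically.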

Questions

I have two questions.

1. Does the method theoretically work? I think so, but there could be something I missed. As I said before, it looks too good to be true.
2. If it does work, why don’t people use it that often? I understand that proofs using things like AM-GM-HM are shorter, but this method is guaranteed to work. It either gives a full proof or a concrete counterexample.

Example

Prove that for all $a,b,c\in\mathbb{R}$, we have:

Proof: First, rearrange:

Now, take the derivatives with respect to $a$, $b$ and $c$ and set them equal to $0$:

Some algebra to give us a nicer system:

So $c = 2b-a$. Plugging it into the first equation gives:

plugging that into the third equation gives:

and that’s the solution to our system: $b=a-1$, $c=a-2$. Plugging this into the original inequality gives:

which always holds, and therefore the original inequality always holds. Q.E.D.

Actually, this is one approach to proving inequalities, and it is sometimes used. But there are some caveats that need attention:

1. It is not always the case that these multi-variable functions are differentiable at all points of interest. (And in some cases the function is differentiable a.e., but it is so bizarre that its derivative does not carry much information about the function’s variation – this is rare, though; it arises in theory but seldom in practical use cases.)
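A one-variable illustration of this caveat (my own example, not from the text above): $f(a)=|a|\ge 0$ holds everywhere, yet the stationarity equation $f'(a)=0$ has no solution, because the minimiser $a=0$ is exactly the point where $f$ is not differentiable. A small Python check, using a hypothetical finite-difference helper:

```python
# Caveat 1: f(a) = |a| satisfies f(a) >= 0 for all real a, but f'(a) = 0
# has no solution: f'(a) = -1 for a < 0, +1 for a > 0, and f is not
# differentiable at a = 0, which is exactly the minimiser.

def f(a):
    return abs(a)

def derivative(a, eps=1e-6):
    """Central finite-difference approximation of f' (hypothetical helper)."""
    return (f(a + eps) - f(a - eps)) / (2 * eps)

# Away from 0 the derivative is +/-1, never 0, so case 1 of the method
# would wrongly suggest the inequality fails -- yet it holds everywhere.
assert abs(derivative(2.0) - 1.0) < 1e-6
assert abs(derivative(-2.0) + 1.0) < 1e-6
assert f(0) == 0 and all(f(a) > 0 for a in (-3, -0.5, 0.5, 3))
```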

2. Even if 1 is not an issue, it is sometimes not easy to obtain the derivative of the function – it can be tricky to compute derivative values at certain points, the derivative may be very involved when the function is expressed as nested combinations of elementary operations/functions, or the derivative may have no closed form if the function is not elementary, etc. (And if the derivative takes a complicated form, it is also difficult to solve the equations that set the derivatives to zero.)

3. Even if 2 is not an issue, there is no guarantee that a point at which the first-order partial derivatives vanish is a local minimum/maximum – you might just get a saddle point – so the second-order derivatives need to be checked as well (and note there is no guarantee that the second-order derivatives exist).
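A standard two-variable illustration of the saddle-point caveat (again my own example): $s(a,b)=a^2-b^2$ has both first-order partials equal to zero at the origin, but the origin is neither a minimum nor a maximum:

```python
# Caveat 3: s(a, b) = a^2 - b^2 has both first-order partial derivatives
# equal to zero at (0, 0), yet (0, 0) is a saddle point, not an extremum.

def s(a, b):
    return a*a - b*b

def grad_s(a, b):
    return (2*a, -2*b)  # partial derivatives ds/da and ds/db

assert grad_s(0, 0) == (0, 0)            # stationary point at the origin
assert s(0.1, 0) > s(0, 0) > s(0, 0.1)   # rises along a, falls along b
# A second-order check reveals this: the Hessian diag(2, -2) is indefinite.
```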

4. Even if 3 is not an issue, you still cannot be sure that a local maximum/minimum is the global maximum/minimum – e.g. it is possible that points on the boundary exceed the local maximum you found. So check more before drawing the final conclusion. For example, when finding the global maximum:

• Compare all local maximum points (if there are any, and make sure they really are local maxima);
• Compare all points on the boundary (if there are any);
• Check the function’s behaviour as it approaches infinity (if applicable);
• Check the points where the first-order derivatives do not exist (if any) – they could be maximum points;
• and so on.
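The checklist can be seen in action on a simple one-variable example of mine: maximise $f(x)=x^3-3x$ on $[-3,3]$. The interior critical point $x=-1$ is a genuine local maximum, but the global maximum sits on the boundary:

```python
# Maximise f(x) = x^3 - 3x on [-3, 3].  f'(x) = 3x^2 - 3 vanishes at
# x = -1 (local maximum, f = 2) and x = 1 (local minimum, f = -2), but
# the global maximum is attained at the boundary point x = 3.

def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x*x - 3

stationary = [-1.0, 1.0]                # roots of f'(x) = 0
assert all(fprime(x) == 0 for x in stationary)

candidates = stationary + [-3.0, 3.0]   # interior critical points + boundary
best = max(candidates, key=f)
assert f(-1.0) == 2.0                   # the interior local maximum...
assert best == 3.0 and f(best) == 18.0  # ...is beaten by the boundary point
```

Comparing the stationary values against the boundary values is exactly the kind of extra check the list above calls for.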