Do we rely on certain intuition or is there an unofficial general crude checklist I should follow?

A friend told me that if the sum of the powers in the numerator is smaller than in the denominator, there is a higher chance that the limit does not exist.

And if the sum of the powers in the numerator is higher than in the denominator, the limit most likely exists.

Also, if there is a sin or cos or the constant e, it most likely exists.

How can I tell what to do at first glance, given how little time I have in an exam to ponder? If I spend all my time trying a two-path test when the limit actually exists, that would be a huge disaster.

Is this one of those cases where practice makes perfect?

Example: lim

Please give me a hint, and tell me where the hint comes from.

Example: \lim_{(x,y)\to(0,0)}\exp\left(-\frac{x^2+y^2}{4x^4+y^6}\right)

I need a hint for this too.

Common methods I have learnt for reference:

Two-Path test, Polar Coordinates, Spherical Coordinates, Mean Value Theorem using inequalities.

**Answer**

I wouldn’t say there’s a “step-by-step” method for all limits, as many require individual analysis and sometimes a clever observation, but I’ve assembled a list of general techniques (I also used this list to answer this question).

In general, it is much easier to show that a limit does not exist than it is to show a limit does exist, and either case might require a clever insight or tricky manipulation. There are a few common ways of working with multi-variable functions to obtain the existence or nonexistence of a limit:

- Try different paths. That is, parameterize x and y as x = x(t), y = y(t) such that (x(0),y(0)) = (a,b), where (a,b) is the point you want to approach in the limit. This is usually the first resort, and if the limit does not exist, judiciously chosen paths will often yield two different answers, which proves nonexistence: for the limit to exist, it must have the same value along every possible path. Note that this test can only show nonexistence; proving that a limit exists requires more work.
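As a concrete illustration of the two-path test (using the classic textbook example f(x,y) = xy/(x² + y²), which is my own choice, not a function from the question), a quick check with sympy might look like:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x*y / (x**2 + y**2)  # classic example whose limit at (0,0) does not exist

# Path 1: approach (0,0) along the x-axis, i.e. (x, y) = (t, 0)
along_axis = sp.limit(f.subs({x: t, y: 0}), t, 0)  # -> 0

# Path 2: approach (0,0) along the diagonal y = x, i.e. (x, y) = (t, t)
along_diag = sp.limit(f.subs({x: t, y: t}), t, 0)  # -> 1/2

# Two different values along two paths => the limit does not exist.
print(along_axis, along_diag)  # 0 1/2
```

Any parameterized path can be substituted the same way; the moment two paths disagree, you are done.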
- Use polar or spherical coordinates. This approach can prove that a limit exists in special cases, and it can also show that a limit does not exist, because the result may depend on the direction \theta. It’s a good idea when a troublesome expression like x^2 + y^2 or x^2 + y^2 + z^2 appears, as these simply become r^2 after the substitution. We write x = r\cos\theta, y = r\sin\theta, and since the limit is usually taken as (x,y)\to (0,0), we look at what happens as r\to 0^+. Sometimes the result will depend on \theta, which corresponds to a specific path (\theta controls the direction), and sometimes the r will dominate and leave an expression in which \theta does not matter; in this case, the limit exists. However, one must be careful, because some expressions might seem to be independent of \theta as r\to 0^+ but are not: for example, take

r\frac{\cos^2\theta\sin^2\theta}{\cos^3\theta + \sin^3\theta}.

For any constant \theta such that the denominator is defined and nonzero, the limit as r\to 0^+ is 0, but there are paths \theta = \theta(r) along which the value of the limit will not be 0.
- \delta-\epsilon proofs. When correct, these show the existence of a limit. However, one must already know the value of the limit before this type of proof is possible. If you are unfamiliar with \delta-\epsilon arguments, the statement is:

Given a function f : \mathbb{R}^n\to\mathbb{R}, we say

\lim_{\vec{x}\to\vec{x}_0} f(\vec{x}) = L

if for every \epsilon > 0, there exists a \delta > 0 such that \left|\,f(\vec{x}) - L\right| < \epsilon whenever 0 < d(\vec{x}, \vec{x}_0) < \delta. Here, d(\vec{x}, \vec{x}_0) is the Euclidean distance in n-space (for example, with n = 2 we have d((a,b), (c,d)) = \sqrt{(a - c)^2 + (b - d)^2}). This approach should be used if you are already convinced that the limit exists and equals L.
- Use algebra and theory. You have probably seen that products and sums of continuous functions are continuous, and that for continuous functions the limit can be evaluated by "plugging in." Furthermore, if f is continuous, then you can move a limit from the outside to the inside: \lim_{p\to a} f(g(p)) = f(\lim_{p\to a} g(p)). If you can identify that your function is continuous, or at least becomes continuous after algebraic manipulation (e.g. canceling a "bad factor" from the denominator), you can use these theorems to conclude that the limit exists. Another useful theorem is the squeeze theorem: if you can cleverly bound your function on both sides by two functions tending to the same limit, the limit exists and equals that common value.
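To illustrate the squeeze theorem on a standard example of my own choosing (x²y/(x² + y²), not a function from the question): since x² ≤ x² + y², we get |x²y/(x² + y²)| ≤ |y|, and |y| → 0, so the limit is 0. A sympy sanity check via polar coordinates confirms the bound:

```python
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta')
f = x**2 * y / (x**2 + y**2)

# Squeeze bound: |f| <= |y| because x**2 <= x**2 + y**2,
# and |y| -> 0 as (x, y) -> (0, 0), so the limit is 0.

# Sanity check in polar coordinates: f reduces to r*cos(theta)**2*sin(theta),
# which is bounded in absolute value by r, independently of theta.
g = sp.simplify(f.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)}))
print(g)                  # reduces to r times cos(theta)**2*sin(theta)
print(sp.limit(g, r, 0))  # 0, uniformly in theta
```

Here the polar form makes the squeeze explicit: the whole expression is at most r in absolute value, with no dependence on the direction.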
- As Norbert notes, another useful technique is expanding your non-elementary functions in Taylor series and using big O notation. You can usually convert the troublesome limit into a quotient and expand the non-elementary functions near the point in question to get a polynomial of sufficiently high degree in both the numerator and the denominator (plus an error term that doesn't play much of a role): i.e. your function \frac{f(t)}{g(t)}, where f and g are combinations of functions whose growth you don't know much about, becomes \frac{P(t) + O(t^n)}{Q(t) + O(t^n)}, where P and Q are polynomials, and

\lim_{t\to 0}\frac{P(t) + O(t^n)}{Q(t) + O(t^n)} = \lim_{t\to 0}\frac{P(t)}{Q(t)},

because the error terms (O terms) are negligible. Some examples for the one variable case can be found here, and the multivariable case works the same, only using the multivariable version of Taylor's theorem.
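As a one-variable illustration of the series technique (the function (cos t - 1)/t² is my own example, not one from the question), sympy can carry out the expansion and confirm that the O term drops out:

```python
import sympy as sp

t = sp.symbols('t')
f = (sp.cos(t) - 1) / t**2   # indeterminate 0/0 at t = 0

# Expand the numerator near 0: cos(t) - 1 = -t**2/2 + O(t**4)
num_series = sp.series(sp.cos(t) - 1, t, 0, 4)
print(num_series)            # -t**2/2 + O(t**4)

# So f = -1/2 + O(t**2), and the error term vanishes in the limit:
print(sp.limit(f, t, 0))     # -1/2
```

The multivariable version works the same way: expand each non-elementary piece to sufficiently high degree, and the leading polynomial terms decide the limit.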

**Attribution**: *Source: Link, Question Author: Yellow Skies, Answer Author: Community*