Every day you walk from point A to point B, which are 3 miles apart. There is a 50% chance each walk that there is an invisible wall somewhere strictly between the two points (never at A or B). The wall extends 1 mile in each direction perpendicular to the line segment (direct path) between A and B, and its position can be at any random location between the two points. That is, it can be any distance between A and B such as 0.1 miles away from A, 1.5 miles away, 2.9 miles away…. You don’t know if the wall is present or where it is until you walk into it. You must walk around the wall if present as there is no other way to circumvent it. Assume the wall is negligible thickness (like a force field), all ground is perfectly flat, and the y coordinates at both A and B are 0 (although I don’t think the optimal path will change much if they weren’t).

What strategy minimizes the average expected walk distance between A and B? How do we know for certain this strategy yields the shortest average walking distance?

To get the 100 bounty points, I would like a convincing argument from you as to why you feel your solution is optimal and cannot be beaten. For those of you using computer simulation, anything reasonably close to optimal is a candidate for the bounty.

Can anyone beat the optimal solutions submitted so far? There are still a few days to try. Even a slight beat (if not due to roundoff or some other error) would help prove previous solutions are non-optimal. You can just use the table of 31 points (x–y coordinates), compute that path length, and if you can find better I will accept that answer. I think by Sunday night I may have to award the bounty, otherwise it may get auto-awarded.

**Answer**

Using this, and the fact that the shortest Euclidean distance between two points is a straight line, we can see the minimum distance between the points A and B below. (Note: the green segments denote the shortest lengths from A to the wall and from the wall to B.)

**Intro**

This is a long answer; if you’re interested in specifics here’s a layout. In the Lower Bound section, I establish the best possible scenario for the situation. The Linear Solution section assumes that the optimal path is linear. The Best Solution section finds an extremal path using Variational Calculus; everything in that solution is exact. Proving the Solution is the Global Minimum is a section on using the second variational derivative to prove we have the solution. The Computer Algorithms section discusses how we might approximate solutions.

**Lower Bound**

D_m(L,X)=\sqrt{H(X)+L^2}+\sqrt{H(X)+(3-L)^2}

Where H(X) is the Heaviside step function and X is either 0 or 1 with equal probability. Note that I’m assuming that L is a uniform random variable on the interval (0,3). The expected value of D_m(L,X) is,

E(D_m(L,X))=\cfrac{1}{6} \cdot \int_0^3 D_m(L,0)+D_m(L,1) \ dL

(1) \quad \Rightarrow \quad E(D_m(L,X))=\cfrac{\ln \left( \cfrac{\sqrt{10}+3}{\sqrt{10}-3} \right)+6 \cdot (\sqrt{10}+3)}{12}=3.3842…

So, the minimum possible value is (1).
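The closed form (1) can be double-checked numerically. Below is a quick sketch in plain Python (the function names are my own; midpoint quadrature) that averages the best-possible distance over the wall’s presence and position:

```python
import math

def d_min(L, wall_present):
    """Shortest possible A-to-B distance if the wall were known in advance:
    straight segments over the wall's top if present, a straight line otherwise."""
    if not wall_present:
        return 3.0
    return math.sqrt(1 + L**2) + math.sqrt(1 + (3 - L)**2)

# Midpoint rule over L ~ Uniform(0, 3); the wall is present with probability 1/2.
N = 100_000
total = 0.0
for i in range(N):
    L = 3 * (i + 0.5) / N
    total += 0.5 * d_min(L, False) + 0.5 * d_min(L, True)
lower_bound = total / N

closed_form = (math.log((math.sqrt(10) + 3) / (math.sqrt(10) - 3))
               + 6 * (math.sqrt(10) + 3)) / 12
print(lower_bound, closed_form)   # both ≈ 3.3842
```

The quadrature and the closed form agree to well below the printed precision.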

**The Linear Solution**

Referencing the above diagram, we see that this essentially non-deterministic problem has a very deterministic property associated with it. Namely, after the wall encounter, the rest of the optimal path can be determined. However, the optimal path before the wall encounter is indeterminate. In other words, we can only know what path we should’ve taken; we can’t actually take that path.

Imagine we have some optimal strategy that picks an optimal path f(x). Since the problem has no memory, the optimal path is unique. In addition, this memory property rules out the possibility of a mixed strategy solution.

Putting these facts together, we see that f(x) is deterministic for x<L and random for x>L. However, some things about f(x) are known,

(a) \quad f(0)=0

(b) \quad \lim_{x \to L^+} f(x)=1

(c) \quad f(3)=0

(d) \quad f(x) \le 1

If we now consider all possible paths p(x) satisfying the conditions (a-d), we can investigate the path p_m(x) that minimizes the length. The length of a path p(x) from x=0 to x=L, plus the climb to the top of the wall, is given by,

L_p=\int_0^L \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx+(1-p(L))

By inspection, we see that the right-hand side is minimized when p(L), evaluated from the left, is as close to 1 as possible. However, this means that the average slope of p(x) will have to equal \cfrac{p(L)}{L}. Since the shortest curve with a given average slope is a line, we know that p_m(x) must be linear*. This means we can derive an explicit formula for L_p using \cfrac{dp_m}{dx}=\lambda. Note that if a path overshoots the wall, it can only keep going in the same direction; there’s no knowledge that it overshot the wall until the instant before it reaches point B.

If we take the expected value of L_p we can find the value of \lambda that minimizes the length. This value of \lambda determines the slope of the optimal path. The other conditions determine p_m(x) and thus f(x). Note that the resulting equations are explicit; they are just too complicated to represent effectively here. In fact, I actually didn’t even solve it right on the first try! (I still might not have, i.e., a physically sensible answer.)

I get that \lambda=0 solves the problem. This gives an expected path length of,

E(L_p)=\cfrac{3 \cdot (\sqrt{10}+11)-\ln(\sqrt{10}-3)}{12}=3.6921

**Intuition:**

Here’s why the \lambda=0 solution should work. Since a wall is just as likely to be present as not, we can analyze the total vertical distance traveled in both cases. In the case with a wall, the total distance is 1-\lambda \cdot L. In the case without a wall, the total distance is 3 \cdot \lambda. Thus the average vertical distance traveled is \cfrac{1}{2}+\cfrac{\lambda \cdot (3-L)}{2}, which is minimized for \lambda=0.

*Bonus points if you realize this only guarantees optimal solutions, within the linear space.
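The \lambda=0 expected length is also easy to sanity-check by simulating the strategy directly. A Monte Carlo sketch (the helper name is my own):

```python
import math, random

random.seed(0)

def walk_length(wall_pos):
    """Length of the λ = 0 strategy: walk along y = 0; on hitting a wall at
    x = L, climb to its top (y = 1), then take the straight hypotenuse to B."""
    if wall_pos is None:                  # no wall: straight shot
        return 3.0
    return wall_pos + 1.0 + math.sqrt(1 + (3 - wall_pos)**2)

N = 1_000_000
total = 0.0
for _ in range(N):
    # wall present with probability 1/2, located uniformly on (0, 3)
    wall = random.uniform(0, 3) if random.random() < 0.5 else None
    total += walk_length(wall)
estimate = total / N

exact = (3 * (math.sqrt(10) + 11) - math.log(math.sqrt(10) - 3)) / 12
print(estimate, exact)   # both ≈ 3.692
```

With a million trials the simulation lands within a few thousandths of the closed form.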

**The Best Solution**

I’ve been motivated to provide a full solution so here we’ll take a look at better paths that aren’t linear.

Realizing that the argument for a linear path is rather contrived, and only possibly locally optimal, we’ll see what happens if we look at all possible functions in an unbiased manner. The average length of an arbitrary path, p(x), is then given by,

F(L,p(x))=\cfrac{1}{2} \cdot \left (\int_0^L \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx+(1-p(L))+\sqrt{1+(3-L)^2} \right)+\cfrac{1}{2} \cdot \int_0^3 \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx

The expected path length, E(p(x)), is therefore given by,

E(p(x))=\cfrac{1}{3} \cdot \int_0^3 F(L,p(x)) \ dL

I have to thank @user5713492 for the idea to change the order of integration; here’s a quick sketch of why:

\int_0^3 \int_0^L \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx dL =\int_0^3 \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \int_x^3 dL dx

=\int_0^3 (3-x) \cdot \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx

The second double integral is much easier, and I’ll leave it to the reader to verify that it can be contracted to, 3 \cdot \int_0^3 \sqrt{1+\left(\cfrac{dp}{dx} \right)^2} \ dx. Substituting this information in and simplifying we get,

E(p(x))=\frac{1}{6} \int_0^3 (6-x) \cdot \sqrt{1+\left(p'(x) \right)^2}+(1-p(x))+\sqrt{1+(3-x)^2} \ dx

=\frac{1}{6} \int_0^3 L \left(x, p(x), p'(x) \right) \ dx
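This contraction can be verified numerically before going on: the sketch below (Simpson’s rule; the test path 0.15 \cdot x \cdot (3-x) and all names are my own hypothetical choices) compares the original double-integral form of E with the (6-x)-weighted single integral:

```python
import math

p  = lambda x: 0.15 * x * (3 - x)          # hypothetical test path
dp = lambda x: 0.15 * (3 - 2 * x)          # its derivative

def simpson(f, a, b, n=1000):              # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

speed = lambda x: math.sqrt(1 + dp(x)**2)
full_arc = simpson(speed, 0, 3)            # ∫₀³ √(1 + p'²) dx, reused below

def F(L):                                  # average length given wall location L
    return 0.5 * (simpson(speed, 0, L) + (1 - p(L)) + math.sqrt(1 + (3 - L)**2)) \
         + 0.5 * full_arc

E_double = simpson(F, 0, 3, n=300) / 3     # E as (1/3)∫₀³ F(L) dL

integrand = lambda x: (6 - x) * speed(x) + (1 - p(x)) + math.sqrt(1 + (3 - x)**2)
E_single = simpson(integrand, 0, 3) / 6    # the contracted form

print(E_double, E_single)                  # agree to quadrature accuracy
```

Any path with p(0)=p(3)=0 works here; the two forms are identically equal.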

Note that we switched L to x so that we could combine the integrals. The fact that E(p(x)) can be written as an integral over a kernel, occasionally called the lagrangian, is very important. Now we have the problem framed in such a way that it can be solved using the mathematical subject known as ‘Calculus of Variations’. Effectively, this subject studies how to minimize/maximize ‘functionals’, essentially functions of functions. We’ll be using the Euler-Lagrange equation. First, we want to see how E(p(x)) changes if we change p(x). We’ll make a small perturbation \epsilon \cdot \eta(x) and demand that this small change in the path disappear at the boundaries. In other words \eta(0)=\eta(3)=0. It’s essential to note that \eta(x) is the perturbation while \epsilon is just a control parameter.

Under this perturbation, the function undergoes the transformation,

p(x) \rightarrow p(x)+\epsilon \cdot \eta(x)=p_{\eta}(x)

Now we want to know the change in the expected length between the perturbed and unperturbed cases. That is, we want to know,

\delta E=\int_0^3 \cfrac{\delta E}{\delta p} \cdot \eta(x) \ dx=\lim_{\epsilon \to 0} \cfrac{E(p_{\eta}(x))-E(p(x))}{\epsilon}=\cfrac{d}{d \epsilon} \left[ E(p_{\eta}(x)) \right] \Bigg|_{\epsilon=0}

Note that we take the limit as \epsilon goes to zero. In the above, \delta E, or the first variation, is expressed by summing each part of the perturbation \eta(x) against the functional derivative. The other two equalities follow naturally. Now we have,

E(p_{\eta}(x))=\int_0^3 L \left(x, p_{\eta}(x), p_{\eta}'(x) \right) \ dx

We can find the first variation by using the above formulas. After integrating by parts and noting boundary conditions, we can also obtain the functional derivative. Since we want an optimal solution, we want the functional derivative to be equal to zero*. The functional derivative for \int L \left(x, p(x), p'(x) \right) has been derived countless times so I’ll simply redirect you to Euler-Lagrange. After substituting our particular form for the lagrangian we get,

\cfrac{\partial L}{\partial p}-\cfrac{d}{dx} \left[ \cfrac{\partial L}{\partial p'} \right]=0

\Rightarrow \cfrac{d}{dx} \left[ \cfrac{(6-x) \cdot p'(x)}{\sqrt{1+(p'(x))^2}} \right]=-1

This equation can be solved by demanding p(0)=p(3)=0 for the boundary conditions. We can immediately integrate this equation with respect to x and obtain,

\cfrac{(6-x) \cdot p'(x)}{\sqrt{1+(p'(x))^2}}=-(x+c_0)

After solving for the velocity, we obtain,

p'(x)=-(x+c_0) \cdot \sqrt{\cfrac{1}{(c_0+6) \cdot (6-2 \cdot x-c_0)}}

We can directly integrate this equation and obtain the optimal path,

p(x)=c_1-\int (x+c_0) \cdot \sqrt{\cfrac{1}{(c_0+6) \cdot (6-2 \cdot x-c_0)}} \ dx

\Rightarrow p(x)=c_1-\cfrac{1}{3} \cdot (2 \cdot c_0+x+6) \cdot (c_0+2 \cdot x-6) \cdot \sqrt{\cfrac{1}{(c_0+6) \cdot (6-2 \cdot x-c_0)}}

p(0)=p(3)=0

Since we have two boundary conditions and two constants, this solution is perfectly valid. In fact we can explicitly solve for c_0 and c_1.

c_0=\cfrac{\sqrt{57}-21}{8}=-1.68127

c_1=-\cfrac{\sqrt{14 \cdot (\sqrt{57}+85)} \cdot (3 \cdot \sqrt{19}+43 \cdot \sqrt{3})}{2688}=-1.17247
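With these constants in hand, a finite-difference check (a quick sketch; the names are my own) confirms that the closed-form slope really satisfies the integrated Euler-Lagrange equation, i.e. that the bracketed quantity has derivative -1 everywhere:

```python
import math

c0 = (math.sqrt(57) - 21) / 8              # the constant found above

def dp(x):
    # closed-form slope of the extremal path
    return -(x + c0) * math.sqrt(1.0 / ((c0 + 6) * (6 - 2 * x - c0)))

def flux(x):
    # the quantity (6 - x)·p'/√(1 + p'²), which should equal -(x + c0)
    s = dp(x)
    return (6 - x) * s / math.sqrt(1 + s * s)

# d/dx[flux] should be -1 across (0, 3)
h = 1e-6
residuals = [abs((flux(x + h) - flux(x - h)) / (2 * h) + 1)
             for x in (0.3, 1.0, 1.7, 2.4, 2.9)]
print(max(residuals))   # tiny: the Euler-Lagrange equation holds
```

In fact one can check algebraically that (c_0+6)(6-2x-c_0)+(x+c_0)^2=(6-x)^2, so the flux collapses to exactly -(x+c_0).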

The value for E(p(x)) for this path is given below. The length of the arc itself is about 3.065. I don’t want to do the integral, sorry. Luckily, @Trefox did the computation, which gives an exact arc length of

\cfrac{3}{2} \cdot \sqrt{\cfrac{1}{2} \cdot (31-3 \cdot \sqrt{57})}

E(p(x))=\cfrac{\sinh^{-1}(3)}{12}+\cfrac{5 \cdot \sqrt{42 \cdot (3235-279 \cdot \sqrt{57})}}{672}+\cfrac{\sqrt{14 \cdot (\sqrt{57}+85)} \cdot (3 \cdot \sqrt{19}+43 \cdot \sqrt{3})}{5376}+\cfrac{2+\sqrt{10}}{4}

=3.64827…
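All of these closed forms can be verified numerically: plugging c_0, c_1, p(x) and p'(x) into the functional and integrating (a sketch using Simpson’s rule; the names are mine) reproduces the boundary conditions, the arc length, and the expected length above:

```python
import math

s57 = math.sqrt(57)
c0 = (s57 - 21) / 8
c1 = -math.sqrt(14 * (s57 + 85)) * (3 * math.sqrt(19) + 43 * math.sqrt(3)) / 2688

def p(x):
    # closed-form extremal path
    r = math.sqrt(1.0 / ((c0 + 6) * (6 - 2 * x - c0)))
    return c1 - (2 * c0 + x + 6) * (c0 + 2 * x - 6) * r / 3

def dp(x):
    # closed-form slope
    return -(x + c0) * math.sqrt(1.0 / ((c0 + 6) * (6 - 2 * x - c0)))

def simpson(f, a, b, n=10000):             # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

speed = lambda x: math.sqrt(1 + dp(x)**2)
E = simpson(lambda x: (6 - x) * speed(x) + (1 - p(x))
            + math.sqrt(1 + (3 - x)**2), 0, 3) / 6
arc = simpson(speed, 0, 3)

print(p(0), p(3))   # both ≈ 0 (boundary conditions)
print(E, arc)       # ≈ 3.64827 and ≈ 3.06501
```

The quadrature matches the closed-form expected length and @Trefox’s arc length to high accuracy.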

**Proving the Solution is the Global Minimum**

The optimal solution is also the function that minimizes E(p(x)), which is called a functional, by the way. This can be proven by taking the second variation and solving an eigenvalue problem. Physically this is obvious, as a maximum solution doesn’t exist. However, proving the non-existence of saddle points is difficult.

If you’re interested in the proof of this, the second variation is provided in the linked reference (see page 18). Using this, we can set up the eigenvalue problem with,

\int dx' \ \cfrac{\delta^2 E}{\delta p(x) \, \delta p(x')} \cdot \eta(x')= \lambda \cdot \eta(x)

Similarly to the case with ordinary functions, if the second derivative is positive, we have a minimum. So, we must prove that \lambda \gt 0 for all possible perturbations \eta. If this is true, then the second functional derivative is positive, and thus we have a minimum. If we use the reference, we find out just how lucky we are! The only term that survives is the third coefficient term,

C(x)=\cfrac{6-x}{(1+(p'(x))^2)^{3/2}} \gt 0

Hopefully it’s clear that this term is strictly positive. Since we can construct arbitrarily large values of E(p(x)) by picking non-differentiable continuous paths, and the solution to the differential equation is unique, there are no other paths that have a vanishing functional derivative. Therefore, this local minimum must also be the global minimum of the functional E(p(x)).

There was someone who suggested that there might be an infimum for E that is not attained at a path with vanishing functional derivative. The analogy with f(x)=\frac{1}{x} was presented. However, taking the derivative yields f'(x)=-\cfrac{1}{x^2}, which clearly goes to zero as x approaches infinity. In addition, since there is a lower bound for E, which was presented above, it’s impossible for such an unattained infimum to be unboundedly better than our solution. In other words, my argument above is still valid; I just thought it was important to highlight some of the details involved in the argument.

**Computer Algorithms**

This is a math site, so forgive my lack of expertise on specifics…

If I were completely oblivious to how to solve the problem exactly, I’d need to investigate the problem numerically. My first choice would be to set up a neural network whose inputs are positions along the x-axis and whose outputs are the y coordinates of a path. The fitness function would be the expected value of the length.

However, not everyone just has neural networks lying around. My second choice would be to use gradient descent. I’d pick the initial path to be 0.15 \cdot x \cdot (3-x), discretized into an arbitrary number of chunks. Then I’d raise or lower a random part of the path by an amount \Delta. If there was improvement to E(p(x)) I’d keep the change; otherwise it’d have to go. Since it’s fairly trivial to prove that there’s only one extremum in the path space, there’s no risk of settling on a local extremum.
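A minimal sketch of that idea, using explicit gradient descent on the discretized functional rather than random up/down moves (the grid size, step size, and iteration count are arbitrary choices of mine):

```python
import math

n = 40                                    # number of path segments (arbitrary)
h = 3.0 / n
xs = [i * h for i in range(n + 1)]

def expected_length(y):
    # Discretized E(p) = (1/6)∫₀³ (6-x)·√(1+p'²) + (1-p) + √(1+(3-x)²) dx
    total = 0.0
    for i in range(n):
        xm = xs[i] + h / 2                # midpoint of segment i
        s = (y[i + 1] - y[i]) / h
        total += ((6 - xm) * math.sqrt(1 + s * s)
                  + (1 - (y[i] + y[i + 1]) / 2)
                  + math.sqrt(1 + (3 - xm) ** 2)) * h
    return total / 6

def gradient(y):
    # Analytic gradient of expected_length with respect to the node heights
    g = [0.0] * (n + 1)
    for i in range(n):
        xm = xs[i] + h / 2
        s = (y[i + 1] - y[i]) / h
        w = (6 - xm) * s / math.sqrt(1 + s * s)
        g[i] += (-w - h / 2) / 6
        g[i + 1] += (w - h / 2) / 6
    g[0] = g[n] = 0.0                     # endpoints stay pinned at 0
    return g

y = [0.15 * x * (3 - x) for x in xs]      # initial guess from the text
E_init = expected_length(y)

step = 0.01
for _ in range(15000):
    g = gradient(y)
    for j in range(1, n):
        y[j] -= step * g[j]

E_final = expected_length(y)
print(E_init, E_final)                    # E descends toward ≈ 3.648
```

Even this crude discretization descends from the parabolic guess to within a few thousandths of the exact optimum 3.64827 found above.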

**Attribution**

*Source: Link, Question Author: David, Answer Author: Zach466920*