## Matrices with almost constant coefficients have a simple eigenvalue

As a by-product of a general result for bounded operators on a Banach space, I have the following: a matrix $L=(\ell_{ij})_{ij}$ that has almost constant coefficients, in the sense that for some $c$ and every row $i$ it holds that $$\frac{1}{n}\sum_k \lvert \ell_{ik}-c\rvert\le \frac{|c|}{6},$$ must have a simple eigenvalue. Under a slightly stronger … Read more
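A quick numerical sanity check of the claim is easy to set up (a sketch with NumPy; the matrix size, the value of $c$, and the particular perturbation are arbitrary choices of mine): build a matrix whose rows satisfy the stated bound and verify that the spectral radius is a simple, well-separated eigenvalue near $nc$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 8, 1.0

# Perturb the constant matrix c*E so that every row i satisfies
# (1/n) * sum_k |l_ik - c| <= |c| / 6  (the hypothesis above).
delta = rng.uniform(-1, 1, size=(n, n))
delta *= (abs(c) / 6) / np.abs(delta).mean(axis=1, keepdims=True)
L = c + 0.99 * delta  # the 0.99 keeps the row bound strict

assert np.all(np.abs(L - c).mean(axis=1) <= abs(c) / 6)

# The constant matrix c*E has the simple eigenvalue n*c (and 0 with
# multiplicity n-1); the perturbation keeps one simple, well-separated
# eigenvalue near n*c.
mags = np.sort(np.abs(np.linalg.eigvals(L)))
print(mags[-1], mags[-2])  # dominant eigenvalue vs. runner-up magnitude
```

Of course a single random instance proves nothing; it just illustrates the perturbation-of-$cE$ picture behind the statement.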

## Stationary distribution of mixture of Markov Chain with “complete” Markov Chain

I already asked this question on StackExchange, but it found little attention, so I'm just going to copy-paste my original question here. Let $P$ be a stochastic matrix (of an irreducible Markov chain) with stationary distribution $\pi^T$ (i.e. $\pi^T P=\pi^T$), and let further $E$ be the matrix of all $1$'s. Given an $\alpha\in[0,1]$, is it possible to … Read more
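For concreteness, the mixture in question is presumably $\alpha P+(1-\alpha)E/n$, the PageRank-style "Google matrix" construction (note that $E/n$, not $E$ itself, keeps the mixture row-stochastic). A minimal sketch with a hypothetical 3-state chain:

```python
import numpy as np

# Hypothetical 3-state irreducible chain
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
n = P.shape[0]
alpha = 0.85

# Mix with the "complete" chain E/n (every transition equally likely);
# the result is again row-stochastic, and strictly positive for alpha < 1.
M = alpha * P + (1 - alpha) * np.ones((n, n)) / n

# Stationary distribution pi^T M = pi^T by power iteration
pi = np.full(n, 1.0 / n)
for _ in range(200):
    pi = pi @ M
print(pi)
```

Since $M$ is strictly positive for $\alpha<1$, Perron–Frobenius guarantees a unique stationary distribution and the power iteration above converges geometrically.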

## How to find eigenvalues of following block matrices?

Is there a procedure to find the eigenvalues of $A$? $$A=\begin{bmatrix} X & I & 0 & & & \\ I & 0 & P & & & \\ P^t & 0 & I & & & \\ & \ddots & \ddots & \ddots & & \\ & & I & 0 & P & \\ & & P^t & 0 & I & 0_{k\times(y-k)} \\ & & & & 0_{(y-k)\times k} & Y_{y\times y} \end{bmatrix}$$ (All rows and columns except the last are $k\times k$ blocks.) Here $A$ is the adjacency matrix of a cubic graph, $X$ and $P$ are circulant matrices of order $k$, $Y$ is a matrix of order $y$ with $k\neq y$, $I$ is the identity matrix, and $0$ is … Read more

## Dominant eigenvector

Hi, everyone! Is there an efficient way to simplify the following tensor product: $X\otimes X+X^T\otimes X^T$, where $X$ is a square $n\times n$ matrix? My goal is to efficiently compute the dominant eigenvector of $X\otimes X+X^T\otimes X^T$. However, the direct way is computationally expensive. Is it possible to simplify it to avoid the tensor computation? For example, to compute the dominant eigenvector … Read more
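One standard way to avoid forming the $n^2\times n^2$ Kronecker product is a matrix-free power iteration based on the vec identity $(A\otimes B)\,\mathrm{vec}(V)=\mathrm{vec}(AVB^T)$ (row-major vec convention, as NumPy uses): each matrix–vector product then costs two $n\times n$ multiplications per term instead of one $n^2\times n^2$ product. A sketch, assuming a generic dense $X$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
X = rng.standard_normal((n, n))

def matvec(v):
    # Row-major vec identity: (A kron B) vec(V) = vec(A V B^T).  Hence
    # (X kron X + X^T kron X^T) vec(V) = vec(X V X^T + X^T V X),
    # at O(n^3) per product instead of O(n^4).
    V = v.reshape(n, n)
    return (X @ V @ X.T + X.T @ V @ X).reshape(-1)

# Power iteration on n^2-vectors; the n^2 x n^2 matrix is never formed.
v = rng.standard_normal(n * n)
v /= np.linalg.norm(v)
for _ in range(500):
    w = matvec(v)
    v = w / np.linalg.norm(w)
lam = v @ matvec(v)  # Rayleigh quotient; the operator is symmetric
print(lam)
```

Since $(X\otimes X)^T=X^T\otimes X^T$, the sum is symmetric, so in production one would wrap `matvec` in a `scipy.sparse.linalg.LinearOperator` and call `eigsh`, which is more robust than plain power iteration when the top eigenvalues are close in magnitude.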

## Helmholtz equation Poynting vector integral

Maxwell's equations for a time-harmonic field in vacuum are $$\nabla\times B+i\omega E=0,\qquad \nabla\times E-i\omega B=0,\qquad \nabla\cdot B=0,\qquad \nabla\cdot E=0,$$ where $\omega$ is a real number, $i$ is the imaginary unit, and $E$ and $B$ are both complex-valued 3-d vector functions of the 3-d space variables: $\mathbb{R}^3\to\mathbb{C}^3$. As a consequence, we have the Helmholtz equations $$(\nabla^2+\omega^2)E=0,\qquad (\nabla^2+\omega^2)B=0.$$ Suppose these hold in the interior … Read more

## Real eigenvectors of complex matrices

Let $A$ be a nonsingular complex $(3 \times 3)$-matrix (that is, an element of $\mathrm{GL}_3(\mathbb{C})$). Then what are some of the best-known criteria which guarantee that $A$ has real eigenvectors? (I am also interested in the same question for nonsingular complex $(n \times n)$-matrices with $n \geq 2$, but my main target is the … Read more
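Short of a clean algebraic criterion, one can at least test numerically whether a given matrix has an eigenvector that is real up to a complex scalar: $v$ is a scalar multiple of a real vector iff the $n\times 2$ matrix $[\operatorname{Re}v\ \ \operatorname{Im}v]$ has rank at most one. A sketch (the function name is mine; it only inspects the computed eigenvectors, so repeated eigenvalues, where a real vector may hide inside a higher-dimensional complex eigenspace, need extra care):

```python
import numpy as np

def has_real_eigenvector(A, tol=1e-10):
    # v is a complex multiple of a real vector iff the n x 2 matrix
    # [Re v, Im v] has rank <= 1, i.e. its second singular value vanishes.
    _, vecs = np.linalg.eig(A)
    for v in vecs.T:
        s = np.linalg.svd(np.column_stack([v.real, v.imag]),
                          compute_uv=False)
        if s[1] <= tol * max(s[0], 1.0):
            return True
    return False

A1 = np.diag([1j, 2j, 3j])          # e_1, e_2, e_3 are real eigenvectors
A2 = np.array([[1j, 1], [-1, 1j]])  # eigenvectors ~ (1, -i), (1, i): never real
print(has_real_eigenvector(A1), has_real_eigenvector(A2))  # True False
```

The rank test is phase-independent: rescaling $v$ by any nonzero complex scalar multiplies $[\operatorname{Re}v\ \ \operatorname{Im}v]$ by an invertible real $2\times 2$ matrix on the right, so the rank is unchanged.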

## complexity of computing the singular vector corresponding to the smallest singular value

It is known that computing the singular value decomposition of an $m\times n$ matrix $A$ is in general of complexity of order $mn^2$, assuming $m\ge n$. But what if we only want to compute, say, the singular vector corresponding to the smallest singular value? Can we do this significantly faster than $mn^2$ operations? Please note that … Read more
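One practical compromise, which does not beat the asymptotic bound (the factorization itself costs $O(mn^2)$) but does avoid the full SVD: a single reduced QR factorization of $A$, followed by inverse iteration on $A^TA=R^TR$, whose iterations are just triangular solves. A sketch on a synthetic matrix with a known, well-separated smallest singular value:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 50

# Synthetic test matrix with known, well-separated singular values
s_true = np.linspace(1.0, 10.0, n)
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(s_true) @ V.T

# One reduced QR: A = Q R, hence A^T A = R^T R
_, R = np.linalg.qr(A)

# Inverse iteration: repeatedly solve (A^T A) w = v via the two
# triangular systems R^T y = v and R w = y, then normalize.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(100):
    y = np.linalg.solve(R.T, v)
    w = np.linalg.solve(R, y)
    v = w / np.linalg.norm(w)

sigma_min = np.linalg.norm(A @ v)  # converges to the smallest singular value
print(sigma_min)
```

With `scipy.linalg.solve_triangular` each iteration is $O(n^2)$; `np.linalg.solve` is used above only to stay NumPy-only. Convergence is geometric with ratio $(\sigma_{\min}/\sigma_{\text{next}})^2$, so it is fast exactly when the smallest singular value is well separated.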

## Differentiability of eigenvalue and eigenvector on the non-simple case

Let $h:\mathbb{R}^n\to\mathbb{R}^m$, $n>1$, be a twice continuously differentiable function and $J_h:\mathbb{R}^n\to\mathbb{R}^{m\times n}$ be its Jacobian matrix. Let us consider the functions $A(x):=J_h^\mathtt{T}(x)J_h(x)\in\mathbb{R}^{n\times n}$ and $B(x):=J_h(x)J_h(x)^\mathtt{T}\in\mathbb{R}^{m\times m}$. I'm interested in sufficient conditions ensuring differentiability of the functions $U(x)$, $\Sigma(x)$ and $V(x)$ in a singular value decomposition of $J_h(x)=U(x)\Sigma(x)V(x)^\mathtt{T}$ when there is at least one repeated zero … Read more

## Why are 1 and -1 eigenvalues of this matrix?

This is a subject I've been working on for a very long time now, but I still have not managed to fully understand the interesting properties of this matrix $\mathbf{A}$. First, let's define two matrices. $\mathbf{N}$ is the following matrix: \mathbf{N}=\begin{bmatrix} \mathbf{I}_n & \mathbf{0}_n \\ \mathbf{0}_n & \mathbf{P}^{-1}\begin{bmatrix}1 & && \\ & \ddots && \\ … Read more

## Linearization of a gradient field

Setup: Suppose we are given a smooth function $\phi$ that has a nondegenerate minimum at $x=0$. Then we can choose a coordinate system $x$ such that the gradient is given by $$X=\operatorname{grad}\phi=\sum_i a_i x_i\frac{\partial}{\partial x_i}+O(|x|^2),$$ where the $a_i>0$ are the eigenvalues of the Hessian of $\phi$ at $x=0$. Now we look for smooth functions $a$ and numbers $\lambda$ … Read more